IEEE TRANSACTIONS ON INFORMATION THEORY, VOL. 52, NO. 4, APRIL 2006


Reliable Channel Regions for Good Binary Codes Transmitted Over Parallel Channels

Ruoheng Liu, Student Member, IEEE, Predrag Spasojević, Member, IEEE, and Emina Soljanin, Senior Member, IEEE

Abstract—We study the average error probability performance of binary linear code ensembles when each codeword is divided into subcodewords, with each subcodeword transmitted over one of several parallel channels. This model is widely accepted for a number of important practical channels and signaling schemes including block-fading channels, incremental redundancy retransmission schemes, and multicarrier communication techniques for frequency-selective channels. Our focus is on ensembles of good codes whose performance in a single channel model is characterized by a threshold behavior, e.g., turbo and low-density parity-check (LDPC) codes. For a given good code ensemble, we investigate reliable channel regions which ensure reliable communications over parallel channels under maximum-likelihood (ML) decoding. To construct reliable regions, we study a modified 1961 Gallager bound for parallel channels. By allowing codeword bits to be randomly assigned to each component channel, the average parallel-channel Gallager bound is simplified to be a function of code weight enumerators and channel assignment rates. Special cases of this bound, average union-Bhattacharyya (UB), Shulman–Feder (SF), simplified-sphere (SS), and modified Shulman–Feder (MSF) parallel-channel bounds, allow for describing reliable channel regions using simple functions of channel and code spectrum parameters. Parameters describing the channel are the average parallel-channel Bhattacharyya noise parameter, the average channel mutual information, and parallel Gaussian channel signal-to-noise ratios (SNRs). Code parameters include the union-Bhattacharyya noise threshold and the weight spectrum distance to the random binary code ensemble. Reliable channel regions of repeat–accumulate (RA) codes for parallel binary erasure channels (BECs) and of turbo codes for parallel additive white Gaussian noise (AWGN) channels are numerically computed and compared with simulation results based on iterative decoding.
In addition, an example transmission over a block-fading Gaussian channel is considered. Index Terms—Block-fading channel, coding theorems, hybrid automatic repeat request (HARQ), maximum-likelihood (ML) decoding, parallel channels, repeat–accumulate (RA) codes, turbo codes.

I. INTRODUCTION

WE study the average error probability performance of good binary code ensembles when each codeword is divided into subsets of symbols and each subset is transmitted

Manuscript received October 14, 2004; revised November 11, 2005. This work was supported by the National Science Foundation under Grants CCR0205362 and SPN-0338805. The material in this paper was presented in part at the DIMACS Workshop on Network Information Theory, Piscataway, NJ, March 2003 and the IEEE International Symposium on Information Theory, Chicago, IL, June/July 2004. R. Liu and P. Spasojević are with WINLAB, Department of Electrical and Computer Engineering, Rutgers University, North Brunswick, NJ 08902 USA (e-mail: [email protected]; [email protected]). E. Soljanin is with Mathematical Sciences Research Center, Bell Labs, Lucent, Murray Hill, NJ 07974 USA (e-mail: [email protected]). Communicated by Ø. Ytrehus, Associate Editor for Coding Techniques. Digital Object Identifier 10.1109/TIT.2006.871615

over (assigned to) one of (independent) parallel channels. Following MacKay [1], we say that a binary code (sequence) is good if it achieves arbitrarily small word error probability when transmitted over a noisy channel at or below a nonzero threshold rate that may be lower than the channel capacity. Parallel-channel coding is highly pertinent to a number of important communication schemes including transmission over block-fading channels [2], [3],1 incremental redundancy retransmission schemes [4], [5], communication over block interference channels [6], [7], coding for slow-fading packet-based transmissions [8]–[10], user cooperation diversity coding [11], [12], and multicarrier communications. In practice, codes are often designed based on the average or the worst case condition of an otherwise unknown or time-varying channel. Given a code structure (and the corresponding code ensemble), an important practical question is: Under what channel conditions will the communication be reliable? For (single) channel models described by a single parameter, the answer requires evaluation of a noise threshold defined as the worst case channel parameter value at which the word error probability decays to zero as the codeword length increases. Recent work studies noise thresholds associated with good codes and the corresponding maximum-likelihood (ML), "typical pair," and iterative decoding algorithms [13]–[18]. In fact (e.g., for the applications listed above), a transmitted codeword can experience a number of different (parallel/independent) channels and, thus, instead of noise thresholds and the corresponding reliable communication channel intervals, we here consider noise boundaries and the corresponding reliable channel regions.
More generally, for a given ensemble of good codes and a codeword symbol-to-channel assignment rule, a reliable channel region is the union of channel transition probabilities for which the decoding word error probability approaches zero as the codeword length approaches infinity.2 In particular, if each component channel is described by a single parameter, a reliable parallel-channel region is a set of reliable parallel-channel parameter tuples. The shape of the parallel-channel region is a function of the channel model, the code structure, the codeword symbol to channel assignment rule, and the decoding algorithm. In this paper, we study reliable channel regions for a class of good binary codes transmitted over a set of parallel binary-input symmetric-output (BISO) memoryless channels. Prescribed

1 The channel models with dependent (e.g., quasi-static) fading across channels can be considered within the independent channel coding framework by first assuming that the fading is known for each channel and, next, averaging over the fading distribution.
2 Note that this definition includes all regions based on upper bounds on codeword error probability and their union.

0018-9448/$20.00 © 2006 IEEE


Fig. 1. System model.

regions correspond to sufficient conditions for reliable communication when ML decoding is employed. The parallel-channel Gallager bound on ML decoding word error probability averaged over all possible code symbol to channel assignments is a function of only code weight enumerators and channel assignment rates. In order to illustrate reliable channel regions with a small number of channel and code descriptors, we focus on the following special cases of the average parallel-channel Gallager bound: the union-Bhattacharyya (UB), simplified-sphere (SS), Shulman–Feder (SF), and modified Shulman–Feder (MSF) bounds. The UB reliable channel region3 is characterized by a linear combination of the parallel-channel Bhattacharyya noise parameter and the UB noise threshold of the code ensemble. For the parallel-Gaussian channel, the SS reliable channel region is characterized by channel signal-to-noise ratios (SNRs) and the weight spectrum of the code ensemble. In general, the SF reliable channel region is characterized by a linear combination of the parallel-channel mutual information and the maximum weight spectrum distance to the random binary code ensemble, termed the SF distance. In addition, we study a modified Shulman–Feder (MSF) reliable channel region characterized by a two-set code weight partition, and both the UB noise threshold and the SF distance of the respective code restrictions. The restriction code UB noise threshold is computed over a set of "small" and "large" code weights, whereas the restriction SF distance is computed over the complementary "medium" set of code weights. The largest MSF reliable channel region is the union of regions corresponding to all possible partitions. Reliable channel regions of repeat–accumulate (RA) codes for parallel binary erasure channels (BECs) and of turbo codes for parallel additive white Gaussian noise (AWGN) channels are numerically computed and compared with simulation results based on iterative decoding.
An application of results to the analysis of communications over a block-fading Gaussian channel is also described. The remainder of this paper is organized as follows: we introduce and discuss the system model and the notation in Section II. Based on the random assignment argument, an average parallel-channel Gallager bound is derived in Section III. A number of special cases of the average parallel-channel Gallager bound and the corresponding reliable channel regions are characterized in Section IV. Examples and an application are given in Section V. We summarize our results in Section VI. II. SYSTEM MODEL In this section, we introduce and discuss the channel model and basic assumptions used throughout this paper. We begin 3Here and hereafter, for simplicity, we say UB, SS, SF, and MSF reliable channel regions for the region derived based on average UB, SS, SF, and MSF parallel-channel bounds, respectively. Similar definitions apply to the singlechannel noise threshold.

with a description of the parallel-channel model. Next, we introduce a random channel assignment technique which plays an important role in evaluating the average error probability performance of a code ensemble transmitted over parallel channels. Finally, we define the linear binary code ensemble of interest, and review the spectral characterization of good ensembles.

A. Parallel Channel Model

The channel model consists of parallel discrete memoryless channels (DMCs). As shown in Fig. 1, input symbols are binary and belong to the alphabet $\{0,1\}$; output symbols belong to an alphabet which may be either finite or continuous. Under the assumption that the channel is known at the receiver, the parallel channels are independent, and each component channel is characterized by its transition probability $p_j(y \mid x)$. For the derivation of all error-rate bounds except for the union bound, it is necessary to assume that each component channel is output symmetric, i.e., $p_j(y \mid 0) = p_j(-y \mid 1)$. The following two channel characteristics are used in our analysis. The Bhattacharyya noise parameter of Channel $j$ is

$$\gamma_j = \sum_{y} \sqrt{p_j(y \mid 0)\,p_j(y \mid 1)} \qquad (1)$$

The mutual information of Channel $j$ under the assumption of equiprobable binary inputs is

$$I_j = \sum_{y} \sum_{x \in \{0,1\}} \tfrac{1}{2}\,p_j(y \mid x)\,\log_2 \frac{p_j(y \mid x)}{q_j(y)} \qquad (2)$$

where $q_j(y) = \tfrac{1}{2}\bigl(p_j(y \mid 0) + p_j(y \mid 1)\bigr)$, and sums are replaced by integrals for continuous output alphabets.

Although the analysis is quite general, in this work we frequently focus on parallel-channel models in which each component channel is described by a single parameter as, for example, the BEC described by its erasure rate or the AWGN channel described by its SNR.

Example 1 (Parallel BECs): For parallel BECs with a set of erasure probabilities $\{p_j\}$, the Bhattacharyya noise parameter and the mutual information of Channel $j$ are

$$\gamma_j = p_j \quad\text{and}\quad I_j = 1 - p_j \qquad (3)$$

Example 2 (Parallel AWGN Channels): We here consider parallel binary-input AWGN channels. The transition probability of Channel $j$ is

$$p_j(y \mid x) = \frac{1}{\sqrt{2\pi}}\exp\!\left(-\frac{\bigl(y - (-1)^x \sqrt{2\,\mathrm{SNR}_j}\bigr)^2}{2}\right), \quad x \in \{0,1\} \qquad (4)$$


where $\mathrm{SNR}_j$ denotes the received SNR of Channel $j$. The Bhattacharyya noise parameter and the mutual information of Channel $j$ are

$$\gamma_j = e^{-\mathrm{SNR}_j} \quad\text{and}\quad I_j = 1 - \int_{-\infty}^{\infty} p_j(y \mid 0)\,\log_2\!\bigl(1 + e^{-2\sqrt{2\,\mathrm{SNR}_j}\,y}\bigr)\,dy \qquad (5)$$
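As a quick numerical sketch (ours, not part of the paper), the channel descriptors of Examples 1 and 2 can be evaluated as follows. The function names are our own, and we assume the AWGN normalization $y = \pm\sqrt{2\,\mathrm{SNR}} + n$ with unit-variance noise, chosen so that the Bhattacharyya parameter equals $e^{-\mathrm{SNR}}$:

```python
import math

def bec_params(p):
    """BEC with erasure probability p (Example 1): gamma = p, I = 1 - p."""
    return p, 1.0 - p

def awgn_bhattacharyya(snr):
    """Bhattacharyya noise parameter of a binary-input AWGN channel,
    gamma = exp(-SNR), under the assumed unit-noise normalization."""
    return math.exp(-snr)

def awgn_mutual_info(snr, half_width=12.0, steps=8000):
    """Mutual information (bits/channel use) under equiprobable inputs:
    I = 1 - E_y[log2(1 + e^{-2*a*y})], a = sqrt(2*snr), y ~ N(a, 1),
    evaluated by trapezoidal integration over the output alphabet."""
    a = math.sqrt(2.0 * snr)
    lo, hi = a - half_width, a + half_width
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        y = lo + i * h
        pdf = math.exp(-0.5 * (y - a) ** 2) / math.sqrt(2.0 * math.pi)
        f = pdf * math.log2(1.0 + math.exp(-2.0 * a * y))
        total += f if 0 < i < steps else 0.5 * f
    return 1.0 - h * total
```

At zero SNR the channel carries no information (the integrand is identically 1 bit), and the mutual information grows monotonically toward 1 bit as the SNR increases.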

B. Random Channel Assignment

Consider a binary linear code transmitted over parallel channels, and let the elements of an index set denote bit positions in a codeword. As shown in Fig. 1, a mapping device partitions the set of bit positions into subsets, one per channel; a bit at a position in subset $j$ is transmitted on Channel $j$ with transition probability $p_j(y \mid x)$. We note (and argue later) that, for any given fixed assignment, the error probability performance analysis would be exceedingly complicated. To overcome this problem, we introduce a probabilistic mapping called random assignment [5], [19], [20], described as follows. For a given codeword, we assume that each bit is randomly assigned to Channel $j$ independently and with probability $\alpha_j$. We will also refer to the $\alpha_j$ as assignment rates. Several applications which can be analyzed based on the parallel-channel model with random assignments are discussed below.

Example 3 (Block-Fading Channel): The block-fading channel model is especially suitable for wireless communication systems with low mobility [2], [3], [6]. The Doppler spread of the channel and the slot duration are assumed to satisfy $f_D T \ll 1$, which implies that the fading coefficient is essentially invariant during one slot. A typical assumption is that it also differs from one slot to another. Each source message is encoded, interleaved, and partitioned into slots. For a given fading channel realization, this problem can be described using a parallel-channel model.

Example 4 (Multicarrier System): In the case of a frequency-selective channel, a multicarrier signaling scheme allows for dividing the available spectrum into several nonoverlapping and orthogonal channels, in such a way that each channel experiences almost-flat fading. Hence, for a given channel realization, we can describe the multicarrier transmission system as a set of parallel channels in the frequency domain.
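The random assignment rule can be sketched in a few lines (our sketch; the function name and the use of Python's `random.choices` are our choices, not the paper's). Each bit position is independently mapped to Channel $j$ with probability $\alpha_j$, so the expected fraction of bits on Channel $j$ is $\alpha_j$:

```python
import random

def random_assignment(n, alphas, seed=0):
    """Independently assign each of the n codeword bit positions to one of
    len(alphas) parallel channels; position i goes to channel j with
    probability alphas[j] (the assignment rates, which must sum to one)."""
    assert abs(sum(alphas) - 1.0) < 1e-9
    rng = random.Random(seed)
    return rng.choices(range(len(alphas)), weights=alphas, k=n)
```

For large block lengths the empirical rates concentrate around the $\alpha_j$, which is what lets the averaged bound of Section III depend only on the assignment rates rather than on a particular mapping.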


C. Code Ensemble and Its Weight Spectrum

Since the performance of a given code is hard to analyze, the performance of an appropriate code ensemble is usually considered.

Definition 1: A binary linear code ensemble is a sequence of binary linear code sets, where each set consists of codes of a common rate for all block lengths $n$.

We shall use $C$ to denote a binary code, $\mathcal{C}_n$ to denote a set of binary codes of length $n$, and $\{\mathcal{C}_n\}$ to denote a binary code ensemble. For example, turbo and RA code ensembles can be defined as follows.

Example 5 (Turbo Code Ensemble): For a given set of recursive convolutional encoders, a turbo code [21] allows for a different code for each of the possible choices of the random interleavers. The set of such turbo codes of length $n$ is denoted by $\mathcal{C}_n$, and a turbo code ensemble will refer to a set of turbo code sequences with a common rate.

Example 6 (RA Code Ensemble): An RA code [22] is defined as follows. The information block of length $k$ is repeated $q$ times, reordered by a random interleaver of size $qk$, and, finally, encoded by a rate-one accumulator, i.e., a truncated rate-one recursive convolutional encoder with transfer function $1/(1+D)$. The set $\mathcal{C}_n$ of possible RA codes of length $n$ contains one code per permutation of the interleaver, and an RA code ensemble $\{\mathcal{C}_n\}$ is a sequence of RA code sets with a common rate.

For a given linear code, let $A_h$ denote the number of codewords of Hamming weight $h$, termed the weight enumerator (WE), and let the split weight enumerator (SWE) denote the number of codewords with a given Hamming weight over each (assignment) index subset. Then the average weight enumerator (AWE) $\bar{A}_h$ and the average split weight enumerator (ASWE) for the code set $\mathcal{C}_n$ are given by the corresponding averages over the codes in $\mathcal{C}_n$. (6)

Let $\delta = h/n$, for $0 \le \delta \le 1$, be the normalized weight (relative distance); the normalized weight spectrum is defined as

$$r_n(\delta) = \frac{1}{n}\,\ln \bar{A}_{\delta n} \qquad (7)$$

The asymptotic normalized weight spectrum can be written as

$$r(\delta) = \lim_{n \to \infty} r_n(\delta) \qquad (8)$$

Example 7 (Weight Spectrum of Random Binary Codes): For the random binary code ensemble of rate $R$, closed-form AWE, normalized weight spectrum, and asymptotic normalized weight spectrum expressions exist as follows:

$$\bar{A}_h = \binom{n}{h}\,2^{-n(1-R)}, \qquad r(\delta) = H(\delta) - (1-R)\ln 2$$

where $H(\delta) = -\delta\ln\delta - (1-\delta)\ln(1-\delta)$ is the entropy function.
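The closed-form expressions of Example 7 can be checked numerically. This sketch is our own; it assumes natural-logarithm units for $r(\delta)$ and works in the log domain to avoid overflow for large $n$:

```python
import math

def log_awe_random(n, h, R):
    """ln of the AWE of the random binary code ensemble,
    ln[ C(n,h) * 2^{-n(1-R)} ], computed via lgamma to avoid huge integers."""
    log_binom = math.lgamma(n + 1) - math.lgamma(h + 1) - math.lgamma(n - h + 1)
    return log_binom - n * (1.0 - R) * math.log(2.0)

def r_random(delta, R):
    """Asymptotic normalized weight spectrum r(delta) = H(delta) - (1-R) ln 2,
    with H the binary entropy function in nats."""
    H = 0.0 if delta in (0.0, 1.0) else (-delta * math.log(delta)
                                         - (1 - delta) * math.log(1 - delta))
    return H - (1.0 - R) * math.log(2.0)
```

As a consistency check, `log_awe_random(n, delta*n, R) / n` approaches `r_random(delta, R)` as `n` grows, which is exactly the limit in (8).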


D. Good Binary Code Ensembles

A good code ensemble as defined by MacKay [1] achieves arbitrarily small average word error probability up to some maximum rate that may be lower than the channel capacity. In particular, we consider a family of good code ensembles whose weight spectra satisfy the following condition.

Condition 1: For a given binary code ensemble:
a. there exists a sequence of integers $d_n$ with $d_n \to \infty$ and $d_n/n \to 0$ such that the expected number of codewords of weight below $d_n$ vanishes, i.e.,

$$\sum_{h=1}^{d_n - 1} \bar{A}_h \to 0 \quad \text{as } n \to \infty \qquad (9)$$

b. for all sequences of normalized weights $\delta_n$ such that $\delta_n \to 0$ and $n\delta_n \to \infty$,

$$\limsup_{n \to \infty} \frac{r_n(\delta_n)}{\delta_n} \qquad (10)$$

is finite.

This condition was introduced in [16] (see also [23]), where the authors showed that a binary linear code ensemble is good if it satisfies Condition 1. Condition 1a states that the probability that a code in the ensemble has minimum distance smaller than $d_n$ approaches zero as the code length increases, and Condition 1b implies that the "low" normalized weight spectrum is of order $O(\delta)$. The proof and discussion regarding the necessary and sufficient conditions for code and code ensemble goodness can be found in [23]. We observe that most binary random-like code ensembles satisfy Condition 1. In his thesis [24], Gallager has shown that the minimum distance of most codes in the low-density parity-check (LDPC) code ensemble increases linearly with the block length $n$. Divsalar et al. showed in [22] that a corresponding spectral bound holds for an RA code ensemble with code rate bounded as in (11), where the associated exponent is bounded. For an ensemble of turbo codes with parallel branches, Kahale and Urbanke [25] proved that the minimum distance of a turbo code with a randomly chosen interleaver grows at least logarithmically in $n$ with high probability. For the same ensemble, Jin and McEliece have shown [16] that there exists a sequence $d_n$ such that (12) holds for any $n$. In related research on the minimum distance of turbo codes with two parallel branches, Breiling in [26] has shown that the minimum distance of turbo codes is upper-bounded by a logarithmic function of the block length; moreover, the authors in [27]–[29] showed that, for some specially constructed interleavers, the minimum distance of turbo codes indeed grows logarithmically in $n$. Hence, we conclude that the ensemble of two-branch turbo codes with good interleavers satisfies Condition 1. The rest of this paper will consider only such good binary code ensembles.

III. PARALLEL-CHANNEL GALLAGER BOUNDS

In 1961 [24], Gallager introduced a general upper bound on the ML decoding error probability for block codes transmitted over a BISO channel as a function of the weight spectrum of the code (ensemble). Recently, Shamai and Sason proposed a series of variations on the Gallager bound [30]. In this section, we consider the parallel-channel Gallager bound. We first summarize the general parallel-channel Gallager bound based on ASWEs for a particular assignment rule. Next, for simplicity of performance analysis, we derive a parallel-channel Gallager bound averaged over the random channel assignment.

Lemma 1: For BISO memoryless parallel channels and a codeword symbol to channel assignment rule, the Gallager bound can be written as in (13) at the bottom of the page, where the quantity in (14) involves an arbitrary even nonnegative tilting function, i.e., one satisfying $f(y) = f(-y) \ge 0$ for all $y$. Proof: See Appendix A.

Further analysis of (13) is difficult, and ASWEs are not available in the general case. Hence, instead of considering a given assignment rule, we resort to the random assignment technique and find the average performance over all possible assignments, where each bit of a codeword is independently assigned to Channel $j$ with probability $\alpha_j$. The expected (and asymptotic, as $n \to \infty$) fraction of bits assigned to Channel $j$ is $\alpha_j$. The (average) parallel-channel Gallager bound is given in the following theorem.

Theorem 1: For BISO memoryless parallel channels with a set of assignment rates and a code ensemble, the Gallager bound averaged over all possible channel assignments

(13)


is as in (15) at the bottom of the page.


Proof: See Appendix B.

Compared with the bound (13), the average Gallager bound (15) does not depend on ASWEs, but only on AWEs of the code set, which significantly reduces the computational complexity. Inequality (15) is a powerful bound, and many other simple upper bounds can be derived from it. In the next section, we will address several bounds which induce simple descriptions of the respective reliable channel regions.

IV. RELIABLE CHANNEL REGIONS

In this section, we study the asymptotic parallel-channel error probability performance of a code ensemble. The performance analysis is based on special cases of the average parallel-channel Gallager bound (15) and the corresponding reliable channel regions. We define the reliable channel region and the noise boundary as follows.

Definition 2: Consider a code ensemble and a tuple of codeword-symbol to channel assignment rates.
• A parallel-channel transition probability tuple is said to be reliable if the corresponding average ML decoding word error probability (bound) approaches zero with the codeword length.
• A reliable channel region is a set of reliable tuples of parallel-channel transition probabilities.
• The boundary of the reliable channel region is called the noise boundary. In the special single-channel case, the noise boundary reduces to the noise threshold.

In particular, if each component channel is described by a single parameter, a reliable channel region is defined as a set of reliable channel parameter values. Note that the reliable channel region, as defined here, is an achievable region under ML decoding. This implies that one can enlarge the reliable channel region by tightening the parallel-channel bound. However, most tight bounds are complicated. In the following, we focus on a set of simple and instructive reliable channel regions based on the average UB, SS, SF, and MSF parallel-channel bounds.

A. Union-Bhattacharyya (UB) Reliable Channel Region

We first study the reliable channel region based on the UB parallel-channel bound. Although there are more powerful bounding techniques, we start with the simple UB bound to illustrate the main ideas and present a simple form of a reliable channel region. The UB parallel-channel bound is a special case of the bound (15), namely

$$\bar{P}_e \le \sum_{h=1}^{n} \bar{A}_h\,\bar{\gamma}^{\,h} \qquad (16)$$

where

$$\bar{\gamma} = \sum_{j} \alpha_j \gamma_j \qquad (17)$$

denotes the Bhattacharyya noise parameter averaged over parallel channel assignments. This bound leads to the characterization of the UB reliable channel region as illustrated in the following theorem.

Theorem 2: Let symbols of a good binary code ensemble be randomly assigned to binary-input arbitrary-output parallel DMCs with Bhattacharyya noise parameters $\gamma_j$ and assignment rates $\alpha_j$. If

$$-\ln \bar{\gamma} > c_0 \qquad (18)$$

where

$$c_0 \triangleq \sup_{0 < \delta \le 1} \frac{r(\delta)}{\delta} \qquad (19)$$

is the UB noise threshold of the ensemble, then the average ML decoding word error probability approaches zero with the codeword length.

Proof: The UB noise threshold definition (19) implies that, for large enough $n$, the AWE of weight $h$ can be bounded as in (20). Thus, the bound (16) can be rewritten as

(21)

where the sequence of integers defined in Condition 1 appears. Since condition (18) holds, the average error probability can be upper-bounded as

(22)

(15)


By combining (9) and (22), we obtain the desired result.

Inequality (18) describes the UB reliable channel region of the code ensemble, where the average Bhattacharyya noise parameter is a function of both the channel transition probabilities and the assignment rates, and the UB noise threshold characterizes the weight spectrum properties of the code ensemble.
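As an illustration (ours, not the paper's), the UB region membership test can be sketched as follows, writing `gamma_bar` for the Bhattacharyya parameter averaged with the assignment rates as in (17), and assuming the UB noise threshold takes the familiar single-channel form `c0 = sup over delta of r(delta)/delta`:

```python
import math

def avg_bhattacharyya(alphas, gammas):
    """Average parallel-channel Bhattacharyya parameter, as in (17):
    gamma_bar = sum_j alpha_j * gamma_j."""
    return sum(a * g for a, g in zip(alphas, gammas))

def ub_noise_threshold(r, grid=2000):
    """UB noise threshold of an ensemble with asymptotic normalized weight
    spectrum r(delta); taken here (assumption, following single-channel
    practice) as c0 = sup_{0 < delta <= 1} r(delta)/delta, on a grid."""
    return max(r(i / grid) / (i / grid) for i in range(1, grid + 1))

def in_ub_region(alphas, gammas, r):
    """Reliability check in the spirit of (18): -ln(gamma_bar) > c0."""
    return -math.log(avg_bhattacharyya(alphas, gammas)) > ub_noise_threshold(r)
```

For instance, with the random binary ensemble spectrum of Example 7 and a single BEC, the test accepts small erasure probabilities and rejects large ones, recovering the single-channel noise threshold as a special case.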

Thus, the average ML word error probability decays to zero with the codeword length whenever the received SNRs satisfy (26). The reliable channel region (26) contains the UB reliable channel region (18), and the two regions are equal in a limiting case of the bound parameter.

C. Shulman–Feder (SF) Reliable Channel Region

The SF parallel-channel bound (see [32] for the single-channel counterpart) is a special case of the average parallel-channel Gallager bound (15) for

B. Simplified-Sphere (SS) Reliable Channel Region for Parallel AWGN Channels

We consider binary-input parallel AWGN channels. The channel transition probabilities and related channel characteristics are described in Example 2. We describe how the parallel-AWGN-channel SS bound (see [13] for the single-channel version) follows from the average parallel-channel Gallager bound. By setting the tilting function appropriately and using (4), (5), and (14), we have that

and

(28)

Lemma 2: For BISO memoryless parallel channels, a set of assignment rates , and a code ensemble of rate , the average SF parallel-channel bound can be written as for (29)

and

where

(23)

Now (23) and (15) imply that a parallel-AWGN-channel SS bound follows. Here (30) is a Gallager function with uniform input, and (31) and

(24)

where the second inequality holds since (see [31, Appendix 3A])

is the SF distance for the code ensemble. Proof: See Appendix D.

It can be easily verified that for

and

(25)

(32)

The following theorem describes the SS reliable channel region for parallel binary-input AWGN channels.

Theorem 3: Let symbols of a good binary code ensemble be randomly assigned to parallel binary-input AWGN channels where the set of assignment rates is given. If

(26)

then the average ML decoding word error probability approaches zero with the codeword length. Proof: See Appendix C.

Note that by specializing (26) to the single-channel case, the noise boundary reduces to the simplified-sphere noise threshold of the code ensemble

(27)

$$\bar{I} = \sum_{j} \alpha_j I_j \qquad (33)$$

denotes the average binary channel mutual information for a set of parallel channels. Following a parallel approach to the one in [33, pp. 141–143], we obtain the following theorem.

Theorem 4: For BISO memoryless parallel channels with a set of assignment rates and a code ensemble of rate $R$, if

(34)

where

(35)

then the average ML decoding word error probability approaches zero with the codeword length.
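A numerical sketch (ours) of the SF-type test: the per-channel mutual informations are averaged with the assignment rates as in (33), and we assume the reliability condition takes the form "the average mutual information exceeds the code rate (in nats) by at least the SF distance of the ensemble", which reduces to the parallel-channel capacity condition when the SF distance is zero:

```python
import math

def avg_mutual_info(alphas, infos):
    """Average parallel-channel mutual information, as in (33):
    I_bar = sum_j alpha_j * I_j (here taken in nats)."""
    return sum(a * i for a, i in zip(alphas, infos))

def in_sf_region(alphas, infos_nats, R, sf_distance):
    """SF-type reliability check (assumed form of (34)): reliable when
    I_bar > R * ln 2 + d, with d the SF distance to the random ensemble."""
    return avg_mutual_info(alphas, infos_nats) > R * math.log(2.0) + sf_distance
```

With `sf_distance = 0` (the random binary code ensemble), the test accepts exactly the rates below the average binary-input parallel-channel capacity, matching the remark following Theorem 4.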


random binary code ensemble of the same rate (see also [34, Fig. 4] for the asymptotic weight spectrum of turbo code ensembles). A simple way to tighten this bound has been proposed in [35], where the authors considered the LDPC code ensemble transmitted over a single channel and suggested to first partition the code weights into two subsets, and then to employ the UB bound for the "large" and "small" code weight subsets, and the SF bound for the "medium" code weight subset. We denote the partition of the Hamming weight set by a pair of disjoint subsets. Thus, by combining the UB and SF bounding techniques, we obtain the following lemma, which describes a tighter parallel-channel bound.

Fig. 2. Comparison of normalized weight spectrum r(δ): RA code ensemble versus random binary code ensemble (R = 1/5).

Lemma 3: For BISO memoryless parallel channels with a set of assignment rates and a code ensemble of rate R, an MSF bound is given by

for

(36)

where (37) holds. Proof: See Appendix E.

Let (38) hold. We define (39)

Fig. 3. Comparison of normalized weight spectrum r(δ): turbo code ensemble versus random binary code ensemble (R = 1/3).

and

Proof: Following a parallel approach to the one in [33, pp. 141–143].

(40)

We note that the asymptotic SF distance measures the weight spectrum distance between the random binary code ensemble and the given code ensemble. Hence, (34) shows that we can achieve the average binary-input parallel-channel capacity by employing the random binary code ensemble and the random assignment rule. The SF distance is the gap between the binary-input parallel-channel capacity and the (achievable) rate of the ensemble.

as the restriction code UB noise threshold and SF distance corresponding to the two weight subsets, respectively. Based on the MSF parallel-channel bound, we can derive the following reliable channel region.

Theorem 5: Let symbols of a good code ensemble of rate R be randomly assigned to BISO parallel memoryless channels using a set of assignment rates. If

(41)

D. Modified Shulman–Feder (MSF) Reliable Channel Region

The SF bound is not tight for some practical codes for which there is a large gap between the weight spectrum of the code and that of the random binary code ensemble for "small" or "large" code weights. See Figs. 2 and 3, which, respectively, compare the spectrum of RA and turbo codes with the spectrum of the

then the average ML decoding word error probability approaches zero with the codeword length. Proof: See Appendix E.

Theorem 5 suggests partitioning the set of normalized weights into a pair of mutually exclusive subsets.


Fig. 4. −ln γ̄ versus H(δ) + (Ī − 1) ln 2.

Fig. 5. Optimum weight partition versus r(δ).

Restriction code parameters corresponding to the two subsets, respectively, characterize an MSF reliable channel region in (41). Thus, for a given fixed-weight partition, we can find the corresponding MSF reliable channel region. Moreover, the UB reliable channel region is a special case of the MSF region for a degenerate partition. The largest MSF (LMSF) reliable channel region is the union of regions corresponding to all possible partitions. More precisely, we can write the LMSF reliable channel region as

(42)

Thus, for a given pair of channel parameters, the optimum weight partition is given below.
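One plausible numerical reading (ours, not verbatim from the paper) of the partition rule suggested by Fig. 4: the normalized weights $\delta$ at which the curve $H(\delta) + (\bar I - 1)\ln 2$ lies at or above $-\ln\bar\gamma$ form the "medium" (SF) subset, bounded by the two crossing points of (43); all other weights go to the UB subset:

```python
import math

def optimum_partition(gamma_bar, i_bar_bits, grid=10000):
    """Sketch of a two-set weight partition: return the endpoints of the
    'medium' (SF) subset, i.e., the deltas where
    H(delta) + (I_bar - 1) ln 2 >= -ln(gamma_bar),
    with H the binary entropy in nats and I_bar in bits. This is our
    reading of Fig. 4 and (43)-(44); the paper's exact rule is in (44)."""
    thresh = -math.log(gamma_bar)
    medium = []
    for i in range(1, grid):
        d = i / grid
        H = -d * math.log(d) - (1 - d) * math.log(1 - d)
        if H + (i_bar_bits - 1.0) * math.log(2.0) >= thresh:
            medium.append(d)
    return (medium[0], medium[-1]) if medium else None
```

When the noise is too strong for any weight to satisfy the inequality, the medium set is empty and the partition degenerates to the pure-UB case.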

To illustrate the LMSF reliable channel region and the optimum-weight partition, we consider the following example.

Example 8: Consider a rate RA code ensemble transmitted over three parallel-AWGN channels using the assignment rates


The asymptotic normalized weight spectrum of the RA code ensemble is given (see [36]) in (45) at the bottom of the page. As shown in Fig. 5, since

the SNR set is in the LMSF reliable channel region, i.e., by employing this RA code ensemble, the transmission over three parallel-AWGN channels is reliable under ML decoding.

E. Numerical Calculation of Noise Boundaries for Parallel-AWGN Channels

We now focus on the computation of the UB, SS, MSF, and LMSF noise boundaries for parallel AWGN channels. As described in Example 2, both the Bhattacharyya noise parameter and the channel mutual information are functions of the received SNRs of the component AWGN channels. Thus, for a given tuple of assignment rates, a boundary point corresponds to a received SNR tuple. First, we consider a class of boundary points at which only one channel, say Channel 1 (without loss of generality), has a nonzero received SNR, i.e.,

$$\mathrm{SNR}_1 \ge 0 \quad\text{and}\quad \mathrm{SNR}_j = 0 \ \text{ for } j \ge 2 \qquad (46)$$

for given receiver SNRs. Using (5), (17), and (33), we can calculate the channel parameters. As shown in Fig. 4, there exist two solutions such that (43) holds. Thus, the optimum-weight partition is given by

(44)

Hence, the channel parameters follow accordingly. Our goal is to find the required received SNR for Channel 1 such that reliability is guaranteed. Inequalities (18), (26), (41), and (42) imply (47)–(50) at the bottom of the next page. Clearly, the resulting SNR is a noise boundary point of the UB, SS, MSF, and the LMSF reliable channel region whenever it is well defined. Furthermore, for any larger received SNR, the SNR tuple is in the reliable channel region. Condition (46) is equivalent to assuming that the ensemble is (randomly) punctured, and that the punctured code ensemble (with expected

(45)


TABLE I ALGORITHM FOR COMPUTING NOISE BOUNDARIES


and calculate the required received SNR for the remaining channel to guarantee reliability. Thus, the boundary value can be written as (51)–(54) at the bottom of the page. Clearly, the received SNR set is a boundary point of the UB, SS, MSF, and the LMSF reliable channel region whenever the corresponding SNR is well defined.

V. EXAMPLES AND APPLICATION

We first consider RA codes transmitted over parallel BECs and turbo codes transmitted over parallel AWGN channels, and compare the derived analytical noise boundaries for ML decoding with simulation results employing iterative decoding. Next, we consider an application to block-fading channels. Another important application of the parallel-channel coding scheme is incremental redundancy hybrid automatic repeat request (IR HARQ), which we have studied in [5], [37].

rate) is transmitted over Channel 1. Thus, the value is the noise threshold of a (randomly) punctured code ensemble. In Table I, we generalize these ideas and propose a recursive algorithm for computing the set of boundary points over an SNR range with a given SNR search step, using (51)–(54) below. The basic idea of this algorithm is to fix (arbitrary) channel SNRs
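The boundary search of Table I can be sketched by bisection (our sketch, not the paper's exact algorithm). It fixes the SNRs of Channels 2, 3, ... and searches for the smallest SNR on Channel 1 satisfying the region condition; we use the UB condition with a single-channel-style threshold `c0` and the AWGN relation $\gamma_j = e^{-\mathrm{SNR}_j}$ from (5), and the function names are our own:

```python
import math

def boundary_snr(alphas, other_snrs, c0, snr_max=50.0, tol=1e-6):
    """Smallest SNR for Channel 1 such that -ln(gamma_bar) > c0 holds,
    with Channels 2..J fixed at other_snrs and gamma_j = exp(-SNR_j).
    Returns None when no SNR in [0, snr_max] makes the point reliable
    (the boundary point is then not well defined, as noted in the text)."""
    gammas_rest = [math.exp(-s) for s in other_snrs]
    rest = sum(a * g for a, g in zip(alphas[1:], gammas_rest))

    def ok(snr1):
        # gamma_bar decreases in snr1, so ok() is monotone: bisection is valid.
        return -math.log(alphas[0] * math.exp(-snr1) + rest) > c0

    if not ok(snr_max):
        return None
    lo, hi = 0.0, snr_max
    if ok(lo):
        return lo
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ok(mid):
            hi = mid
        else:
            lo = mid
    return hi
```

In the single-channel special case the returned value is just the noise threshold `c0` itself, since the condition reduces to SNR > c0; sweeping the fixed SNRs over a grid then traces out the full noise boundary, as in Table I.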

A. RA Code Ensemble Over Parallel BECs

We consider the performance of an RA code ensemble transmitted over parallel BECs (see Example 1). Note that the mutual information of each component BEC can be rewritten as a function of its Bhattacharyya parameter (which equals its erasure probability) as (55)
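Since the BEC is the simplest case here, relation (55) can be stated concretely: for a BEC with erasure probability p, the Bhattacharyya parameter is gamma = p and the mutual information with uniform inputs is I = 1 − p, so I = 1 − gamma. A minimal numerical check (function names and the example value p2 are ours):

```python
def bec_bhattacharyya(p):
    # For a BEC the Bhattacharyya noise parameter equals the erasure
    # probability: gamma = sum_y sqrt(P(y|0) * P(y|1)) = p.
    return p

def bec_mutual_information(p):
    # Mutual information of a BEC with uniform inputs.
    return 1.0 - p

# Relation (55): I = 1 - gamma, checked on a grid of erasure probabilities.
for p in [0.0, 0.3, 0.8, 1.0]:
    assert bec_mutual_information(p) == 1.0 - bec_bhattacharyya(p)

# Average erasure probability of two parallel BECs with the assignment
# rates of Fig. 7 (0.7 and 0.3) and p1 = 0.8; p2 = 0.1 is an arbitrary value.
p1, p2, a1, a2 = 0.8, 0.1, 0.7, 0.3
print(round(a1 * p1 + a2 * p2, 2))  # -> 0.59
```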

UB (47)
SS (48)
MSF (49)
LMSF (50)
(51) (52) (53) (54)



Fig. 6. exp(−r(·)) versus [H(·) − r(·)] log e for the R = 1/5 RA code ensemble.

TABLE II THE AVERAGE ERASURE PROBABILITY BOUND OF RA CODE ENSEMBLE OVER PARALLEL BECS

Fig. 7. RA code ensemble transmitted over parallel BECs: analytic bound versus simulation FER (code rates R = 1/4, 1/5; code lengths n = 16 384, 100 000; the average erasure probability p = α1 p1 + α2 p2; assignment rates α1 = 0.7, α2 = 0.3; and the erasure probability of Channel 1 is p1 = 0.8).

bound and the Shannon limit for different code rates. In Fig. 7, we compare the simulated frame-error rate (FER) performance of RA codes transmitted over two parallel BECs with the LMSF and UB boundaries for code rates R = 1/4 and 1/5. For this simulation example, we employ iterative decoding and assume assignment rates of 0.7 and 0.3, an erasure probability of 0.8 for Channel 1, the corresponding average erasure probability, and code lengths n = 16 384 and 100 000.

B. Turbo Code Ensemble Over Parallel AWGN Channels

Thus, after some simple derivations, the MSF region (41) can be rewritten as (56) at the bottom of the page, where the threshold denotes the average erasure probability bound.4 It is easy to see that the largest such bound is given by (57).

We study an example turbo code transmitted over two parallel AWGN channels. The turbo encoder consists of two recursive systematic convolutional encoders connected in parallel through a pseudorandom interleaver, yielding an overall rate of R = 1/3. The component code transfer functions are
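The transfer functions themselves do not survive in this rendering. Purely as an illustration of the parallel turbo structure, the sketch below uses the common (1, 5/7) octal recursive systematic convolutional component; this choice, and all names, are our assumptions, not necessarily the component code used in the paper.

```python
import random

def rsc_parity(bits):
    # Memory-2 recursive systematic convolutional code with generators
    # (1, 5/7) in octal: feedback 7 = 1 + D + D^2, feedforward 5 = 1 + D^2.
    s1 = s2 = 0
    parity = []
    for b in bits:
        a = b ^ s1 ^ s2        # recursive (feedback) bit
        parity.append(a ^ s2)  # parity output bit
        s1, s2 = a, s1
    return parity

def turbo_encode(bits, perm):
    # Parallel concatenation: systematic bits, parity of encoder 1, and
    # parity of encoder 2 on the interleaved bits -> overall rate 1/3.
    return bits + rsc_parity(bits) + rsc_parity([bits[i] for i in perm])

random.seed(0)
msg = [random.randint(0, 1) for _ in range(8)]
perm = list(range(len(msg)))
random.shuffle(perm)  # pseudorandom interleaver
codeword = turbo_encode(msg, perm)
print(len(codeword) == 3 * len(msg))  # rate 1/3
```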

(57)

Fig. 6 depicts both curves as functions of the normalized weight. Based on (57), we numerically compute the largest average erasure probability bound, as shown in Table II, and compare it to both the UB reliable channel region

4Here, it is not a surprise that the parallel BECs are characterized by a single parameter, the average erasure probability, owing to the simplicity of the BEC. In the more general case, e.g., the AWGN channel, parallel channels cannot be characterized by such a simple average channel parameter, as illustrated in Section V-B.

We have computed the AWE by applying the technique of [38] for the given code length. Fig. 8 compares the UB, SS, MSF, and LMSF reliable channel regions for two parallel AWGN channels with given assignment rates. Noise boundaries are a function of the received SNRs, obtained using the approach from Section IV-F. Fig. 8 illustrates the relative size of the four regions. Figs. 9 and 10 compare the UB and the LMSF reliable channel regions with simulation-based results (i.e., the received SNR region) corresponding to a target FER, where we

(56)


Fig. 8. Noise boundary behavior of turbo codes over two AWGN channels (dashed line: UB boundary; dash–dot line: SS boundary; solid-circle line: MSF boundary with a fixed partition (0.4409, 0.0786); solid line: the LMSF boundary; rate R = 1/3 turbo codes with two-component recursive systematic convolutional codes).


Fig. 10. Noise boundaries versus FER performance of turbo codes transmitted over two parallel AWGN channels (rate R = 1/3 turbo codes with two-component recursive systematic convolutional codes and code length n = 11 520).

that the fading channel realization is known at the receiver. Hence, one can, equivalently, study communication over parallel channels. The discrete-time channel model for each channel is given by

(58)

where the noise sequence consists of independent additive Gaussian variables, the average SNR of each channel is fixed, and the channel fading coefficients are modeled as Rayleigh random variables. The instantaneous received symbol SNR for each channel is (59)
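The model (58)–(59) can be sampled directly: for Rayleigh fading with unit-mean power, the channel power |h|^2 is exponentially distributed, and the instantaneous SNR is the average SNR scaled by that power. A minimal sketch (names and example SNR values are ours):

```python
import math
import random

def channel_power(rng):
    # |h|^2 for a Rayleigh-fading coefficient h with E[|h|^2] = 1:
    # |h|^2 is exponentially distributed with mean 1.
    re = rng.gauss(0.0, math.sqrt(0.5))
    im = rng.gauss(0.0, math.sqrt(0.5))
    return re * re + im * im

def instantaneous_snrs(avg_snrs, rng):
    # Instantaneous received symbol SNR per slot, as in (59):
    # SNR_l = (average SNR of Channel l) * (channel power of Channel l).
    return [s * channel_power(rng) for s in avg_snrs]

rng = random.Random(1)
avg = [2.0, 2.0]  # example average SNRs for the two slots
draws = [instantaneous_snrs(avg, rng) for _ in range(100000)]
# Sanity check: E[|h|^2] = 1, so the empirical mean should be near avg[0].
mean0 = sum(d[0] for d in draws) / len(draws)
print(abs(mean0 - avg[0]) < 0.1)
```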

Fig. 9. Noise boundaries versus FER performance of turbo codes transmitted over two parallel AWGN channels (rate R = 1/3 turbo codes with two-component recursive systematic convolutional codes and code length n = 1152).

employ iterative decoding and assume the given assignment rates and codeword lengths. These two examples illustrate that the derived ML reliable channel regions agree well with simulation results based on iterative decoding. Because iterative decoding is suboptimal, simulation curves fall between the UB and the LMSF noise boundaries in most cases.

where the squared fading coefficient is the “channel power.” Note that the channel power is a random variable that remains invariant during each transmission slot, and the word error probability is a function of the set of channel powers. The expected decoding error probability averaged over channel states can now be bounded as in (60)–(62).

C. Application to Block-Fading Gaussian Channels

where the two averages are the instantaneous Bhattacharyya noise parameter and the channel mutual information averaged over the slots, and

We study the error performance of good code ensembles transmitted over block-fading channels based on MSF reliable channel regions. Under the block-fading channel model (see Example 3), each source message is encoded, (randomly) interleaved, and equally partitioned into slots. We assume

denotes the MSF reliable channel region with a given weight partition of the code ensemble . Note that the first term of



VI. CONCLUSION

Fig. 11. Error performance of turbo codes over a block-fading Gaussian channel (R = 1/5 turbo codes with two-component recursive systematic convolutional codes; the upper bound is derived based on an MSF boundary with a fixed partition (0.234, 0.0513)).

(62) denotes the (nonnegligible) probability that the set of instantaneous channel parameters falls outside the reliable channel region. Theorem 5 implies that the second term approaches zero as the code length increases. Thus, for large code length, the typical error is caused by the outage event that the instantaneous channel “cannot support” the transmission of the code ensemble, i.e., (63). For the random binary code ensemble, the boundary of the largest reliable channel region is exactly the average instantaneous channel mutual information. It is not a surprise that the expected error probability for this ensemble is equal to the outage probability when the code length is large enough. Fig. 11 illustrates the average word-error probability of turbo codes transmitted over a block-fading Gaussian channel. The turbo encoder consists of recursive convolutional encoders connected in parallel through a pseudorandom interleaver. The component code transfer functions are

In this paper, we study binary linear codes whose transmission takes place over a set of (independent) parallel channels. To evaluate the word error probability performance, we modify the 1961 Gallager bound [24] for parallel channels. In order to make the calculation of SWEs feasible, we assume that codeword bits are randomly assigned to the parallel channels. The random assignment technique allows for deriving the average parallel-channel Gallager bound based solely on code WEs. As particular cases of this bound, we derive generalizations of the UB, SF, SS, and MSF bounds for parallel channels, where the MSF bound combines the UB and SF bounds. Average parallel-channel bounds are applied to ensembles of good codes [1] whose performance in a single-channel model is characterized by a threshold behavior described by a single parameter (e.g., RA codes and turbo codes). For a given good code ensemble, we investigate reliable channel regions which ensure reliable communications under ML decoding. Average parallel-channel bounds allow for describing reliable channel regions using simple constraint functions of channel and code parameters. In particular, the UB reliable channel region is characterized by the average Bhattacharyya noise parameter of the parallel channels and the UB noise threshold of the code ensemble; the SS reliable region for the parallel Gaussian channel is characterized by the component channel SNRs and the weight spectrum of the code ensemble; the SF region is characterized by the average channel mutual information and the code SF distance; and the MSF reliable channel region is described by the average Bhattacharyya noise parameter and the average mutual information of the parallel channels, as well as the UB noise threshold and the SF distance of the respective code restrictions defined on a two-set code weight partition. We study two examples: RA codes for parallel BECs and turbo codes for parallel AWGN channels.
In both cases, reliable channel regions are numerically computed. Compared with simulation results based on iterative decoding, the derived reliable channel regions show a good match. Finally, we apply the MSF reliable channel region to communications over a block-fading Gaussian channel and derive an ML outage upper bound. Again, this bound predicts the simulation-based FERs under iterative decoding well.

APPENDIX A
DERIVATION OF LEMMA 1

and

and the interleaver length is given.5 Again, we have computed the AWE. We compare the simulation results with the upper bound (63) based on an MSF reliable channel region by choosing the weight spectrum partition and by using (39) and (40). We observe a good match between simulations based on iterative decoding and the ML outage upper bound (63). 5Here, the interleaver block length is typically specified by a wireless standard; e.g., see [39].
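The outage characterization (63) can be estimated by Monte Carlo simulation. The sketch below is a simplification under stated assumptions: the per-slot mutual information uses the Gaussian-input proxy log2(1 + SNR) rather than the true binary-input AWGN mutual information, and all names are ours.

```python
import math
import random

def outage_probability(rate, avg_snrs, trials=100000, seed=7):
    """Monte Carlo estimate of the outage event in (63):
    Pr{ average instantaneous mutual information < rate }.
    Rayleigh fading: |h|^2 is exponential with mean 1; the per-slot
    mutual information uses the proxy log2(1 + SNR)."""
    rng = random.Random(seed)
    outages = 0
    for _ in range(trials):
        mi = sum(math.log2(1.0 + s * rng.expovariate(1.0)) for s in avg_snrs)
        mi /= len(avg_snrs)  # average over the fading slots
        outages += mi < rate
    return outages / trials

# Example: two slots, average SNR 0 dB each, code rate 0.2 bits/use.
print(outage_probability(0.2, [1.0, 1.0]))
```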

In this appendix, we derive the general parallel-channel Gallager bound (Lemma 1) based on ASWEs for a given assignment rule. The proof follows Gallager’s bounding technique [24]. Proof: Consider a binary linear block code of length n. Let the transmitted codeword and the received sequence be fixed, and define the discrepancy [24] between an arbitrary codeword and the received sequence as (64)


where the first factor is the conditional transition probability of the channel and the second is an arbitrary function that is positive whenever the transition probability is positive. The ML decoding word error probability can be bounded as

(65) where each term is the ML decoding word error probability when the corresponding codeword is transmitted. Define the subset of received sequences such that


We restrict the function to have a similar product form

(73)

where each factor is a function of the channel index and is symmetric as a function of the channel output. By (67)–(73), the optimized parameter allows us to rewrite (70) as

(66) where the threshold is an arbitrary real number. We split the right-hand side of (65) into two terms as

(74)

(67) where

where

(68) and

can be bounded as

Thus, we can bound the second term as

(69) The Chernoff bound implies that

and (70)

(75)

Hence, the average ML decoding word error probability over the code ensemble is (71). Now, consider a codeword transmitted over the parallel, binary-input, output-symmetric, and memoryless channels. The conditional transition probability of the channel is given by

and (72)

(76)



APPENDIX B
DERIVATION OF THEOREM 1

Note that, for given subcodeword lengths, the ASWE is binomially distributed as shown in (79) at the bottom of the page, and

where

Here, we derive the parallel-channel Gallager bound averaged over random codeword-symbol-to-channel assignments.

(80)

Proof: We consider a codeword of given Hamming weight transmitted over parallel channels. A random assignment rule assigns each bit of the codeword independently to each channel with its assignment probability. Hence, we obtain subcodewords corresponding to each of the parallel channels. Let random variables represent the length and the Hamming weight of each subcodeword. Let

Hence, the average parallel-channel Gallager bound over all possible assignments can be written as (81), also at the bottom of the page. Let now

and the joint probability mass function of the random vector of subcodeword lengths is now given by

(77)

where the subcodeword lengths sum to the code length. For a given set of lengths, the conditional probability mass function of the subcodeword weights can be written as (82)

for

and

(78)

After optimizing the parameter in (82), we finally obtain the bound (83), also at the bottom of the page.
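The random-assignment distributions (77) and (79) can be checked empirically: subcodeword lengths are multinomial in the assignment rates, and, marginally, the weight sent to each channel is binomial in the codeword weight. A sampling sketch (names and parameter values are ours):

```python
import random

def random_assign(codeword, rates, rng):
    # Assign each codeword bit independently to channel j with probability
    # rates[j]; return the per-channel subcodeword lengths and weights.
    k = len(rates)
    lengths = [0] * k
    weights = [0] * k
    for bit in codeword:
        j = rng.choices(range(k), weights=rates)[0]
        lengths[j] += 1
        weights[j] += bit
    return lengths, weights

rng = random.Random(3)
n, h, rates = 200, 60, [0.7, 0.3]
codeword = [1] * h + [0] * (n - h)

# Marginally, the weight assigned to Channel 1 is Binomial(h, rates[0]),
# consistent with the binomial form in (79): mean h * a1.
samples = [random_assign(codeword, rates, rng)[1][0] for _ in range(5000)]
mean = sum(samples) / len(samples)
print(abs(mean - h * rates[0]) < 0.5)
```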

(79) otherwise

(81)

and (83)


APPENDIX C
DERIVATION OF THEOREM 3

4) Let the optimal argument maximizing the function be denoted; then

Proof

(89)

We can rewrite the SS parallel-channel bound (24) as (90)

and

and

(84)

where

where the quantities are given by Proposition 2, item 3. Proof: (Proposition 2) 1) Since the function is concave, (86) and

imply that (85). 2) Again, since the function is concave, if (88) has a solution, it is the unique solution that maximizes the function. Otherwise, if (88) has no solution, the optimum must lie at an endpoint since

Let

and

(86)

3) Note that, by continuity, there exists a set such that

Theorem 3 is based on the following propositions.

Proposition 1: The objective defined above is a concave function of its argument.

Proof: (Proposition 1) Note that

4) Now

(91) (87)

. Hence,

Note that

is a concave function of .

Thus,

Proposition 2: If the stated condition holds, then we have the following. 1) 2) The optimal value which maximizes the function is either an endpoint or the solution of

then we have .

for i.e., (92)

3) There exists a set of Bhattacharyya noise parameters such that

(88)

where the final step is due to Proposition 2, item 1. By applying Proposition 2, item 2, the optimum is either an endpoint or the solution to

for

(93)



Now, since

Let a quantity denote the right-hand side of (96). Using the assumption, it is bounded as

(97) and is increasing in

since

we can rewrite (93) as

(98)

(94) where the middle step follows from the stated condition. Hence, we have the desired result.

In contrast, the left-hand side of (96) is a continuous decreasing function of its argument. Therefore, (96) is feasible. Moreover, since the objective is concave in its argument, the solution is unique, and it maximizes the function. Thus, (97) implies the desired result. Proposition 4: For any

Proposition 3: Let

, if (99)

and

there exist , for

such that

, denote the received SNR of (100)

Channel ,

Proof: (Proposition 4) We consider the following two cases. Case 1: Proposition 2, items 3 and 4, imply that there exists a point such that

and

If

, then (95)

Proof: (Proposition 3) By applying the Cauchy–Schwarz inequality, we have

(101) The condition

implies that (102)

where

Hence, we claim that the function is concave. Now, we check the following equation:

Case 2: In this case, we choose the parameter accordingly; Proposition 3 implies (103)

i.e., Let (96)

. Clearly


Now, items 3 and 4 of Proposition 2 imply the following result: there exists a point such that


APPENDIX D
DERIVATION OF LEMMA 2

Here, we derive the SF parallel-channel bound as a special case of the average parallel-channel Gallager bound (15).

(104)

Proof The bound (15) can be rewritten as (109) at the bottom of the page. Let

Note that

and

(105)

Hence (104) becomes

and

(110)

(106) we now have

where

Finally, let

(111) (107)

Combining (102) and (106), we prove Proposition 4. Now, Proposition 4 implies that there exists a point such that

and (112), also at the bottom of the page. Substituting (112) and (111) in (109), we obtain

Hence, we can bound (84) as

(113)

(108) where we get (114), also at the bottom of the page. Combining (9) and (108), we obtain the desired result

(115)

(109)

(112)

(114)



APPENDIX E
DERIVATIONS OF LEMMA 3 AND THEOREM 5

In this appendix, we derive MSF parallel-channel bounds and reliable channel regions. First, we prove Lemma 3.

Following the approach from Appendix B, we have (120) and (121), both at the bottom of the page, where

Proof of Lemma 3: By using an approach similar to the one used in the proof of Lemma 1, we consider a binary linear block code. Let the transmitted codeword and the received sequence be fixed. The ML decoding word error probability can be bounded as

(116)

By setting the parameter appropriately, we have (122)

Let now

where the estimate is the output of the ML decoder when the codeword is transmitted. Let the Hamming distance between sequences be denoted, and let the two corresponding ML metrics be defined as

and

(123)

then

and

(124)

(117) where

Then, (116) can be rewritten as (118)

(125) Thus, by combining (122) and (124), we have the MSF bound

(119) where

the two sets are any two mutually exclusive subsets whose union is the whole weight set.

for

(126)

Now, we prove Theorem 5 as follows.

(120)

(121)


Proof of Theorem 5: Following the definitions, we have that

(127) Hence, (36) implies that the average ML decoding error probability for a code ensemble can be bounded as

(128)

Since the bound holds for the chosen parameters, (128) can further be upper-bounded as

(129) Condition 1 implies that

where

(130) Since the correction term vanishes as the code length increases, we have that (131)

Note that, following a parallel approach to the one in [33, pp. 141–143], we obtain the following result: if

then, there exists a

1423

such that

and thus, (132). Combining (130)–(132), we have the desired result.

REFERENCES

[1] D. J. C. MacKay, “Good error-correcting codes based on very sparse matrices,” IEEE Trans. Inf. Theory, vol. 45, no. 2, pp. 399–431, Mar. 1999. [2] L. H. Ozarow, S. Shamai (Shitz), and A. D. Wyner, “Information theoretic considerations for cellular mobile radio,” IEEE Trans. Veh. Technol., vol. 43, no. 3, pp. 359–378, May 1994. [3] E. Biglieri, J. Proakis, and S. Shamai (Shitz), “Fading channels: Information-theoretic and communications aspects,” IEEE Trans. Inf. Theory, vol. 44, no. 6, pp. 1895–1911, Oct. 1998. [4] G. Caire and D. Tuninetti, “The throughput of hybrid-ARQ protocols for the Gaussion collision channel,” IEEE Trans. Inf. Theory, vol. 47, no. 4, pp. 1971–1988, Jul. 2001. [5] E. Soljanin, R. Liu, and P. Spasojevic´ , “Hybrid ARQ with random transmission assignments,” in Proc. DIMACS Workshop on Network Information Theory, Piscataway, NJ, Mar. 2003, pp. 321–334. [6] R. McEliece and W. E. Stark, “Channels with block interference,” IEEE Trans. Inf. Theory, vol. IT-30, no. 1, pp. 44–53, Jan. 1984. [7] M. Chiani, “Error probability for block codes over channels with block interference,” IEEE Trans. Inf. Theory, vol. 44, no. 7, pp. 2998–3008, Nov. 1998. [8] E. Malkamäki and H. Leib, “Coded diversity on block-fading channels,” IEEE Trans. Inf. Theory, vol. 45, no. 2, pp. 771–781, Mar. 1999. [9] R. Knopp and P. A. Humblet, “On coding for block fading channels,” IEEE Trans. Inf. Theory, vol. 46, no. 1, pp. 189–205, Jan. 2000. [10] G. Caire, G. Taricco, and E. Biglieri, “Optimum power control over fading channels,” IEEE Trans. Inf. Theory, vol. 45, no. 4, pp. 1468–1489, Jul. 1998. [11] A. Stefanov and E. Erkip, “Cooperative coding for wireless networks,” in Proc. IEEE Conf. Mobile and Wireless Communications Networks 2002, Stockholm, Sweden, Sep. 2002, pp. 90–93. [12] R. Liu, P. Spasojevic´ , and E. Soljanin, “User cooperation with punctured turbo codes,” in Proc. 2003 Allerton Conf. Communication, Control, and Computing, Monticello, IL, Oct. 2003, pp. 
1690–1699. [13] D. Divsalar, “A Simple Tight Bound on Error Probability of Block Codes with Application to Turbo Codes,” Jet Propulsion Lab, CIT, Pasadena, CA, Tech. Rep. TMO 42-139, 1999. [14] S. Aji, H. Jin, A. Khandekar, R. J. McEliece, and D. J. C. Mackay, “BSC thresholds for code ensembles based on ’typical pairs’ decoding,” in Proc. IMA Workshop on Codes, Systems and Graphical Models, New York, Aug. 1999, pp. 195–210. [15] T. Richardson and R. Urbanke, “Thresholds for turbo codes,” in Proc. IEEE Int. Symp. Information Theory 2000, Sorrento, Italy, Jun. 2000, p. 317. [16] H. Jin and R. J. McEliece, “Coding theorems for turbo code ensembles,” IEEE Trans. Inf. Theory, vol. 48, no. 6, pp. 1451–1461, Jun. 2002. [17] L. Bazzi, T. Richardson, and R. Urbanke, “Exact thresholds and optimal codes for the binary symmetric channel and Gallagers decoding algorithm A,” IEEE Trans. Inf. Theory, vol. 50, no. 9, pp. 2010–2021, Sep. 2004. [18] I. Sason and S. Shamai (Shitz), “On improved bounds on the decoding error probability of block codes over interleaved fading channels, with applications to turbo-like codes,” IEEE Trans. Inf. Theory, vol. 47, no. 6, pp. 2275–2299, Sep. 2001. [19] R. Liu, P. Spasojevic´ , and E. Soljanin, “Punctured turbo code ensembles,” in Proc. IEEE Information Theory Workshop 2003, Paris, France, Mar./Apr. 2003, pp. 249–252. [20] , “On the role of puncturing in hybrid ARQ schemes,” in Proc. IEEE Int. Symp. Information Theory , Yokohama, Japan, Jun./Jul. 2003, p. 449. [21] C. Berrou, A. Glavieux, and P. Thitimajshima, “Near Shannon limit error-correcting coding and decoding: Turbo-codes,” in Proc. IEEE Int. Conf. Communications, Geneva, Switzerland, May 1993, pp. 1064–1070. [22] D. Divsalar, H. Jin, and R. McEliece, “Coding theorems for ’turbo-like’ codes,” in Proc. 1998 Allerton Conf. Communication, Control, and Computing, Monticello, IL, Sep. 1998, pp. 201–210. [23] R. Liu, P. Spasojevic´ , and E. 
Soljanin, “On the weight spectrum of good linear binary codes,” IEEE Trans. Inf. Theory, vol. 51, no. 12, pp. 4369–4373, Dec. 2005. [24] R. G. Gallager, Low-Density Parity-Check Codes. Cambridge, MA: MIT Press, 1963. [25] N. Kahale and R. Urbanke, “On the minimum distance of parallel and serially concatenated codes,” in Proc. IEEE Int. Symp. Information Theory, Cambridge, MA, Aug. 1998, p. 31. [26] M. Breiling, “A logarithmic upper bound on the minimum distance of turbo codes,” IEEE Trans. Inf. Theory, vol. 50, no. 8, pp. 1692–1710, Aug. 2004.


[27] D. Truhachev, M. Lentmaier, and K. S. Zigangirov, “Some results concerning design and decoding of turbo-codes,” Probl. Pered. Inform., vol. 37, pp. 190–205, Jul./Sep. 2001. [28] L. Bazzi, M. Mahdian, and D. Spielman, “The minimum distance of turbo-like codes,” IEEE Trans. Inf. Theory. [Online]. Available: http://math.mit.edu/ spielman, submitted for publication. [29] J. Boutros and G. Zémor, “Interleavers for turbo codes that yield a minimum distance growing with blocklength,” in Proc. IEEE Int. Symp. Information Theory, Chicago, IL, Jun./Jul. 2004, p. 55. [30] S. Shamai (Shitz) and I. Sason, “Variations on the Gallager bounds, connections, and applications,” IEEE Trans. Inf. Theory, vol. 48, no. 12, pp. 3029–3051, Dec. 2002. [31] A. J. Viterbi and J. K. Omura, Principles of Digital Communication and Coding. New York: McGraw-Hill, 1979. [32] N. Shulman and M. Feder, “Random coding techniques for nonrandom codes,” IEEE Trans. Inf. Theory, vol. 45, no. 6, pp. 2101–2104, Sep. 1999. [33] R. G. Gallager, Information Theory and Reliable Communication. New York: Wiley, 1968.


[34] I. Sason, ˙I. E. Telatar, and R. Urbanke, “On the asymptotic input-output weight distributions and thresholds of convolutional and turbo-like encoders,” IEEE Trans. Inf. Theory, vol. 48, no. 12, pp. 3052–3061, Dec. 2002. [35] G. Miller and D. Burshtein, “Bounds on the maximum-likelihood decoding error probability of low-density parity-check codes,” IEEE Trans. Inf. Theory, vol. 47, no. 7, pp. 2696–2710, Nov. 2001. [36] H. Jin and R. McEliece, “RA codes achieve AWGN channel capacity,” in Proc. 1999 Conf. Applied Algebra, Algebraic Algorithms and ErrorCorrecting Codes (AAECC-13), Honolulu, HI, Nov. 1999, pp. 10–18. [37] R. Liu, P. Spasojevic´ , and E. Soljanin, “A throughput analysis of incremental redundancy hybrid ARQ schemes with turbo codes,” in Proc. Conf. Information Sciences and Systems, Princeton, NJ, Mar. 17–19, 2004, pp. 726–731. [38] D. Divsalar and F. Pollara, “Weight Distributions for Turbo Codes using Random and Nonrandom Permutations,” Jet Propulsion Lab, CIT, Pasadena, CA, Tech. Rep. TMO 42-122, 1995. [39] Physical Layer Standard for cdma2000 Spread Spectrum Systems (Revision C), 3GPP2 Std. C.S0002-C, 2004.