On the Distribution of Low-Weight Codewords for Turbo Codes

Tom Richardson∗        Rüdiger Urbanke†

∗ Flarion Technologies, Bedminster, NJ, USA-07921, email: [email protected]
† EPFL (Lausanne), CH-1015, email: [email protected]

We determine the generating function counting the asymptotic number of (minimal) codewords of low weight for standard parallel concatenated (turbo) code ensembles. We then show that the number of minimal codewords of weight up to some fixed constant follows, in the limit of large blocklengths, a Poisson distribution. In analogy to the standard random graph model, we conjecture that this statement remains true as long as we look at codewords of weight O((log n)^{1/4}), where n is the blocklength. We then show that parallel concatenated turbo code ensembles have the same error-floor behavior as standard low-density parity-check ensembles LDPC(n, λ, ρ) with λ'(0)ρ'(1) > 1. Using this analogy we conjecture a compact form of the stability condition for parallel concatenated turbo code ensembles. This condition agrees with the few examples known in the literature. It has the pleasing interpretation of characterizing the channel value at which the union bound applied to the asymptotic error-floor expression diverges.

1 Introduction

As introduced by Berrou et al. in their landmark paper [4], a standard parallel concatenated turbo code is defined as follows. Fix a binary rational function G(D) = p(D)/q(D) of degree m with q_0 = 1 and a length n. Further, let π = (π^[1], π^[2]), where π^[i] : {1, ..., n + m} → {1, ..., n + m}, 1 ≤ i ≤ 2, is a permutation on n + m letters which fixes the last m letters. Let us denote the encoding map associated to a rational function G(D) by γ. More precisely, if x = (x_1, ..., x_n, 0, ..., 0) denotes the input of length n + m to the filter G(D) (the last m positions being zero), then the corresponding output is γ(x), where we assume that the feedback is removed in the last m steps. Using this notation, the code P = P(G, n, π) can be defined as

P(G, n, π) := {(x^[s], x^[p1], x^[p2]) = (x, γ(π^[1](x)), γ(π^[2](x))) : x = (x_1, ..., x_n, 0, ..., 0), x_i ∈ GF(2)},

where the last m components of x are zero.

In the sequel we call x^[s] the systematic branch and x^[pi], 1 ≤ i ≤ 2, the i-th parity branch. For fixed G and n, let P(G, n) denote the ensemble of codes generated by letting each component of π vary over all permutations on n + m letters (fixing the last m letters) and endowing this set with a uniform probability distribution. In the sequel we let P = P(G, n, π) be a random code chosen uniformly from P(G, n). The "natural" rate of such a code is one-third, since there is one information stream but in total three transmitted streams (we ignore here the effect of the m appended zeros on the rate, which vanishes like Θ(1/n)). Often one is interested in punctured turbo codes in order to adjust the rate. E.g., if we puncture every second bit in both x^[p1] and x^[p2] then we get a rate one-half code.



More generally, puncturing allows us to realize any rate 1/3 ≤ r ≤ 1. From the point of view of analysis, random puncturing is particularly appealing: to achieve rate r, puncture each bit of each parity stream with probability (3r − 1)/(2r). This gives rise to an ensemble of punctured codes whose average rate is r. Further, by standard concentration results we know that most elements in this ensemble have rate r ± O(1/√n). We will denote a punctured code of design rate r by P(G, n, π, r), where the exact nature of the puncturing will be made clear from the context. Note also that we can always puncture the m extra systematic bits appended to the n information bits, since they are known to be zero. The corresponding ensemble is denoted by P(G, n, r).

In this short note we are interested in the following two questions: (i) what is the distribution of the number of codewords of small weight, and (ii) how is this distribution connected to the stability of the system? The question of the weight distribution has of course been studied in depth by a large set of authors; we mention only a small subset of the developments. Probably the most important step was the realization by Benedetto and Montorsi that, although the weight distribution of individual codes is hard to determine, the average weight distribution of the ensemble is relatively easy to compute [3]. Much of what we discuss in this paper has been a topic of investigation in [3]. We will be able to add somewhat to the understanding of the weight distribution by not only giving a compact representation of the average but also by describing the actual distribution. Similar concepts were discussed around the same time by Perez, Seghers and Costello in [14]. The average weight distribution was then used by a large number of authors to derive upper bounds on the performance of maximum-likelihood decoders; we only cite Sason and Shamai [16] and the extensive list of references therein. The question relating to the minimum distance was first addressed by Kahale and Urbanke in a probabilistic setting [10]. The first worst-case upper bound was the O(√n) bound proposed by Breiling and Huber [6]. It has since been improved to an O(log n) bound by several authors [1, 2, 7].
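As a quick illustration of the random puncturing rule above, here is a minimal Monte Carlo sketch (plain Python; the function name and parameters are ours, not the paper's) confirming that puncturing each parity bit with probability (3r − 1)/(2r) yields an ensemble whose average rate is r:

```python
# Sketch: puncturing each parity bit with probability (3r - 1)/(2r)
# yields an ensemble whose average rate is r (here r = 1/2).
import random

def empirical_rate(n, r, trials=2000):
    p = (3 * r - 1) / (2 * r)         # puncturing probability per parity bit
    total = 0.0
    for _ in range(trials):
        # two parity streams of n bits each; systematic bits are never punctured
        kept = sum(random.random() > p for _ in range(2 * n))
        total += n / (n + kept)       # rate = information bits / transmitted bits
    return total / trials

print(empirical_rate(n=1000, r=0.5))  # ~0.5, fluctuations of order 1/sqrt(n)
```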

2 A Short Detour about Weight Distributions and Detours

We start with a reminder of how to compute generating functions of the (input-output) weight distribution as well as the detour generating function of convolutional codes. Let C(G, n) denote the convolutional code of input length n defined by the binary rational function G. Let c_{i,o,n} count the number of codewords of input weight i and output weight o; i.e., i counts the weight of the information bits and o counts the weight of the parity bits, so that i + o is the (regular) weight of a codeword. Define the input-output generating function as

C(x, y, z) := ∑_{i,o,n} c_{i,o,n} x^i y^o z^n.

Let us also define the regular weight distribution of the code C(G, n),

C(x, z) := C(x, y = x, z) = ∑_{w,n} c_{w,n} x^w z^n,

with coefficients c_{w,n} := ∑_{i,o: i+o=w} c_{i,o,n}.

Example 1 [Running Examples] In the sequel we will use as running examples the unpunctured ensemble P(G = 7/5, n, r = 1/3) as well as the punctured ensemble P(G = 17/31, n, r = 1/2) with alternating puncturing. Here, G = 7/5 refers to the rational function G(D) = (1 + D + D^2)/(1 + D^2), and G = 17/31 is a shorthand for G(D) = (1 + D^4)/(1 + D + D^2 + D^3 + D^4). □


The computation of C(x, y, z) can be carried out by the transfer matrix method. Encode the effect of the state transitions at each step in matrix form as

                 (00)  (10)  (01)  (11)
    M(x, y) = (00) [  1    xy    0     0  ]
              (10) [  0    0     y     x  ]
              (01) [ xy    1     0     0  ]
              (11) [  0    0     x     y  ].

More precisely, the rows of the matrix are associated to the current state whereas the columns are associated to the state after the transition. E.g., for our example M_{(00),(10)} = xy, since the transition from state (00) to state (10) requires an input of one and results in a parity bit of one as well. This matrix encodes the transitions corresponding to paths of length one. More generally, it is not very difficult to verify that paths of length n are encoded by M^n(x, y). In the last m steps we eliminate the feedback; for these m steps the transition matrix, call it M̄(x, y), corresponds to the one of the encoder p/1 (instead of p/q). Codewords start and end in the zero state. Therefore, the codewords of C(G, n) are encoded by [M^n(x, y) M̄^m(x, y)]_{0,0}. It follows that

C(x, y, z) = ∑_n [M^n(x, y) M̄^m(x, y)]_{0,0} z^n = [(I − zM(x, y))^{−1} M̄^m(x, y)]_{0,0},

where in the last step we have used the matrix equivalent of the well-known formula ∑_{n≥0} x^n = 1/(1 − x). To accomplish the above calculations it is convenient to use a standard symbolic algebra system like, e.g., Mathematica.
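As a sketch, the same computation can be done with the open-source sympy package; here for the running example G = 7/5, using the matrix M(x, y) given above. For brevity we drop the termination factor M̄^m(x, y), so the coefficients only approximate those of the terminated code:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Transfer matrix of G = 7/5; rows/columns indexed by states (00), (10), (01), (11).
M = sp.Matrix([
    [1,   x*y, 0, 0],
    [0,   0,   y, x],
    [x*y, 1,   0, 0],
    [0,   0,   x, y],
])

# C(x, y, z) ~ [(I - z M(x, y))^{-1}]_{0,0}   (termination ignored)
C = ((sp.eye(4) - z * M).inv())[0, 0]
print(sp.series(sp.simplify(C), z, 0, 5))  # input-output weights, ordered by length
```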

Example 2 [Weight Distribution of C(G = 7/5, n)] After some algebra, we get that C(x, y, z) is equal to

(1 + z − x^2 z^2 − y(1 + z) + x y^3 (1 + z) − x y^4 z (1 + z) + y^2 z (x^2 + z + x^3 z)) /
(1 − (1 + y)z + (y(1 + y) − x^2(1 + y^3)) z^3 + (x^2 − (1 + x^4) y^2 + x^2 y^4) z^4). □

It is not much harder to deal with punctured codes. Let us start with alternating puncturing patterns.

Example 3 [Weight Distribution of C(G, n) and Alternating Puncturing] Assume that we employ an alternating puncturing pattern. More precisely, we puncture all even parity bit positions (except for the last m positions). It follows that for all odd time instants the transitions are described by M(x, y) but for all even time instants the transitions are given by M(x, y = 1). Therefore (assuming that we only look at codes with even n) the generating function of the input-output weight distribution is

C(x, y, z) = [(I − z^2 M(x, y) M(x, 1))^{−1} M̄^m(x, y)]_{0,0}. □
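A corresponding sympy sketch for alternating puncturing; for illustration we reuse the 4-state matrix of G = 7/5 (the actual running example G = 17/31 has 16 states, but the formula is the same), and termination is again ignored:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
M = lambda w: sp.Matrix([[1, x*w, 0, 0], [0, 0, w, x],
                         [x*w, 1, 0, 0], [0, 0, x, w]])

# Odd steps keep the parity weight (M(x, y)), even steps discard it (M(x, 1)),
# so a pair of steps is encoded by M(x, y) * M(x, 1) and carries z^2.
C_alt = ((sp.eye(4) - z**2 * M(y) * M(1)).inv())[0, 0]
print(sp.series(C_alt, z, 0, 7))
```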

The case of random puncturing is even easier.

Lemma 4 [Weight Distribution of C(G, n) and Random Puncturing] Let C(x, y, z) be the generating function of the input-output weight distribution of the code C(G, n). Let C_α(x, y, z) denote the corresponding generating function for the punctured ensemble in which each parity bit is punctured independently with probability α. Then, with ᾱ := 1 − α,

C_α(x, y, z) = C(x, yᾱ + α, z).
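In sympy, Lemma 4 amounts to a one-line substitution; a sketch on the (unterminated) generating function of G = 7/5:

```python
import sympy as sp

x, y, z, al = sp.symbols('x y z alpha')
M = sp.Matrix([[1, x*y, 0, 0], [0, 0, y, x], [x*y, 1, 0, 0], [0, 0, x, y]])
C = ((sp.eye(4) - z * M).inv())[0, 0]

# Each parity bit survives with probability 1 - alpha (weight y) or is
# punctured with probability alpha (weight 1): y -> y*(1 - alpha) + alpha.
C_alpha = C.subs(y, y * (1 - al) + al)
assert sp.simplify(C_alpha.subs(al, 0) - C) == 0   # alpha = 0 recovers C
print(sp.simplify(C_alpha.subs(al, 1)))            # all parity weight erased
```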


Lemma 5 [Detour Generating Function of G(D)] Consider the binary rational function G(D) and let M(x, y, z) be the corresponding transfer matrix, where x encodes the input weight, y the output weight, and z the length. Recall that a detour is a path which leaves the zero state with its first transition and returns to the zero state exactly once, namely with its last transition (see, e.g., [11]). Let M^•(x, y) be equal to M(x, y, 1) except for entry (0, 0), which we set equal to zero. Let D(x, y) be the generating function counting detours. Then we have

D(x, y) = 1 − 1/[(I − M^•(x, y))^{−1}]_{0,0}.
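A quick sympy check of Lemma 5 for G = 7/5; it reproduces the expansion of the following example and extracts d_2(y):

```python
import sympy as sp

x, y = sp.symbols('x y')
M = sp.Matrix([[1, x*y, 0, 0], [0, 0, y, x], [x*y, 1, 0, 0], [0, 0, x, y]])
Mdot = M.copy()
Mdot[0, 0] = 0                     # M^bullet: zero out the (0,0) entry

D = 1 - 1 / ((sp.eye(4) - Mdot).inv())[0, 0]
print(sp.factor(sp.simplify(D)))   # x^2 y^2 (x^2 + y - y^2) / ((1-y)^2 - x^2)

d2 = sp.simplify(sp.series(D, x, 0, 4).removeO().coeff(x, 2))
print(d2)                          # y^3/(1 - y), possibly in an equivalent form
```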

Example 6 [Detour Generating Function for G = 7/5] Using Lemma 5 we get

D(x, y) = x^2 y^2 (x^2 + y − y^2)/((1 − y)^2 − x^2) = x^2 y^3/(1 − y) + x^4 y^2/(1 − y)^3 + O(x^6). □

It is easy to include puncturing. For random puncturing we have D_α(x, y) = D(x, yᾱ + α). For alternating puncturing, on the other hand, we have to take into account that odd and even positions have different transition matrices, and we get

2(1 − D(x, y)) = 1/[(I − M^•(x, y)M^•(x, 1))^{−1}(I + M^•(x, y))]_{0,0}
               + 1/[(I − M^•(x, 1)M^•(x, y))^{−1}(I + M^•(x, 1))]_{0,0}.    (1)

We are mostly interested in codewords of small weight. Therefore write

D(x, y) := ∑_i d_i(y) x^i.

In particular we are interested in d_2(y). Using first equation (1) and then extracting the term which is quadratic in x, we get d_2(y).

Example 7 [Detour Generating Function for G = 17/31 With Alternating Puncturing] In this case some algebra reveals that

d_2(y) = y^2 (3 + y^2)/(2 − 2y^2). □

Note that in many cases when there is no puncturing d_2(y) has the form

d_2(y) := y^α/(1 − y^β),

where α is the weight of the lowest-weight detour due to inputs of weight two and β is the additional weight picked up in case the two input ones are spaced one extra period further apart. Unfortunately this is not always true. The simplest case is G = (1 + D + D^2)/(1 + D), for which we have

d_2(y) = y^2 (1 + (1 − y)y)/(1 − y).


It was shown in [3] how to compute the expected weight distribution of the concatenated code from the weight distribution of the component codes.

Lemma 8 [Expected Weight Distribution of P(G, n, r)] Let C(x, y, z) := ∑_{i,o,n} c_{i,o,n} x^i y^o z^n denote the generating function of the input-output weight distribution of C(G, n) under puncturing of rate (3r − 1)/(2r). Let P(x, y, z) := ∑_{i,o,n} p_{i,o,n} x^i y^o z^n denote the generating function of the expected input-output weight distribution of P(G, n, r), where the expectation is taken over all elements of the ensemble (i.e., all interleavers and all puncturing patterns if the ensemble is punctured). Then

p_{i,o,n} := (∑_j c_{i,j,n} c_{i,o−j,n}) / binom(n, i).    (2)
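The following sketch (plain Python; the helper names are ours) computes c_{i,o,n} for G = 7/5 by a trellis walk and then applies equation (2). We ignore the m termination steps and merely require the path to return to the zero state, so for small n the numbers differ slightly from the properly terminated, punctured code of the next example:

```python
from collections import defaultdict
from math import comb

def step(state, u):
    """State (w_{t-1}, w_{t-2}) packed in two bits; G(D) = (1+D+D^2)/(1+D^2)."""
    w1, w2 = state >> 1, state & 1
    w = u ^ w2                   # filter recursion w_t = u_t + w_{t-2}
    parity = w ^ w1 ^ w2         # numerator taps 1 + D + D^2
    return (w << 1) | w1, parity

def iow_distribution(n):
    """Map (i, o) -> number of length-n paths from state 0 back to state 0."""
    dist = {0: {(0, 0): 1}}
    for _ in range(n):
        new = defaultdict(lambda: defaultdict(int))
        for s, table in dist.items():
            for u in (0, 1):
                t, par = step(s, u)
                for (i, o), cnt in table.items():
                    new[t][(i + u, o + par)] += cnt
        dist = new
    return dist.get(0, {})

n = 16
c = iow_distribution(n)
# Equation (2): p_{i,o,n} = sum_j c_{i,j,n} c_{i,o-j,n} / binom(n, i),
# where o is the total parity weight contributed by both branches.
p = defaultdict(float)
for (i1, o1), c1 in c.items():
    for (i2, o2), c2 in c.items():
        if i1 == i2:
            p[(i1, o1 + o2)] += c1 * c2 / comb(n, i1)
print(sorted(p.items())[:6])
```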

Example 9 [Expected Weight Distribution for P(G = 7/5, n, r = 1/3)] Consider the ensemble P(G = 7/5, n, r = 1/3). In Example 2 we computed C(x, y, z) for C(G = 7/5, n). Let us compute the first few terms of the associated regular weight distribution for n = 64, 128, and 256 via equation (2):

P(x, z) ≈ ··· + (1 + 0.0005 x^6 + 0.126 x^7 + 2.2255 x^8 + 4.2853 x^9 + 6.34 x^{10} + ···) z^{64}
        + ··· + (1 + 0.0001 x^6 + 0.0627 x^7 + 2.111 x^8 + 4.1337 x^9 + 6.1395 x^{10} + ···) z^{128}
        + ··· + (1 + 0.0313 x^7 + 2.0551 x^8 + 4.0647 x^9 + 6.0622 x^{10} + 8.0525 x^{11} + ···) z^{256} + O(z^{257}). □

The above example suggests that the expected number of low-weight codewords converges to a fixed limit as the blocklength increases. This is indeed true and was shown in [3, 14]. We state their result in a somewhat more compact form.

Lemma 10 [Expected Number of Codewords of Fixed Weight in Asymptotic Limit: Parallel Case] Consider the ensemble P(G, n, r). Let d_2(y) count detours of G(D) with input weight two. Let P(x, y, z) := ∑_{i,o,n} p_{i,o,n} x^i y^o z^n denote the generating function of the input-output weight distribution of the ensemble P(G, n, r) and let P̄(x, y) := ∑_{i,o} p̄_{i,o} x^i y^o be the generating function of the asymptotic input-output weight distribution; more precisely, p̄_{i,o} = lim_{n→∞} p_{i,o,n}. Then

P̄(x, y) = 1/√(1 − 4x^2 d_2^2(y)).    (3)

So far our presentation follows a well-known path. We now come to new results. Let us define minimal codewords; as we will see, they play a special role.

Definition 1 [Minimal Codewords] Consider a binary linear code C. We say that a codeword x ∈ C is minimal if its support does not contain the support of any other (nonzero) codeword.


Lemma 11 [Expected Number of Minimal Codewords in the Asymptotic Limit: Parallel Case] Consider the ensemble P(G, n, r). Let P̄(x, y) denote the generating function of the asymptotic input-output weight distribution of P(G, n, r) and let P̃(x, y) denote the corresponding generating function counting minimal codewords. Then

P̃(x, y) = log(P̄(x, y)) = −(1/2) log(1 − 4x^2 d_2^2(y)).    (4)

Let P̃(x) = P̃(x, y = x) = ∑_w p̃_w x^w. For any d > 0, d ∈ N, if P denotes a random element of P(G, n, r), then

lim_{n→∞} P{d_min(P) > d} = e^{−∑_{w=1}^{d} p̃_w}.

Further, if (W_1, W_2, ..., W_d) denotes the vector of random variables counting the number of minimal codewords of weight 1, ..., d for a random sample P, then the distribution of this vector converges to a vector of independent Poisson-distributed random variables with the given means.

Example 12 [P̄(x, y) and P̃(x, y) for P(G = 7/5, n, r = 1/3)] In Example 6 we determined d_2(y) for this case. Inserting it into equations (3) and (4) we get

P̄(x, y) = 1/√(1 − (2x y^3/(1 − y))^2),    P̃(x, y) = −(1/2) log(1 − (2x y^3/(1 − y))^2).

Let us expand out the first few terms. We get

P̄(x) = 1 + 2x^8 + 4x^9 + 6x^{10} + 8x^{11} + O(x^{12}),    P̃(x) = 2x^8 + 4x^9 + 6x^{10} + 8x^{11} + O(x^{12}).

Note that the first few terms of P̄(x) and P̃(x) agree (apart from the constant term), but they differ as soon as we look at weights starting from twice the minimum distance. If we compare this to the finite-length weight distribution which we computed in Example 9, we see that the convergence to this asymptotic limit is quite fast, so the asymptotic quantities, which can be computed quite easily, are of practical value. From Lemma 11 we know that for randomly chosen elements from this ensemble we have

lim_{n→∞} P{d_min(P) ≥ 8} = 1,
lim_{n→∞} P{d_min(P) ≥ 9} = e^{−2} ≈ 0.135335,
lim_{n→∞} P{d_min(P) ≥ 10} = e^{−6} ≈ 0.00247,
lim_{n→∞} P{d_min(P) ≥ 11} = e^{−12} ≈ 6.14 · 10^{−6}. □
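These expansions and limits are easily reproduced numerically; a sympy sketch, assuming d_2(y) = y^3/(1 − y) from Example 6:

```python
import sympy as sp
from math import exp

x = sp.symbols('x')
d2 = x**3 / (1 - x)                        # d_2(y) evaluated at y = x
Pbar = 1 / sp.sqrt(1 - 4 * x**2 * d2**2)   # equation (3) with y = x
Ptil = sp.log(Pbar)                        # equation (4)

print(sp.series(Pbar, x, 0, 12))  # 1 + 2x^8 + 4x^9 + 6x^10 + 8x^11 + O(x^12)
print(sp.series(Ptil, x, 0, 12))  #     2x^8 + 4x^9 + 6x^10 + 8x^11 + O(x^12)

# Lemma 11: lim P{dmin > d} = exp(-sum_{w <= d} ptilde_w)
coeffs = sp.Poly(sp.series(Ptil, x, 0, 12).removeO(), x).all_coeffs()[::-1]
for d in (8, 9, 10):
    print('P{dmin >=', d + 1, '} ->', exp(-float(sum(coeffs[:d + 1]))))
```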

Where does Lemma 11 come from and how is it proved? The idea is basically the same as in the case of LDPC ensembles, see [9]. Let us quickly retrace the necessary steps. In the realm of LDPC codes and large blocklengths, codewords/stopping sets of small weight are due to cycles in the bipartite graph which involve exclusively degree-two variable nodes. Consider therefore the residual graph induced by the set of degree-two variable nodes. Convert this graph into a standard (non-bipartite) graph as follows: the standard graph has vertices associated to the check nodes; two check nodes in the standard graph are connected if and only if in the bipartite graph there is a variable node which connects the two check nodes via its two emanating edges. Further note that a cycle of length s in the standard graph gives rise to a cycle of length 2s in the bipartite graph. Finally, the induced standard graph is again a random graph, albeit one with a specific prescribed degree distribution.


Counting cycles in such a standard graph is a well-studied problem, see [5]. In particular it is known that the number of cycles of small length converges in the limit of large graphs to a Poisson distribution with a specific mean. This gives rise to the result stated in [9].

For turbo codes we cannot reduce the problem to a well-known problem in standard graph theory, but we can use exactly the same proof techniques. In particular we make use of the following standard fact, see e.g. [5, Theorem 1.23].

Theorem 13 Let μ_1, ..., μ_m be non-negative numbers. For each n ∈ N let X_1(n), ..., X_m(n) be non-negative integer-valued random variables defined on the same space. If for all (r_1, ..., r_m) ∈ N^m we have

lim_{n→∞} [ E[(X_1(n))_{r_1} ··· (X_m(n))_{r_m}] − ∏_{i=1}^{m} μ_i^{r_i} ] = 0,    (5)

where (x)_r := x(x − 1) ··· (x − r + 1), then the random vector (X_1(n), ..., X_m(n)) converges in distribution to a vector of independent Poisson random variables with means (μ_1, ..., μ_m).

To check the factorial moments in (5) for our purpose we proceed as follows. We first check that if we place the various codewords at non-overlapping positions, then the total contribution of such constellations to the (r_1, ..., r_m) factorial moment converges to the desired value ∏_{i=1}^{m} μ_i^{r_i}. We then check that the contributions of constellations which (partially) overlap vanish (like O(1/n)). For the second step to hold it is important that we look at minimal codewords; it would not hold for codewords themselves.

In our derivation we found it easier to first compute the expected number of codewords, or equivalently, the generating function P̄(x, y). From it we can trivially pass to the generating function P̃(x, y) by taking the logarithm. Why is this true? Consider the converse statement. We know that the number of minimal codewords follows a Poisson distribution; let P̃(x, y) be the corresponding generating function. A codeword is composed of an arbitrary number of minimal codewords, and because we are looking at fixed-sized structures within a large (asymptotically infinite) blocklength, these minimal codewords do not overlap. Therefore the weight of a codeword is the sum of the weights of its minimal components. We conclude that

P̄(x, y) = exp(P̃(x, y)) = ∑_k (P̃(x, y))^k / k!,

where the term (P̃(x, y))^k counts all the possibilities of building a codeword from k non-overlapping minimal codewords and the factor 1/k! accounts for the fact that the order in which we pick the components is immaterial. In the containment problem of standard graph theory the convergence to the Poisson distribution has actually been proven for weights of order O((log n)^{1/4}). We expect that such a stronger result should also hold in our case.
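The factorial-moment criterion of Theorem 13 is easy to probe empirically; the following self-contained sketch (inversion sampling, with illustrative parameters μ = 2 and r = 3 of our choosing) checks E[(X)_r] = μ^r for a single Poisson variable:

```python
import math, random

def falling(x, r):
    # falling factorial (x)_r = x (x-1) ... (x-r+1)
    out = 1
    for k in range(r):
        out *= x - k
    return out

def poisson(mu):
    # Knuth's inversion method; fine for small mu.
    L, k, p = math.exp(-mu), 0, 1.0
    while True:
        p *= random.random()
        if p < L:
            return k
        k += 1

mu, r, N = 2.0, 3, 200_000
est = sum(falling(poisson(mu), r) for _ in range(N)) / N
print(est, mu**r)   # the estimate should be close to mu^r = 8
```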

3.1 Stability Condition

For LDPC ensembles the stability condition, see [15], played a fundamental role in their analysis. A corresponding condition can be derived for turbo codes.

Conjecture 1 [Stability Condition for Parallel Concatenated Ensemble] Consider the ensemble P(G, n, r) and let D(x, y) = ∑_i d_i(y) x^i denote the generating function associated to G counting detours.


Assume that transmission takes place over a BMS channel with L-density a(z) and associated Bhattacharyya constant

B(a) := ∫_{−∞}^{∞} a(z) e^{−z/2} dz.

Then the desired fixed point corresponding to correct decoding is stable if and only if

2 B(a) d_2(B(a)) < 1.    (6)

We state the stability condition as a conjecture since we will not present a proof of the above result but take a considerable shortcut by making use of an analogy with standard LDPC ensembles. Let us look at an alternative way of interpreting the stability condition in the setting of LDPC ensembles. It should be emphasized that this derivation is not the one used in [15] to prove the stability condition, so it is currently heuristic in nature. For LDPC ensembles the generating function counting the number of minimal codewords/stopping sets in the asymptotic limit is equal to

P̃(x) = −(1/2) log(1 − λ'(0)ρ'(1) x).

The first investigation into the expected number of low-weight codewords and the related union bound on the error floor was done by Di, Richardson and Urbanke in [8]. A more sophisticated second-moment approach was then employed by Orlitsky, Viswanathan and Zhang in [13], where it was shown that the union bound applied to the above generating function gives the correct error-floor expression at least up to some critical value. Finally, it was shown in [9] that the distribution of the number of minimal codewords converges to a Poisson distribution in the limit of large blocklengths. This result is the exact equivalent of the result stated in equation (4) for the turbo case. Now assume we apply the union bound to this generating function. To be concrete, assume we transmit over the BAWGNC(σ). We then get

P_b = (1/n) ∑_d coef{q(x), x^d} Q(√(2rd E_b/N_0)) + O(1/n^2)
    = (1/(πn)) ∫_0^{π/2} q(e^{−(2r E_b/N_0)/(2 sin^2(θ))}) dθ + O(1/n^2),

where q(x) = x − (1/2) log(1 − λ'(0)ρ'(1) x). Where does this union bound diverge? It is easy to check that it converges exactly as long as B(a)λ'(0)ρ'(1) < 1, which is exactly the stability condition! That the above union bound apparently has meaning up to the stability condition is maybe at first somewhat surprising. In this respect it is interesting to mention the following interpretation. Consider an LDPC ensemble with degree-two variable nodes. If λ'(0)ρ'(1) > 1, as it must be if we want a non-trivial stability condition, then for large enough blocklengths, with high probability a linear fraction of the degree-two variable nodes form a large connected component. Consider a degree-two variable node which takes part in this component and consider the support tree starting from this variable node. Then (λ'(0)ρ'(1))^ℓ is the expected number of leaf nodes of the support tree corresponding to ℓ iterations.
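To make the divergence concrete, the sketch below evaluates the error-floor integral numerically; the value c = λ'(0)ρ'(1) = 2 and the blocklength are purely illustrative, and q(x) is the function as transcribed above:

```python
from math import exp, log, pi, sin
from scipy.integrate import quad

c = 2.0                                    # lambda'(0) * rho'(1), assumed value
q = lambda x: x - 0.5 * log(1 - c * x)     # as defined above

def Pb_floor(ebno, r, n):
    """Union-bound error floor over the BAWGNC; ebno = Eb/N0 (linear scale)."""
    f = lambda t: q(exp(-(2 * r * ebno) / (2 * sin(t) ** 2)))
    val, _ = quad(f, 1e-9, pi / 2)
    return val / (pi * n)

print(Pb_floor(ebno=2.0, r=0.5, n=10_000))
# The integrand blows up once exp(-r * ebno) = B(sigma) reaches 1/c,
# i.e. the bound diverges for Eb/N0 below log(c)/r:
print(log(c) / 0.5)
```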


A very similar computation applies in the turbo case, where minimal codewords of low weight are due to weight-two inputs. The variable node at hand can be the "first" or the "second" such nonzero input, i.e., we can grow the tree "to the right" or "to the left." If we now map the bit which we did not use via the permutation again, we have a choice of how to continue. This gives the factor 2 which we pick up in each half-iteration. Further, we can decide which detour of input weight two we choose at each step; this is described by d_2(y). If we apply the same reasoning to the turbo code case and see where the union bound applied to the generating function diverges, it is easy to check that this point is described by condition (6). Therefore we conjecture this to be the stability condition also in the case of turbo code ensembles.

Example 14 [Stability of P(G = 7/5, r = 1/3)] For the ensemble P(G = 7/5, r = 1/3) we have d_2(y) = y^3/(1 − y). Let x^Stab be the unique positive solution of the equation

2x d_2(x) = 2x^4/(1 − x) = 1.

We have x^Stab ≈ 0.647799. If we assume transmission over the BEC(ε) we have B(ε) = ε, so that ε^Stab ≈ 0.647799. It turns out that for this code the stability condition determines the threshold. The same is true if we consider transmission over the BAWGNC(σ), for which B(σ) = e^{−1/(2σ^2)}. Then we have σ^Stab ≈ 1.07313 and again σ^Stab = σ^IT. □
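The numbers in this example are easily reproduced; a sketch using scipy's root finder (Example 15 below works the same way with its d_2(y)):

```python
from math import log, sqrt
from scipy.optimize import brentq

d2 = lambda x: x**3 / (1 - x)             # detour generating function, G = 7/5
x_stab = brentq(lambda x: 2 * x * d2(x) - 1, 0.0, 0.99)
print(x_stab)                             # ~0.647799

# BEC(eps): B(eps) = eps, hence eps_stab = x_stab.
# BAWGNC(sigma): B(sigma) = exp(-1/(2 sigma^2)), hence
sigma_stab = sqrt(-1 / (2 * log(x_stab)))
print(sigma_stab)                         # ~1.07313
```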

Example 15 [Stability of P(G = 17/31, r = 1/2) and Alternating Puncturing] For the ensemble P(G = 17/31, r = 1/2) with alternating puncturing we know from Example 7 that d_2(y) = y^2(3 + y^2)/(2 − 2y^2). Let x^Stab be the unique positive solution of the equation 2x d_2(x) = 1. We have x^Stab ≈ 0.582645. If we assume transmission over the BEC(ε) we have B(ε) = ε, so that ε^Stab ≈ 0.582645. If we consider transmission over the BAWGNC(σ), then we have σ^Stab ≈ 0.962093. In both cases the stability condition differs from the threshold. □

The stability condition of some specific turbo codes was first computed by Montanari and Sourlas [12]. The results listed in [12] are in perfect agreement with our conjecture.

4 Outlook

It is important to note that the same ideas which we applied in this paper to determine the distribution of low-weight codewords for turbo code ensembles are applicable also to other structures, as long as we are looking for constant-sized structures contained in large graphs. E.g., it would be natural and potentially fruitful to investigate the equivalent statement for pseudo-codewords.

References

[1] L. Bazzi, M. Mahdian, and D. J. Spielman, The minimum distance of turbo-like codes, IEEE Trans. Inform. Theory, (2004). Accepted for publication.

[2] (2004), p. ??

[3] S. Benedetto and G. Montorsi, Unveiling turbo codes: some results on parallel concatenated coding schemes, IEEE Trans. Inform. Theory, 42 (1996), pp. 409–428.

[4] C. Berrou, A. Glavieux, and P. Thitimajshima, Near Shannon limit error-correcting coding and decoding, in Proceedings of ICC'93, Geneva, Switzerland, May 1993, pp. 1064–1070.

[5] B. Bollobás, Random Graphs, Cambridge University Press, 2001.

[6] M. Breiling and J. B. Huber, Combinatorial analysis of the minimum distance of turbo codes, IEEE Trans. Inform. Theory, 47 (2001), pp. 2737–2750.

[7] ——, A logarithmic upper bound on the minimum distance of turbo codes, IEEE Trans. Inform. Theory, 50 (2004), pp. 1692–1710.

[8] C. Di, T. Richardson, and R. Urbanke, Weight distribution of iterative coding systems: How deviant can you be?, in IEEE International Symposium on Information Theory, Washington, D.C., June 2001, p. 50.

[9] ——, Weight distribution of low-density parity-check codes, IEEE Trans. Inform. Theory, (2004). Submitted.

[10] N. Kahale and R. Urbanke, On the minimum distance of parallel and serially concatenated codes, in IEEE International Symposium on Information Theory, Boston, MA, Aug. 16–21, 1998, p. 31.

[11] R. McEliece, How to compute weight enumerators for convolutional codes, in Communications and Coding (P. G. Farrell 60th birthday celebration), M. Darnell and B. Honary, eds., John Wiley & Sons, New York, 1998, pp. 121–141.

[12] A. Montanari and N. Sourlas, Statistical mechanics and turbo codes, in Proceedings of the International Symposium on Turbo Codes and Related Topics, Brest, France, Sept. 2000, pp. 63–66. Available at arXiv:cond-mat/9909018v2.

[13] A. Orlitsky, K. Viswanathan, and J. Zhang, Stopping set distribution of LDPC code ensembles, IEEE Trans. Inform. Theory, (2004). Submitted.

[14] L. Perez, J. Seghers, and D. Costello, A distance spectrum interpretation of turbo codes, IEEE Trans. Inform. Theory, 42 (1996), pp. 1698–1709.

[15] T. Richardson and R. Urbanke, Fixed points and stability of density evolution, Communications in Information and Systems, 4 (2004), pp. 103–116.

[16] I. Sason and S. Shamai (Shitz), Improved upper bounds on the ML decoding error probability of parallel and serial concatenated turbo codes via their ensemble distance spectrum, IEEE Trans. Inform. Theory, 46 (2000), pp. 24–47.
