A Two-Stage Iterative Decoding of LDPC Codes for Lowering Error Floors

Jingyu Kang, Li Zhang, Zhi Ding and Shu Lin
Department of Electrical and Computer Engineering
University of California, Davis, CA 95616
Email: {jykang, liszhang, zding, shulin}@ece.ucdavis.edu

Abstract—In iterative decoding of LDPC codes, trapping sets often lead to high error floors. In this work, we propose a two-stage iterative decoding scheme to break trapping sets. Simulation results show that the error floor performance can be significantly improved with this decoding scheme.

I. INTRODUCTION

LDPC codes form a class of error control codes [2]–[4] that approach the Shannon limit [1]. They perform very well under iterative decoding such as the sum-product algorithm (SPA) [3], [4]. However, with iterative decoding, most LDPC codes have a weakness known as the error floor [5], [6]. In many applications, the error floor is required to be lower than a particular level so as not to adversely affect the performance of the system; in some communication and data storage systems, this requirement is as low as 10^-12 to 10^-15. To lower the error floor, a great deal of research effort has been spent on finding its causes and on constructing codes that avoid error-floor-inducing structures in the code's Tanner graph [7]. Small stopping sets in the Tanner graph were first found to cause error floors over the binary erasure channel (BEC) [8], and they have a similar effect over the additive white Gaussian noise (AWGN) channel. Accordingly, code construction methods have been proposed to avoid small stopping sets, thereby lowering the error floors to some extent [9], [10]. More recently, trapping sets (or near-codewords) have been found to be the culprit behind the error floors of many LDPC codes over the AWGN channel [5], [6]. For codes with small minimum distances, undetected errors also contribute considerably to high error floors. Since trapping sets generally have complicated combinatorial properties, it is very difficult to cope with them directly in code construction. Instead, some decoder-based strategies have been proposed to lower the error floor caused by trapping sets [11]. However, these methods require prior knowledge of the dominant trapping sets of a particular code at the decoder, which is often difficult to obtain. In this work, we propose a two-stage iterative decoding scheme to break trapping sets. The first stage is a slightly modified iterative decoding. If the first-stage decoding fails due to a trapping set, selective bit-flipping (SBF) along with iterative decoding is used in the second stage.

Simulation results show that this decoder can break most of the trapping sets, thus pushing the error floor down to the level limited by the minimum distance of the code. More importantly, this decoding scheme does not need prior information about the trapping sets, and thus can be used for codes of a wide range of constructions.

II. TRAPPING SETS IN ITERATIVE DECODING

In [5], it is observed that when decoded with the SPA, the error floors of some LDPC codes are dominated by block errors in which only a small number of check-sums are not satisfied; these error patterns are called "near-codewords". Near-codewords are more commonly referred to as trapping sets [6]. A trapping set is a set with a relatively small number of variable nodes such that the induced subgraph has only a small number of odd-degree check nodes. A trapping set with a variable nodes and b odd-degree check nodes is called an (a, b) trapping set. For example, a (6, 2) trapping set is shown in Fig. 1, where the two odd-degree check nodes are c1 and c7. Clearly, if the code bits associated with the a variable nodes are all in error, then the check-sums corresponding to the b odd-degree check nodes will not be satisfied. For convenience, we refer to these nodes as unsatisfied check nodes.

Fig. 1. A (6, 2) trapping set.
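As a concrete illustration of the (a, b) parameters, the short Python sketch below counts the odd-degree check nodes induced by a chosen set of variable nodes, given the code's parity-check matrix. The matrix H_example and the variable-node set used here are hypothetical toy values, not the graph of Fig. 1.

  import numpy as np

  def trapping_set_parameters(H, var_nodes):
      # H: binary parity-check matrix (rows = check nodes, columns = variable nodes)
      # var_nodes: indices of the variable nodes in the candidate set
      induced = H[:, sorted(var_nodes)]          # columns of the chosen variable nodes
      check_degrees = induced.sum(axis=1)        # degree of each check node in the induced subgraph
      b = int(np.sum(check_degrees % 2 == 1))    # check nodes of odd degree
      return len(var_nodes), b                   # the (a, b) parameters

  # Hypothetical toy matrix, not the code of Fig. 1:
  H_example = np.array([[1, 1, 0, 0],
                        [0, 1, 1, 0],
                        [0, 0, 1, 1]])
  print(trapping_set_parameters(H_example, {0, 1, 2}))   # prints (3, 1)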

In iterative decoding, the messages from the satisfied check nodes tend to reinforce the current decoder decisions, while the messages from the unsatisfied check nodes try to change some of them. If, in a decoding iteration, the wrong bits coincide with the variable nodes of a trapping set, the messages from the small number of unsatisfied check nodes are often not sufficient to correct the current wrong bits, and the decoder cannot converge to a codeword even if many more iterations are performed. In other words, the decoder gets trapped.


A conventional iterative decoder declares a decoding failure if it does not converge after a maximum number of iterations. However, if a decoding failure is caused by a trapping set, there are only a relatively small number of wrong bits and unsatisfied check nodes. In this case, there may be a better way to deal with the failure than merely declaring it. In fact, if we can locate some of the wrong bits, flip them, and decode the block again, it is very likely that the decoder will not get trapped again and will thus converge to a correct codeword. This idea leads to our proposed two-stage iterative decoding.

III. THE TWO-STAGE ITERATIVE DECODING

The proposed decoding consists of two stages. In the first stage, conventional iterative decoding is performed. If decoding fails, we decide whether the failure is caused by a trapping set. To do so, an intuitive idea is to locate all the unsatisfied check nodes and see whether there are only a relatively small number of them. Usually, the number of unsatisfied check nodes changes from iteration to iteration. We observe in simulations that this number does not necessarily decrease with more iterations; in many cases it still oscillates even after hundreds of iterations. Therefore, in the decoding of each block, we keep track of the smallest number n_c of unsatisfied check nodes encountered throughout the iterations. We also record the set C of these n_c unsatisfied check nodes and the corresponding hard decision x̂ of the codeword. If decoding fails and n_c is less than a predetermined threshold t, we consider the decoding failure to be caused by a trapping set S, in which C is the set of odd-degree check nodes. The wrong bits in x̂ correspond to the set of variable nodes in S, which is denoted by V. To correct the decoding error caused by S, the decoder enters the second stage.

In the second stage, if we can locate all the variable nodes in V, the decoding is completed. Clearly, knowing only the set C of unsatisfied check nodes, it is impossible to do so in a purely combinatorial way. However, C does contain some information about V: each check node in C is adjacent to at least one variable node in V. To determine V, we define a matching set as follows: for a set of check nodes C in a Tanner graph, a matching set of C is a set of variable nodes such that each of them is adjacent to one and only one check node in C. According to this definition, a matching set of C contains |C| variable nodes, where |C| denotes the cardinality of C. Assuming that no two check nodes in C have any common neighboring variable nodes, we can generate the set L of all the different matching sets of C. Clearly, |L| is on the order of d_c^{|C|}, where d_c is the average degree of the check nodes in C. Note that at least one of the matching sets in L is a subset of V. For example, for the trapping set in Fig. 1, the set {v1, v6} is such a matching set of {c1, c7}.
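Under the stated assumption that no two check nodes in C share a neighboring variable node, L can be generated simply as the Cartesian product of the neighbor lists of the check nodes in C. The following Python sketch is one possible way to do this; the function name and the toy matrix are ours, not from the paper.

  from itertools import product
  import numpy as np

  def matching_sets(H, C):
      # H: binary parity-check matrix; C: indices of the unsatisfied check nodes.
      # Each matching set picks exactly one neighboring variable node per check in C,
      # so |L| = product of the degrees of the checks in C (on the order of d_c^{|C|}).
      neighbor_lists = [np.flatnonzero(H[c, :]).tolist() for c in C]
      return [set(choice) for choice in product(*neighbor_lists)]

  # Hypothetical example with two unsatisfied checks of degree 3 each:
  H_example = np.array([[1, 1, 1, 0, 0, 0],
                        [0, 0, 0, 1, 1, 1]])
  print(len(matching_sets(H_example, [0, 1])))   # 3 * 3 = 9 matching sets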

Our goal is to identify such a matching set from L and use it to correct all the wrong bits in x̂. To this end, we use SBF together with iterative decoding. Specifically, for each matching set M in L, we assume M ⊆ V, so that the corresponding wrong bits in x̂ can be identified. We flip these bits by setting their initial log-likelihood ratios (LLRs) to the maximum possible value with opposite signs, and perform the iterative decoding over again. If M ⊄ V, the decoding will most likely fail again; otherwise (M ⊆ V), the decoder will in most cases not fall into the same trapping set and will converge to a codeword. This decoding stage is described below.

inputs:
  L: set of all matching sets of C;
  x̂ = (x̂_1, ..., x̂_n): hard decision of the codeword corresponding to trapping set S, where x̂_i = ±1;
  μ = (μ_1, ..., μ_n): initial LLRs of the received block;
  γ: the largest possible positive LLR value.

for each M = {v_1, ..., v_|C|} ∈ L, 1 ≤ v_i ≤ n do
  μ′ ← μ;
  μ′_{v_j} ← −x̂_{v_j} · γ, for 1 ≤ j ≤ |C|;
  perform iterative decoding using μ′ as the input LLRs;
  if decoding is successful then
    stop and exit;
  end if
end for

Clearly, the second decoding stage is a trial-and-error procedure. If iterative decoding is successful for some M_0 ∈ L, we claim M_0 ⊆ V, and meanwhile we have obtained a codeword. The implementation complexity of this decoding stage lies largely in the generation of the set L and in the decoding algorithm run in each trial. Since L can be easily generated from the node adjacencies, and conventional iterative decoding is used in each trial, only moderate complexity is added to that of a simple iterative decoder. Another concern about this two-stage decoding is that the decoding throughput will decrease, because the average number of iterations needed to decode a block increases due to the multiple trials in the second stage. More precisely, at a particular SNR, if the block error rate (BLER) of iterative decoding is P_B, the average iteration number is Ī, and the maximum iteration number is I_max, then the average iteration number of the proposed two-stage iterative decoding is upper bounded by

  I_ub = (Ī + I_max · |L| · P_B) / (1 + P_B),    (1)

which is an increasing function of |L|.
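To get a sense of scale for (1), the following back-of-the-envelope computation plugs in assumed values; P_B, Ī and |C| below are chosen for illustration only and are not measurements from the paper.

  # Illustrative values; P_B, I_avg and |C| are assumed, not measured:
  I_avg, I_max, P_B = 5.0, 50, 1e-3
  d_c, C_size = 15, 4
  L_size = d_c ** C_size                              # |L| ~ d_c^{|C|} = 50625 matching sets
  I_ub = (I_avg + I_max * L_size * P_B) / (1 + P_B)
  print(round(I_ub, 1))                               # about 2533.7 iterations on average

Even at a fairly low P_B, a full matching-set list over a moderate-sized C drives the bound to thousands of iterations per block, which is why the restriction described next matters.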

As mentioned earlier, |L| is exponential in |C|, so |L| can get very large even for a moderate |C|. Therefore, instead of listing the matching sets of the whole set C, we only do so for a small subset of C. For example, if we only list the matching sets of a single 2-node subset of C, the size of L is only on the order of d_c^2. Simulation results show that the decoding performance is hardly affected by this restriction. This is not surprising, because a trapping set can be viewed as a state of unstable equilibrium and can therefore be broken by a small perturbation (flipping 1 or 2 bits). From (1), we can also see that I_ub decreases when P_B gets smaller, which is a desirable property because the second decoding stage mainly operates at very low P_B.
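Putting the pieces together, the Python sketch below mirrors the second decoding stage as described above, including the restriction to matching sets of a 2-node subset of C. The decoder interface iterative_decode and the constant GAMMA are stand-ins of our own choosing, not an implementation from the paper.

  from itertools import product
  import numpy as np

  GAMMA = 30.0   # assumed saturation value for the LLR magnitude

  def second_stage(H, C, x_hat, mu, iterative_decode):
      # H: parity-check matrix; C: recorded unsatisfied check nodes;
      # x_hat: recorded hard decisions (+/-1); mu: channel LLRs of the block;
      # iterative_decode(H, llr) -> (success, codeword) is any conventional SPA decoder.
      C_sub = sorted(C)[:2]                           # arbitrary 2-node subset of C
      neighbor_lists = [np.flatnonzero(H[c, :]) for c in C_sub]
      for M in product(*neighbor_lists):              # matching sets of the subset
          mu_trial = mu.copy()
          for v in M:                                 # selective bit flipping:
              mu_trial[v] = -x_hat[v] * GAMMA         # saturated LLR with the opposite sign
          success, codeword = iterative_decode(H, mu_trial)
          if success:
              return codeword                         # trapping set broken
      return None                                     # all trials failed; declare failure

Each trial is an ordinary iterative decoding run, so the only additions to a standard decoder are the bookkeeping of C and x̂ in the first stage and this outer loop.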


IV. SIMULATION RESULTS

In all our simulations, BPSK signaling over the AWGN channel is assumed, and a maximum of I_max = 50 iterations are performed in iterative decoding. We set the threshold to t = 15, i.e., if iterative decoding fails and |C| ≤ 15, the decoder enters the second decoding stage. In the second stage, as described in the last section, when |C| > 2 we only list the matching sets of a 2-node subset of C, which is chosen arbitrarily for convenience. There may be better ways to choose this 2-node subset, either for better performance or for a smaller average number of iterations, which will be explored in further research. The simulation results and some remarks are given in the following two examples.

Example 1: A rate-1/2 (1024, 512) irregular LDPC code is constructed using the improved progressive-edge-growth (PEG) algorithm [10]. The numbers of degree-2, 3 and 10 variable nodes are 340, 554 and 130, respectively. The performances of this code under the SPA and the proposed two-stage decoding are shown in Fig. 2. It can be seen that the error floor of the two-stage decoding is significantly lower than that of the SPA. More importantly, the error floor of the two-stage decoding is caused by undetected errors, because most of the trapping-set-induced errors have been corrected in the second decoding stage, and the second stage has not increased the undetected error probability. The minimum distance of this code is found to be 15 with multiplicity 1, and the corresponding union bound is also shown in Fig. 2. Since the SPA is only suboptimum, the union bound is slightly below the actual undetected error probability. However, it serves well as a lower bound on the error floor of the two-stage decoding.
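The union-bound curves referred to above can be approximated from the reported minimum distance. Below is a minimal sketch, assuming the standard BPSK/AWGN union-bound term A_d · Q(sqrt(2 d R Eb/N0)) for a codeword of weight d and keeping only the d_min = 15, multiplicity-1 term; it is illustrative, not the authors' evaluation script.

  import numpy as np
  from scipy.stats import norm

  def union_bound_bler(EbN0_dB, d_min=15, A_d=1, R=0.5):
      # Contribution of the weight-d_min codeword(s) to the block error
      # probability of ML decoding, BPSK over AWGN.
      EbN0 = 10.0 ** (EbN0_dB / 10.0)
      return A_d * norm.sf(np.sqrt(2.0 * d_min * R * EbN0))   # Q(x) = norm.sf(x)

  print(union_bound_bler(3.0))   # rough estimate of the floor level at Eb/N0 = 3 dB

A corresponding bit-error curve is often approximated by scaling each term by d/n, but the block-error term above already indicates where the minimum-distance-limited floor sits.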

Fig. 2. Performances of the (1024, 512) rate-0.5 irregular code.

Example 2: A rate-0.8 (3150, 2520) regular quasi-cyclic LDPC code is constructed using the array masking technique proposed in [12]. The parity-check matrix is a 10 × 50 array of 63 × 63 circulants or zero matrices, and it has column weight 3 and row weight 15. The code's performances under the SPA and the proposed two-stage decoding are shown in Fig. 3. The error floor improvement of the proposed decoding is evident. At the BER of 10^-6, the performance of the two-stage decoding is 1.5 dB away from the Shannon limit, and there is no error floor down to the BER of 10^-8. All decoding errors in this example are detected errors, indicating that this code has a relatively large minimum distance, so the error floor caused by undetected errors is too low to be observed in our software simulation. At the SNR of 3.8 dB, the average iteration number of the SPA is 4.4, while that of the two-stage decoding is 4.5, so the decoding throughput is only slightly affected.

Fig. 3. Performances of the (3150, 2520) rate-0.8 QC-LDPC code.

V. CONCLUSIONS

In this paper, we have proposed a simple two-stage decoding scheme to lower the error floors of LDPC codes under iterative decoding, and have obtained some good results. This work shows that, aside from good code design methods, proper decoding strategies also have great potential in addressing the error floor problem. In our future research, we will try to further improve the performance and reduce the complexity of this decoding method and examine its effectiveness on a wider range of codes.

REFERENCES


[1] C. E. Shannon, "A mathematical theory of communication," Bell Syst. Tech. J., vol. 27, pp. 379–423 and 623–656, July and Oct. 1948.
[2] R. G. Gallager, "Low-density parity-check codes," IRE Trans. Inform. Theory, vol. IT-8, pp. 21–28, Jan. 1962.
[3] D. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Trans. Inform. Theory, vol. 45, no. 2, pp. 399–431, Mar. 1999.
[4] T. Richardson and R. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 599–618, Feb. 2001.
[5] D. MacKay and M. Postol, "Weaknesses of Margulis and Ramanujan-Margulis low-density parity-check codes," Electronic Notes in Theoretical Computer Science, vol. 74, 2003.
[6] T. Richardson, "Error floors of LDPC codes," in Proc. 41st Annual Allerton Conference, Oct. 2003.


[7] R. Tanner, "A recursive approach to low complexity codes," IEEE Trans. Inform. Theory, vol. 27, no. 5, pp. 533–547, Sep. 1981.
[8] C. Di, D. Proietti, I. Telatar, T. Richardson, and R. Urbanke, "Finite-length analysis of low-density parity-check codes on the binary erasure channel," IEEE Trans. Inform. Theory, vol. 48, no. 6, pp. 1570–1579, June 2002.
[9] T. Tian, C. Jones, J. Villasenor, and R. Wesel, "Construction of irregular LDPC codes with low error floors," in Proc. IEEE ICC '03, vol. 5, May 2003, pp. 3125–3129.
[10] H. Xiao and A. Banihashemi, "Improved progressive-edge-growth (PEG) construction of irregular LDPC codes," IEEE Communications Letters, vol. 8, no. 12, pp. 715–717, Dec. 2004.
[11] Y. Han and W. Ryan, "Low-floor decoders for LDPC codes," in Proc. 45th Annual Allerton Conference, Sep. 2007.
[12] L. Lan, L. Zeng, Y. Y. Tai, L. Chen, S. Lin, and K. Abdel-Ghaffar, "Construction of quasi-cyclic LDPC codes for AWGN and binary erasure channels: A finite field approach," IEEE Trans. Inform. Theory, vol. 53, no. 7, pp. 2429–2458, July 2007.
