IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 52, NO. 12, DECEMBER 2004
Reliability-Based Schedule for Bit-Flipping Decoding of Low-Density Parity-Check Codes

Ahmed Nouh, Student Member, IEEE, and Amir H. Banihashemi, Member, IEEE
Abstract—A reliability-based message-passing schedule for iterative decoding of low-density parity-check codes is proposed. Simulation results for bit-flipping algorithms (with binary messages) show that a reliability-based schedule can provide considerable improvement in performance and decoding speed over the so-called flooding (parallel) schedule, as well as the existing graph-based schedules. The cost associated with this improvement is negligible and is equivalent to having a two-bit representation for initial messages, instead of the standard one bit for hard-decision algorithms, only at the first iteration (all the exchanged messages are still binary).

Index Terms—Bit-flipping (BF) algorithms, hard-decision algorithms, iterative decoding, low-density parity-check (LDPC) codes, message-passing algorithms, message-passing schedule, reliability-based schedule (RBS), schedules for iterative decoding.

Paper approved by A. K. Khandani, the Editor for Coding and Information Theory of the IEEE Communications Society. Manuscript received September 29, 2003; revised March 3, 2004 and May 28, 2004. This work was supported in part by Zarlink Semiconductor Corp. (formerly Mitel Semiconductor Corp.), and in part by the National Capital Institute of Telecommunications (NCIT). This paper was presented in part at the IEEE International Conference on Communications, Paris, France, June 20–24, 2004.

A. Nouh was with the Broadband Communications and Wireless Systems (BCWS) Centre, Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada. He is now with the School of Information Technology and Engineering (SITE), University of Ottawa, Ottawa, ON K1N 6N5, Canada (e-mail: [email protected]).

A. H. Banihashemi is with the Broadband Communications and Wireless Systems (BCWS) Centre, Department of Systems and Computer Engineering, Carleton University, Ottawa, ON K1S 5B6, Canada (e-mail: [email protected]).

Digital Object Identifier 10.1109/TCOMM.2004.838704
I. INTRODUCTION
LOW-DENSITY parity-check (LDPC) codes [2] and iterative decoding algorithms have been the subject of much recent research. In particular, one topic of practical interest has been to improve the performance of iterative decoding of LDPC codes at shorter block lengths [1], [5], [10]–[12]. To decode an LDPC code, iterative algorithms, also referred to as "message-passing algorithms," iteratively exchange messages in both directions between the bit nodes and the check nodes of the Tanner graph (TG) [8] of the code. Most commonly, the message passing is implemented with the so-called "flooding (parallel) schedule": all the bit nodes send messages to the check nodes by processing the messages coming from the check nodes and the channel; the check nodes process these messages and then send back new messages to the bit nodes, and this is repeated in a synchronous manner. Recently, and motivated by the question of whether it is possible to improve the performance of an iterative coding scheme while the TG and the decoding algorithm (identified by the operations in the bit and check nodes) are kept fixed, new message-passing schedules have been proposed [5], [10]. These schedules control the exchange of messages between bit nodes and check nodes at each iteration according to some graph parameters, such as the girths of the nodes in the graph.

In this letter, we propose a reliability-based schedule (RBS) which, unlike the schedules introduced in [5] and [10], does not directly depend on the TG of the code. Instead, it controls the message passing based on the reliability of the information available at each bit node. At each iteration, less reliable (or "unreliable") bits are identified and are prevented from propagating their messages through the graph in that iteration. This can be thought of as adjusting the timing of the participation of different nodes in iterative decoding according to the reliability of their information. The idea can be applied to any iterative decoding algorithm, even one with binary messages, as long as some measure of reliability for the bits (soft information) is available at the input of the decoder. In the case of hard-decision algorithms, the soft information is only used at the beginning to partition the bits into two subsets, reliable and unreliable. (This is equivalent to using two bits for the initial messages, where one bit indicates whether the node is reliable or not, and the other bit is the initial estimate of the bit value. In our simulations, the former bit is only used in the first iteration.) The messages passed between the check and the bit nodes are still binary; a short sketch at the end of this section illustrates how the two-bit initial messages are formed. Simulation results show that, despite its deceptive simplicity, the proposed schedule can improve the performance and the decoding speed significantly (in one of our examples, by about 1 dB and by more than a factor of 2, respectively).

This letter is organized as follows. In Section II, we present the general framework for reliability-based scheduling. We note, however, that optimal scheduling in the general framework is too complex to derive. We thus limit our scope to the simplest scenario, where the scheduling is applied only in the first iteration and all the coded bits share the same reliability threshold. This simplifies not only the optimization problem, by reducing its dimension to one, but also the implementation of the schedule for practical purposes. In Section III, simulation results are given. These results indicate that the simplified schedule is most beneficial for bit-flipping (BF) algorithms, where significant improvements in performance and decoding speed are obtained at the minimal cost of adding an extra bit of memory per coded bit to represent the reliability of the initial messages. Section IV is devoted to some concluding remarks.
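As a concrete illustration of the two-bit initial messages mentioned above, the short Python sketch below forms, for each coded bit, a hard-decision bit from the channel output and a one-bit reliability flag obtained by comparing the channel magnitude against a threshold. The function and variable names and the BPSK sign convention are assumptions made for illustration; the threshold is the reliability threshold made precise in Section II.

```python
# Minimal sketch (assumed names and BPSK mapping): form the two-bit initial
# messages from the received samples r_i; T is the reliability threshold.
import numpy as np

def initial_messages(r, T):
    hard = (r < 0).astype(np.uint8)                # hard decision: 0 <-> +1, 1 <-> -1
    reliable = (np.abs(r) >= T).astype(np.uint8)   # extra one-bit reliability flag
    return hard, reliable
```

Only the hard-decision bit is exchanged during decoding; the reliability bit is consumed once, in the first iteration.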
II. RELIABILITY-BASED SCHEDULE (RBS)

Consider a binary LDPC code of length $n$ and rate $R$ used over a binary-input additive white Gaussian noise (AWGN) channel using binary phase-shift keying (BPSK) modulation. Suppose that the average energy per information bit and the power spectral density of the AWGN are denoted by $E_b$ and $N_0$, respectively, and that the received vector $\mathbf{r} = (r_1, \ldots, r_n)$ is processed by an iterative decoder to estimate the transmitted message.

The main idea in RBS is to prevent unreliable information from propagating in the TG of the code in the early stages of iterative decoding. To identify unreliable information, we define a sequence $\{\mathbf{T}^{(j)}\}_{j \geq 1}$ of "reliability threshold vectors." Vector $\mathbf{T}^{(j)}$ in the sequence is associated with iteration $j$ of the decoding algorithm and has $n$ real elements; the $i$th element $T_i^{(j)}$ corresponds to bit $i$ of the code. Also suppose that the nonnegative number $R_i^{(j)}$ is an estimate of the reliability of the information about bit $i$ at iteration $j$ of the decoding algorithm (e.g., $R_i^{(j)}$ can be the magnitude of the estimate of the log-likelihood ratio (LLR) of bit $i$). At each iteration $j$, bit $i$ is labeled as unreliable if $R_i^{(j)} < T_i^{(j)}$, and as reliable otherwise. A check node $c$ is considered reliable with respect to a connected bit node $v$ if all the other bit nodes connected to $c$ are reliable. (In this case, the message sent to $v$ along the edge $(c, v)$ is considered reliable.) Otherwise, the check node is unreliable with respect to $v$. At each iteration, RBS operates by allowing only the reliable bits to send out messages, and by enforcing that only reliable check nodes generate and send back messages (note that the operations in the bit and check nodes remain the same as those in flooding). At the beginning of each iteration $j$, the reliable messages coming from the check nodes, along with the channel message, are used to compute the reliability $R_i^{(j)}$ at each bit node $i$ (for the first iteration, $R_i^{(1)}$ is determined only by the channel message).

In a general framework, for a given code, a given decoding algorithm, and a given value of $E_b/N_0$, the vectors $\mathbf{T}^{(j)}$ can be optimized to achieve the minimum error rate. Such a multivariable optimization, however, is very complex. Intuitively, as the iteration process goes on, on average, we expect more bits to be designated reliable. In this letter, to simplify the optimization and the implementation of the schedule, we consider the case where $\mathbf{T}^{(j)} = \mathbf{0}$ for $j \geq 2$, i.e., starting from the second iteration, all the bits are considered reliable (flooding schedule). Moreover, to further simplify the optimization, we assume that $T_i^{(1)}$ is independent of $i$ (this is a particularly valid assumption for regular LDPC codes). We thus use the notation $T$ to denote the constant reliability threshold for the first iteration. As the reliability measure, we use the magnitude of the received values, i.e., $R_i^{(1)} = |r_i|$. In general, for a given code, the optimal $T$ depends on the TG of the code, $E_b/N_0$, and the decoding algorithm. Finding the optimal value of $T$ analytically appears to be a difficult task. In this letter, for a given code (TG) and a given value of $E_b/N_0$, we use simulation to find the optimal $T$. Our simulations, however, show that the optimal normalized threshold $T/\sigma$ does not change much with $E_b/N_0$ (the parameter $\sigma$ is the standard deviation of the noise).
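To make the schedule concrete, the following Python sketch applies the simplified RBS (a single threshold $T$, used only in the first iteration) on top of a plain parallel bit-flipping decoder. It is a minimal sketch under stated assumptions: the flip rule and all names (rbs_bit_flip_decode, H, r, T, max_iter) are illustrative stand-ins rather than the exact GA or SS updates used in this letter; only the first-iteration gating of bit and check messages follows the description above.

```python
# Minimal sketch (assumed names and flip rule): simplified RBS on top of a
# plain parallel bit-flipping decoder.  Only the first-iteration message
# gating follows the RBS description; GA and SS use different node rules.
import numpy as np

def rbs_bit_flip_decode(H, r, T, max_iter=200):
    """H: (m x n) 0/1 parity-check matrix, r: received BPSK samples, T: threshold."""
    m, n = H.shape
    x = (r < 0).astype(int)            # hard decisions (bit-node messages)
    reliable = np.abs(r) >= T          # one extra reliability bit per coded bit

    for it in range(max_iter):
        if not ((H @ x) % 2).any():    # all parity checks satisfied: done
            return x, it

        disagree = np.zeros(n, dtype=int)   # check messages contradicting x[v]
        for c in range(m):
            nbrs = np.flatnonzero(H[c])
            for v in nbrs:
                others = nbrs[nbrs != v]
                # RBS gating: in iteration 1 the check sends a message to v
                # only if all of its *other* neighbours are reliable.
                if it == 0 and not reliable[others].all():
                    continue
                msg = x[others].sum() % 2    # parity of extrinsic bit messages
                disagree[v] += int(msg != x[v])

        # Stand-in flip rule: flip the bits with the largest number of
        # contradicting check messages (a simple parallel BF step).
        worst = disagree.max()
        if worst == 0:                 # no check asks for a flip: decoder stalls
            break
        x[disagree == worst] ^= 1

    return x, max_iter                 # failed to converge
```

From the second iteration onward the reliability flags are ignored, so each pass reduces to an ordinary flooding BF iteration, which is exactly the simplification adopted in this section.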
Fig. 1. BER (—) and MER (- - -) for (273, 191) regular LDPC code (C2) decoded by BP (AWGN), WBF (AWGN), GA, and RBS-GA algorithms.
III. SIMULATION RESULTS

In this letter, we apply RBS to belief propagation (BP) [2], [8], [9], Gallager's algorithm A (GA) [2], [9], and a hard-decision algorithm described in [7, Fig. 2]. The latter appears to perform particularly well when applied to LDPC codes constructed based on finite geometries [3]. We use a parallel version of this algorithm, i.e., we flip the value of every variable $v$ in the set $S_i$ [7], where $i$ is the greatest index for which $S_i$ is not empty, and we refer to it as the Sipser–Spielman (SS) algorithm. (To fit the SS algorithm into the message-passing framework of the previous sections, one can think of a check-node message as the parity bit of the incoming extrinsic messages to the check node, and of a bit-node message as the value of the corresponding bit.) The BP algorithm is implemented in the LLR domain [9], and our simulations show that not much improvement in performance or convergence speed is achieved by applying the (simplified) RBS to BP. (Note that asymptotically, at very large block lengths, BP is optimal, and we do not expect RBS-BP to provide any improvement in performance over BP for very long LDPC codes. For shorter codes, whose TGs contain many short cycles, BP is suboptimal and can be improved [1], [5], [10]–[12].)

For our simulations, we apply BP to an optimized (1000, 500) irregular code [6] (C1), GA to a (273, 191) regular code [4] (C2), and SS to a (273, 191) projective geometry (PG) code [3] (C3). For every code, the received vectors are decoded simultaneously using two decoders, one with the flooding schedule and the other with the (simplified) RBS. The maximum number of iterations is chosen to be 200, and at each $E_b/N_0$, enough codewords are simulated to generate 100 codeword errors.

In general, for the RBS, the optimal value of $T$ for each code and a given decoding algorithm is a function of $E_b/N_0$. We have, however, observed that the optimal normalized value $T/\sigma$ is rather insensitive to $E_b/N_0$, and is almost equal to 0.3, 0.7, and 0.4 for the combinations {C1, BP}, {C2, GA}, and {C3, SS}, respectively. It is also observed that, in general, at lower values of $E_b/N_0$, the error rate is less sensitive to the normalized value of $T$. We have plotted the bit-error rate (BER) and the message-error rate (MER) curves of GA and RBS-GA for C2 in Fig. 1, and those of SS and RBS-SS for C3 in Fig. 2. The curves of BP and RBS-BP for C1 are very close, and thus have not been plotted. For all three codes, the statistics of the number of decoding iterations for converged cases are shown in Table I. In Figs. 1 and 2, we have also given the curves for BP and weighted bit-flipping (WBF) on the AWGN channel [3] as a reference. These curves are all based on flooding.
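As a small illustration of how the normalized thresholds quoted above translate into the absolute threshold applied to $|r_i|$, the sketch below assumes unit-energy BPSK symbols, so that $\sigma^2 = N_0/2 = 1/(2R\,E_b/N_0)$, and takes "normalized" to mean $T/\sigma$; the function name and the operating point are hypothetical choices for illustration.

```python
# Illustrative only: convert a normalized threshold T/sigma into the absolute
# threshold applied to |r_i|, assuming unit-energy BPSK symbols (Es = R*Eb = 1).
import math

def noise_sigma(ebno_db: float, rate: float) -> float:
    ebno = 10.0 ** (ebno_db / 10.0)
    return math.sqrt(1.0 / (2.0 * rate * ebno))   # sigma^2 = N0/2

# Hypothetical operating point: the (273, 191) code decoded by GA at 4 dB,
# using the normalized threshold 0.7 reported above.
sigma = noise_sigma(4.0, 191.0 / 273.0)
T = 0.7 * sigma
print(f"sigma = {sigma:.3f}, absolute threshold T = {T:.3f}")
```

Because the optimal normalized threshold is reported to be nearly constant over $E_b/N_0$, a decoder can store a single constant per code/algorithm pair and only update $\sigma$ (or an estimate of it) as the operating point changes.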
It can be seen that, in general, RBS provides a significant improvement in performance over flooding. In fact, at low BERs, RBS-GA outperforms GA with flooding by about 1 dB. It is also interesting to note that the RBS algorithms close a large part of the gap between BF and WBF algorithms. For C2, RBS-GA is only about 0.2 dB inferior to WBF in BER at larger values of $E_b/N_0$, while in MER, RBS-GA even outperforms WBF at high $E_b/N_0$ values. For C3, the difference between RBS-SS and WBF is almost the same across the board, and RBS-SS is inferior to WBF by only about 0.25 and 0.1 dB for BER and MER, respectively. The significance of this becomes more evident when one notices that, unlike RBS-BF algorithms, WBF passes soft information throughout the iteration process.

We tested the probabilistic schedule of [5] on C1 with BP. The improvement in performance is also small in this case, and very close to what is obtained by RBS-BP. For the other two codes, since all the bit nodes in the TG have the same girth, the probabilistic schedule, unlike RBS, performs on average the same as flooding.

In terms of speed of convergence (average decoding time) also, as Table I indicates, RBS provides improvement over flooding. It is also important to note that RBS-BF algorithms have a much higher speed of convergence compared with the WBF algorithm. For C3, for example, the average numbers of iterations for WBF at $E_b/N_0$ values of 3.5, 4, 4.5, and 5 dB are, respectively, 11.5, 9.3, 7.4, and 5.8, which are more than three times larger than the corresponding values for RBS-SS given in Table I. For RBS-BP, although the decrease in the average number of iterations required for convergence, compared with BP with flooding, is negligible, the standard deviation has been reduced considerably.

Fig. 2. BER (—) and MER (- - -) for (273, 191) PG-LDPC code (C3) decoded by BP (AWGN), WBF (AWGN), SS, and RBS-SS algorithms.

TABLE I. STATISTICS OF THE NUMBER OF DECODING ITERATIONS FOR CONVERGED CASES.

IV. CONCLUDING REMARKS

In this letter, we presented an RBS for iterative decoding of LDPC codes. To simplify the optimization and the implementation of the schedule, we focused on applying the schedule only to the first iteration and with a single reliability threshold for all the coded bits. This simplified schedule, which appears to be particularly effective for BF algorithms, outperforms the conventional flooding schedule in both error performance and decoding speed. These advantages for BF algorithms come at the very low cost of calculating, and perhaps storing, one bit of reliability information per coded bit for the initial messages. In many cases where RBS provides a better performance/decoding-time tradeoff compared with flooding, graph-based schedules simply fail to provide any improvement due to the uniform distribution of girths in the graph. For soft-decision algorithms such as belief propagation, our results show that the simplified version of RBS does not provide much improvement. Whether the application of more complex RBSs would provide nonnegligibly larger improvements remains to be investigated.

ACKNOWLEDGMENT

The authors wish to thank Y. Kou, M. Fossorier, and T. Richardson for providing them with the parity-check matrices of the simulated codes, and H. Xiao for simulating the (1000, 500) code with probabilistic scheduling. They also wish to thank the anonymous reviewers for their helpful comments.
REFERENCES

[1] M. P. C. Fossorier, "Iterative reliability-based decoding of low-density parity-check codes," IEEE J. Select. Areas Commun., vol. 19, pp. 908–917, May 2001.
[2] R. G. Gallager, Low-Density Parity-Check Codes. Cambridge, MA: MIT Press, 1963.
[3] Y. Kou, S. Lin, and M. P. C. Fossorier, "Low-density parity-check codes based on finite geometries: A rediscovery and new results," IEEE Trans. Inform. Theory, vol. 47, pp. 2711–2736, Nov. 2001.
[4] D. J. MacKay, Encyclopedia of Sparse Graph Codes. [Online]. Available: http://www.inference.phy.cam.ac.uk/mackay/codes/data.html
[5] Y. Mao and A. H. Banihashemi, "Decoding low-density parity-check codes with probabilistic scheduling," IEEE Commun. Lett., vol. 5, pp. 414–416, Oct. 2001.
[6] T. J. Richardson, private communication.
[7] M. Sipser and D. A. Spielman, "Expander codes," IEEE Trans. Inform. Theory, vol. 42, pp. 1710–1722, Nov. 1996.
[8] R. M. Tanner, "A recursive approach to low-complexity codes," IEEE Trans. Inform. Theory, vol. IT-27, pp. 533–547, Sept. 1981.
[9] T. Richardson and R. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inform. Theory, vol. 47, pp. 599–618, Feb. 2001.
[10] H. Xiao and A. H. Banihashemi, "Graph-based message-passing schedules for decoding LDPC codes," IEEE Trans. Commun., vol. 52, pp. 2098–2105, Dec. 2004.
[11] M. R. Yazdani, S. Hemati, and A. H. Banihashemi, "Improving belief propagation on graphs with cycles," IEEE Commun. Lett., vol. 8, pp. 57–59, Jan. 2004.
[12] J. S. Yedidia, W. T. Freeman, and Y. Weiss, "Constructing free energy approximations and generalized belief propagation algorithms," IEEE Trans. Inform. Theory, submitted for publication.