IEEE TRANSACTIONS ON COMMUNICATIONS, VOL. 52, NO. 12, DECEMBER 2004
Graph-Based Message-Passing Schedules for Decoding LDPC Codes

Hua Xiao and Amir H. Banihashemi, Member, IEEE
Abstract—In this paper, we study a wide range of graph-based message-passing schedules for iterative decoding of low-density parity-check (LDPC) codes. Using the Tanner graph (TG) of the code, and for different nodes and edges of the graph, we relate the first iteration in which the corresponding messages deviate from their optimal value (corresponding to a cycle-free graph) to the girths and the lengths of the shortest closed walks in the graph. Using this result, we propose schedules, which are designed based on the distribution of girths and closed walks in the TG of the code, and categorize them as node based versus edge based, unidirectional versus bidirectional, and deterministic versus probabilistic. These schedules, in some cases, outperform the previously known schedules, and in other cases, provide less complex alternatives with more or less the same performance. The performance/complexity tradeoff and the best choice of schedule appear to depend not only on the girth and closed-walk distributions of the TG, but also on the iterative decoding algorithm and channel characteristics. We examine the application of schedules to belief propagation (sum–product) over additive white Gaussian noise (AWGN) and Rayleigh fading channels, min-sum (max-sum) over an AWGN channel, and Gallager's algorithm A over a binary symmetric channel.

Index Terms—Coding, decoding algorithms, iterative decoding, low-density parity-check (LDPC) codes, message-passing schedules, Tanner graph (TG).
I. INTRODUCTION
LOW-DENSITY parity-check (LDPC) codes were first introduced by Gallager [1] in the 1960s. They are constructed based on sparse parity-check matrices (or sparse bipartite graphs). LDPC codes were rediscovered in the mid-to-late 1990s, and have since received a great deal of attention due to their capacity-achieving error performance and the low complexity of the associated iterative decoding algorithms (see [2]–[5], and the references therein). An LDPC code, like any other linear block code, can be fully described by its parity-check matrix $H$, through the parity-check equations $H\mathbf{x}^T = \mathbf{0}$. For LDPC codes, however, the parity-check matrix is sparse (only a small fraction of its elements are nonzero). This makes LDPC codes attractive for iterative decoding. Iterative decoding algorithms are naturally described using a graph of the code constructed based on $H$.
Such a graph is called a Tanner graph (TG) [6]. To describe TGs and the associated decoding algorithms, we assume that the codes are binary. A TG is a bipartite graph that contains two types of nodes, symbol nodes and check nodes, corresponding to the columns and the rows of the parity-check matrix $H$, respectively. A one located at position $(i, j)$ of $H$ corresponds to an edge between symbol node $j$ and check node $i$. Iterative decoding algorithms, also called "message-passing algorithms," are performed by exchanging messages between check nodes and symbol nodes through the edges of the graph, in both directions and iteratively. The complexity per iteration of the algorithms is then proportional to the number of edges, and is relatively low due to the sparseness of $H$. The messages could be "hard" and/or "soft" information, and the operations performed in symbol and check nodes to generate messages depend on the nature of the decoding algorithm and the message type(s).

Generally speaking, an iterative decoding algorithm starts by creating initial (local) messages (weights) for each symbol node from the observations at the output of the channel and the channel characteristics. As the first step, it then passes these initial messages to the check nodes through the edges of the TG. Iterations then follow, with each iteration consisting of two steps: 1) check nodes process the incoming information and pass new (extrinsic) messages to the symbol nodes; these messages usually measure the inference made by the check nodes about the value and reliability of the symbol nodes; and 2) symbol nodes process the incoming messages sent by the check nodes and send updated (extrinsic) information about their value and the associated reliability back to the check nodes. Here, "extrinsic" means that the outgoing message sent along one edge does not depend on the incoming message along the same edge. At each iteration, a hard decision on the value of each bit (0 or 1) is made at the symbol nodes. The algorithm stops if the hard-decision assignment for the bits satisfies all the check equations, or a maximum number of iterations is reached.

There are a wide variety of iterative algorithms for decoding LDPC codes, each offering a particular tradeoff between error performance and decoding complexity. There are hard-decision algorithms such as Gallager's algorithm A (GA) [1], and soft-decision algorithms such as the belief propagation (BP) or sum–product algorithm, and max-sum or min-sum (MS) [6], [7]. BP is known to result in the best error performance among iterative decoding algorithms. It is, however, the most complex to implement. Min-sum, on the other hand, has lower complexity and performs only slightly inferior to BP, particularly at high signal-to-noise ratios (SNRs). Hard-decision algorithms are of interest mainly due to their very low complexity and simple implementation.
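To make the generic procedure above concrete, the following Python sketch implements one instance of it: a min-sum decoder under the conventional flooding schedule, in which every node updates in every iteration. It is a minimal illustration with our own data layout and function name, not the authors' implementation; BP or GA would differ only in the message computations inside the two steps.

```python
import numpy as np

def minsum_flooding_decode(H, llr, max_iter=500):
    """Min-sum decoding with the flooding schedule on the Tanner graph of H.

    H   : binary parity-check matrix (m x n numpy array); rows are check
          nodes, columns are symbol nodes, and each 1 is an edge of the TG.
    llr : channel log-likelihood ratios (local weights) of the n symbol nodes.
    """
    H = np.asarray(H)
    llr = np.asarray(llr, dtype=float)
    m, n = H.shape
    edges = [(i, j) for i in range(m) for j in range(n) if H[i, j]]

    # Initialization: symbol nodes pass their local weights to the check nodes.
    sym_to_chk = {(i, j): llr[j] for (i, j) in edges}
    chk_to_sym = {(i, j): 0.0 for (i, j) in edges}

    for it in range(1, max_iter + 1):
        # Step 1: every check node sends an extrinsic message to each neighbor.
        for (i, j) in edges:
            others = [sym_to_chk[(i, jp)] for jp in range(n)
                      if H[i, jp] and jp != j]
            sign = float(np.prod(np.sign(others))) if others else 1.0
            mag = min(abs(x) for x in others) if others else 0.0
            chk_to_sym[(i, j)] = sign * mag

        # Step 2: every symbol node sends an extrinsic message to each neighbor.
        for (i, j) in edges:
            others = [chk_to_sym[(ip, j)] for ip in range(m)
                      if H[ip, j] and ip != i]
            sym_to_chk[(i, j)] = llr[j] + sum(others)

        # Hard decision and stopping test: stop if all checks are satisfied.
        total = llr + np.array([sum(chk_to_sym[(i, j)]
                                    for i in range(m) if H[i, j])
                                for j in range(n)])
        x_hat = (total < 0).astype(int)
        if not np.any((H @ x_hat) % 2):
            break
    return x_hat, it
```

The schedules studied in Section III keep this overall structure and only change which nodes or edges refresh their messages in a given iteration.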
A message-passing schedule, or schedule in brief, is the order of message passing between check nodes and symbol nodes in a TG, also called the "updating rule." Conventionally, in the context of decoding LDPC codes, the message-passing schedule for iterative decoding algorithms is the so-called parallel or flooding schedule [7]. In flooding, at each iteration, all the symbol nodes, and subsequently, all the check nodes, pass new messages to all their neighboring nodes. For a cycle-free TG, the BP and MS algorithms with the flooding schedule result in optimal a posteriori probability (APP) decoding and maximum-likelihood sequence decoding, respectively [7]. However, for an LDPC code with short block length (less than several thousand bits), since the TG often contains many small cycles, these algorithms presumably perform far from optimally [8]. In [9], Mao and Banihashemi presented the probabilistic schedule as an alternative to the flooding schedule. The idea is to randomly adjust the frequency of message passing in each symbol node in accordance with the length of the shortest cycle (girth) passing through the node, i.e., each symbol node updates its outgoing messages with a probability proportional to its girth. Simulation results based on BP for an additive white Gaussian noise (AWGN) channel, which demonstrate the performance improvement of the probabilistic schedule over flooding, were also presented in [9].

In this paper, which is mainly based on the results of [10], we present several new scheduling schemes based on the distribution of girths and closed walks in the TG, some outperforming the probabilistic schedule of [9]. We also examine the application of these schemes not only to BP, but also to the MS and GA algorithms. For the channel model, we consider AWGN, Rayleigh fading, and binary symmetric channels (BSCs). Motivated by the fact that the messages are, in fact, passed along the edges of the graph, we introduce "edge-based schedules," which in some cases appear to outperform the corresponding "node-based schedules." As one example, we apply an edge-based probabilistic schedule to the GA algorithm for decoding a (1200, 600) regular LDPC code, and show an improvement of more than 0.3 dB in $E_b/N_0$ over flooding at low bit-error rates (BERs). This is achieved while the node-based probabilistic schedule of [9] performs almost the same as flooding in this case. We also introduce bidirectional schedules, which control the flow of messages in both directions on the TG, from symbol nodes to check nodes and vice versa. We provide an example of the application of an edge-based bidirectional probabilistic schedule to the BP algorithm for decoding an (8000, 4000) regular LDPC code over an AWGN channel. For this example, while flooding demonstrates an early error floor, the new schedule improves the BER at the same $E_b/N_0$ by more than two orders of magnitude and shows no sign of an error floor. We also present deterministic schedules that perform very close to the probabilistic ones, but are much simpler to implement. Our results show that a proper choice of schedule is not only a function of the girth and closed-walk distributions of the TG, but also depends on the iterative decoding algorithm, the channel model, and the desirable performance/complexity tradeoff.

In the rest of this paper, we first study the relationship between the structure of the TG and the suboptimality of iterative decoding algorithms in Section II.
We then present message-passing schedules in Section III, based on the analysis of Section II. In Section IV, simulation results are presented and discussed. Section V is devoted to some concluding remarks.

II. TG STRUCTURE AND SUBOPTIMALITY OF ITERATIVE ALGORITHMS

A. Suboptimality of Iterative Algorithms

Suppose that the information bits are independent and an iterative decoding algorithm is used to decode an LDPC code over a memoryless channel. If the TG of the code is cycle-free, throughout the iteration process, for each node, all the incoming messages are independent of each other. Moreover, for a cycle-free TG, all the incoming messages are also independent of the local weight, in the case of symbol nodes. Indeed, it is precisely these independencies that guarantee the convergence of BP or MS to the optimal solution for cycle-free TGs [7]. In the case of graphs with cycles, dependencies are created when messages pass through cycles. The first time that the independence is violated, the algorithm begins to perform suboptimally.1

There are two cases in which an iterative algorithm fails to maintain its optimality: 1) when there are dependencies between at least one of the incoming messages to a symbol node and the symbol node's initial weight(s), and 2) when there are dependencies among the incoming messages to a node (symbol or check). Case 1) for a symbol node $v$ happens when the local weight(s) of $v$ are passed back to $v$ through some paths in the TG. Upon the occurrence of this, the optimality of at least one of the outgoing messages of $v$, say the one along edge $e$, and thus the optimality of iterative decoding at $v$, is violated. Similarly, from the point of view of edge $e$, the optimality is also violated. Case 2) for a node $u$ happens when there are at least two incoming messages to $u$ which are statistically dependent (all the incoming messages to node $u$ may still be independent of the local message of node $u$, in the case that $u$ is a symbol node). As a result, at least one of the outgoing messages of node $u$, say the one along edge $e$, is not optimal. So, from the point of view of node $u$ and edge $e$, the optimality is violated.

The idea of using schedules other than flooding is to preserve the optimality of iterative decoding algorithms in as many iterations as possible. One possible approach would be for each edge or node to stop updating its outgoing message(s) as soon as the optimality at this edge or node is violated. This corresponds to what we call a "deterministic schedule."2 Then, depending on whether the schedule is controlled by nodes or by edges, we have node-based or edge-based schedules, respectively. If the updating is performed randomly, we call the schedule "probabilistic," following the nomenclature of [9]. In fact, the schedule of [9] is a probabilistic node-based schedule.

To implement the schedules, we first need to answer the following question: "At which iteration is the optimality of the outgoing message(s) of an edge or a node violated for the first time?"

1 One should note that before the iteration in which dependencies are created is reached, the iterative algorithm is optimal in the sense that, for example, for BP, at iteration $l$, all the messages reflect the APPs of the bits, given the observed values of all nodes in a neighborhood of depth $2l$.

2 The deterministic schedule was mentioned as an option in [9], but no simulation results or discussion of its performance and complexity were provided.
Fig. 1. TGs of Examples 1, 2, 3 and 4.
As we will see later, the answer to this question is related to the distribution of cycles and closed walks in the graph. The following definitions are required for our analysis.

B. Definitions

A walk in a graph is defined as a sequence of directed edges $e_1, e_2, \ldots, e_L$, with $e_i = (u_i, u_{i+1})$, such that the end node of $e_i$ is the start node of $e_{i+1}$ for $i = 1, \ldots, L-1$. Such a walk is said to start from $u_1$, to end at $u_{L+1}$, and to have length $L$. A walk is called a closed walk if the start node is the same as the end node, i.e., $u_1 = u_{L+1}$. A nontrivial walk is a walk that has at least one edge that is traversed only once (in this definition, the edges are considered undirected). A cycle is defined as a closed walk in which all the nodes except for the start and the end nodes are distinct. In this paper, the direction of a cycle is not important, so the word "cycle" is used for an undirected closed walk with the same start and end nodes and no other repeated nodes. We also use the notation $\{u, v\}$ for an undirected edge between nodes $u$ and $v$. In the rest of the paper, this notation, depending on the context, may mean that the direction of the edge is not important, or it may imply that both directions are important.

Example 1: In the TG shown in Fig. 1(A) (we use empty circles to represent symbol nodes and shaded circles to represent check nodes), path {S1, C1, S5, C6, S6, C1, S1} is a nontrivial closed walk of length six, and path {C1, S5, C6, S6, C1} is a cycle of length four.

The neighborhood of depth $d$ of a directed edge $(u, v)$, respectively of the undirected edge $\{u, v\}$, denoted by $N_d(u, v)$, respectively $N_d\{u, v\}$, is defined as the subgraph consisting of all the walks of length $d$ starting from $u$ that do not contain the edge $(u, v)$, respectively $\{u, v\}$ (note that in both definitions, the direction of edge $(u, v)$ is important in the sense that all the walks start from $u$). In this paper, we are interested in $N_d(u, v)$ and $N_d\{u, v\}$ as collections of undirected edges. It is clear that $N_d\{u, v\} \subseteq N_d(u, v)$, as $N_d\{u, v\}$ cannot contain edge $\{u, v\}$. The neighborhood of depth $d$ of a node $u$, denoted by $N_d(u)$, is defined as the subgraph consisting of all the walks of length $d$ starting from $u$. Again, in this paper, we are interested in this subgraph as a collection of undirected edges.

Example 2: In the graph shown in Fig. 1(A), the subgraph on the left side of edge {S1, C1} is the neighborhood of depth four of the directed edge (S1, C1) (in this example, $N_4\{S1, C1\}$ is the same as $N_4(S1, C1)$, and is a subset of $N_4(S1)$, which is, in fact, the whole graph). The subgraph on the right side of edge {S1, C1} is the neighborhood of depth two of the directed edge (C1, S1) (again here, $N_2\{C1, S1\} = N_2(C1, S1)$). It is also part of the neighborhood $N_2(C1)$, which also includes the edges {C1, S1}, {S1, C5}, and {S1, C2}. For edge (S1, C2), $N_4(S1, C2)$ is the whole graph, but $N_4\{S1, C2\}$ is the whole graph excluding edge {S1, C2}.
Girth of a node $u$, denoted by $g(u)$, is defined as the length of the shortest cycle that passes through $u$. Girth of a directed edge $(u, v)$, denoted by $g(u, v)$, is defined as the length of the shortest cycle that passes through $u$ and does not contain the edge $\{u, v\}$. Note that, since the TG is bipartite, each node or edge has an even girth. For a node $u$, we also use the notation $w(u)$ to denote the length of the shortest nontrivial closed walk initiated from node $u$. Similarly, for an edge $(u, v)$, we use the notation $w(u, v)$ to denote the length of the shortest nontrivial closed walk which is initiated from node $u$, and whose first edge is not $(u, v)$. Since a cycle is a special case of a nontrivial closed walk, we have $w(u) \le g(u)$. Similarly, $w(u, v) \le g(u, v)$.

Example 3: In Fig. 1(A), no cycle passes through symbol node S1. However, the shortest nontrivial closed walk initiated from symbol node S1 is (S1, C1, S5, C6, S6, C1, S1), which has length six, and thus $w(S1) = 6$. For the edge (S1, C1), $w(S1, C1)$ is obtained in the same way, but only over the closed walks whose first edge is not (S1, C1).

C. When Does the Algorithm Become Suboptimal?

In this subsection, we find the answer to the question: "At which iteration in the flooding schedule is the optimality of the outgoing message(s) of an edge or a node violated for the first time?" For a node $u$, the optimality is violated when: 1) there are at least two incoming messages that are statistically dependent, or 2) there is an incoming message which statistically depends on the initial message of $u$ (in the case that $u$ is a symbol node). Case 1) happens when the contribution of the "same" message arrives at node $u$ through two edges connected to $u$ and in the same cycle. For a given cycle, this happens at node $u$ for the first time throughout the iteration process when a message passed by the farthest node in the cycle from $u$ reaches $u$ through the two semicycles of the cycle. The following lemma gives the corresponding iteration numbers for symbol and check nodes. (A detailed proof can be found in Appendix A.)

Lemma 1: Assuming independent initial weights at symbol nodes, a symbol (check) node in a cycle of length $l$ receives dependent messages via the cycle for the first time at iteration $\lceil l/4 \rceil$ ($\lceil (l-2)/4 \rceil$), where $\lceil x \rceil$ is the smallest integer which is greater than or equal to $x$.

The following proposition then follows from Lemma 1.

Proposition 1: In a given TG, for a symbol (check) node $u$ with girth $g(u)$, the optimality of the algorithm is violated at $u$, due to the dependencies created among $u$'s incoming messages, for the first time at iteration $\lceil g(u)/4 \rceil$ ($\lceil (g(u)-2)/4 \rceil$).
For a symbol node $u$, Case 2) happens when the local message of $u$ is passed back to $u$ through a nontrivial closed walk (which may also be a cycle at the same time) which is initiated at $u$.

Lemma 2: For a symbol node $u$, with a nontrivial closed walk $W$ of length $l$ initiated at $u$, the local message of $u$ will be passed back to $u$ through $W$ at iteration $l/2$.

Proposition 2: In a given TG, for a symbol node $u$ with parameter $w(u)$, the optimality of the algorithm is violated at node $u$, due to the dependencies between incoming messages to $u$ and the local message of $u$, for the first time at iteration $w(u)/2$.

From Propositions 1 and 2, we have the following theorem.

Theorem 1: In a given TG, for a node $u$, let $I(u)$ denote the iteration in which the optimality of the algorithm under the flooding schedule is violated for the first time at node $u$. Then, we have

$$
I(u) =
\begin{cases}
\min\{\lceil g(u)/4 \rceil,\; w(u)/2\}, & u \text{ is a symbol node}\\
\lceil (g(u)-2)/4 \rceil, & u \text{ is a check node}
\end{cases}
\tag{3.1}
$$
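The parameters appearing in (3.1) and in (3.2) below depend only on the TG and can therefore be computed offline, before decoding. As one ingredient, the sketch below computes the girth $g(u)$ of a node by suppressing, in turn, each edge incident to $u$ and finding the shortest way back to $u$ with a breadth-first search; the adjacency-list representation and the function name are our own assumptions, and the closed-walk parameter $w(u)$ is not covered here.

```python
from collections import deque

def node_girth(adj, u):
    """Length of the shortest cycle through node u (float('inf') if none).

    adj : dict mapping every node of the TG (any hashable labels, e.g.
          ('s', j) for symbol nodes and ('c', i) for check nodes) to the
          list of its neighbors.
    For each neighbor a of u, the edge {u, a} is suppressed and the shortest
    path from a back to u is found by BFS; closing that path with {u, a}
    gives a cycle through u, and the minimum over all neighbors is g(u).
    """
    best = float("inf")
    for a in adj[u]:
        dist = {a: 0}
        queue = deque([a])
        while queue:
            x = queue.popleft()
            if x == u:
                break                      # shortest way back to u found
            for y in adj[x]:
                if {x, y} == {u, a}:       # do not traverse the suppressed edge
                    continue
                if y not in dist:
                    dist[y] = dist[x] + 1
                    queue.append(y)
        if u in dist:
            best = min(best, dist[u] + 1)
    return best

# Girths of all nodes, e.g., for the girth-proportional probabilistic schedule:
# girths = {u: node_girth(adj, u) for u in adj}
```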
For an edge $(u, v)$, using an analysis similar to the one used for nodes, we have the following theorem.

Theorem 2: In a given TG, for an edge $(u, v)$, let $I(u, v)$ denote the iteration in which the optimality of the algorithm under the flooding schedule is violated for the first time at edge $(u, v)$. Then, we have

$$
I(u, v) =
\begin{cases}
\min\{\lceil g(u, v)/4 \rceil,\; w(u, v)/2\}, & u \text{ is a symbol node}\\
\lceil (g(u, v)-2)/4 \rceil, & u \text{ is a check node}
\end{cases}
\tag{3.2}
$$
As the following example shows, for an edge $(u, v)$, $I(u, v)$ is generally not equal to $I(u)$ or $I(v)$.

Example 4: In Fig. 1(B), the value of $I$ for the edge (S1, C1) differs from that for the edge (C1, S1), and both differ from $I(S1)$ and $I(C1)$.

III. MESSAGE-PASSING SCHEDULES

In Theorems 1 and 2, we determined the first iteration in which the outgoing message of a node or an edge violates the optimality of an iterative algorithm. In this section, using these results, we present schedules that are designed to preserve the optimality of iterative algorithms for as many iterations as possible. The difference among the schedules is whether this is done from the point of view of nodes or of edges, and whether the schedule is implemented deterministically or randomly. The schedules are designed such that the edge or the node whose optimality is preserved longer has more opportunities to update its outgoing message(s). As we will see later, a proper choice of schedule depends on the TG structure, the decoding algorithm, the channel characteristics, and also the desirable tradeoff between performance and complexity.

A. Deterministic Schedules

In deterministic schedules, the order of message passing throughout the iteration process is predetermined. Each node or edge updates its outgoing message(s) until the value of an associated counter is decreased to zero. The detailed description follows.
1) Symbol (Check)-Node-Based Schedules: To each symbol (check) node $u$ with parameters $g(u)$ and $w(u)$, we assign a counter with the initial value equal to $I(u)$ of (3.1). Then, throughout the algorithm, the counter is decreased by one after each iteration. When the counter reaches zero, node $u$ stops updating its outgoing messages until the counters for all the symbol (check) nodes reach zero. The counter is then reset to its initial value, and the process is repeated until the algorithm converges to a codeword or a maximum number of iterations is reached.

2) Edge-Based Schedules: In a given direction on the TG (from symbol nodes to check nodes, or vice versa), for each edge $(u, v)$ with parameters $g(u, v)$ and $w(u, v)$, we assign a counter with the initial value $I(u, v)$, which is given by (3.2) for the two cases of $u$ being a symbol node or a check node, respectively. The counter is decreased by one after each iteration. When the counter reaches zero, the edge stops updating its outgoing message until the counters for all the edges in the same direction are zero. Then the counter is reset to the initial value and the process is repeated until a codeword or a maximum number of iterations is reached.

B. Probabilistic Schedules

1) Node-Based Schedules: To each symbol node $u$, we assign a probability $g(u)/g_{\max}$ (or $w(u)/w_{\max}$, or $I(u)/I_{\max}$), where $g_{\max}$, $w_{\max}$, and $I_{\max}$ are the maximum girth, the maximum length of the shortest nontrivial closed walks, and the maximum value of $I(u)$ of (3.1), respectively, all taken over the symbol nodes of the graph.3 The algorithm initiates by all symbol nodes passing messages to the check nodes, and subsequently, all check nodes passing messages back to the symbol nodes. Beyond the initialization, at each iteration, symbol node $u$ updates its outgoing messages randomly with the assigned probability, independent of the other symbol nodes. The check nodes operate as usual (flooding). The algorithm continues until a codeword or a maximum number of iterations is reached. Note that this is the same schedule as the one presented in [9]. To implement the schedule at the check nodes, the process is similar, except that the assigned probability is $g(u)/g_{\max}$ (or $I(u)/I_{\max}$), where $g_{\max}$ and $I_{\max}$ are now the maximum girth and the maximum value of $I(u)$ among all the check nodes in the TG, respectively.

2) Edge-Based Schedules: Edge-based probabilistic schedules are similar to the corresponding node-based probabilistic schedules, except that we assign probabilities to edges rather than to nodes. For each edge $(u, v)$ in the given direction, we update its message with probability proportional to $g(u, v)$, $w(u, v)$, or $I(u, v)$ if $u$ is a symbol node, and proportional to $g(u, v)$ or $I(u, v)$ if $u$ is a check node, where each parameter is normalized by its maximum value over all the edges of the graph in the corresponding direction.
3 These different probabilities are, in general, very close, and thus do not result in much difference in error performance.
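As an illustration of how little machinery the schedules above require, the following Python sketch shows one possible realization of the node-based probabilistic schedule (girth-proportional variant) and of the node-based deterministic counter schedule; the class and function names and the data structures are our own assumptions. At every iteration, the decoder would let only the returned symbol nodes refresh their outgoing messages, while the other side of the graph follows flooding.

```python
import random

def probabilistic_update_set(girths, g_max, rng=random):
    """Node-based probabilistic schedule (girth-proportional variant):
    each symbol node u updates its outgoing messages with probability
    g(u)/g_max, independently of all other symbol nodes."""
    return {u for u, g in girths.items() if rng.random() < g / g_max}

class DeterministicNodeSchedule:
    """Node-based deterministic schedule: node u keeps updating while its
    counter, initialized to I(u) of (3.1), is positive; all counters are
    decremented once per iteration and reset when they have all reached zero."""

    def __init__(self, initial_counters):
        self.initial = dict(initial_counters)
        self.counters = dict(initial_counters)

    def update_set(self):
        return {u for u, c in self.counters.items() if c > 0}

    def end_of_iteration(self):
        for u in self.counters:
            if self.counters[u] > 0:
                self.counters[u] -= 1
        if all(c == 0 for c in self.counters.values()):
            self.counters = dict(self.initial)
```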
All the above schedules can be implemented either unidirectionally or bidirectionally. In unidirectional schedules, the schedule is applied only in one direction on the graph, while in the other direction, messages are passed based on the standard flooding schedule. In bidirectional schedules, the schedule is applied in both directions. It can be seen that, in general, edge-based schedules and probabilistic schedules are more complex to implement, compared with node-based schedules and deterministic schedules, respectively. Also, bidirectional schedules are roughly twice as complex as the corresponding unidirectional ones. In terms of performance, as we will see in the following section, in general, more complex schedules perform better, but the improvement in performance is a function of the TG, the decoding algorithm, and the channel model.

IV. SIMULATION RESULTS

To study the performance of different schedules, we present simulation results on four LDPC codes. All codes are described by their parity-check matrices. Codes I and II are regular. Code I is a (1200, 600) code constructed by computer random search. Code II is an (8000, 4000) code taken from [11]. For both codes, all the symbol nodes have degree three, all the check nodes have degree six, and there is no cycle of length four in the TG. Codes III and IV are irregular. Code III is a (1268, 456) code constructed by construction 2A of [2] in [8], and also used in [9]. Code IV is a (3072, 1024) code with degree distributions optimized for the AWGN channel, but still performing very well on fading channels (the code has the same degree distributions as "ir1" in [12], and its parity-check matrix was obtained from [13]).

In our simulations, we use binary phase-shift keying (BPSK) modulation. For channel models, we consider AWGN and uncorrelated flat Rayleigh fading channels with and without side information. We use BP for the decoding of LDPC codes over both channels (for the application of BP to AWGN and Rayleigh fading channels, the reader can refer to [4] and [12], respectively). We also apply min-sum and GA to LDPC codes over an AWGN channel and a BSC,4 respectively. For all the simulations, the maximum number of iterations is set to 500, and for each value of $E_b/N_0$, up to a fixed maximum number of codewords is simulated. To have a fair comparison, the same received vectors are passed through the different decoders.

To demonstrate the improvement that edge-based schedules can provide over the corresponding node-based schedules, we have given in Fig. 2 the BER and frame-error rate (FER) for Code I over the BSC, decoded by the GA algorithm, when both schedules are applied probabilistically at the symbol-node side of the TG. The curves for the flooding schedule are also given as a reference. As can be seen, while the edge-based schedule provides a considerable improvement over flooding, especially at high $E_b/N_0$'s, the node-based schedule of [9] performs almost the same as flooding. The larger improvement for the edge-based schedule can, in part, be attributed to the fact that the edge girth distribution for this code has larger percentages of larger girths (for the edges, 28%, 70%, and 2% have girths 6, 8, and 10, respectively, while for the nodes, these percentages are, respectively, equal to 40%, 59%, and 1%). The statistics for the number of iterations for the three schedules are also given in Fig. 3. As can be seen, the improvement in performance for the edge-based schedule comes at a cost in the average number of iterations.
4 To create a BSC, we use an AWGN channel followed by a 1-bit quantizer.
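For reference, with BPSK signaling and a code of rate $R$, this hard-quantized AWGN channel is equivalent to a BSC with the standard crossover probability

$$
p = Q\!\left(\sqrt{2 R E_b / N_0}\right), \qquad Q(x) = \frac{1}{\sqrt{2\pi}} \int_x^{\infty} e^{-t^2/2}\, dt .
$$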
Fig. 2. BER (——) and FER (- - -) of Code I decoded by the GA algorithm with different schedules on BSC (AWGN channel plus a 1-bit quantizer).
Fig. 3. Statistics of the number of iterations required for the convergence of GA algorithm with different schedules [flooding (white), node-based probabilistic (gray), edge-based probabilistic (black)] for Code I on BSC. (A) Mean. (B) Standard deviation (Std).
Also, the implementation complexity of the edge-based schedule is larger than that of the node-based schedule. It is worth mentioning that for this code, the improvement of the edge-based schedule over the node-based schedule is negligible when used with the BP and MS algorithms over an AWGN channel, and therefore, for these algorithms, using a node-based schedule appears to be a better choice. Also, the application of schedules bidirectionally does not provide much improvement for this code, and thus, is not justified.

As an example where bidirectional scheduling provides nonnegligible improvement over unidirectional ones, we have given in Fig. 4 the BER and FER curves for unidirectional and bidirectional edge-based probabilistic schedules applied to Code II. The decoding algorithm is BP over an AWGN channel. As can be seen, the performance is greatly improved at high $E_b/N_0$'s. In particular, the early error floor that seems to be appearing for flooding has disappeared for both schedules, with the bidirectional schedule improving the BER by more than two orders of magnitude, compared with flooding.
Fig. 4. BER (——) and FER (- – -) of Code II decoded by BP with different schedules over an AWGN channel.
The larger improvement by the bidirectional schedule over the unidirectional one is to be expected, as the former applies a higher degree of control over message passing than the latter does. This is, however, at the expense of more complex scheduling and a larger average number of iterations (at a fixed $E_b/N_0$, the average numbers of iterations for the unidirectional and bidirectional schedules are 22 and 33, respectively). It is worth noting that for Code II, the bidirectional node-based schedule provides only a negligible performance improvement over the corresponding unidirectional schedule. This can be partly justified by noticing that the average girth of the check nodes for this code is 8.3, which is much smaller than the average girth of the edges from the check side (9.2). The corresponding numbers for the symbol side of the graph are 8.9 (nodes) and 9.1 (edges), respectively. So, while the inclusion of control from the check side based on edge girths helps, since these girths are, on average, at least as large as those from the symbol side, the situation is different for nodes, as the node girths from the check side are, on average, considerably smaller than those from the symbol side. We have also observed that, for this code, unidirectional and bidirectional edge-based probabilistic schedules perform almost the same when applied to the MS and GA algorithms.

Fig. 5 shows the comparison between deterministic and probabilistic schedules for the BP algorithm applied to Code III over an AWGN channel. The performance of the flooding schedule is also given as a reference. Both schedules are applied at the symbol nodes, and as can be seen in Fig. 5, they both outperform the flooding schedule. Although the probabilistic schedule is superior to the deterministic schedule,5 the performance difference between them is small. The deterministic schedule does not require a random number generator at each symbol node and is much simpler to implement.

5 One should note that there are other ways of implementing the deterministic schedule. One approach, for example, could be to postpone the message updating of the nodes with smaller girths for the first few iterations until more reliable information is available at their inputs. This would have the benefit of having more reliable information at the outputs of these nodes, but the disadvantage is that these nodes do not contribute as much to the decoding process in the first few iterations. The probabilistic schedule provides a good balance between these two contradictory goals in a random fashion, and thus performs better than the deterministic schedule.
Fig. 5. BER (——) and FER (- - -) of Code III decoded by BP with different schedules over an AWGN channel.
Fig. 6. Statistics of the number of iterations required for the convergence of BP with different schedules [flooding (white), symbol node-based deterministic (gray), symbol node-based probabilistic (black)] for Code III over an AWGN channel. (A) Mean. (B) Standard deviation (Std).
Note that, as reflected in Fig. 6, both schedules have more or less the same average and standard deviation for the number of iterations at different $E_b/N_0$'s. The same trends in the relative performance and complexity of deterministic and probabilistic schedules are also observed for Codes I and II.

Fig. 7 shows the performance curves for Code I decoded by the MS algorithm over an AWGN channel with a node-based probabilistic schedule. For reference, we have also given the curves of the flooding schedule for both the BP and MS algorithms. It can be seen that MS with the probabilistic schedule outperforms MS with the flooding schedule, especially at high $E_b/N_0$'s. The same trends are also observed for Codes II and III. Moreover, it is interesting to note that although BP outperforms MS at lower $E_b/N_0$'s, the trend changes at higher values of $E_b/N_0$ (note that MS is an approximation of BP in the log-likelihood-ratio domain at large values of $E_b/N_0$, and thus the two algorithms should perform very close, asymptotically). The statistics on the number of iterations for the three algorithms are given in Fig. 8. One can easily see the complexity cost associated with the improvement in performance in each case. Note that although the average number of iterations for MS is larger than that of BP, the complexity per iteration for MS is much smaller. Moreover, although the probabilistic schedule has a larger number of iterations on average, unlike flooding, not every node updates its outgoing messages at every iteration.
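The approximation referred to above can be made explicit: in the log-likelihood-ratio domain, the BP check-node update and its min-sum simplification are

$$
L_{c \to v} = 2 \tanh^{-1}\!\Big( \prod_{v' \in N(c)\setminus\{v\}} \tanh\!\big( L_{v' \to c}/2 \big) \Big)
\;\approx\;
\Big( \prod_{v' \in N(c)\setminus\{v\}} \operatorname{sgn} L_{v' \to c} \Big)\, \min_{v' \in N(c)\setminus\{v\}} \big| L_{v' \to c} \big| ,
$$

where $N(c)$ denotes the set of neighbors of check node $c$ (our notation); the approximation tightens as the reliabilities grow, which is why the two algorithms approach each other at high $E_b/N_0$.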
Fig. 7. BER (——) and FER (- - -) of Code I decoded by BP and MS with different schedules over an AWGN channel.
Fig. 9. BER (——) and FER (- - -) of Code IV decoded by BP with different schedules on a Rayleigh flat-fading channel with and without SI.
Fig. 8. Statistics of the number of iterations required for the convergence of BP and MS with different schedules [BP with flooding (white), MS with flooding (gray), MS with node-based probabilistic (black)] for Code I over an AWGN channel. (a) Mean. (b) Standard deviation (Std).
To demonstrate the advantage of using scheduling in decoding LDPC codes over fading channels, we apply a node-based probabilistic schedule to BP decoding of Code IV on an uncorrelated flat Rayleigh fading channel with and without side information (SI). Fig. 9 shows the BER and FER performance of the probabilistic and flooding schedules. As can be seen, the probabilistic schedule outperforms the flooding schedule, especially at high $E_b/N_0$'s. Regarding the complexity, the probabilistic schedule consistently has a larger average number of iterations (at each $E_b/N_0$, for both cases of with and without SI, the average number of iterations for the probabilistic schedule is about 50% larger than that of flooding). From the point of view of the average number of computations, however, this is offset by the fact that, for the probabilistic schedule, not every node updates its outgoing messages at every iteration.
V. CONCLUSION

We studied the performance and complexity of a number of TG-based schedules for the iterative decoding of LDPC codes. By investigating the performance/complexity tradeoff of these schedules for a few regular and irregular LDPC codes decoded by different iterative decoding algorithms and over BSC, AWGN, and Rayleigh fading channels, we showed that the tradeoff depends not only on the girth and closed-walk distributions of the TG, but also on the decoding algorithm and the channel model. Although, in quite a few cases, the prime choice seems to be the unidirectional node-based deterministic schedule, there also appear to be cases where probabilistic, edge-based, and/or bidirectional schedules provide significant improvement in performance.

We hope that these results motivate more research in this area. In particular, it would be interesting to better understand the relationships among the schedule, the TG structure, the decoding algorithm, and the channel model. Such understanding would explain why a particular schedule for a particular TG performs well with a given decoding algorithm over a certain channel, and not so well with another decoding algorithm and/or over a different channel. Related to this, finding simple procedures for the inspection of the TG to reliably predict the performance of different schedules along with different decoding algorithms and over different channels would be of interest.

APPENDIX A
PROOF OF LEMMA 1

We prove the lemma for symbol nodes based on induction. The proof for check nodes is similar. It is easy to see that the optimality at $u$ is first violated when the message from the farthest node in the cycle reaches $u$ through the two half-cycles. It is easy to verify that the lemma holds for $l = 4$ and $l = 6$. Assuming that the claim is true for cycle lengths $l$ and $l + 2$, we prove that it is also true for a cycle of length $l + 4$.
Fig. 10. Graphs for the proof of Lemma 1.
Suppose that node $v$ is the farthest node in the cycle from the symbol node $u$ under consideration. We have two cases: either $v$ is a symbol node (the left graph in Fig. 10), or it is a check node (the right graph in Fig. 10). In the first case, since iterations start at check nodes, the first iteration in which the optimality is violated is the same as the one for the case with a cycle of length $l + 2$ (the message from $v$ reaches C1 and C2 before the first iteration starts, and thus we can treat C1 and C2 as one node). Now, by assumption, this iteration is $\lceil (l+2)/4 \rceil$, where $\lceil (l+2)/4 \rceil = \lceil (l+4)/4 \rceil$, since $l + 4$ is a multiple of four in this case. So, the claim holds for the cycle of length $l + 4$. In the other case, after one iteration, the information passed by check node $v$ reaches check nodes C1 and C2. Then, treating C1 and C2 as one node, the situation is similar to the case with a cycle of length $l$, delayed by one iteration. By assumption, the corresponding iteration for a cycle of length $l$ is $\lceil l/4 \rceil$. So, we have $1 + \lceil l/4 \rceil = \lceil (l+4)/4 \rceil$,6 which completes the proof.

6 Here, we assume that check node $v$'s degree is at least three, which is the case for most practical codes. If $v$ has degree two, the optimality at $u$ is violated when the messages from each of $v$'s neighboring symbol nodes reach $u$. This happens at iteration $\lceil (l + 4)/4 \rceil$ for a cycle of length $l + 2$.
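As a worked check of the base case $l = 6$ under the iteration convention of Section I (symbol nodes pass their initial weights before iteration 1, and each iteration consists of a check-node step followed by a symbol-node step), consider a symbol node $u$ on the cycle $u - c_1 - v_1 - c_2 - v_2 - c_3 - u$:

$$
\begin{aligned}
\text{initialization:}&\quad v_1, v_2, u \ \text{send their local weights to}\ c_1, c_2, c_3;\\
\text{iteration 1 (check step):}&\quad c_2 \to v_1 \ \text{depends on}\ v_2, \qquad c_2 \to v_2 \ \text{depends on}\ v_1;\\
\text{iteration 1 (symbol step):}&\quad v_1 \to c_1 \ \text{and}\ v_2 \to c_3 \ \text{both depend on}\ \{v_1, v_2\};\\
\text{iteration 2 (check step):}&\quad c_1 \to u \ \text{and}\ c_3 \to u \ \text{both depend on}\ \{v_1, v_2\}.
\end{aligned}
$$

Thus the two messages arriving at $u$ become dependent at iteration 2, in agreement with Lemma 1 as stated in Section II-C.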
ACKNOWLEDGMENT

The authors wish to thank the Editor for handling this paper and the anonymous reviewers for their helpful comments.

REFERENCES

[1] R. G. Gallager, Low-Density Parity-Check Codes. Cambridge, MA: MIT Press, 1963.
[2] D. J. C. MacKay and R. M. Neal, "Near Shannon limit performance of low density parity check codes," Electron. Lett., vol. 33, no. 6, pp. 457–458, Mar. 1997.
[3] D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Trans. Inform. Theory, vol. 45, pp. 399–431, Mar. 1999.
[4] T. Richardson and R. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inform. Theory, vol. 47, pp. 599–618, Feb. 2001.
[5] T. Richardson, M. A. Shokrollahi, and R. Urbanke, "Design of capacity-approaching irregular low-density parity-check codes," IEEE Trans. Inform. Theory, vol. 47, pp. 619–637, Feb. 2001.
[6] R. M. Tanner, "A recursive approach to low complexity codes," IEEE Trans. Inform. Theory, vol. IT-27, pp. 533–547, Sept. 1981.
[7] N. Wiberg, "Codes and decoding on general graphs," Ph.D. dissertation, Dept. Elec. Eng., Linköping Univ., Linköping, Sweden, 1996.
[8] Y. Mao and A. H. Banihashemi, "A heuristic search for good low-density parity-check codes at short block lengths," in Proc. IEEE Int. Conf. Communications, vol. 1, 2001, pp. 41–44.
[9] Y. Mao and A. H. Banihashemi, "Decoding low-density parity-check codes with probabilistic schedule," IEEE Commun. Lett., vol. 5, pp. 414–416, Oct. 2001.
[10] H. Xiao, "Message-passing schedules for decoding low-density parity-check codes," Master's thesis, Carleton Univ., Ottawa, ON, Canada, 2002.
[11] D. J. C. MacKay, Encyclopedia of Sparse Graph Codes [Online]. Available: http://www.inference.phy.cam.ac.uk/mackay/codes/data.html#s14
[12] J. Hou, P. H. Siegel, and L. B. Milstein, "Performance analysis and code optimization of low-density parity-check codes on Rayleigh fading channels," IEEE J. Select. Areas Commun., vol. 19, pp. 924–934, May 2001.
[13] J. Hou, private communication, May 2002.
Hua Xiao received the B.Sc. degree in information science from Nankai University, Tianjin, China, in 1999, the M.Sc. degree in information and systems science from Carleton University, Ottawa, ON, Canada, in 2002, and is currently working toward the Ph.D. degree in electrical engineering at Carleton University. His research interests include information theory and coding theory, especially, efficient graph representations and low-complexity decoding algorithms for codes. Mr. Xiao was the recipient of the 2002–2003 and 2004–2005 Ontario Graduate Scholarship in science and technology. He was also the recipient of the 2003–2004 Ontario Graduate Scholarship.
Amir H. Banihashemi (S'90–A'98–M'03) was born in Isfahan, Iran. He received the B.A.Sc. degree in electrical engineering from Isfahan University of Technology (IUT), Isfahan, Iran, in 1988, and the M.A.Sc. degree in communication engineering from Tehran University, Tehran, Iran, in 1991, with the highest academic rank in both classes. He received the Ph.D. degree from the Department of Electrical and Computer Engineering, University of Waterloo, Waterloo, ON, Canada, in 1997. From 1991 to 1994, he was with the Electrical Engineering Research Center and the Department of Electrical and Computer Engineering, IUT. In 1997, he joined the Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, Canada, where he was a Natural Sciences and Engineering Research Council of Canada (NSERC) Postdoctoral Fellow. He joined the Faculty of Engineering at Carleton University, Ottawa, ON, Canada, in 1998, where he is currently an Associate Professor in the Department of Systems and Computer Engineering. His research interests are in the general area of digital and wireless communications and include coding, information theory, and the theory and implementation of iterative coding schemes. Dr. Banihashemi has served as an Editor for the IEEE TRANSACTIONS ON COMMUNICATIONS since May 2003. He is also a member of the Board of Directors of the Canadian Society of Information Theory, and a member of the Advisory Committee for the Broadband Communications and Wireless Systems (BCWS) Centre at Carleton University. In 1995 and 1996, he was awarded Ontario Graduate Scholarships for international students.