Bit Flipping – Sum Product Algorithm for Regular LDPC Codes

R. El Alami and C. B. Gueye
M. Boussetta and M. Zouak
LESSI, Faculté des Sciences Dhar Mehraz, Fez, Morocco
[email protected]
SSC, Faculté des Sciences et Techniques, Fez, Morocco
M. Mrabti, Ecole Nationale des Sciences Appliquées, Fez, Morocco

Abstract—In this paper we present a Low-Density Parity-Check (LDPC) decoding algorithm that combines two different algorithms, Sum-Product and Bit-Flipping, which we denote the Bit Flipping – Sum Product (BFSP) algorithm. To reduce the bit error rate, we perform the Bit-Flipping algorithm after the Sum-Product algorithm. Simulation results over an additive white Gaussian noise channel show that the error performance of LDPC codes with Bit Flipping – Sum Product decoding is within 0.2 dB of the standard Sum-Product decoding algorithm. Furthermore, the decoding complexity of the proposed BFSP algorithm is maintained at the same level as that of the standard Sum-Product algorithm.

Keywords—low-density parity-check codes; bit-flipping; sum-product; bit error rate
I. INTRODUCTION

Low-density parity-check (LDPC) codes are a class of linear block codes first introduced by Gallager in 1962 [1]. Recently, LDPC codes have received a lot of attention because their error performance is very close to the Shannon limit when they are decoded using iterative methods [2]. They have emerged as a viable option for forward error correction (FEC) systems and have been adopted by many advanced standards, such as 10 Gigabit Ethernet (10GBASE-T) [3] and digital video broadcasting (DVB-S2) [4]. The next generations of WiFi and WiMAX are also considering LDPC codes as part of their error correction systems. Defined as the null space of a very sparse M×N parity check matrix H, an LDPC code is typically represented by a bipartite graph, called a Tanner graph, in which one set of N variable nodes corresponds to the code word bits, another set of M check nodes corresponds to the parity check constraints, and each edge corresponds to a non-zero entry in the parity check matrix H. An LDPC code is known as a (j,k)-regular LDPC code if each column and each row of its parity check matrix contain j and k non-zero entries, respectively. The construction of LDPC codes is typically random. As illustrated in Fig. 1, an LDPC code is decoded by the iterative belief-propagation (BP) algorithm [5] that directly matches its Tanner graph [6].

Figure 1. Tanner graph representation of an LDPC code and the decoding message flow.

The paper is organized as follows: Section 2 reviews standard iterative decoding of LDPC codes. In Section 3, the Bit Flipping – Sum Product decoder is explained. Error performance comparisons for different codes with the proposed method are shown in Section 4, and Section 5 concludes the paper.
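As a concrete illustration of this correspondence, the sketch below enumerates the Tanner-graph edges of a small parity check matrix. The matrix is an assumed (j = 2, k = 3)-regular example we introduce for illustration, not necessarily the matrix behind the paper's figures.

```python
import numpy as np

# Illustrative parity check matrix of a (j=2, k=3)-regular LDPC code with
# N = 9 variable nodes and M = 6 check nodes (an assumed example).
H = np.array([[1, 1, 1, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 1, 1, 1, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 1, 1, 1],
              [1, 0, 0, 1, 0, 0, 1, 0, 0],
              [0, 1, 0, 0, 1, 0, 0, 1, 0],
              [0, 0, 1, 0, 0, 1, 0, 0, 1]])

# Each non-zero entry H[i, j] is one Tanner-graph edge:
# check node C_i <-> variable node V_j.
edges = [(i, j) for i in range(H.shape[0]) for j in range(H.shape[1]) if H[i, j]]
print(len(edges))  # → 18 edges: M*k = 6*3 = N*j = 9*2
```

Regularity is easy to verify from the edge list: every row of H has weight 3 and every column weight 2.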
978-1-4244-5997-1/10/$26.00 ©2010 IEEE
II. STANDARD ITERATIVE DECODING OF LDPC CODES

LDPC codes are defined by an M×N binary matrix called the parity check matrix H. The number of columns, N, defines the code length; the number of rows, M, defines the number of parity check equations of the code. The column weight Wc is the number of ones per column, and the row weight Wr is the number of ones per row. LDPC codes can also be described by a bipartite graph, or Tanner graph [6]. The parity check matrix and corresponding Tanner graph of an LDPC code with code length N = 9 bits are shown in Fig. 2. Each check node Ci, corresponding to row i of H, is connected to each variable node Vj for which column j of row i is non-zero.

Figure 2. Parity check matrix and Tanner graph representation of a (Wc = 2, Wr = 3) LDPC code with code length N = 9 bits. Check node Ci represents the parity check constraint in row i and variable node Vj represents bit j of the code word.

LDPC codes can be iteratively decoded in different ways depending on the complexity and error performance requirements. The Sum-Product (SP) algorithm [5] is a near-optimum decoding algorithm which is widely used in LDPC decoders and is known as the standard decoder. It is an iterative message passing algorithm which consists of two sequential operations: row processing, or check node update, and column processing, or variable node update. In row processing, all check nodes receive messages from neighboring variable nodes, perform parity check operations, and send the results back to the neighboring variable nodes. The variable nodes then update the soft information associated with the decoded bits using the information from the check nodes and send the updates back to the check nodes; this process continues iteratively. The two operations exchange two types of messages: check node messages α and variable node messages β.

Figure 3. Parity check matrix of a (Wc, Wr) LDPC code with code length N, highlighting the row processing operations of the standard decoding algorithm.

A. Sum-Product Decoding

We assume a binary code word (x1, x2, ..., xN) is transmitted using binary phase-shift keying (BPSK) modulation over an additive white Gaussian noise (AWGN) channel, and the received sequence is (y1, y2, ..., yN). We define V(i) = {j : Hij = 1} as the set of variable nodes which participate in check equation i, and C(j) = {i : Hij = 1} as the set of check nodes which participate in the update of variable node j. V(i)\j denotes all variable nodes in V(i) except node j, and C(j)\i denotes all check nodes in C(j) except node i. Moreover, we define the following variables, which are used throughout this paper:

λj is the log-likelihood ratio of the received symbol yj,

  λj = ln [ P(xj = 0 | yj) / P(xj = 1 | yj) ]        (1)

αij is the message from check node i to variable node j (the row processing output);

βij is the message from variable node j to check node i (the column processing output).

SPA decoding can be summarized in four steps:

1) Initialization: For each i and j, initialize βij to the log-likelihood ratio of the received symbol yj, which is λj. During each iteration, α and β messages are computed and exchanged between variable nodes and check nodes through the graph edges according to steps 2–4.

2) Row processing or check node update: Compute the αij messages using the β messages from all other variable nodes connected to check node Ci, excluding the β information from Vj:

  αij = [ ∏_{j'∈V(i)\j} sign(βij') ] × φ( Σ_{j'∈V(i)\j} φ(|βij'|) )        (2)

where the non-linear function is

  φ(x) = −log tanh(x/2)        (3)

The first term in Eq. 2 is the parity (sign) update and the second term is the reliability (magnitude) update.

3) Column processing or variable node update: Compute the βij messages using the channel information λj and the incoming α messages from all other check nodes connected to variable node Vj, excluding check node Ci:

  βij = λj + Σ_{i'∈C(j)\i} αi'j        (4)

4) Syndrome check and early termination: When column processing is finished, every bit j is updated by adding the channel information λj and the α messages from all neighboring check nodes:

  zj = λj + Σ_{i'∈C(j)} αi'j        (5)

From the updated vector z, an estimated code vector X = (x1, x2, ..., xN) is obtained by the hard decision

  xi = 1 if zi ≤ 0,  xi = 0 if zi > 0        (6)
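The four SPA steps above can be sketched in a few lines of code. This is a minimal illustration of eqs. (1)–(6) under our own naming (`phi`, `sum_product_decode`), not the authors' implementation, and the demo matrix is the same assumed (2, 3)-regular example used for illustration.

```python
import numpy as np

def phi(x):
    # phi(x) = -log(tanh(x/2)), eq. (3); clip to avoid overflow near 0 and inf
    x = np.clip(x, 1e-12, 50.0)
    return -np.log(np.tanh(x / 2.0))

def sum_product_decode(H, llr, max_iter=10):
    """Standard SPA of eqs. (1)-(6); llr[j] = ln P(x_j=0|y_j)/P(x_j=1|y_j)."""
    M, N = H.shape
    V = [np.flatnonzero(H[i]) for i in range(M)]     # V(i): vars in check i
    C = [np.flatnonzero(H[:, j]) for j in range(N)]  # C(j): checks on var j
    beta = H * llr                  # step 1: beta_ij = lambda_j on every edge
    alpha = np.zeros((M, N))
    x = (llr <= 0).astype(int)
    for _ in range(max_iter):
        for i in range(M):          # step 2: row processing, eq. (2)
            for j in V[i]:
                o = V[i][V[i] != j]
                alpha[i, j] = (np.prod(np.sign(beta[i, o]))
                               * phi(np.sum(phi(np.abs(beta[i, o])))))
        for j in range(N):          # step 3: column processing, eq. (4)
            for i in C[j]:
                o = C[j][C[j] != i]
                beta[i, j] = llr[j] + np.sum(alpha[o, j])
        z = llr + alpha.sum(axis=0)    # step 4: a-posteriori sum, eq. (5)
        x = (z <= 0).astype(int)       # hard decision, eq. (6)
        if not np.any(H @ x % 2):      # syndrome check: early termination
            break
    return x

# Demo: a weak, wrongly-signed LLR on bit 0 is corrected.
H = np.array([[1, 1, 1, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 1, 1, 1, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 1, 1, 1],
              [1, 0, 0, 1, 0, 0, 1, 0, 0],
              [0, 1, 0, 0, 1, 0, 0, 1, 0],
              [0, 0, 1, 0, 0, 1, 0, 0, 1]])
llr = np.array([-1.0, 2, 2, 2, 2, 2, 2, 2, 2])
print(sum_product_decode(H, llr))  # → [0 0 0 0 0 0 0 0 0]
```

The nested loops mirror the row-then-column schedule of the standard decoder; practical decoders vectorize or reorder these updates, but the message flow is the same.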
If H·X^T = 0, then X is a valid code word, the iterative process has converged, and decoding stops. Otherwise the decoding repeats from step 2 until a valid code word is obtained or the number of iterations reaches a maximum, Imax, which terminates the decoding process.

III. BIT FLIPPING – SUM PRODUCT DECODING ALGORITHM

The proposed algorithm combines two different algorithms: the first is the Sum-Product and the second is Bit-Flipping. The decoding process is as follows:

1. The first step of the proposed BFSP algorithm finds the syndrome vector s by multiplying the tentatively decoded bit sequence X with the transpose of the parity check matrix, i.e. s = X·H^T. If the syndrome vector s is the all-zero vector, the decoder declares successful decoding and the iterations are terminated. If not, go on to the second step.

2. In the second step, for each check node whose syndrome bit equals zero, we find the variable nodes connected to this check node in the parity check matrix H; we denote these variable nodes valid columns, as shown in Fig. 4.

3. The third step puts all valid variable nodes in a vector, which we denote the valid column array.

4. The fourth step flips each bit of the estimated code word that does not belong to the valid column array.

Figure 4. Proposed decoding algorithm steps for a (2, 3) LDPC code with N = 9 bits, showing the valid check nodes and the valid variable nodes corresponding to check node C1.

IV. ERROR PERFORMANCE SIMULATION RESULTS

In this section, the error performance of two regular LDPC codes under the proposed algorithm is presented. The simulations are performed over an additive white Gaussian noise (AWGN) channel with BPSK modulation. The maximum number of iterations is set to Imax = 10.

Figure 5. Simulation results (BER versus Eb/No in dB) for a regular (3, 6) LDPC code with N = 720 bits, comparing SPA and BFSP at 10 iterations.

Figure 6. Simulation results (BER versus Eb/No in dB) for a regular (3, 6) LDPC code with N = 600 bits, comparing SPA and BFSP at 8 iterations.
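Steps 1–4 of the bit-flipping post-processing described in Section III can be sketched as follows. The function name `bfsp_postprocess` and the demo matrix are illustrative choices of ours, not taken from the paper.

```python
import numpy as np

def bfsp_postprocess(H, x):
    # Step 1: syndrome s = x . H^T (mod 2); all-zero means a valid code word.
    s = H @ x % 2
    if not s.any():
        return x
    # Step 2: collect variable nodes attached to satisfied check nodes
    # ("valid columns"); step 3: gather them into the valid column array.
    valid = set()
    for i in np.flatnonzero(s == 0):
        valid.update(np.flatnonzero(H[i]).tolist())
    # Step 4: flip every bit outside the valid column array.
    x = x.copy()
    for j in range(H.shape[1]):
        if j not in valid:
            x[j] ^= 1
    return x

# Demo: a single bit error on an illustrative (2, 3) code with N = 9 bits.
H = np.array([[1, 1, 1, 0, 0, 0, 0, 0, 0],
              [0, 0, 0, 1, 1, 1, 0, 0, 0],
              [0, 0, 0, 0, 0, 0, 1, 1, 1],
              [1, 0, 0, 1, 0, 0, 1, 0, 0],
              [0, 1, 0, 0, 1, 0, 0, 1, 0],
              [0, 0, 1, 0, 0, 1, 0, 0, 1]])
x = np.zeros(9, dtype=int)
x[0] = 1
print(bfsp_postprocess(H, x))  # → [0 0 0 0 0 0 0 0 0]
```

In the demo, bit 0 is the only variable node attached exclusively to unsatisfied checks, so it is the only bit outside the valid column array and the only one flipped.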
The following labels are used in the figures: "BFSP" for the proposed Bit Flipping – Sum Product algorithm and "SPA" for the standard Sum Product Algorithm. Figure 5 depicts the error performance of a (3, 6) regular LDPC code with N = 720 bits. As shown in the figure, at a bit error rate (BER) of 2×10⁻⁵, the performance gap between the standard Sum Product and the Bit Flipping – Sum Product at high SNR values is 0.2 dB. At low SNR the noise is very strong, so at the bit-flipping stage the proposed algorithm flips both correct and incorrect bits. Thus the performance of the proposed algorithm is similar to the classical one for SNR from 0 dB to 1 dB, and poorer for SNR from 1 dB to 1.7 dB. Figure 6 shows the error performance of a (3, 6) LDPC code with N = 600 bits. At BER = 9×10⁻⁴, the performance gap between the standard Sum Product and the Bit Flipping – Sum Product is again 0.2 dB. In this example too, for low SNR values from 0 dB to 0.5 dB, our algorithm flips both correct and incorrect bits: in step 2 of the algorithm (Section 3), a satisfied check node can be connected to 2, 4 or more (an even number of) incorrect variable nodes, in which case the proposed algorithm will not flip these bits.
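The simulation setup used here (all-zero code word, BPSK over AWGN at a given Eb/No) can be sketched as follows. A code rate of 1/2 is assumed for the (3, 6) codes, and the decoder call itself is omitted; only the channel and the LLRs of eq. (1) are shown.

```python
import numpy as np

# Sketch of the simulation setup: all-zero code word, BPSK (0 -> +1),
# AWGN at a given Eb/No, and the channel LLRs fed to the decoder.
rng = np.random.default_rng(0)
N, rate, ebno_db = 720, 0.5, 2.0          # rate 1/2 assumed for a (3,6) code
sigma = np.sqrt(1.0 / (2 * rate * 10 ** (ebno_db / 10)))
tx = np.ones(N)                           # BPSK symbols for the all-zero word
y = tx + sigma * rng.normal(size=N)       # AWGN channel
llr = 2 * y / sigma**2                    # channel LLR, lambda_j of eq. (1)
hard = (llr <= 0).astype(int)             # uncoded hard decisions
print("raw BER:", hard.mean())            # roughly 10% uncoded at this Eb/No
```

A full BER curve would loop this block over many code words and Eb/No points, decode each received block, and count residual bit errors.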
V. CONCLUSION

In this contribution, we proposed a novel decoding algorithm for regular LDPC codes, the Bit Flipping – Sum Product decoder. Simulation results show that the proposed algorithm outperforms the classical one by 0.2 dB while maintaining the same level of complexity. Our simulation results also show that at low SNR values the error performance of the proposed decoder is only similar to the standard Sum-Product, so the algorithm still has to be improved in the low-SNR regime.

ACKNOWLEDGMENT

The authors thank M. Mrabti and M. Zouak for helping us achieve this work and for very helpful discussions, and gratefully acknowledge support from Maroc Telecom.
REFERENCES

[1] R. G. Gallager, "Low-density parity check codes," IRE Transactions on Information Theory, vol. IT-8, pp. 21–28, Jan. 1962.
[2] "IEEE P802.3an, 10GBASE-T task force," http://www.ieee802.org/3/an.
[3] "T.T.S.I. digital video broadcasting (DVB) second generation framing structure for broadband satellite applications," http://www.dvb.org.
[4] "IEEE 802.16e: Air interface for fixed and mobile broadband wireless access systems," IEEE P802.16e/D12 draft, Oct. 2005.
[5] D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Transactions on Information Theory, vol. 45, pp. 399–431, Mar. 1999.
[6] R. M. Tanner, "A recursive approach to low complexity codes," IEEE Transactions on Information Theory, vol. 27, Sep. 1981.