This full text paper was peer reviewed at the direction of IEEE Communications Society subject matter experts for publication in the ICC 2007 proceedings.
A Modified Bit-Flipping Decoding Algorithm for Low-Density Parity-Check Codes

T.M.N. Ngatched, F. Takawira
School of Electrical, Electronic and Computer Engineering, University of KwaZulu-Natal, Durban 4041, South Africa
Email: [email protected], [email protected]

M. Bossert
Department of T.A.I.T, University of Ulm, Albert-Einstein-Allee 43, 89081 Ulm, Germany
Email: [email protected]
Abstract― In this paper, a modified bit-flipping decoding algorithm for low-density parity-check (LDPC) codes is proposed. Both improvement in performance and reduction in decoding delay are observed by flipping multiple bits in each iteration. Our studies show that the proposed algorithm achieves an appealing tradeoff between performance and complexity for many constructions of LDPC codes.
I. INTRODUCTION

Low-density parity-check (LDPC) codes, originally introduced by Gallager [1] and brought into prominence by MacKay and Neal [2], have been attracting a great deal of research interest. The interest in these codes stems from their near-Shannon-limit performance, their simple descriptions and implementations, and their amenability to rigorous theoretical analysis. LDPC codes can be decoded with various decoding methods, ranging from low to high complexity and from reasonably good to very good performance. These decoding methods include hard-decision ([1], [3]), soft-decision ([1], [4]-[8]), and hybrid ([9]-[16]) schemes. The algorithm in [15], which we call improved weighted bit flipping (IWBF), was shown to offer one of the most appealing performance versus cost tradeoffs. Although it was designed for high-rate finite-geometry (FG) LDPC codes [10], we noticed that it offers good performance for any LDPC code whose parity-check matrix has reasonably large column weights. FG-LDPC codes are algebraically constructed and allow effective, low-cost hardware encoder implementations. In general, an LDPC code whose parity-check matrix has relatively large column weights performs, with IWBF, less than 1 dB away from its performance with iterative decoding based on belief propagation (IDBP). IWBF exploits the fact that the codewords of an LDPC code are sparsely distributed in an $N$-dimensional space over $\mathrm{GF}(2)$, where $N$ is the code length. Thus, in almost all cases the decoding leads either to a correct codeword or to no codeword at all.

In this paper, a modification of IWBF is proposed. The modified algorithm, which we call multiple bit-flipping (MBF), updates multiple bits in each iteration. The modification is based on the observation that, for low-density parity-check matrices which satisfy the row-column (RC) constraint (the requirement that no two rows or two columns of the matrix have more than one component in common), the syndrome weight increases, on average, with the number of errors. The idea is therefore to use the syndrome weight in each iteration of the decoding process to estimate approximately the number of bits to be flipped. This modification is achieved with a small increase in complexity, but significantly speeds up the decoding process. Surprisingly, it also leads to a significant improvement in performance in terms of both the bit-error rate (BER) and the frame-error rate (FER).

The remainder of the paper is organized as follows. In the next section, notation, definitions, and the standard IWBF algorithm are reviewed. The modified algorithm is presented in Section III. Simulation results are provided in Section IV to illustrate the performance of the modified algorithm and, finally, conclusions are drawn in Section V.

II. STANDARD IMPROVED WEIGHTED BIT-FLIPPING ALGORITHM

A. Notation and Basic Definitions

A binary LDPC code is completely described by its sparse binary parity-check matrix $H$. For an $(N, K)$ LDPC code, $H$ has $N$ columns and $M \ge N - K$ rows, since it may contain some redundant parity-check sums. For a regular LDPC code, $H$ has a constant column weight $w_c$ and a constant row weight $w_r$. For each row $m$, $0 \le m < M$, define the following index set:
\[ \mathcal{N}(m) \triangleq \{ n : H_{mn} = 1 \}. \qquad (1) \]
Clearly, $\mathcal{N}(m)$ is the set of bits that participate in the $m$th parity check. Similarly, for each code bit, i.e., each column $n$, $0 \le n < N$, the set of parity checks in which the $n$th code bit participates can be defined as
\[ \mathcal{M}(n) \triangleq \{ m : H_{mn} = 1 \}. \qquad (2) \]
Suppose a binary $(N, K)$ LDPC code with length $N$ and dimension $K$ is used for error control over a binary-input additive white Gaussian noise (BIAWGN) channel with zero-mean noise of power spectral density $N_0/2$. Assume binary phase-shift-keying (BPSK) signaling with unit energy.
A codeword $c = (c_0, c_1, \ldots, c_{N-1}) \in \{\mathrm{GF}(2)\}^N$ is mapped into the bipolar sequence $x = (x_0, x_1, \ldots, x_{N-1})$ before its transmission, where $x_i = 2c_i - 1$ for $0 \le i < N$.
This work is partially supported by Alcatel and Telkom South Africa as part of the Centres of Excellence programme.
Let $y = (y_0, y_1, \ldots, y_{N-1})$ be the soft-decision received sequence at the output of the receiver matched filter. For $0 \le i < N$, $y_i = x_i + n_i$, where $n_i$ is a Gaussian random variable with zero mean and variance $N_0/2$. An initial binary hard decision of the received sequence, $z^{(0)} = ( z_0^{(0)}, z_1^{(0)}, \ldots, z_{N-1}^{(0)} )$, is determined as follows:
\[ z_i^{(0)} = \begin{cases} 1, & \text{if } y_i \ge 0, \\ 0, & \text{if } y_i < 0. \end{cases} \qquad (3) \]
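To make the notation concrete, the following minimal Python sketch (our illustration, using NumPy and a small toy parity-check matrix rather than an actual LDPC code; the variable names N_set and M_set are ours) builds the index sets of (1) and (2), forms the initial hard decision of (3), and computes the syndrome $s = z \cdot H^T$.

import numpy as np

# Toy parity-check matrix used only for illustration (not a real LDPC code).
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 0, 0, 1, 1],
              [0, 0, 1, 1, 0, 1]], dtype=int)
M, N = H.shape

# Index sets of (1) and (2).
N_set = [np.flatnonzero(H[m, :]) for m in range(M)]   # N(m): bits checked by row m
M_set = [np.flatnonzero(H[:, n]) for n in range(N)]   # M(n): checks containing bit n

# Example received soft values y_i = x_i + n_i.
y = np.array([0.8, -0.2, 1.1, -0.9, 0.3, -1.4])

# Initial hard decision of (3): z_i = 1 if y_i >= 0, else 0.
z0 = (y >= 0).astype(int)

# Syndrome s = z * H^T over GF(2); an all-zero syndrome indicates a valid codeword.
s = H.dot(z0) % 2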
For any tentative binary hard decision $z$ made at the end of each decoding iteration, the syndrome vector is computed as $s = z \cdot H^T$. The log-likelihood ratio (LLR) for each channel output $y_i$, $0 \le i < N$, is defined as
\[ L_i \triangleq \ln \frac{P(c_i = 1 \mid y_i)}{P(c_i = 0 \mid y_i)}. \qquad (4) \]
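For the BIAWGN/BPSK model described above, and assuming equiprobable code bits, (4) takes a particularly simple form (a standard computation, included here because it underlies the later remark that the LLRs may be replaced by the channel outputs):
\[ L_i = \ln \frac{P(c_i = 1 \mid y_i)}{P(c_i = 0 \mid y_i)} = \ln \frac{\exp\!\big( -(y_i - 1)^2 / N_0 \big)}{\exp\!\big( -(y_i + 1)^2 / N_0 \big)} = \frac{4 y_i}{N_0}, \]
so $L_i$ is simply a positive scaling of $y_i$, and $|L_i|$ of $|y_i|$.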
The absolute value of $L_i$, $|L_i|$, is called the reliability of the initial decision $z_i^{(0)}$: the larger the magnitude $|L_i|$ is, the larger the reliability of the hard-decision digit $z_i^{(0)}$. For each parity-check sum, i.e., each row $m$ in $H$, $0 \le m < M$, define the “lower check reliability” value $l_m$ and the “upper check reliability” value $u_m$ as follows:
\[ l_m \triangleq \min_{n \in \mathcal{N}(m)} |L_n|, \qquad u_m \triangleq \max_{n \in \mathcal{N}(m)} |L_n|. \qquad (5) \]
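The check reliabilities of (5) can be precomputed once per received word. A small self-contained Python helper (our naming; it uses $|y_n|$ in place of $|L_n|$, which, as noted at the end of Section II-B, leaves the decisions unchanged on the AWGN channel):

import numpy as np

def check_reliabilities(H, y):
    """Lower/upper check reliabilities of (5) for each row of H,
    using |y_n| as the reliability of bit n."""
    rel = np.abs(y)
    l = np.array([rel[np.flatnonzero(row)].min() for row in H])
    u = np.array([rel[np.flatnonzero(row)].max() for row in H])
    return l, u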
B. Algorithm

Vectors $z^{(0)}$ and $L = (L_0, L_1, \ldots, L_{N-1})$ are the inputs to the decoder. Suppose we are starting the $k$th iteration and that, at the end of the $(k-1)$th iteration, the tentative binary hard decision is $z^{(k-1)}$ with corresponding syndrome vector $s^{(k-1)} = z^{(k-1)} \cdot H^T \neq 0$. In the $k$th iteration, we want to flip one bit in $z^{(k-1)}$ and create a new tentative binary hard-decision vector $z^{(k)}$. To choose the bit to flip, we first define for each bit $n$, $0 \le n < N$, the cumulative metric over all the checks in $\mathcal{M}(n)$:
\[ \phi_n^{(k)} \triangleq \sum_{m \in \mathcal{M}(n)} \phi_{n,m}^{(k)}, \qquad 0 \le n < N, \qquad (6) \]
where, for each check $m \in \mathcal{M}(n)$,
\[ \phi_{n,m}^{(k)} \triangleq \begin{cases} |L_n| - l_m/2, & \text{if } s_m^{(k-1)} = 0, \\ |L_n| - \left( u_m + l_m/2 \right), & \text{if } s_m^{(k-1)} = 1. \end{cases} \qquad (7) \]
The bit to be flipped is the one that has the smallest cumulative metric. Let $j^{(k)}$ denote the bit position to be flipped at the $k$th iteration, where
\[ j^{(k)} = \arg \min_{0 \le n < N} \phi_n^{(k)}. \qquad (8) \]
After flipping the bit at position $j^{(k)}$ in $z^{(k-1)}$, we obtain a new tentative hard decision $z^{(k)}$ and update the syndrome vector to $s^{(k)}$. If $s^{(k)} = 0$, we have a valid codeword and the decoding stops. Otherwise, we start another decoding iteration. The bit to be flipped is thus chosen not only based on the number of unsatisfied check sums it is contained in, but also based on its reliability with respect to the reliabilities of the most and the least reliable bits that are contained with it in the same unsatisfied check sums. Since the metric $\phi_n^{(k)}$ maintains linearity with respect to $|L_n|$, for the AWGN channel $|L_n|$ can be replaced with $|y_n|$. Thus, no knowledge of the signal energy or noise power is required.
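The flip-position selection of (6)-(8) can be sketched as follows in Python (a minimal, unoptimized illustration; the function name and interface are ours, and $|y_n|$ stands in for $|L_n|$ as permitted above):

import numpy as np

def iwbf_flip_position(H, y, z):
    """One IWBF decision: return the syndrome of z and the bit position of (8).

    H : (M, N) binary parity-check matrix, y : received soft values,
    z : current binary hard-decision vector.
    """
    rel = np.abs(y)                                     # reliabilities |L_n| ~ |y_n|
    s = H.dot(z) % 2                                    # syndrome of the current decision
    l = np.array([rel[row == 1].min() for row in H])    # lower check reliabilities (5)
    u = np.array([rel[row == 1].max() for row in H])    # upper check reliabilities (5)

    N = H.shape[1]
    phi = np.zeros(N)
    for n in range(N):
        for m in np.flatnonzero(H[:, n]):               # checks in M(n)
            if s[m] == 0:
                phi[n] += rel[n] - l[m] / 2.0           # satisfied-check term of (7)
            else:
                phi[n] += rel[n] - (u[m] + l[m] / 2.0)  # unsatisfied-check term of (7)
    return s, int(np.argmin(phi))                       # cumulative metric (6), position (8)

For the multiple-bit variant of Section III, the same metric vector $\phi^{(k)}$ is reused, so an implementation would typically return it as well.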
III. MODIFIED IMPROVED WEIGHTED BIT-FLIPPING ALGORITHM

The above algorithm can be modified to reduce the decoding delay by flipping multiple bits in each iteration. The idea of the modification is based on the observation that, for low-density parity-check matrices which satisfy the RC constraint, the syndrome weight increases, on average, with the number of errors. Therefore, in each iteration, the syndrome weight gives an approximation of the number of bits to be flipped. In essence, we calculate $p = \lfloor w_H( s^{(k)} ) / w_c \rfloor$, where $w_H(a)$ denotes the Hamming weight of $a$, $w_c$ the column weight of the code, and $\lfloor x \rfloor$ the greatest integer less than or equal to $x$. We then find the set $D^{(k)} = \{ j_1^{(k)}, j_2^{(k)}, \ldots, j_p^{(k)} \}$ of the $p$ smallest values of $\phi_n^{(k)}$, $0 \le n < N$, and flip the bits in this set.
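Given the metric vector $\phi^{(k)}$ computed as in the IWBF sketch above, the multiple-bit flip of the MBF algorithm takes only a few lines of Python (our illustration; the max(1, ...) guard for the case $w_H(s^{(k)}) < w_c$ is our assumption and is not specified in the text):

import numpy as np

def mbf_flip(z, s, phi, wc):
    """Flip the p = floor(w_H(s)/w_c) bits of z with the smallest metric values.

    z : current binary hard-decision vector, s : its syndrome, phi : metrics of (6),
    wc : column weight of the code. Returns the new decision and the flipped set D.
    """
    p = max(1, int(s.sum()) // wc)      # p from the syndrome weight; max(1, .) is our guard
    D = np.argsort(phi)[:p]             # positions of the p smallest metrics
    z_new = z.copy()
    z_new[D] ^= 1                       # flip the selected bits
    return z_new, D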
Like the standard IWBF algorithm, the MBF algorithm described above is a search algorithm. If in the $k$th iteration the tentative binary hard-decision vector $z^{(k)}$ coincides with a previously considered vector $z^{(k_0)}$, $k_0 < k$, then $D^{(k+1)} = D^{(k_0+1)}$ and $z^{(k+1)} = z^{(k_0+1)}$. Since there is no valid codeword in $\{ z^{(k_0)}, z^{(k_0+1)}, \ldots, z^{(k-1)} \}$, we will not find a codeword if we continue the search. The decoding process is trapped in an infinite loop, and a decoding failure will be reported when the maximum allowable number of iterations is reached. To detect and avoid such infinite loops when they appear, a loop-detection mechanism was introduced and applied to the standard IWBF in [15]. This loop detection can be modified and applied to the proposed algorithm to improve the decoding performance without much increase in complexity. When a loop is detected, it can be avoided simply by decreasing the number of bits to be flipped.

Suppose that we are in the $k$th iteration, that the sets of bits flipped so far are $D^{(1)}, D^{(2)}, \ldots, D^{(k-1)}$, and that the set of bit positions selected for flipping at the $k$th iteration is $D^{(k)}$. For each position $j$, $0 \le j < N$, and for each $k'$, $1 \le k' < k$, we compute the number, $n_j^{(k')}$, of times it appears in the set $\{ D^{(k')}, D^{(k'+1)}, \ldots, D^{(k)} \}$, modulo 2. Next we compute the weight of the binary sum $z^{(k'-1)} + z^{(k)}$, $w^{(k')}$, as follows:
\[ w^{(k')} = \sum_{0 \le j < N} n_j^{(k')}. \qquad (9) \]
If $w^{(k')} = 0$ for any $k'$, a loop is indicated. For $0 \le j < N$, $n_j^{(k')}$ can be computed iteratively. Letting $n^{(k')} = ( n_0^{(k')}, n_1^{(k')}, \ldots, n_{N-1}^{(k')} )$, we have
\[ n_j^{(k')} = \begin{cases} n_j^{(k'+1)} + 1 \pmod{2}, & \text{if } j \in D^{(k')}, \\ n_j^{(k'+1)}, & \text{if } j \notin D^{(k')}, \end{cases} \qquad (10) \]
for $1 \le k' < k$, with initial conditions $n_j^{(k)} = 1$ for $j \in D^{(k)}$ and $n_j^{(k)} = 0$ for $j \notin D^{(k)}$.
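The recursion of (10), together with (9), amounts to XOR-accumulating the flip sets from the most recent iteration backwards and watching for a zero-weight result. A compact Python sketch of this loop check (our naming; flip_sets[i] is assumed to hold $D^{(i+1)}$ as a list of positions):

import numpy as np

def detect_loop(flip_sets, N):
    """Return a k' (1 <= k' < k) with w^(k') = 0 if a loop is detected, else None.

    flip_sets : [D^(1), ..., D^(k)], each a list of flipped bit positions.
    """
    n = np.zeros(N, dtype=int)                  # n^(k') accumulated by the recursion of (10)
    k = len(flip_sets)
    for kp in range(k, 0, -1):                  # kp = k, k-1, ..., 1
        for j in flip_sets[kp - 1]:             # n_j^(k') = n_j^(k'+1) + 1 (mod 2) for j in D^(k')
            n[j] ^= 1
        if kp < k and n.sum() == 0:             # w^(k') of (9) is zero: z^(k'-1) equals z^(k)
            return kp
    return None

When a loop is reported, the text's remedy is simply to decrease the number of bits flipped in the current iteration, i.e., to recompute $D^{(k)}$ with a smaller $p$.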
Let $p^{(k)}$ be the number of bits flipped at iteration $k$ and $p^{(k')}$ the number of bits flipped at iteration $k'$. Then $w^{(k')}$ can also be computed iteratively, for $1 \le k' < k$, as
\[ w^{(k')} = w^{(k'+1)} + p^{(k')} - 2 m_k^{k'+1}, \qquad (11) \]
with initial condition $w^{(k)} = p^{(k)}$. In (11), $m_k^{k'+1}$ is the number of positions $j$, $0 \le j < N$, for which $n_j^{(k'+1)} = 1$ and $j \in D^{(k')}$.

The steps of the modified algorithm that incorporates loop detection and prevention are as follows:

Step 1) Initialization: Set the iteration counter $k = 0$. Calculate $z^{(0)}$. For each $m$, $0 \le m < M$, calculate $l_m/2$ and $u_m + l_m/2$.

Step 2) Calculate the syndrome $s^{(k)}$. If $s^{(k)} = 0$, stop the decoding and return $z^{(k)}$. Otherwise calculate $p = \lfloor w_H( s^{(k)} ) / w_c \rfloor$, where $w_c$ is the column weight of the code.

Step 3) $k \leftarrow k + 1$. If $k > k_{\max}$, where $k_{\max}$ is the user-defined maximum number of iterations, declare a decoding failure and stop the decoding.

Step 4) For each $n$, $0 \le n < N$, calculate $\phi_n^{(k)}$.
Step 5) Find the set $D^{(k)} = \{ j_1^{(k)}, j_2^{(k)}, \ldots, j_p^{(k)} \}$, $0 \le j_1^{(k)} < j_2^{(k)}$, of the $p$ smallest values