IEEE COMMUNICATIONS LETTERS, VOL. 11, NO. 6, JUNE 2007


An Efficient Stopping Criterion for Turbo Product Codes

Guo Tai Chen, Lei Cao, Senior Member, IEEE, Lun Yu, and Chang Wen Chen, Fellow, IEEE

Abstract—In this letter, a stopping criterion using the error-detecting capability of linear block codes is proposed for the decoding of turbo product codes. The iterative decoding is stopped when the outputs from the Chase decoder are valid codewords for all rows and columns simultaneously. Simulations show that the proposed method reduces the decoding by about one and a half iterations compared with an existing stopping method, without noticeable BER performance loss. A modification that may further reduce the decoding complexity is also discussed.

Index Terms—Stopping criterion, decoding, turbo product codes.

I. INTRODUCTION

THOUGH introduced by Elias in 1954, product block codes [1] have raised much interest since 1994, when Pyndiah presented an iterative decoding method [2] based on the Chase-II algorithm. These codes with turbo decoding, often termed turbo product codes (TPCs) or block turbo codes (BTCs), have been regarded as an attractive alternative to turbo codes [3] built on convolutional component codes, especially when high code rates are required. In the decoding of turbo codes, early stopping techniques with little performance degradation are desired to reduce computational cost and delay. These techniques have been rigorously studied for turbo codes with convolutional component codes. However, to date, early stopping for TPCs has not been fully investigated. In [4], the scaling factor and standard deviation of the extrinsic information are calculated and their variation is used as the stopping criterion. In [5], [6], the decoding stops when all codewords decoded by the Chase-II algorithm are the same as the signs of the input information matrix. In this letter, we present another, very efficient stopping criterion for the decoding of TPCs using the error-detecting capability of linear block codes. Specifically, we propose to stop the turbo decoding when all rows and columns of the decision matrix decoded by the Chase decoder are valid codewords. It will be shown that, compared with [5], [6], the proposed method reduces the overall decoding complexity by more than one iteration without any noticeable decoding performance degradation. In addition, the method may be further modified to make more complexity reduction possible.

In the following, coding and decoding of TPCs are first described in Section II. The stopping criterion is then presented in Section III, followed by the discussion of complexity reduction and further modification in Section IV. Finally, the conclusion is given in Section V.

Manuscript received December 11, 2006. The associate editor coordinating the review of this letter and approving it for publication was Dr. Daniela Tuninetti. This work was supported by China National Natural Science Foundation under Research Grants No. 60372070 and No. 60328103. G. T. Chen and L. Yu are with the College of Physics & Information Engineering, Fuzhou University, Fuzhou, Fujian Province 350002, P.R.O.C. (e-mail: [email protected]). G. T. Chen is also with the Fuqing Branch of Fujian Normal University, Fuqing, Fujian Province 350300, P.R.O.C. L. Cao is with the Department of EE, University of Mississippi, University, MS 38677, USA (e-mail: [email protected]). C. W. Chen is with the Department of ECE, Florida Institute of Technology, Melbourne, FL 32940, USA. Digital Object Identifier 10.1109/LCOMM.2007.062023.

II. TURBO PRODUCT CODES

Let C1(n1, k1, δ1) and C2(n2, k2, δ2) denote two systematic linear block codes. Here, ni, ki and δi (i = 1, 2) are the codeword length, the number of information bits and the minimum Hamming distance, respectively. A 2-D product code with these two block codes is obtained as follows:
1) place the (k1 × k2) information bits in an array of k1 rows and k2 columns;
2) encode the k1 rows using code C2;
3) encode the n2 columns using code C1.
After encoding, the product code has minimum Hamming distance (δ1 × δ2) and (n1 × n2) bits, of which (k1 × k2) are information bits.

In 1994, Pyndiah [2] presented an iterative decoding algorithm for these block product codes. For each row or column, the Chase-II algorithm is used to find a list of candidate codewords, from which the maximum-likelihood codeword and the extrinsic information for each bit are calculated. Assume an additive white Gaussian noise (AWGN) channel and BPSK modulation, {0 → −1, 1 → +1}. Let R = (r1, r2, ..., rn) be a row (or a column) of the soft-input information matrix at the channel output. The decoding process based on the Chase-II algorithm is then as follows.
1) Obtain the sign decision Y = (y1, y2, ..., yn) from R by setting yj = 0 for rj < 0 and yj = 1 for rj ≥ 0.
2) Determine the positions of the p least reliable bits (LRBs) in Y using R.
3) Form 2^p test patterns Ti, i = 1, ..., 2^p, covering all combinations of 0 and 1 in the LRB positions, with 0 in all other positions.
4) Form 2^p test sequences Zi = Y ⊕ Ti, i = 1, ..., 2^p.
5) Calculate the syndromes of these test sequences.
6) Use an algebraic decoder to find the corresponding codewords and form a pool of candidate codewords.
7) Select from the pool the maximum-likelihood codeword D = (d1, d2, ..., dn), i.e., the codeword closest to R.
8) If, for the j-th bit of D, there exists a competing codeword C = (c1, c2, ..., cn) in the pool with cj ≠ dj, the extrinsic information for the j-th bit of D is obtained as


wj = ((D · R − C · R)/2) dj − rj        (1)


where "·" denotes the inner product. If there is no such competing codeword, the extrinsic information is set as

wj = β dj        (2)

where β is a preset reliability factor. If no codeword can be found when decoding R, wj is set to 0 and the output decision D is the sign decision of R, that is, dj = 0 when rj < 0 and dj = 1 when rj ≥ 0, j = 1, ..., n. As shown in [8], the extrinsic information W obtained above is normalized and then multiplied by a scaling factor, α, before being added back to R for the next round of decoding. α and β are parameters that can be chosen from simulations. However, for extended BCH (eBCH) component codes with single-error-correcting capability, it was shown in [7] that there is no noticeable performance loss when setting α = 0.5 and β = 1 without normalization. In this letter, the two component codes of a product code are the same eBCH code.

Fig. 1. Average number of iterations vs. SNR for the two stopping methods (proposed and [5], [6]), where the maximum iteration number is set as 10.
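As an illustration, the Chase-II steps above can be sketched for a toy component code. The (8,4) extended Hamming code, the choice p = 2, and all function names here are illustrative assumptions, not taken from the letter; in particular, the algebraic decoder of step 6 is replaced by a brute-force nearest-codeword search over the small codebook.

```python
# Toy sketch of Chase-II decoding of one row (or column), following the
# eight steps above. The (8,4) extended Hamming code stands in for the
# eBCH component codes; names and parameters are illustrative.
import itertools
import numpy as np

# Systematic generator matrix G = [I | P] of the (8,4) extended Hamming code.
G = np.array([[1, 0, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 1, 1, 0]])
CODEBOOK = np.array([(m @ G) % 2 for m in itertools.product([0, 1], repeat=4)])

def chase2_decode(r, p=2, beta=1.0):
    """Return the hard decision D and extrinsic info W for soft input r."""
    n = len(r)
    y = (r >= 0).astype(int)                    # step 1: sign decision
    lrb = np.argsort(np.abs(r))[:p]             # step 2: p least reliable bits
    pool = []
    for flips in itertools.product([0, 1], repeat=p):   # steps 3-4
        z = y.copy()
        z[lrb] ^= np.array(flips)
        # steps 5-6: "algebraic decoding" = nearest codeword within 1 error
        dist = (CODEBOOK != z).sum(axis=1)
        if dist.min() <= 1:
            pool.append(CODEBOOK[dist.argmin()])
    if not pool:                                # no candidate codeword found
        return y, np.zeros(n)
    pool = np.unique(np.array(pool), axis=0)
    bpsk = 2.0 * pool - 1.0                     # {0,1} -> {-1,+1}
    d_idx = np.argmin(((bpsk - r) ** 2).sum(axis=1))    # step 7: ML codeword
    d, d_mod = pool[d_idx], bpsk[d_idx]
    w = np.zeros(n)
    for j in range(n):                          # step 8: extrinsic information
        rivals = bpsk[pool[:, j] != d[j]]       # competing codewords, c_j != d_j
        if len(rivals):
            c_mod = rivals[np.argmin(((rivals - r) ** 2).sum(axis=1))]
            w[j] = (d_mod @ r - c_mod @ r) / 2 * d_mod[j] - r[j]    # eq. (1)
        else:
            w[j] = beta * d_mod[j]                                  # eq. (2)
    return d, w
```

With a single weak, wrongly-signed sample, the sign decision is wrong in one bit, but the bit lands among the LRBs and the corresponding test sequence recovers the transmitted codeword.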


III. A STOPPING CRITERION FOR PRODUCT CODES

Early stopping of the decoding of TPCs has rarely been investigated in the literature. An iterative decoding method was proposed in [4] that uses the value of the scaling factor and the standard deviation of the extrinsic information as a stopping criterion. However, it needs a process to calculate the deviation of the extrinsic information and to dynamically estimate and change the value of the scaling factor. Another stopping criterion was proposed in [5], [6], where the decoding terminates when the sign sequences of the soft-input information matrix equal the codewords decoded by the Chase decoder. This stopping condition generally requires many iterations to satisfy. Simulation shows that after the decoding converges to a product codeword, where all rows and columns are codewords, very few frames change to a different product codeword with more iterations. Therefore, in this letter, we propose to stop the decoding once the output from the Chase decoder is a product codeword. Since all rows and columns are valid codewords when decoding stops, the proposed method makes a wrong decision only in the following scenario: there are er (≥ δ1) rows, each of which has el (≥ δ2) errors, and all these errors are located at the crossing positions between el columns and these er rows, so that all these rows and columns may be decoded to wrong codewords simultaneously. Intuitively, this situation is very unlikely to occur, which is also verified by the BER performance in the later simulation.

Since block codes have both error-correcting and error-detecting capabilities, whether a decoded sequence is a codeword can easily be checked through its syndrome. Only when all rows (or columns) in one direction after the Chase decoding are codewords does the detection start for the columns (or rows) in the other direction, before conducting the next round of decoding in that direction. Moreover, only the first k2 columns (or k1 rows) along the detection direction need to be checked, because the last (n2 − k2) columns (or (n1 − k1) rows) are modulo-2 additions of the first k2 columns (or k1 rows). The detection terminates whenever a row (or column) is found not to be a codeword. In summary, the proposed stopping process can be described as follows.
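The syndrome-based detection can be sketched as follows; the (8,4) extended Hamming code and the helper names are illustrative stand-ins for the eBCH component codes. The point is that, once all rows are known to be valid codewords, checking the first k columns suffices.

```python
# Sketch of the syndrome-based detection: given that all rows are already
# valid codewords, only the first k columns need a syndrome check, since
# the remaining n-k columns are modulo-2 sums of the first k. The (8,4)
# extended Hamming code here is an illustrative stand-in.
import numpy as np

# Systematic generator G = [I | P] and parity-check H = [P^T | I].
G = np.array([[1, 0, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 1, 1, 0]])
H = np.hstack([G[:, 4:].T, np.eye(4, dtype=int)])

def detection_passes(M, k=4):
    """Early-exit syndrome check of the first k columns of the
    hard-decision matrix M (all rows are assumed already valid)."""
    for col in M[:, :k].T:
        if ((H @ col) % 2).any():   # non-zero syndrome: not a codeword
            return False            # detection terminates immediately
    return True
```

For a true product codeword the redundant columns are then codewords automatically, so the check never needs to touch them.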

Fig. 2. BER performance vs. SNR for the two stopping methods and the conventional decoding with 10 iterations, for TPC(32,26,4)² and TPC(64,57,4)².

1) After performing the soft decoding of the input information matrix, if the Chase decoder finds that all rows (or columns) are codewords, go to step 2). Otherwise, continue the conventional turbo decoding.
2) Check the first k2 columns (or k1 rows) of the hard decision by calculating their syndromes one by one. Whenever a non-zero syndrome (indicating a non-codeword column (or row)) is found, the detection stops; the decoder updates the soft input by adding the extrinsic information, continues the decoding along this direction, and goes to step 1). If the checked columns (or rows) are all codewords, stop the decoding.

Fig. 1 shows the simulation results for TPC(32,26,4)² and TPC(64,57,4)² in AWGN channels with the maximum number of iterations set to 10. About one and a half iterations (one iteration comprises two half-iterations, along rows and along columns, respectively) can always be saved with the proposed stopping criterion compared with the method in [5], [6]. The BER performance is shown in Fig. 2, where the results of the conventional decoding after 10 iterations are also presented. The performance of all three methods is almost the same; more iterations do not produce noticeable improvement. The crossover of the two codes at very low BER in Fig. 2 is consistent with the results in [8].
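The control flow of steps 1) and 2) can be caricatured in a small runnable toy, in which each half-iteration simply snaps every row (or column) to its nearest codeword in place of the full Chase-II soft decoding; only the placement of the stopping test is meant to be faithful, and the code and names are illustrative.

```python
# Minimal runnable caricature of where the proposed stopping test sits in
# the iterative decoder. Each half-iteration replaces every row (then
# column) of the hard-decision matrix by its nearest codeword -- a crude
# stand-in for Chase-II soft decoding -- and decoding stops as soon as
# the matrix is a product codeword.
import itertools
import numpy as np

# (8,4) extended Hamming code as a toy component code.
G = np.array([[1, 0, 0, 0, 0, 1, 1, 1],
              [0, 1, 0, 0, 1, 0, 1, 1],
              [0, 0, 1, 0, 1, 1, 0, 1],
              [0, 0, 0, 1, 1, 1, 1, 0]])
H = np.hstack([G[:, 4:].T, np.eye(4, dtype=int)])
CODEBOOK = np.array([(m @ G) % 2 for m in itertools.product([0, 1], repeat=4)])

def nearest_codeword(v):
    return CODEBOOK[(CODEBOOK != v).sum(axis=1).argmin()]

def is_codeword(v):
    return not ((H @ v) % 2).any()

def decode_with_early_stop(M, k=4, max_iters=10):
    M = M.copy()
    for _ in range(max_iters):
        for axis in (0, 1):   # half-iteration along rows, then columns
            if axis == 0:
                M = np.array([nearest_codeword(row) for row in M])
            else:
                M = np.array([nearest_codeword(col) for col in M.T]).T
            # Step 1): the just-decoded direction is all-valid by
            # construction here; step 2): check only the first k vectors
            # in the other direction, stopping at the first failure.
            others = M.T if axis == 0 else M
            if all(is_codeword(v) for v in others[:k]):
                return M      # product codeword reached: stop decoding
    return M
```

A real implementation would replace `nearest_codeword` with the Chase-II half-iteration and carry the extrinsic information between half-iterations.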


TABLE I
COMPLEXITY OF CHASE-II DECODING AND CODEWORD DETECTING

                        Chase-II algorithm                        Codeword detecting
Syndrome addition       n + 2^p − 1                               n
Syndrome comparison     2^p                                       1
Metric calc.            2^p − 1                                   none
Extrinsic info. calc.   n                                         none
Other computation       find error positions and modify metrics   none
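Treating the entries of Table I as rough per-row (or per-column) operation counts, with "2p" read as 2^p for p least reliable bits, the bookkeeping can be written out; the value p = 4 in the example is an assumption, not stated in the letter.

```python
# Rough per-row/column operation tallies from Table I; illustrative
# bookkeeping only (p = number of least reliable bits in Chase-II).
def chase2_ops(n, p):
    return {
        "syndrome additions": n + 2**p - 1,
        "syndrome comparisons": 2**p,
        "metric calculations": 2**p - 1,
        "extrinsic calculations": n,
    }

def detect_ops(n):
    # codeword detecting needs one syndrome per vector plus one comparison
    return {"syndrome additions": n, "syndrome comparisons": 1}

# Example: the TPC(32,26,4)^2 component code with an assumed p = 4.
print(chase2_ops(32, 4))
print(detect_ops(32))
```

The tallies make the point of Section IV concrete: detection costs only a syndrome per vector, a small fraction of a Chase-II decoding.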

IV. COMPLEXITY REDUCTION AND FURTHER MODIFICATION

The proposed method does need additional operations to calculate the syndromes of the rows or columns. Though the average number of rows and columns that need to be checked must be larger than k, as shown in Fig. 3 this value is only around n in the SNR range where the BER is below 10^-2. Here, the number of iterations is 4, the same as in [8]. In other words, the additional computation amounts to the syndrome calculations of about one half-iteration of decoding, without the other computations of Chase-II decoding. This complexity and that of a Chase-II decoding for one row (or column) are compared in Table I. As shown in Section III, one and a half iterations can be saved with the proposed stopping criterion. As a result, compared with the method of [5], [6], the proposed method saves the computation of one full iteration plus, except for the syndrome computation, that of one half-iteration.

The complexity may be further reduced by modifying the proposed method. For example, more iterations are generally needed before the decoding converges at lower SNR. Hence, when channel side information is available, we may set an iteration number at which to start the detection process. Moreover, when the decoding converges, many rows (or columns) may already have been detected as codewords after the Chase decoding. Therefore, once all rows (or columns) in one direction are known to be codewords, if the number d of columns (or rows) that are codewords in the other direction is greater than a predefined value d0, we may assume that the columns (or rows) whose codewords also equal the sign decisions of the corresponding soft-input information are correctly decoded. Bits in these codewords are given high extrinsic information and need not be Chase decoded in the next stage, which reduces the complexity. As shown in Fig. 4, when d0 = 10 is set for both TPC(32,26,4)² and TPC(64,57,4)² with a maximum of 4 iterations, on average more than 70% of the detected rows (or columns) need no Chase decoding in the following stages, which obviously reduces the decoding complexity. Moreover, little BER performance degradation has been observed in simulations with this modification.

Fig. 3. Average number of rows/columns checked for detection, normalized by n, vs. SNR.

V. CONCLUSION

In this letter, an efficient early stopping criterion was proposed for turbo product codes. Compared with an existing method, it reduces the decoding by more than one iteration without noticeable performance loss. Further complexity reduction is possible with some modification.

Fig. 4. Percentage of rows/columns assumed to be correct among the d detected rows/columns when d ≥ d0, for TPC(32,26,4)² and TPC(64,57,4)².

REFERENCES
[1] P. Elias, "Error-free coding," IRE Trans. Inf. Theory, vol. 4, no. 4, pp. 29-37, 1954.
[2] R. Pyndiah, A. Glavieux, A. Picart, and S. Jacq, "Near optimum decoding of product codes," in Proc. IEEE GLOBECOM 1994, vol. 1, pp. 339-343.
[3] C. Berrou, A. Glavieux, and P. Thitimajshima, "Near Shannon limit error-correcting coding and decoding: Turbo codes (I)," in Proc. IEEE ICC 1993, vol. 2, pp. 1064-1071.
[4] A. Picart and R. Pyndiah, "Adapted iterative decoding of product codes," in Proc. IEEE GLOBECOM 1999, vol. 5, pp. 2357-2362.
[5] P. A. Martin and D. P. Taylor, "On adaptive reduced-complexity iterative decoding," in Proc. IEEE GLOBECOM 2000, vol. 2, pp. 772-776.
[6] Q. Zhang and T. Le-Ngoc, "A decoding algorithm for turbo product codes using optimality test and amplitude clipping," in Proc. IEEE GLOBECOM 2001, vol. 1, pp. 664-668.
[7] C. Argon and S. W. McLaughlin, "An efficient Chase decoder for turbo product codes," IEEE Trans. Commun., vol. 52, no. 6, pp. 896-898, June 2004.
[8] R. Pyndiah, "Near-optimum decoding of product codes: Block turbo codes," IEEE Trans. Commun., vol. 46, no. 8, pp. 1003-1010, Aug. 1998.