
IEEE COMMUNICATIONS LETTERS, VOL. 6, NO. 5, MAY 2002

Density Evolution for Two Improved BP-Based Decoding Algorithms of LDPC Codes

Jinghu Chen, Student Member, IEEE, and Marc P. C. Fossorier, Senior Member, IEEE

Abstract—In this letter, we analyze the performance of two improved belief propagation (BP) based decoding algorithms for LDPC codes, namely the normalized BP-based and the offset BP-based algorithms, by means of density evolution. The numerical calculations show that with one properly chosen parameter for each of these two improved BP-based algorithms, performances very close to that of the BP algorithm can be achieved. Simulation results for LDPC codes of moderately long code length validate the proposed optimization.

Index Terms—Block codes, density evolution, iterative decoding, LDPC codes.

I. INTRODUCTION

LOW-DENSITY parity-check (LDPC) codes [1], [2] can achieve very good performance with the so-called BP algorithm or sum-product algorithm [2]. The BP algorithm can also be simplified to the BP-based algorithm [3], which greatly reduces the decoding complexity in implementation, but degrades the decoding performance as well. In [4], density evolution has been elaborated to analyze the capacity of LDPC codes for BP decoding. The density evolution algorithm was applied in [5] and [6] to the BP-based decoding algorithm, and in [7] to modified versions of the latter based on corrective terms. In [8], we proposed to improve the performance of the BP-based algorithm by normalization. In this letter, we apply density evolution to obtain the thresholds of the normalized BP-based algorithm and of another improved BP-based algorithm, named the offset BP-based algorithm. Surprisingly good results are obtained for both algorithms, whose implementations are much simpler than that of the BP algorithm. The density evolution of these two improved BP-based algorithms is studied from a practical viewpoint, with no particular attention to theoretical issues such as concentration and convergence theorems.

Manuscript received December 26, 2001. The associate editor coordinating the review of this letter and approving it for publication was Dr. K. Narayanan. This work was supported by the National Science Foundation under Grant CCR-97-32959 and Grant CCR-00-98029, and by the Hawaii Center for Advanced Communications (HCAC). The authors are with the Department of Electrical Engineering, University of Hawaii, Honolulu, HI 96822 USA (e-mail: jinghu@spectra.eng.hawaii.edu; [email protected]).

II. DENSITY EVOLUTION FOR LDPC CODES WITH BP-BASED DECODING

In the following, we assume BPSK modulation over an AWGN channel. The received values $y_n$ are Gaussian random variables with mean $\pm 1$ and variance $\sigma^2$. The log-likelihood ratio of bit $n$ is $F_n = 2 y_n / \sigma^2$.

A. BP and BP-Based Algorithms

For iteration $i$, let $z_{mn}^{(i)}$ and $L_{mn}^{(i)}$ denote the message sent from bit node $n$ to check node $m$ and the message sent from check node $m$ to bit node $n$, respectively [2]. Also denote the set of neighboring check nodes for bit node $n$ as $M(n)$, and the set of neighboring bit nodes for check node $m$ as $N(m)$. For $(d_v, d_c)$-regular LDPC codes, we have $|M(n)| = d_v$ and $|N(m)| = d_c$. Then each iteration of the BP decoding includes the following two steps.

1) Processing in check nodes—for each $m$ and each $n \in N(m)$, update

$$ L_{mn}^{(i)} = 2 \tanh^{-1}\!\left( \prod_{n' \in N(m) \setminus n} \tanh\!\left( \frac{z_{mn'}^{(i-1)}}{2} \right) \right) \qquad (1) $$

2) Processing in bit nodes—for each $n$ and each $m \in M(n)$, update

$$ z_{mn}^{(i)} = F_n + \sum_{m' \in M(n) \setminus m} L_{m'n}^{(i)} \qquad (2) $$

The BP-based algorithm uses simplified processing in check nodes based on

$$ L_{mn}^{(i)} = \prod_{n' \in N(m) \setminus n} \operatorname{sgn}\!\left( z_{mn'}^{(i-1)} \right) \cdot \min_{n' \in N(m) \setminus n} \left| z_{mn'}^{(i-1)} \right| \qquad (3) $$

with $\operatorname{sgn}(\cdot)$ denoting the sign function.
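As an illustration only (not part of the original letter), the following Python sketch implements the check node updates (1) and (3) and the bit node update (2) for a single outgoing message; the function names and the toy input are ours.

```python
import numpy as np

def bp_check_update(z_in):
    """BP check-node update (1): z_in collects the incoming messages
    z_{mn'}^{(i-1)} from the bit nodes n' in N(m) \\ {n}."""
    t = np.prod(np.tanh(np.asarray(z_in, dtype=float) / 2.0))
    t = np.clip(t, -1.0 + 1e-12, 1.0 - 1e-12)   # keep arctanh finite
    return 2.0 * np.arctanh(t)

def minsum_check_update(z_in):
    """BP-based (min-sum) check-node update (3)."""
    z_in = np.asarray(z_in, dtype=float)
    return np.prod(np.sign(z_in)) * np.min(np.abs(z_in))

def bit_update(F_n, L_in):
    """Bit-node update (2): channel LLR F_n = 2*y_n/sigma^2 plus the
    incoming check-to-bit messages L_{m'n}^{(i)} from M(n) \\ {m}."""
    return F_n + float(np.sum(L_in))

z = [0.8, -1.5, 2.1]                 # toy incoming messages
print(bp_check_update(z))            # about -0.38
print(minsum_check_update(z))        # -0.8
```

On this toy input the two check node outputs have the same sign and the BP-based one has the larger magnitude, which is exactly the property recalled at the beginning of Section III.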

Note that the max-product algorithm of [5] and the Max-Log-MAP algorithm presented in [6] are equivalent to the BP-based algorithm of [3] (also known as the min-sum algorithm).

B. Density Evolution for Iterative BP-Based Decoding

The density evolution of the BP-based algorithm in check node processing can be found in [5], [6]. In the following, we present it in a slightly different way, but the results are equivalent.

A specific $L_{mn}^{(i)}$, denoted as $L$, is a function of $d_c - 1$ independent identically distributed random variables, denoted as $z_1, \ldots, z_{d_c-1}$, with probability density function (pdf) $f^{(i-1)}(z)$. Similarly to the notations in [6], we define $\Phi_+^{(i-1)}(x) = \Pr(z_j \ge x)$ and $\Phi_-^{(i-1)}(x) = \Pr(z_j \le -x)$ for $x \ge 0$. Note that $\Phi_+^{(i-1)}(x)$ and $\Phi_-^{(i-1)}(x)$ are the probabilities of $z_j$ having magnitude greater than $x$, and sign $+$ and $-$, respectively. Then, for $x \ge 0$,


$$ \Pr(L \le -x) = \Pr\big( |z_j| \ge x \ \text{for } 1 \le j \le d_c - 1, \ \text{with an odd number of negative values in } \{z_1, \ldots, z_{d_c-1}\} \big). \qquad (4) $$


Hence the probability distribution function of $L$ is

$$ F_L^{(i)}(-x) = \Pr(L \le -x) = \frac{1}{2}\Big\{ \big[ \Phi_+^{(i-1)}(x) + \Phi_-^{(i-1)}(x) \big]^{d_c-1} - \big[ \Phi_+^{(i-1)}(x) - \Phi_-^{(i-1)}(x) \big]^{d_c-1} \Big\}. \qquad (5) $$

Similarly, for $x \ge 0$,

$$ F_L^{(i)}(x) = \Pr(L \le x) = 1 - \frac{1}{2}\Big\{ \big[ \Phi_+^{(i-1)}(x) + \Phi_-^{(i-1)}(x) \big]^{d_c-1} + \big[ \Phi_+^{(i-1)}(x) - \Phi_-^{(i-1)}(x) \big]^{d_c-1} \Big\}. \qquad (6) $$

Taking the derivative of $F_L^{(i)}(x)$ with respect to $x$ for both (5) and (6), we finally obtain the pdf of $L$ as

$$ f_L^{(i)}(x) = \frac{d_c-1}{2} \Big\{ \big[ \Phi_+^{(i-1)}(|x|) + \Phi_-^{(i-1)}(|x|) \big]^{d_c-2} \big[ f^{(i-1)}(|x|) + f^{(i-1)}(-|x|) \big] + \operatorname{sgn}(x) \big[ \Phi_+^{(i-1)}(|x|) - \Phi_-^{(i-1)}(|x|) \big]^{d_c-2} \big[ f^{(i-1)}(|x|) - f^{(i-1)}(-|x|) \big] \Big\}. \qquad (7) $$
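A possible numerical realization of (7) on a discretized pdf is sketched below; the uniform symmetric grid, the simple Riemann sums, and the helper name checknode_density are our assumptions rather than the letter's implementation.

```python
import numpy as np

def checknode_density(f, x, dc):
    """Density evolution through a BP-based (min-sum) check node:
    given the pdf f of the incoming messages sampled on the uniform,
    symmetric, increasing grid x, return the pdf of L per (7)."""
    dx = x[1] - x[0]
    q = dc - 1
    a = np.abs(x)
    # Tail probabilities: Phi_+(|x|) = Pr(z >= |x|), Phi_-(|x|) = Pr(z <= -|x|).
    phi_plus = np.array([np.sum(f[x >= v]) * dx for v in a])
    phi_minus = np.array([np.sum(f[x <= -v]) * dx for v in a])
    # Incoming pdf evaluated at +|x| and -|x|.
    f_abs = np.interp(a, x, f)
    f_neg = np.interp(-a, x, f)
    s = np.sign(x)
    return 0.5 * q * ((phi_plus + phi_minus) ** (q - 1) * (f_abs + f_neg)
                      + s * (phi_plus - phi_minus) ** (q - 1) * (f_abs - f_neg))

# Example: initial channel-LLR pdf for BPSK on an AWGN channel with noise sigma.
x = np.linspace(-40.0, 40.0, 4001)
sigma = 0.8
m = 2.0 / sigma ** 2                      # F_n = 2*y_n/sigma^2 is N(m, 2m)
f0 = np.exp(-(x - m) ** 2 / (4 * m)) / np.sqrt(4 * np.pi * m)
fL = checknode_density(f0, x, dc=6)
```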


The density evolution procedure in bit nodes is identical to that of BP decoding, and the pdf of $z_{mn}^{(i)}$, denoted as $f^{(i)}(z)$, can be numerically computed with fast Fourier transforms based on (2) [4].
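Since the bit node step is a convolution of densities, it can be sketched with FFTs under the same discretization assumptions; the helper name and the zero-padding strategy below are ours.

```python
import numpy as np

def bitnode_density(f_channel, f_check, x, dv):
    """Density evolution through a bit node: the outgoing pdf is the
    convolution (2) of the channel-LLR pdf with (dv - 1) copies of the
    check-to-bit pdf, computed with FFTs on the common symmetric grid x."""
    dx = x[1] - x[0]
    n = len(x)
    size = 1 << int(np.ceil(np.log2(dv * n)))     # zero-pad for linear convolution
    Fc = np.fft.rfft(f_channel, size)
    Fl = np.fft.rfft(f_check, size)
    g = np.fft.irfft(Fc * Fl ** (dv - 1), size) * dx ** (dv - 1)
    # The convolution of dv densities supported on x starts at dv * x[0];
    # re-align the result with the original grid.
    offset = int(round(-(dv - 1) * x[0] / dx))
    return g[offset:offset + n]
```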

III. DENSITY EVOLUTION FOR TWO IMPROVED BP-BASED DECODING

Let $L_{\mathrm{BP}}$ and $L_{\mathrm{BB}}$ represent the values computed by the BP algorithm with (1) and by the BP-based algorithm with (3), respectively. It was shown in [8] that the following two statements hold.
1) $L_{\mathrm{BP}}$ and $L_{\mathrm{BB}}$ have the same sign, i.e., $\operatorname{sgn}(L_{\mathrm{BB}}) = \operatorname{sgn}(L_{\mathrm{BP}})$.
2) $L_{\mathrm{BB}}$ has larger magnitude than $L_{\mathrm{BP}}$, i.e., $|L_{\mathrm{BB}}| \ge |L_{\mathrm{BP}}|$.
Based on these two statements, two improved BP-based algorithms are presented in the following, the first having already been suggested in [8]. Then we apply density evolution to evaluate the performance of each of these two improved algorithms and determine the best corresponding parameters for the decoder.

A. Normalized BP-Based Algorithm

The check node processing with (3) could be improved by dividing $L_{mn}^{(i)}$ by a constant, i.e.,

$$ L_{mn}^{(i)} = \frac{1}{\alpha} \prod_{n' \in N(m) \setminus n} \operatorname{sgn}\!\left( z_{mn'}^{(i-1)} \right) \cdot \min_{n' \in N(m) \setminus n} \left| z_{mn'}^{(i-1)} \right| \qquad (8) $$

where $\alpha$ is a normalization factor greater than one. To obtain the optimum performance, $\alpha$ should vary from one iteration to another, and depends on SNR values. However, to facilitate the implementation, and the analysis as well, we keep only one scaling factor for all iterations and all SNR values.

In [8], a criterion was proposed to determine the normalization factor by letting the updated $L_{mn}^{(i)}$ with (8) have the same mean as with the BP algorithm. The corresponding $\alpha$ is determined for the first iteration, either by simulation or theoretically, and kept for all subsequent iterations. Simulation results show that for geometric LDPC codes and regular LDPC codes of short or medium length, the normalized BP-based algorithm with the normalization factor determined by this method can achieve performances very close to those of the BP algorithm. However, for long LDPC codes, this criterion to determine scaling factors is not as good. For long code lengths, we propose to approximate the performance of the normalized BP-based algorithm having a fixed scaling factor by density evolution, and consequently find the best scaling factors.

It is quite straightforward to modify the density evolution of the BP-based algorithm to take the normalization into account. The only modification is the probability density function corresponding to (8), i.e.,

$$ \tilde f_L^{(i)}(x) = \alpha \, f_L^{(i)}(\alpha x) \qquad (9) $$

and all the other procedures are the same as in the BP-based algorithm.
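As a sketch under the same assumptions as before, the normalization enters the check node update and the density evolution as follows; alpha and the function names are placeholders.

```python
import numpy as np

def normalized_minsum_check_update(z_in, alpha):
    """Normalized BP-based check-node update (8)."""
    z_in = np.asarray(z_in, dtype=float)
    return np.prod(np.sign(z_in)) * np.min(np.abs(z_in)) / alpha

def normalize_density(fL, x, alpha):
    """pdf of L / alpha, cf. (9): alpha * f_L(alpha * x), interpolated
    back onto the original grid (an approximation)."""
    return alpha * np.interp(alpha * np.asarray(x), x, fL, left=0.0, right=0.0)
```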

B. Offset BP-Based Algorithm

Another possible approach to improve the accuracy of the extrinsic messages delivered by the BP-based algorithm is to reduce the reliability values by a positive constant $\beta$ in the following way:

$$ L_{mn}^{(i)} = \prod_{n' \in N(m) \setminus n} \operatorname{sgn}\!\left( z_{mn'}^{(i-1)} \right) \cdot \max\!\left( \min_{n' \in N(m) \setminus n} \left| z_{mn'}^{(i-1)} \right| - \beta, \; 0 \right). \qquad (10) $$

Note that all extrinsic messages with reliability values smaller than $\beta$ are set to 0, such that they have no contribution to the following bit node processing. As in the case of the normalized BP-based algorithm, $\beta$ should be changed with the iteration number to achieve the best possible performance. Instead, for simplicity again, we keep it fixed.

To develop the density evolution of the offset BP-based algorithm, we need to modify the definition of $\Phi_\pm^{(i-1)}(x)$ used in the density evolution procedure of the BP-based algorithm, in accordance with the update (10), as

$$ \Phi_\pm^{(i-1)}(x) \;\longrightarrow\; \Phi_\pm^{(i-1)}(x + \beta), \qquad x > 0 \qquad (11) $$

where $\beta$ is the offset introduced in (10).

In (11), an impulse function is introduced in the pdf of $L$ by updating with (10). But it disappears in the pdf of $z_{mn}^{(i)}$ after the processing in bit nodes with (2), since the pdf of $F_n$ is continuous. The existence of the impulse function in $f_L^{(i)}$ does not have too much effect on the density evolution in bit node processing, and only slight modifications are needed.
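A corresponding sketch for the offset rule, again with hypothetical names: the check output pdf is shifted toward the origin and the probability mass of reliabilities below beta is returned as the weight of the impulse at zero discussed above.

```python
import numpy as np

def offset_minsum_check_update(z_in, beta):
    """Offset BP-based check-node update (10)."""
    z_in = np.asarray(z_in, dtype=float)
    return np.prod(np.sign(z_in)) * max(np.min(np.abs(z_in)) - beta, 0.0)

def offset_density(fL, x, beta):
    """Transform the min-sum check-output pdf fL according to (10)/(11).
    Returns the continuous part on the grid x and the weight of the
    impulse at the origin."""
    dx = x[1] - x[0]
    f_off = np.zeros_like(fL)
    pos, neg = x > 0, x < 0
    f_off[pos] = np.interp(x[pos] + beta, x, fL, left=0.0, right=0.0)
    f_off[neg] = np.interp(x[neg] - beta, x, fL, left=0.0, right=0.0)
    impulse = np.sum(fL[np.abs(x) <= beta]) * dx
    return f_off, impulse
```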

C. Numerical Results

We have performed numerical calculations of the thresholds for the normalized and offset BP-based algorithms with different values of $\alpha$ and $\beta$ and different code ensembles. The results for three ensembles of LDPC codes with rate 1/2 are shown in Figs. 1 and 2. With properly chosen $\alpha$ and $\beta$, these two improved algorithms can achieve performances very close to that of the BP algorithm. The best possible thresholds of the two improved BP-based algorithms are summarized in Table I, and compared with those obtained for the BP [4] and the BP-based [5], [6] algorithms. Most of the gap between BP and BP-based decoding can be bridged by both improved methods. The normalized BP-based algorithm slightly outperforms the offset BP-based algorithm, but may also be slightly more complex to implement. Importantly, these results also suggest that little additional improvement could be achieved by allowing either $\alpha$ or $\beta$ to change at each iteration for these families of codes.
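The kind of threshold search behind these numbers can be reproduced in spirit with a bisection over the channel noise level. The sketch below reuses the hypothetical helpers from the earlier sketches (checknode_density, normalize_density, bitnode_density); the convergence proxy, the search interval, and the tolerances are our choices, not the letter's.

```python
import numpy as np

def de_converges(sigma, dv, dc, alpha, x, max_iter=200, tol=1e-6):
    """Run density evolution for the normalized BP-based algorithm at
    noise level sigma; report whether the error probability Pr(z < 0)
    drops below tol within max_iter iterations (a practical proxy)."""
    dx = x[1] - x[0]
    m = 2.0 / sigma ** 2                      # mean of the channel LLR F = 2y/sigma^2
    f_channel = np.exp(-(x - m) ** 2 / (4 * m)) / np.sqrt(4 * np.pi * m)
    f = f_channel.copy()
    for _ in range(max_iter):
        fL = checknode_density(f, x, dc)      # check-node step, cf. (4)-(7)
        fL = normalize_density(fL, x, alpha)  # normalization, cf. (9)
        f = bitnode_density(f_channel, fL, x, dv)
        f /= np.sum(f) * dx                   # guard against numerical leakage
        if np.sum(f[x < 0]) * dx < tol:
            return True
    return False

def threshold(dv, dc, alpha, x, lo=0.5, hi=1.2, steps=20):
    """Bisection for the largest sigma at which density evolution converges."""
    for _ in range(steps):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if de_converges(mid, dv, dc, alpha, x) else (lo, mid)
    return lo
```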


Fig. 1. Threshold for the normalized BP-based algorithm.

Fig. 2. Threshold for the offset BP-based algorithm.

TABLE I
THRESHOLDS (IN dB) FOR VARIOUS DECODING ALGORITHMS

IV. SIMULATION RESULTS

In Fig. 3, simulation results are plotted for an (8000, 4000) regular LDPC code [9] under various decoding algorithms.

Fig. 3. Bit error performance of an (8000, 4000) regular LDPC code (maximum number of iterations: 100).

Although the code length is far from infinity, we can still observe the effectiveness of the two improved BP-based algorithms. For this code, the gap between the BP and BP-based algorithms is about 0.5 dB. However, with the normalized BP-based and the offset BP-based algorithms, the gap can be reduced to about 0.05 dB with properly chosen values of $\alpha$ and $\beta$, respectively. These values correspond to the best parameters found for that family of codes in Figs. 1 and 2. For comparison, we also plot the performances of these two algorithms with the normalization factor determined by the criterion of [8] and with another offset value. For this code length, these parameter choices are not as good as the proposed ones. For regular LDPC codes of medium length, both the proposed approach and the approach of [8] achieve comparable error performances.
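For completeness, a compact and deliberately unoptimized flooding decoder in the same vein is sketched below, with the normalization of (8) and the offset of (10) exposed as optional knobs; the dense matrix H, the message arrays, and the default values are ours, and the letter evaluates the two modifications separately rather than combined.

```python
import numpy as np

def decode_min_sum(H, llr, alpha=1.0, beta=0.0, max_iter=100):
    """Flooding min-sum decoder sketch: H is a dense {0,1} parity-check
    matrix and llr holds the channel LLRs 2*y/sigma^2."""
    M, N = H.shape
    rows = [np.flatnonzero(H[m]) for m in range(M)]
    z = np.tile(llr, (M, 1)) * H              # bit-to-check messages
    L = np.zeros((M, N))                      # check-to-bit messages
    hard = (llr < 0).astype(int)
    for _ in range(max_iter):
        # Check-node update (3), with optional normalization/offset.
        for m in range(M):
            idx = rows[m]
            zm = z[m, idx]
            signs = np.sign(zm)
            signs[signs == 0] = 1.0
            total_sign = np.prod(signs)
            mags = np.abs(zm)
            for k, n in enumerate(idx):
                others = np.delete(mags, k)
                mag = max(np.min(others) - beta, 0.0) / alpha
                L[m, n] = total_sign * signs[k] * mag
        # Bit-node update (2) and tentative hard decision.
        totals = llr + L.sum(axis=0)
        z = (totals[None, :] - L) * H
        hard = (totals < 0).astype(int)
        if not np.any((H @ hard) % 2):
            break
    return hard
```

Setting alpha = 1 and beta = 0 recovers the plain BP-based (min-sum) decoder of (3).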

REFERENCES


[1] R. G. Gallager, Low-Density Parity-Check Codes. Cambridge, MA: M.I.T. Press, 1963.
[2] D. J. C. MacKay, "Good error-correcting codes based on very sparse matrices," IEEE Trans. Inform. Theory, vol. 45, pp. 399-431, Mar. 1999.
[3] M. Fossorier, M. Mihaljević, and H. Imai, "Reduced complexity iterative decoding of low density parity check codes based on belief propagation," IEEE Trans. Commun., vol. 47, pp. 673-680, May 1999.
[4] T. Richardson and R. Urbanke, "The capacity of low-density parity check codes under message-passing decoding," IEEE Trans. Inform. Theory, vol. 47, pp. 599-618, Feb. 2001.
[5] S. Chung, "On the construction of some capacity-approaching coding schemes," Ph.D. dissertation, M.I.T., Cambridge, MA, Sept. 2000.
[6] X. Wei and A. N. Akansu, "Density evolution for low-density parity-check codes under Max-Log-MAP decoding," Electron. Lett., vol. 37, pp. 1225-1226, Aug. 2001.
[7] A. Anastasopoulos, "A comparison between the sum-product and the min-sum iterative detection algorithms based on density evolution," in Proc. IEEE GLOBECOM 2001, San Antonio, TX, Nov. 2001.
[8] J. Chen and M. Fossorier, "Near optimum universal belief propagation based decoding of low-density parity check codes," IEEE Trans. Commun., vol. 50, pp. 406-414, Mar. 2002.
[9] D. J. MacKay. Online database of low-density parity check codes. [Online]. Available: http://wol.ra.phy.cam.ac.uk/mackay/codes/data.html
