A New Ensemble of Rate-Compatible LDPC Codes

Kai Zhang∗, Xiao Ma∗, Shancheng Zhao∗, Baoming Bai† and Xiaoyi Zhang‡
∗ Department of Electronics and Communication Engineering, Sun Yat-sen University, Guangzhou 510006, GD, China
† State Lab. of ISN, Xidian University, Xi'an 710071, Shaanxi, China
‡ National Digital Switching System Engineering and Technological R&D Center, Zhengzhou 450002, China
Email: [email protected]

arXiv:1205.4070v1 [cs.IT] 18 May 2012
Abstract—In this paper, we present three approaches to improving the design of Kite codes (recently proposed rateless codes), resulting in an ensemble of rate-compatible LDPC codes with code rates varying "continuously" from 0.1 to 0.9 for additive white Gaussian noise (AWGN) channels. The new ensemble of rate-compatible LDPC codes can be constructed conveniently with an empirical formula. Simulation results show that, when applied to an incremental-redundancy hybrid automatic repeat request (IR-HARQ) system, the constructed codes (with higher-order modulation) perform well over a wide range of signal-to-noise ratios (SNRs).
I. INTRODUCTION

Rate-compatible low-density parity-check (RC-LDPC) codes, which may find applications in hybrid automatic repeat request (HARQ) systems, can be constructed in at least two ways. One way is to properly puncture (or extend) a well-designed mother code [1–6]. Another way is to turn to rateless coding [7–10], resulting in rate-compatible codes with incremental redundancies [11]. Recently, the authors proposed a new class of rateless codes, called Kite codes [12], which are a special class of prefix rateless low-density parity-check (LDPC) codes. Kite codes have the following nice properties. First, the design of Kite codes can be conducted progressively using a one-dimensional optimization algorithm. Second, the maximum-likelihood decoding performance of Kite codes with binary phase-shift keying (BPSK) modulation over AWGN channels can be analyzed. Third, the definition of Kite codes can be easily generalized to groups [13].

In this paper, we attempt to improve the design of Kite codes and investigate their performance when combined with high-order modulations. The rest of this paper is organized as follows. In Section II, we review the construction of the original Kite codes; the design of Kite codes and existing issues are discussed. In Section III, we present three approaches to improve the design of Kite codes, resulting in good rate-compatible codes with rates varying "continuously" from 0.1 to 0.9 over AWGN channels. In Section IV, we present simulation results for the application of Kite codes to the HARQ system. Section V concludes this paper.

II. REVIEW OF KITE CODES
A. Definition of Kite Codes

An ensemble of Kite codes, denoted by K[∞, k; p], is specified by its dimension k and a so-called p-sequence p = (p_0, p_1, · · · , p_t, · · · ) with 0 < p_t < 1 for t ≥ 0. A codeword c = (v, w) ∈ K[∞, k; p] consists of an information vector v = (v_0, v_1, · · · , v_{k−1}) and a parity sequence w = (w_0, w_1, · · · , w_t, · · · ). The parity bit at time t ≥ 0 can be computed recursively by

    w_t = w_{t−1} + ∑_{0 ≤ i ≤ k−1} h_{i,t} v_i,

where w_{−1} = 0 by convention and h_{i,t} is a binary random variable that takes the value 1 with probability p_t.

B. Design of Kite Codes

To simplify the design, we consider only code rates greater than 0.05 and partition the interval (0.05, 1] equally into 19 sub-intervals; over the ℓ-th sub-interval, the p-sequence is taken to be a constant q_ℓ, and the parameters q_ℓ are optimized greedily from high rates to low rates. Assuming that the parameters q_j with j > ℓ have been fixed and that the parameters q_j with j < ℓ are irrelevant to the current prefix code, the problem of designing the current prefix code becomes a one-dimensional optimization problem, which can be solved, for example, by the golden section search method [20]. What we need to do is to make a choice between any two candidate parameters q_ℓ and q_ℓ′. This can be done with the help of simulation, as illustrated in [12].

D. Existing Issues

The above greedy optimization algorithm has been implemented in [12], resulting in good Kite codes. However, we have also noticed the following issues.
1) In the high-rate region, Kite codes suffer from error floors at BERs around 10^{−4}.
2) In the low-rate region, there exists a relatively large gap between the performance of Kite codes and the Shannon limits.
3) The optimized p-sequence depends on the data length k and has no closed form. Therefore, the p-sequence must be searched anew for each required data length. This inconvenience definitely limits the applications of Kite codes.

The first issue has been partially solved by either taking RS codes as outer codes in [12] or inserting fixed patterns into the parity-check matrix in [21]. The objective of this paper is to provide simple ways to overcome the above issues.

III. IMPROVED DESIGN OF KITE CODES

As mentioned in the preceding section, we consider only code rates greater than 0.05 and partition the interval (0.05, 1] equally into 19 sub-intervals. Given a data length k, define n_ℓ = ⌊k/(0.05ℓ)⌋ for 1 ≤ ℓ ≤ 20.

A.
Constructions of the Parity-Check Matrix

We need to construct a parity-check matrix H = (H_v, H_w) of size (n_1 − k) × n_1, where H_v is a matrix of size (n_1 − k) × k corresponding to the information bits and H_w is a lower-triangular matrix with non-zero diagonal entries. Let C[n_1, k] be the code defined by H. We then have a sequence of prefix codes with incremental parity bits; equivalently, a sequence of prefix codes with rates varying "continuously" from 0.05 to 1.

To describe the algorithm more clearly, we introduce some definitions. Let H^{(ℓ)} = (H_v^{(ℓ)}, H_w^{(ℓ)}) be the parity-check matrix of the prefix code of length n_ℓ. Let H_v^{(ℓ)}(·, j) and H_v^{(ℓ)}(t, ·) be the j-th column and the t-th row of H_v^{(ℓ)}, respectively. Sometimes, we use H_v^{(ℓ)}(t, j) to denote the entry of H_v^{(ℓ)} at location (t, j).

Given {q_ℓ, 1 ≤ ℓ ≤ 19}, the parity-check matrix H = H^{(1)} can be constructed as follows. First, the matrix H_v corresponding to the information bits is constructed progressively as shown in the following algorithm.

Algorithm 1: (Row-weight Concentration)
1) Initially, set H_v^{(20)} to be the empty matrix and ℓ = 19.
2) While ℓ ≥ 1, do the following.
   a) Initialization: generate a random binary matrix H_v^{(δ)} of size (n_ℓ − n_{ℓ+1}) × k, where each entry is independently drawn from a Bernoulli distribution with success probability q_ℓ; form the matrix corresponding to the information bits by stacking H_v^{(δ)} below H_v^{(ℓ+1)}:

       H_v^{(ℓ)} = [ H_v^{(ℓ+1)} ; H_v^{(δ)} ].   (5)

   b) Row-weight concentration:
      i) Find t_1 (n_{ℓ+1} − k ≤ t_1 < n_ℓ − k) such that the row H_v^{(ℓ)}(t_1, ·) has the maximum Hamming weight, denoted by W_max;
      ii) Find t_0 (n_{ℓ+1} − k ≤ t_0 < n_ℓ − k) such that the row H_v^{(ℓ)}(t_0, ·) has the minimum Hamming weight, denoted by W_min;
      iii) If W_max = W_min or W_max = W_min + 1, set ℓ ← ℓ − 1 and go to Step 2); otherwise, go to the next step;
      iv) Find j_1 (0 ≤ j_1 ≤ k−1) such that H_v^{(ℓ)}(t_1, j_1) = 1 and the column H_v^{(ℓ)}(·, j_1) has the maximum Hamming weight;
      v) Find j_0 (0 ≤ j_0 ≤ k−1) such that H_v^{(ℓ)}(t_0, j_0) = 0 and the column H_v^{(ℓ)}(·, j_0) has the minimum Hamming weight;
      vi) Swap H_v^{(ℓ)}(t_0, j_0) and H_v^{(ℓ)}(t_1, j_1); go to Step i).

Remarks. From the above algorithm, we can see that the incremental sub-matrix H_v^{(δ)} corresponding to the information vector is finally modified into a sub-matrix whose rows have weight W_min or W_min + 1. This modification is motivated by a theorem in [22] stating that the optimal check-node degree distribution can be chosen to be concentrated. The modification also excludes columns with extremely low weight.

Second, the matrix H_w corresponding to the parity bits is constructed as a random accumulator, as specified in the following algorithm.

Algorithm 2: (Accumulator Randomization)
1) Initially, H_w is set to be the identity matrix of size (n_1 − k) × (n_1 − k).
2) For t = 0, 1, · · · , n_1 − k − 2, do the following step by step.
   a) Find the maximum integer T such that the code rates k/(k + T) and k/(k + t + 1) fall into the same sub-interval, say (0.05ℓ, 0.05(ℓ + 1)];
   b) Choose uniformly at random an integer i_1 ∈ [t + 1, T];
   c) Set the i_1-th component of the t-th column of H_w to 1, that is, set H_w(i_1, t) = 1.

Remarks. The accumulator-randomization approach introduces more randomness into the code, so that the current parity bit depends randomly on previous parity bits. We also note that, for all ℓ, H_w^{(ℓ)} has the property that every column but the last has column weight 2. It is worth pointing out that both the row-weight concentration algorithm and the

TABLE I: The p-sequences.
Code rate k/(k + t) | p_t (k = 1890) | p_t (k = 3780)
(0.95, 1.00]        | 0.0380         | 0.0170
(0.90, 0.95]        | 0.0200         | 0.0110
(0.85, 0.90]        | 0.0130         | 0.0050
(0.80, 0.85]        | 0.0072         | 0.0039
(0.75, 0.80]        | 0.0046         | 0.0023
(0.70, 0.75]        | 0.0038         | 0.0020
(0.65, 0.70]        | 0.0030         | 0.0016
(0.60, 0.65]        | 0.0028         | 0.0013
(0.55, 0.60]        | 0.0018         | 0.0010
(0.50, 0.55]        | 0.0017         | 0.0009
(0.45, 0.50]        | 0.0015         | 0.0007
(0.40, 0.45]        | 0.0014         | 0.0007
(0.35, 0.40]        | 0.0013         | 0.0006
(0.30, 0.35]        | 0.0012         | 0.0006
(0.25, 0.30]        | 0.0012         | 0.0005
(0.20, 0.25]        | 0.0012         | 0.0005
(0.15, 0.20]        | 0.0011         | 0.0005
(0.10, 0.15]        | 0.0011         | 0.0004
(0.05, 0.10]        | 0.0011         | 0.0004
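To illustrate how a p-sequence such as the one tabulated above drives the encoder, here is a minimal sketch of the Kite parity recursion w_t = w_{t−1} + ∑_i h_{i,t} v_i (mod 2), with each h_{i,t} equal to 1 with probability p_t. The function names `p_for_rate` and `kite_encode` are ours, and the choice of rate index k/(k + t + 1) inside the loop is an illustrative assumption; this is a sketch, not the authors' implementation.

```python
import random

def p_for_rate(rate, p_table):
    """Look up p_t from a table mapping rate sub-intervals (lo, hi] to p values."""
    for (lo, hi), p in p_table:
        if lo < rate <= hi:
            return p
    raise ValueError("rate outside tabulated range")

def kite_encode(v, n_parity, p_table, rng=random.random):
    """Kite parity recursion: w_t = w_{t-1} + sum_i h_{i,t} v_i (mod 2),
    where h_{i,t} is 1 with probability p_t and p_t depends on the
    current rate (here taken as k/(k + t + 1) after the t-th parity bit)."""
    k = len(v)
    w_prev = 0  # convention: w_{-1} = 0
    parity = []
    for t in range(n_parity):
        p_t = p_for_rate(k / (k + t + 1), p_table)
        # sum of the information bits selected by the Bernoulli(p_t) coins
        s = sum(v[i] for i in range(k) if rng() < p_t) % 2
        w_t = (w_prev + s) % 2
        parity.append(w_t)
        w_prev = w_t
    return parity
```

Note that the parity bits are rateless: the encoder can keep extending `parity` until the decoder succeeds.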
Fig. 1. Performances of the constructed improved Kite code and Kite code with k = 1890. The maximum iteration number is 50. From left to right, the curves correspond to rates 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, and 0.9, respectively.
accumulator randomization algorithm are executed in an off-line manner. To construct the prefix code of length n_ℓ, both algorithms modify only the incremental n_ℓ − n_{ℓ+1} rows of the parity-check matrix associated with the original Kite code, and do not affect the other rows.

B. Greedy Optimization Algorithms

It has been shown that, given {q_ℓ, 1 ≤ ℓ ≤ 19}, we can construct a parity-check matrix H by conducting the row-weight concentration algorithm (Algorithm 1) and the accumulator randomization algorithm (Algorithm 2). Hence, we can use the greedy optimization algorithm to find a good parity-check matrix. We have designed two improved Kite codes with data lengths k = 1890 and k = 3780. The p-sequences are shown in Table I; their performances are shown in Fig. 1 and Fig. 2, respectively. From Fig. 1 and Fig. 2, we have the following observations.
• The improved Kite codes perform well within a wide range of SNRs.
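The row-weight concentration step of Algorithm 1 can be sketched as follows. For simplicity this sketch balances the rows of a whole binary matrix rather than only the incremental rows of H_v^{(δ)}, and the function name `concentrate_rows` is ours; the column-selection rules follow steps iv)–v) above.

```python
def concentrate_rows(H):
    """Repeatedly move a 1 from the heaviest row to the lightest row
    (swapping a 1-entry in a max-weight column with a 0-entry in a
    min-weight column) until all row weights are within 1 of each other."""
    n_rows, n_cols = len(H), len(H[0])
    while True:
        weights = [sum(row) for row in H]
        t1 = max(range(n_rows), key=lambda t: weights[t])  # heaviest row
        t0 = min(range(n_rows), key=lambda t: weights[t])  # lightest row
        if weights[t1] - weights[t0] <= 1:
            return H  # row weights are concentrated on {W_min, W_min + 1}
        col_w = [sum(H[t][j] for t in range(n_rows)) for j in range(n_cols)]
        # column with a 1 in the heaviest row and maximum column weight
        j1 = max((j for j in range(n_cols) if H[t1][j] == 1), key=lambda j: col_w[j])
        # column with a 0 in the lightest row and minimum column weight
        j0 = min((j for j in range(n_cols) if H[t0][j] == 0), key=lambda j: col_w[j])
        H[t0][j0], H[t1][j1] = H[t1][j1], H[t0][j0]  # the 1 and the 0 swap places
```

Each swap reduces the heaviest row's weight by one and raises the lightest row's weight by one, so the loop terminates with a concentrated check-node degree distribution while preserving the total number of 1s.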
Fig. 2. Performances of the constructed improved Kite code with k = 3780.
It is desirable to have a closed-form approximation of the p-sequence. To this end, we have plotted the optimized parameters of Table I in Fig. 3. We then have the following empirical formula

    q_ℓ = (1/k) [ 1.65/(1.5 − 0.05ℓ)^6 + 2.0 ]   (6)
Fig. 3. The optimized parameters q_ℓ of Table I for k = 1890 and k = 3780, together with the empirical formula (6).
for 1 ≤ ℓ ≤ 19. From Fig. 3, we can see that the above formula matches the optimized p-sequence well.
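The empirical formula (6) is easy to evaluate directly. The sketch below (the function name `q_empirical` is ours) computes q_ℓ and can be checked against the optimized entries of Table I, under the reading that sub-interval ℓ covers rates (0.05ℓ, 0.05(ℓ + 1)]:

```python
def q_empirical(k, ell):
    """Empirical formula (6): q_ell = (1/k) * (1.65 / (1.5 - 0.05*ell)**6 + 2.0),
    for data length k and sub-interval index 1 <= ell <= 19."""
    return (1.65 / (1.5 - 0.05 * ell) ** 6 + 2.0) / k
```

For k = 1890, this gives q_1 ≈ 0.00115 and q_19 ≈ 0.033, in reasonable agreement with the optimized values 0.0011 and 0.0380 of Table I; note also that q_ℓ increases with ℓ, i.e., higher-rate prefix codes use denser incremental rows.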