Improved Upper Bound for Erasure Recovery in Binary Product Codes

A. A. Al-Shaikhi and J. Ilow
[email protected] and [email protected]
Dept. of Electrical and Computer Engineering, Dalhousie University, Halifax, NS, Canada B3J 2X4

Abstract— Product codes are powerful codes that can be used to correct errors and/or recover erasures. The focus of this paper is to evaluate the performance of such codes under erasure scenarios. Judging the erasure recovery performance of a product code by its minimum distance alone is pessimistic, because the code is actually capable of recovering many erasure patterns beyond those whose size is determined by the minimum distance. By investigating the non-correctable erasure patterns, this paper develops a tight upper bound on the post-decoding erasure rate for any binary product code. The analytical derivations are verified through computer simulations using Hamming and single parity check (SPC) product codes. A good agreement between the derived formulas and the simulation results is documented.

Index Terms— Erasure decoding, product codes, single parity check (SPC) codes.

I. INTRODUCTION
Product codes are attractive because they provide a mechanism for constructing long codes without increasing decoder complexity. They are formed by combining two component codes C1(n1, k1, d1) and C2(n2, k2, d2) in a matrix arrangement of data, where ni is the code length, ki is the number of information bits, and di is the minimum distance of the component code, to produce the code Cp(n1n2, k1k2, d1d2). An attractive feature of product codes is that they can be decoded by simple iterative component-wise decoding. One of the most popular and simplest component codes is the single parity check (SPC) code. The bit error rate (BER) performance of product codes under various decoding schemes has been studied in [1] and [2]. In contrast, this paper investigates exclusively the erasure recovery capability of product codes.

Erasure decoding algorithms are important in applications where lost data are to be recovered without resorting to retransmissions. A well documented scenario of data loss, treated in this paper as erasure, occurs in Asynchronous Transfer Mode (ATM) networks, where cells are discarded in intermediate switches because of buffer overflows. In this case, it is left to the higher layers of the protocol stack to recover from cell loss. If the cells arrive at the destination, it can be safely assumed that the received bits are correct. In real-time applications such as video transmission, where Automatic Repeat Request (ARQ) schemes cannot be deployed, a relatively new approach is to use packet/cell level forward error control (FEC) to recover lost packets/cells. The results in this paper are applicable to the design of such a cell loss recovery mechanism, provided that the cells are interleaved at the bit level so that erasures appear at random positions within the decodable codewords of a product code.

SPC codes have been applied to cell loss recovery in ATM networks because they are simple and fast to decode. In an SPC product code (SPCPC), the information bits are arranged in a k2×k1 matrix. In the one-dimensional (1-D) SPCPC, SPC encoding is applied column-wise only, forming a (k1(k2+1), k1k2, 2) product code, whose minimum distance is 2 since d1=1 and d2=2. A detailed description, analysis, and deployment of such codes for cell loss recovery schemes with/without retransmission in ATM networks can be found in [3]-[7]. The authors in [8] extend SPCPC encoding to two dimensions. In the 2-D SPCPC, SPC encoding is applied both row- and column-wise, forming a ((k1+1)(k2+1), k1k2, 4) product code, whose minimum distance is 4 since d1=2 and d2=2 (see the sketch at the end of this section). The analysis of the cell loss rate after applying the 2-D SPCPC is tedious and, as a result, the authors in [8] resort to deriving a simple upper bound on the code performance. The author in [9] later found a tighter bound on cell loss in the 2-D SPCPC under erasure decoding.

In this paper, the Hamming product code (HPC) is proposed as an alternative scheme for recovering lost cells. A computationally efficient erasure decoding algorithm for the HPC is introduced. Also, a very tight upper bound on the post-decoding erasure rate of any binary product code is derived. The derived bound is based on analyzing the erasure patterns, which leads to identifying the unrecoverable patterns.

The paper is organized as follows. In Section II, the encoder and erasure decoder of the HPC are introduced. The derived upper bound is presented in Section III. In Section IV, performance simulations are presented. The main conclusions are summarized in Section V.
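As a concrete illustration of the construction, the following Python sketch (the names are ours, purely illustrative, not from the paper) encodes a k2×k1 information block into the ((k1+1)(k2+1), k1k2, 4) 2-D SPCPC by appending one even-parity bit per row and per column.

```python
import numpy as np

def spcpc_encode_2d(info_bits: np.ndarray) -> np.ndarray:
    """Encode a k2 x k1 binary matrix into a (k2+1) x (k1+1) 2-D SPCPC codeword.

    Each row and each column is extended with a single even-parity bit; the
    corner bit is the parity of the row parities, which equals the parity of
    the column parities.
    """
    k2, k1 = info_bits.shape
    code = np.zeros((k2 + 1, k1 + 1), dtype=np.uint8)
    code[:k2, :k1] = info_bits
    code[:k2, k1] = info_bits.sum(axis=1) % 2   # row parity bits
    code[k2, :] = code[:k2, :].sum(axis=0) % 2  # column parity bits (incl. corner)
    return code

# Example: a 3x3 information block -> (4x4, 3x3, 2x2) SPCPC codeword.
info = np.array([[1, 0, 1], [0, 1, 1], [1, 1, 0]], dtype=np.uint8)
print(spcpc_encode_2d(info))
```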

II. HAMMING PRODUCT CODES

The focus in this paper is on bit level operations within product codes, with erasures occurring at random positions within the received codeword. The erasure channel model adopted in this paper is depicted in Fig. 1, where p is the random channel erasure rate and e represents a symbol in doubt. In this model, each bit is either received correctly or is missing (in doubt); i.e., the bits cannot be in error. Erasure decoding is the simplest form of soft decision decoding. In any code, one can correct twice as many erasures as errors, because the locations of the erasures are known exactly, whereas the locations of errors are not. For a code with minimum distance dmin, any pattern of up to dmin−1 erasures is guaranteed to be recoverable.

As in the case of the 2-D SPCPC, the HPC encoder arranges the information bits in a k2×k1 matrix, then encodes each of the k2 rows using the C1 Hamming code and each of the resulting n1 columns using the C2 Hamming code. The resulting matrix is sent through the channel row-wise. With bit interleaving at the encoder, erasures in the channel (even those resulting from cell loss) produce random erasure patterns within the received codeword matrix. Since the Hamming component codes have a minimum distance of three, the resulting HPC has a minimum distance of nine and hence guarantees the recovery of all patterns of eight or fewer erasures.

Based on the premise that the HPC is characterized by simple and fast error decoding procedures, the same is expected for erasure recovery. Erasure decoding of the HPC is done component-wise. Since each Hamming code has a minimum distance of three, it can recover at most two erasures per row or column. For single-bit erasure recovery within one column or row, decoding is performed in a single step: place a zero in the erased position and apply conventional Hamming decoding. For two-bit erasure recovery within one column or row, decoding is performed by the following three-step procedure:

1. Place zeros in the erased positions.
2. Apply conventional Hamming decoding.
3. Check the outcome:
   a) If no correction occurred, or a correction occurred in one of the erased positions, erasure decoding is done.
   b) If the correction does not correspond to any of the erased positions, abandon the Hamming correction and simply place ones in the erased positions. Erasure decoding is done.

This scheme recovers up to two erasures within one column or row while applying conventional Hamming decoding only once. As a result, the time required for erasure decoding of the HPC is minimized. A sketch of this procedure is given after Fig. 1.

Figure 1: The Erasure Channel Model (a bit is received correctly with probability 1−p and erased, i.e. received as e, with probability p)
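The single- and two-erasure procedures above can be made concrete. The sketch below (a minimal illustration, not the authors' implementation) recovers up to two erasures in a (7, 4, 3) Hamming codeword with one syndrome computation; it assumes the standard position ordering in which the syndrome of a single error at position j reads as the binary representation of j+1.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code; column j is the binary
# representation of j+1, so a syndrome directly names the flagged position.
H = np.array([[int(b) for b in f"{j:03b}"] for j in range(1, 8)], dtype=np.uint8).T

def recover_erasures_hamming(word, erased):
    """Recover up to two erasures (indices in `erased`) in a length-7 word.

    Implements the three-step procedure from the text: fill erasures with
    zeros, run one conventional Hamming decoding, and if the indicated
    correction falls outside the erased positions, set the erasures to one.
    """
    w = np.array(word, dtype=np.uint8)
    w[list(erased)] = 0                            # step 1: place zeros
    syndrome = (H @ w) % 2                         # step 2: one syndrome computation
    if not syndrome.any():
        return w                                   # step 3a: nothing to correct
    pos = int("".join(map(str, syndrome)), 2) - 1  # position flagged by the syndrome
    if pos in erased:
        w[pos] ^= 1                                # step 3a: correction at an erasure
    else:
        w[list(erased)] ^= 1                       # step 3b: abandon; place ones
    return w
```

The step 3b shortcut works because, with both erasures filled by zeros, a double error produces the syndrome h_{e1} + h_{e2}, which points at a third, non-erased position; this unambiguously signals that both erased bits were ones.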

Within a received codeword matrix, the standard iterative decoding approach is to decode erasures in all rows and columns sequentially. To further reduce HPC erasure decoding time, the decoding algorithm proposed in this paper first corrects double erasures within single columns or rows, and only after this process is completed does it start correcting single-bit erasures within single columns or rows. Every time a single-bit erasure is recovered in a row (column), the algorithm checks whether a three-erasure pattern in the crossing column (row) has thereby been reduced to a two-erasure pattern, since such patterns are corrected next. The erasure decoding stops when all single and double bit erasures within all rows and columns of the codeword matrix have been recovered. It has been observed that the proposed procedure triggers, on average, about 50% fewer conventional row and column Hamming decoding operations than the standard sequential decoding; the sketch below illustrates the scheduling.
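The scheduling just described might look as follows (a sketch under our assumptions: `recover_line` is a hypothetical helper, e.g. a wrapper around the Hamming routine above, that fixes one row or column holding at most two erasures).

```python
def decode_product_hpc(matrix, erased):
    """Iteratively recover erasures in a received HPC codeword matrix.

    `erased` is a set of (row, col) positions. Lines with exactly two
    erasures are served before lines with one, as proposed above; each pass
    re-scans because recovering a single erasure can turn a crossing
    three-erasure line into a decodable two-erasure line.
    """
    progress = True
    while progress and erased:
        progress = False
        for count in (2, 1):                 # double erasures first, then singles
            for axis in (0, 1):              # axis 0: rows, axis 1: columns
                for i in range(matrix.shape[axis]):
                    line = [pos for pos in erased if pos[axis] == i]
                    if len(line) == count:
                        recover_line(matrix, axis, i, line)  # hypothetical helper
                        erased.difference_update(line)
                        progress = True
    return matrix, erased
```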

III. ERASURE RATE ANALYSIS

In this section, a product code is evaluated in terms of the post-decoding erasure rate, Pe. For i erasures before decoding in a received n2×n1 codeword matrix, let U(i) denote the number of unrecoverable erasure patterns. It should be observed that even though a pattern of i erasures before decoding is not recoverable, the number of erasures remaining after decoding is in general smaller than i. For the U(i) patterns, let ei denote the average number of erasures remaining after decoding an unrecoverable pattern with i initial erasures. Then,

$$P_e = \frac{1}{n_1 n_2} \sum_{i=1}^{n_1 n_2} e_i \, U(i) \, p^i (1-p)^{n_1 n_2 - i} \qquad (1)$$

where erasures are assumed to occur randomly at a rate of p.

The erasure decoding capability of a code depends on its minimum distance: all erasure patterns of dmin−1 or fewer erasures are recovered, and hence U(i)=0 for 1 ≤ i ≤ dmin−1. The most straightforward way to evaluate (1) is to assume that any pattern of dmin=d1d2 or more erasures does not go through the decoding process at all, i.e., it is left intact by the decoder. With this assumption, i.e., with $U(i) = \binom{n_1 n_2}{i}$ and ei = i for i ≥ d1d2, (1) reduces to the following loose bound:

$$P_e < \frac{1}{n_1 n_2} \sum_{i=d_1 d_2}^{n_1 n_2} i \binom{n_1 n_2}{i} p^i (1-p)^{n_1 n_2 - i} \qquad (2)$$

Equation (2) is only a rough approximation of (1), since many of the i-erasure patterns (i ≥ d1d2) are in fact recoverable, and therefore $U(i) \ll \binom{n_1 n_2}{i}$. Also, as discussed earlier, even when an i-erasure pattern is truly unrecoverable, some of its erasures may be recovered, so that ei ≤ i. To arrive at a tight bound on (1), we attempt to find the exact value of U(i) for the first few non-zero terms in (1). Although ei ≤ i, we take ei = i when evaluating (1), since this has a minimal effect on (1) compared to a large value of U(i).

Now consider the erasure patterns having dmin = d1d2 erasures. All such patterns are recoverable except the rectangular ones, in which d1 erasures occupy the same column positions in each of d2 rows, so that every affected row contains d1 erasures and every affected column contains d2 erasures. The exact number of unrecoverable patterns in this case, as reported in [8], is therefore

$$U(d_1 d_2) = \binom{n_1}{d_1} \binom{n_2}{d_2} \qquad (3)$$

For the HPC with n1 = n2 = 7 and d1 = d2 = 3, for example, U(9) = 35 · 35 = 1225.
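Equation (3) is easy to verify by exhaustive search on a toy code. The following Python sketch (our illustration, not part of the paper) enumerates all 4-erasure patterns in a 3×3 2-D SPCPC (n1 = n2 = 3, d1 = d2 = 2) and counts those that a row/column peeling decoder cannot clear; the count agrees with C(3,2)·C(3,2) = 9.

```python
from itertools import combinations
from math import comb

n1 = n2 = 3          # toy 2-D SPCPC: each row/column is an SPC code (d1 = d2 = 2)

def unrecoverable(pattern):
    """Peeling decoder: a row or column with exactly one erasure is solvable."""
    erased = set(pattern)
    changed = True
    while changed and erased:
        changed = False
        for axis in (0, 1):
            for i in range(n1 if axis else n2):
                line = [p for p in erased if p[axis] == i]
                if len(line) == 1:           # an SPC code recovers a single erasure
                    erased.discard(line[0])
                    changed = True
    return bool(erased)

cells = [(r, c) for r in range(n2) for c in range(n1)]
count = sum(unrecoverable(pat) for pat in combinations(cells, 4))
print(count, comb(n1, 2) * comb(n2, 2))      # both print 9
```

Only the 2×2 rectangles survive: every other 4-erasure pattern leaves some row or column with a single erasure, which the SPC component code recovers.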

This paper investigates larger numbers of erasures than covered by (3). If we add m more erasures to the above rectangular pattern, the result remains the only type of unrecoverable pattern with d1d2+m erasures, provided that m ≤ min(d1, d2)−1. Since n1n2−d1d2 positions remain from which to choose the m extra positions, the number of such patterns is given by:

$$U(d_1 d_2 + m) = \binom{n_1}{d_1} \binom{n_2}{d_2} \binom{n_1 n_2 - d_1 d_2}{m} \qquad (4)$$

Equation (4) subsumes (3), which is recovered by setting m to zero. Once m reaches d1 or d2, however, (4) no longer holds, since some patterns are then counted more than once. Fortunately, the number of repeated patterns can be found and subtracted from (4). The number of repeated patterns is

$$d_2 \binom{n_1}{d_1} \binom{n_2}{d_2 + 1} \quad \text{and} \quad d_1 \binom{n_1}{d_1 + 1} \binom{n_2}{d_2}$$

for m = d1 and m = d2, respectively. (For m = d1, the additional erasures can complete an extra row of the rectangle; the resulting d1×(d2+1) rectangle is then generated by (4) in d2+1 ways, i.e., d2 times too many. The m = d2 case is analogous, with an extra column.) To put these repeated-pattern counts in a convenient form, recall that the probabilities of consecutive outcomes in n Bernoulli trials are related in the binomial distribution by

$$P(r+1) = \frac{n - r}{r + 1} \cdot \frac{p}{1 - p} \, P(r) \qquad (5)$$

where P(r) is the probability of getting r outcomes, each outcome occurring with probability p. From (5), consecutive binomial coefficients can be related by the following:

$$\binom{n}{r+1} = \frac{n - r}{r + 1} \binom{n}{r} \qquad (6)$$

Using (6), we end up with the following:

$$d_2 \binom{n_1}{d_1} \binom{n_2}{d_2 + 1} = d_2 \, \frac{n_2 - d_2}{d_2 + 1} \binom{n_1}{d_1} \binom{n_2}{d_2} \qquad (7)$$

$$d_1 \binom{n_1}{d_1 + 1} \binom{n_2}{d_2} = d_1 \, \frac{n_1 - d_1}{d_1 + 1} \binom{n_1}{d_1} \binom{n_2}{d_2} \qquad (8)$$

If d1 = d2, then subtracting (7) and (8) from (4) gives the number of unrecoverable patterns as

$$U(d_1 d_2 + m) = \binom{n_1}{d_1} \binom{n_2}{d_2} \left[ \binom{n_1 n_2 - d_1 d_2}{m} - d_1 \frac{n_1 - d_1}{d_1 + 1} - d_2 \frac{n_2 - d_2}{d_2 + 1} \right] \qquad (9)$$

where d1 ≤ m ≤ min(d1d2 − 1, d1 + d2 − 1). If d1 ≠ d2, then the number of unrecoverable patterns is given by

$$U(d_1 d_2 + m) = \begin{cases} \binom{n_1}{d_1} \binom{n_2}{d_2} \left[ \binom{n_1 n_2 - d_1 d_2}{m} - d_1 \frac{n_1 - d_1}{d_1 + 1} \right], & \text{if } d_2 = \min(d_1, d_2) \\[6pt] \binom{n_1}{d_1} \binom{n_2}{d_2} \left[ \binom{n_1 n_2 - d_1 d_2}{m} - d_2 \frac{n_2 - d_2}{d_2 + 1} \right], & \text{if } d_1 = \min(d_1, d_2) \end{cases} \qquad (10)$$

where min(d1, d2) ≤ m ≤ min[2 min(d1, d2) − 1, max(d1, d2) − 1]. If min[2 min(d1, d2) − 1, max(d1, d2) − 1] equals max(d1, d2) − 1, then (10) acquires one extra branch, given by the right-hand side of (9), for max(d1, d2) ≤ m ≤ 2 min(d1, d2) − 1.

To recap all equations, consider first the case d = d1 = d2. The number of unrecoverable patterns is then given by:

$$U(d_1 d_2 + m) = \begin{cases} \binom{n_1}{d_1} \binom{n_2}{d_2} \binom{n_1 n_2 - d_1 d_2}{m}, & 0 \le m \le d - 1 \\[6pt] \binom{n_1}{d_1} \binom{n_2}{d_2} \left[ \binom{n_1 n_2 - d_1 d_2}{m} - d_1 \frac{n_1 - d_1}{d_1 + 1} - d_2 \frac{n_2 - d_2}{d_2 + 1} \right], & d \le m \le \min(d_1 d_2 - 1,\ d_1 + d_2 - 1) \end{cases} \qquad (11)$$

In the most general case, when d1 ≠ d2, with ds = min(d1, d2) and dx = max(d1, d2), the number of unrecoverable patterns is

$$U(d_1 d_2 + m) = \begin{cases} \binom{n_1}{d_1} \binom{n_2}{d_2} \binom{n_1 n_2 - d_1 d_2}{m}, & 0 \le m \le d_s - 1 \\[6pt] \binom{n_1}{d_1} \binom{n_2}{d_2} \left[ \binom{n_1 n_2 - d_1 d_2}{m} - d_1 \frac{n_1 - d_1}{d_1 + 1} \right], & d_s = d_2 \le m \le \min(2 d_s - 1,\ d_x - 1) \\[6pt] \binom{n_1}{d_1} \binom{n_2}{d_2} \left[ \binom{n_1 n_2 - d_1 d_2}{m} - d_2 \frac{n_2 - d_2}{d_2 + 1} \right], & d_s = d_1 \le m \le \min(2 d_s - 1,\ d_x - 1) \\[6pt] \binom{n_1}{d_1} \binom{n_2}{d_2} \left[ \binom{n_1 n_2 - d_1 d_2}{m} - d_1 \frac{n_1 - d_1}{d_1 + 1} - d_2 \frac{n_2 - d_2}{d_2 + 1} \right], & d_x \le m \le 2 d_s - 1 \end{cases} \qquad (12)$$

Erasure patterns with larger numbers of erasures contribute very little to (1) because of the $p^i (1-p)^{n_1 n_2 - i}$ factors, and are treated here as left intact by the decoder, as in the loose bound calculation. For such patterns we set

$$U(d_1 d_2 + m) = \binom{n_1 n_2}{d_1 d_2 + m} \qquad (13)$$

where m ≥ 2 min(d1, d2) − 1 when d1 ≠ d2, or m ≥ min(d1 + d2, d1d2) when d1 = d2. Although U(d1d2 + m) is intuitively smaller than (13) suggests, it can be shown that (13) becomes exact once the number of erasures satisfies

$$d_1 d_2 + m \ge n_2 (d_1 - 1) + \big( n_1 - (d_1 - 1) \big) (d_2 - 1) \qquad (14)$$

Substituting into (1) the appropriate terms for U(i), as given in (11), (12), and (13), and taking ei = i, yields the bound proposed in this paper for the post-decoding erasure rate, Pe.
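As a worked illustration, the following Python sketch (ours, not the authors' code) assembles the proposed bound for the d1 = d2 case by substituting (11) and (13) into (1) with ei = i; the repeated-pattern terms are subtracted in the integer form given by the left-hand sides of (7) and (8). Only the component code lengths and minimum distances are needed.

```python
from math import comb

def U(i, n1, n2, d1, d2):
    """Unrecoverable i-erasure pattern count per eqs. (11) and (13); assumes d1 == d2."""
    N, dmin = n1 * n2, d1 * d2
    m = i - dmin
    if m < 0:
        return 0                                 # below dmin: always recoverable
    if m <= d1 - 1:                              # eq. (11), first branch
        return comb(n1, d1) * comb(n2, d2) * comb(N - dmin, m)
    if m <= min(dmin - 1, d1 + d2 - 1):          # eq. (11), second branch
        return (comb(n1, d1) * comb(n2, d2) * comb(N - dmin, m)
                - d1 * comb(n1, d1 + 1) * comb(n2, d2)    # repeats, cf. eq. (8)
                - d2 * comb(n1, d1) * comb(n2, d2 + 1))   # repeats, cf. eq. (7)
    return comb(N, i)                            # eq. (13): left intact

def pe_bound(n1, n2, d1, d2, p):
    """Proposed upper bound on the post-decoding erasure rate: eq. (1) with e_i = i."""
    N = n1 * n2
    return sum(i * U(i, n1, n2, d1, d2) * p**i * (1 - p)**(N - i)
               for i in range(1, N + 1)) / N

# Example: HPC from two (7, 4, 3) Hamming codes at raw erasure rate p = 0.05.
print(pe_bound(7, 7, 3, 3, 0.05))
```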


IV. PERFORMANCE SIMULATIONS

To check the validity of the developed analytical results, the erasure decoding performance is documented in this section using Monte-Carlo simulations and the tight upper bound from Section III. The following three product codes are used in testing the accuracy of the post-decoding erasure rate, Pe: (1) the (7×6, 6×6, 2×1) 1-D SPCPC proposed in [6], for which n1=6, n2=7, k1=k2=6, d1=1, and d2=2; (2) the (7×7, 6×6, 2×2) 2-D SPCPC proposed in [8], for which n1=n2=7, k1=k2=6, and d1=d2=2; and (3) the (7×7, 4×4, 3×3) HPC obtained from two (7, 4, 3) Hamming constituent codes.

Figure 2 shows the erasure decoding performance, Pe, obtained from Monte-Carlo simulations as a function of p, the raw probability of erasure in the channel. From this diagram, it can be observed that in order to achieve Pe = 10^-5, the raw erasure rate in the channel should not exceed (i) p = 10^-3, (ii) p = 2×10^-2, and (iii) p = 10^-1 for the 1-D SPCPC, the 2-D SPCPC, and the HPC, respectively. In this overall performance comparison between different codes, Fig. 2 does not account for the bandwidth expansion of the HPC, whose rate of 16/49 ≈ 1/3 is low compared to the high rates of the 1-D and 2-D SPCPC. Fair comparisons of the Pe performance improvements of the HPC over the SPCPC are beyond the scope of this paper. Although SPC codes are in general simpler and faster at recovering erasures than Hamming codes, the latter are still not computationally involved; this has been observed in the comparable running times of the two algorithms. The reason is the effectiveness of the simplified erasure recovery procedure for the HPC developed in Section II.

Figures 3(a), (b), and (c) show the post-decoding erasure rate, Pe, of the 1-D SPCPC, the 2-D SPCPC, and the HPC, respectively. In each of these graphs, Pe is obtained using four approaches: (i) the loose bound in (2) (line with circles); (ii) the Pe bound developed in [8] (line with triangles); (iii) the bound proposed in this paper (solid line); and (iv) Monte-Carlo simulation (line with squares). From these figures, it can be seen that the developed bound is very close to the results obtained through the Monte-Carlo simulations, and it is tighter than the bound developed in [8]. Although the bound developed for the 2-D SPCPC in [9] could also be used here for performance comparisons, we decided not to include it because of its limited applicability.

The bound developed in this paper is applicable to any type of binary product code. The only parameters needed to estimate Pe are the lengths and the minimum distances of the component codes. As a result, the bound could be used for code rate calculations in adaptive packet-level FEC applications, where the code is selected based on p, the raw probability of erasure in the channel, which in turn is related to the packet loss rate in the network.

Figure 2: Performance Comparison of Three Product Codes Over Erasure Channel
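A Monte-Carlo check of the kind reported here can be set up as follows (a minimal sketch under our assumptions: the decoder exposes the (matrix, erased-set) interface of the decode_product_hpc sketch in Section II, and code linearity lets us transmit the all-zero codeword).

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_pe(decode, n2, n1, p, trials=10_000):
    """Estimate the post-decoding erasure rate by erasing each bit of the
    all-zero codeword i.i.d. with probability p and counting what remains."""
    residual = 0
    for _ in range(trials):
        erased = {(r, c) for r in range(n2) for c in range(n1)
                  if rng.random() < p}
        matrix = np.zeros((n2, n1), dtype=np.uint8)
        _, left = decode(matrix, erased)
        residual += len(left)
    return residual / (trials * n1 * n2)
```

Sweeping simulate_pe and pe_bound over a range of p values reproduces the kind of comparison shown in Figs. 2 and 3.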

Figure 3: Performance of Three Product Codes Over Erasure Channel: (a) 1-D SPCPC, (b) 2-D SPCPC, (c) HPC

V. CONCLUSION

This paper developed a tight upper bound on the post-decoding erasure rate of binary product codes over symmetric erasure channels. The bound is established by finding the exact number of unrecoverable erasure patterns for a given number of erasures in the received codeword matrix. The focus of the bound calculation is on characterizing the unrecoverable erasure patterns when the number of erasures in the received codeword matrix is close to the minimum distance of the product code. The bound is applicable to any type of binary product code: it requires only the lengths and the minimum distances of the component codes. Its accuracy has been verified using the Hamming product code (HPC) as well as two versions of the single parity check product code (SPCPC). New results regarding erasure decoding of the HPC have been presented as well. The development of the improved bound on the post-decoding erasure rate has been motivated by the applicability of such codes in combating packet loss in communication networks where retransmissions are not always feasible.

REFERENCES

[1] L. Ping, S. Chan, and K. Yeung, "Iterative decoding of multi-dimensional concatenated single parity check codes," in Proc. IEEE Int. Conf. Communications, vol. 1, 1998, pp. 131–135.
[2] X. Huang, N. Phamdo, and L. Ping, "BER bounds on parallel concatenated single parity check arrays and zigzag codes," in Proc. IEEE Global Telecommunications Conference (GLOBECOM), 1999, pp. 2436–2440.
[3] H. Ohta and T. Kitami, "A cell loss recovery method using FEC in ATM networks," IEEE J. Select. Areas Commun., vol. 9, pp. 1471–1483, Dec. 1991.
[4] H. T. Lim and J. S. Song, "Cell loss recovery method in B-ISDN/ATM networks," Electronics Letters, vol. 31, no. 11, pp. 849–851, May 1995.
[5] H. T. Lim, D. Nyang, and J. S. Song, "Improving the performance of cell loss recovery in ATM networks," Electronics Letters, vol. 32, no. 17, pp. 1540–1542, Aug. 1995.
[6] M. A. Kousa, A. K. Elhakeem, and H. Yang, "Performance of ATM networks under hybrid ARQ/FEC error control scheme," IEEE/ACM Trans. Networking, vol. 17, pp. 917–925, Dec. 1999.
[7] C. H. Xie and S. Berber, "Simulation study of a cell loss recovery scheme in ATM networks," in Proc. Fourth Int. Conf. ICICS-PCM, vol. 2, Dec. 2003, pp. 907–911.
[8] M. A. Kousa and A. H. Mugaibel, "Cell loss recovery using two-dimensional erasure correction for ATM networks," in Proc. Seventh Int. Conf. on Telecommunication Systems, Mar. 1999, pp. 85–89.
[9] M. A. Kousa, "A novel approach for evaluating the performance of SPC product codes under erasure decoding," IEEE Trans. Commun., vol. 50, no. 1, Jan. 2002.
