Vandermonde Matrix Packet-Level FEC for Joint Recovery from Errors and Packet Loss
Ali A. Al-Shaikhi and Jacek Ilow
Department of Electrical and Computer Engineering, Dalhousie University, Halifax, Nova Scotia, Canada
[email protected] [email protected]
Abstract—Due to layer interactions, packets in cross-layer protocol designs are impaired by both errors and packet loss, the latter also referred to as packet erasure. In real-time applications, packet-level forward error correction (FEC) schemes are favorable candidates to enhance the performance of such protocols without retransmission. This paper proposes to use one type of code to recover from both impairments. In particular, the approach builds on recently introduced packet-oriented systematic block codes in which the parity matrix is based on the Vandermonde structure with shift operators. These codes are maximum distance separable (MDS) and have the advantage over other MDS codes of simple encoding/decoding processes. At the packet level, de-coupling of the erasure recovery and error correction processes is a challenging task, and this paper explores the advantages of a packet-level product of two codes: one for erasure recovery and the other for error correction. Monte Carlo simulations show that the throughput of the proposed system improves when applying both error correction and erasure recovery, compared to a system applying erasure-only recovery with an equivalent overall minimum distance.
I. INTRODUCTION

In the conventional protocol stack, packet loss recovery is performed at the transport layer using automatic repeat request (ARQ), while partial recovery from erroneous bit transmissions is performed at the physical/link layer using conventional FEC [1]. Specifically, type-II hybrid-ARQ protocols send the parity check digits/packets from the underlying FEC encoder for error correction to the receiver only when they are needed [2]. This method was an improvement over type-I hybrid-ARQ protocols, which send the FEC redundancy independently of the channel conditions. In packet-combining ARQ systems, each received packet is combined with its predecessors until the resulting combined packet can be reliably decoded [3]. In packet-level FEC codes, where a group of symbols comprises a packet, the mechanism for declaring erasures (lost packets) relies on packet sequence numbers to determine the locations of lost packets. With proper interleaving and grouping of symbols, codes originally designed for error correction, i.e., for errors introduced in the channel during symbol-by-symbol transmission, are already used for lost-packet recovery (as erasure decoding) in the Internet, to aid services based on the user datagram protocol (UDP). In packet-level FEC, just as in conventional FEC, the recovery
978-1-4244-2644-7/08/$25.00 © 2008 IEEE
from lost packets improves the packet loss rate (PLR) in the network without the need for retransmission. This approach is beneficial for services involving real-time applications and multicasting [3], [4]. Traditional wireless communication protocols do not relay corrupted packets to the higher layers and do not forward them over multiple hops. This can lead to a significant number of packet drops and thus a severe deterioration in the performance of real-time applications and/or multi-hop networks, making the recovery of lost packets using packet-level FEC not feasible. In this paper, we consider that packets are lost due to buffer overflow in the intermediate forwarding network nodes, while packets with errors from the data link layer are not discarded. The erroneous frames/packets are still delivered to the mechanism for joint recovery from errors and packet loss. This assumption is in alignment with recent protocols for wireless networks that rely on significant interactions between various layers and couple their functionalities, with the goal of achieving more efficient performance. This research area, referred to as cross-layer design (CLD), offers new opportunities for applications of packet-level FEC codes. Cross-layer protocols support better quality of service (QoS) and exhibit substantial promise to mitigate the problem of dropped packets, since they relay corrupted packets to higher layers and forward them over multiple hops [5], [6]. There are a number of powerful and efficient FEC schemes to recover from both erasures and errors simultaneously [7], such as low-density parity-check (LDPC) and Reed-Solomon (R-S) codes [3]. However, the codewords in these codes are rather short, and this dictates the construction of the r parity packets from the k information packets, which is usually visualized as arranging the k information packets row-wise and running the FEC code column-wise.
An alternative is to use turbo codes: first construct the redundancy bits from the information bits to obtain a turbo code frame of coded bits, and then split the frame into packets [8]. In this paper, we use recently designed packet-level FEC codes to present a number of approaches that can recover packets impaired by both erasures and errors, increasing the throughput of communication systems. In addition to the erasure/error recovery capability of the coding approaches, another important aspect to consider is the complexity of
the encoding and decoding processes. This aspect motivates the investigations in this paper, where the only operations permitted on packets are arithmetic packet shifts and binary additions. The systematic codes used in the proposed approaches depend on coefficient matrices following the Vandermonde structure. The latter, by incorporating packet shifts, facilitates fast processing of whole packets during encoding and decoding. This is in contrast with other types of packet-level block codes where, in a block of n packets, the parity matrix is used multiple times to construct multiple codewords, and decoding is triggered multiple times to recover from the erasures and/or errors. The proposed codes are MDS and, while maintaining recovery performance comparable to more conventional codes, are also quite flexible in the choice of code parameters.

II. LINEAR BLOCK CODES

The packet-level codes considered in this paper operate by converting a block of k information packets into a group of n coded packets. Such codes are able to correct/recover a certain number of errors and/or erasures governed by the minimum distance d_min of the code. These codes are represented with parameters (n, k, d_min). Maximum distance separable (MDS) codes, considered in this paper and featuring an optimal correction capability, have d_min = n − k + 1. These codes can correct t = ⌊(d_min − 1)/2⌋ = ⌊(n − k)/2⌋ errors or e = d_min − 1 = n − k erasures [3]. In conventional applications of systematic codes to packet-level FEC, a unit of information, either a symbol or a bit, m_i, i = 1, ..., k, is taken from each of the k information packets. These k symbols are used to construct r parity symbols with the help of the coefficient matrix P. These parity symbols are then transmitted in r redundancy packets. The coded/transmitted symbols in n = r + k packets, represented by the column vector p, are calculated at the transmitter (encoder) using the following linear system of equations:

$$G \cdot m = p \qquad (1)$$
where m is the column vector of k information symbols from k packets,

$$G = \begin{bmatrix} I \\ P \end{bmatrix}$$

is the n × k generator matrix, and I is the k × k identity matrix. For R-S codes, the matrix multiplication in (1) uses Galois field (GF) arithmetic. In general, if all information packets are of the same length L, (1) is used ⌊L/b⌋ + 1 times to construct the coded packets, where b is the number of bits in each symbol and ⌊·⌋ is the floor operator. The assumption used in this paper is that all packets in the coded group are of the same length. This imposes some limitations, which can be overcome by padding the packets to the same length. For all symbols in the group of packets, we rewrite (1) using the packet version of this relation:

$$G \cdot M = P \qquad (2)$$
where M is a matrix of information packets arranged in rows, each with ⌊L/b⌋ + 1 symbol elements, and P is the corresponding matrix of coded packets.
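As a concrete instance of (2), the following minimal sketch (our illustration, not the paper's implementation) encodes packets with one of the simplest systematic designs, the single parity check code, whose parity packet is the bitwise XOR of the k information packets:

```python
# Minimal sketch (not the paper's code): packet-level systematic encoding
# G . M = P over GF(2), using the single parity check code as the design.
# Packets are modeled as lists of bits; k information packets yield
# n = k + 1 coded packets, the last being their column-wise XOR.

def encode_single_parity(info_packets):
    """Return the n = k + 1 coded packets of a (k+1, k, 2) code."""
    parity = [0] * len(info_packets[0])
    for pkt in info_packets:
        parity = [a ^ b for a, b in zip(parity, pkt)]
    return info_packets + [parity]   # systematic: info packets come first

M = [[1, 0, 1, 1], [0, 1, 1, 0], [1, 1, 0, 0]]   # k = 3 packets, L = 4 bits
P = encode_single_parity(M)
print(P[3])   # → [0, 0, 0, 1]
```

Since d_min = 2 for this code, any single lost packet can be recovered as the XOR of the surviving n − 1 packets, which is the packet-level erasure decoding described above.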
The main challenge in the design of a code is to design G or, in the case of systematic codes, the corresponding coefficient matrix P. For MDS codes, the matrix G should be designed in such a way that any k × k sub-matrix, G_k, is invertible (full rank). The two simplest designs of G are the repetition code and the single parity check code. In the case of the systematic codes considered in this paper, there are other matrices that could be used as P resulting in the desired property of G_k being invertible, such as the Cauchy and Vandermonde matrices. We next discuss the Vandermonde matrix, which is used as the coefficient matrix P in R-S codes. The Vandermonde matrix is also utilized in the design of the code proposed in this paper, but with a different building element than in the case of R-S codes. The Vandermonde matrix V, with r × k elements, is given by [9]:

$$V = \begin{bmatrix} 1 & \alpha_0 & \alpha_0^2 & \cdots & \alpha_0^{k-1} \\ 1 & \alpha_1 & \alpha_1^2 & \cdots & \alpha_1^{k-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & \alpha_{r-1} & \alpha_{r-1}^2 & \cdots & \alpha_{r-1}^{k-1} \end{bmatrix} \qquad (3)$$

This matrix is proven to be non-singular if the parameters α_z, for z = 0, ..., r − 1, are distinct. For R-S codes with P = V, the α_z elements are taken from the extended GF, GF(p^b), where p is a prime number (p = 2 is used most of the time) and b is any integer (b = 8 is used for highest efficiency, to represent a byte). Therefore, multiplication and addition operations in the encoding and decoding processes must be done in that extended GF. As a result, the following problems are encountered when working with R-S codes for packet-level FEC: (i) the code rates (parameters) are limited, and (ii) the encoding and decoding processes are computationally intensive. Moreover, it has been shown that the Vandermonde matrix, with elements taken from the finite GF, is not always non-singular [9].

III. VANDERMONDE-BASED BINARY CODE DESIGN

In this section, we present a code design using a coefficient matrix P, based on the Vandermonde structure, which is efficient to implement and possesses the desired properties when performing the decoding procedure in the proposed MDS codes. We choose the α_z elements in the Vandermonde matrix in (3) to be x^z, for z = 0, ..., r − 1, where x^z · M_i stands for a right arithmetic shift by z bits applied to the row information packet M_i, i = 1, ..., k. In other words, M_i is represented as a polynomial M_i(x); however, for brevity of notation, we will use M_i to describe the polynomial representation of a packet. Therefore, the coefficient matrix in our code is given by:

$$V = \begin{bmatrix} 1 & 1 & 1 & \cdots & 1 \\ 1 & x & x^2 & \cdots & x^{k-1} \\ 1 & x^2 & x^4 & \cdots & x^{2(k-1)} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x^{r-1} & x^{2(r-1)} & \cdots & x^{(r-1)(k-1)} \end{bmatrix} \qquad (4)$$
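A small sketch of encoding with this shift-operator matrix (our Python illustration, assuming the zero-padding explained next in the text): parity row z is the XOR of the information packets, with packet i delayed by z·i bits.

```python
# Sketch of the Vandermonde shift-operator encoder: parity packet z is
# XOR_i ( M_i delayed by z*i bits ), for i = 0..k-1 and z = 0..r-1.
# Packets are bit lists padded to p_s = L + (r-1)*(k-1) so that the
# largest delay never pushes information bits off the end (no wrap-around).

def delay(bits, s):
    """Delay (right arithmetic shift) by s bits: prepend zeros, drop tail."""
    return [0] * s + bits[:len(bits) - s]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def encode(info, r):
    k = len(info)
    pad = (r - 1) * (k - 1)
    padded = [pkt + [0] * pad for pkt in info]        # p_s = L + (r-1)(k-1)
    parities = []
    for z in range(r):
        acc = [0] * len(padded[0])
        for i, pkt in enumerate(padded):
            acc = xor(acc, delay(pkt, z * i))
        parities.append(acc)
    return padded + parities                           # systematic codeword

coded = encode([[1, 0, 1], [1, 1, 0], [0, 1, 1]], r=2)  # a (5, 3) code
# coded[3] is the plain XOR row (z = 0); coded[4] uses delays of 0, 1, 2 bits
```

Only shifts and modulo-2 additions are used, which is what makes the encoder fast compared to GF(2^b) arithmetic.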
To ensure that this new Vandermonde matrix, consisting of the arithmetic shift operators x^z applied to the information packets, results in elements that are distinct, the packet size has to be increased by at least (r − 1)(k − 1) over the original information packet size L, by initially filling these packets with zero bits up to the size p_s = L + (r − 1)(k − 1). This packet size ensures that the arithmetic shifts implement delays (not cyclic shifts). The encoding and decoding processes for the proposed design are summarized next.

A. Encoding Process

The Vandermonde-based matrix, augmented with the identity matrix (systematic code), forms the n × k generator matrix of the code:

$$G = \begin{bmatrix} I_{k \times k} \\ V_{r \times k} \end{bmatrix} \qquad (5)$$

where V is the designed matrix. For the proposed (n, k, n − k + 1) systematic MDS code, the encoder uses (2) and (5) to calculate:

$$\begin{bmatrix} I_{k \times k} \\ V_{r \times k} \end{bmatrix} \cdot M = P \qquad (6)$$

where P is a column vector consisting of the elements P_i (i = 0, ..., n − 1), which are the n coded packets; the first k of them are the original information packets. The remaining n − k packets are generated by modulo-2 addition of all k information packets after a proper shift of each information packet. Compared to conventional encoding based on (1) and (2), the encoding in (6) is fast and efficient because it uses only shift and modulo-2 addition operations. Next, we summarize the decoding processes for erasure recovery and single erroneous packet correction for the proposed codes.

B. Decoding Process

The MDS code can be used to recover from erasures or to correct errors. Erasure recovery and error correction techniques for packet-oriented designs were described, proven, and analyzed in [7]. In this section, we summarize these two techniques individually. For the erasure recovery technique at the receiver side, assuming no errors, if there are L lost information packets (L ≤ k), we have to solve (2) for the missing information packets, M_L, as follows:
$$M_L = \left(G^L\right)^{-1} \cdot \tilde{P}^L \qquad (7)$$

where (·)^{-1} represents the inverse of a matrix, G^L is an L × L sub-matrix of G that remains after proper substitution for the packets that are received, and \tilde{P}^L is the vector of received parity packets after proper substitution of the received information packets. Essentially, G^L can be any square sub-matrix of V. The sub-system of equations in (7) always has a unique solution because of the geometric-progression nature of each row of the Vandermonde matrix when the elements are shift operators. Also, we verified the invertibility of G_k by simulation, for a large set of parameters n and k [7]. Although (G^L)^{-1} can be found by any matrix inversion technique, we showed a very efficient technique to find the inverse for such a packet-based design. The technique converts all operations into simple modulo-2 additions and shifts [7]. For error correction, all the coded packets P are received at the receiver. From the received packets arranged row-wise in a matrix R = E + P, where E contains the error packets, we first have to infer which packet(s) is (are) in error, and then, within these packet(s), the error locations and values. For the binary codes considered in this paper, the error values are not required, since knowing their positions one simply flips the corresponding bits. We use syndrome decoding, which depends only on the error packets and the parity check matrix H. The parity check matrix H is the n × r matrix given by:

$$H = \begin{bmatrix} V^T_{k \times r} \\ I_{r \times r} \end{bmatrix} \qquad (8)$$
where (·)^T represents the transpose of a matrix. We showed how syndrome decoding is utilized to correct a single erroneous packet in a group of n received packets. Specifically, we designed a family of codes having a minimum distance of three that is capable of correcting a single packet in error irrespective of the packet size. The family has code parameters (k + 2, k, 3), where k ≥ 2, and a rate of k/(k + 2) [10].

IV. PACKET RECOVERY APPROACHES AND PERFORMANCE

Here, we present the packet recovery approaches and their performance using the previously investigated MDS codes in a combined error-erasure channel model. First, we describe the error-erasure channel encountered in CLD and how it can be decomposed into a modified error channel, so that the channel can be used to analyze system performance in cross-layer protocols. Second, we discuss the combined packet recovery approaches consisting of an erasure recovery code and an error correction code. Finally, we discuss the throughput performance of the packet recovery approaches in a channel impaired with joint errors and erasures.

A. Error-Erasure Channel Model

The binary error-erasure channel (EEC) model shown in Fig. 1 assumes that the unit of transmission is subject to both error and erasure effects. A transmitter sends a unit, and the receiver receives the unit, the unit with error(s), or a message that the unit was not received ("erased"). The parameters describing the hybrid channel model are the erasure crossover probability μ, the error crossover probability ρ, and 1 − ρ − μ, which represents the probability of receiving a unit correctly. This hybrid channel model encompasses errors as in the binary symmetric channel (BSC) and erasures as in the binary erasure channel (BEC).
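Before moving on, the single-packet syndrome correction of Section III-B for the (k + 2, k, 3) family can be sketched as follows. This is our hypothetical reconstruction (the exact procedure is in [10]): with parity rows z = 0 and z = 1, an error pattern e in information packet i gives syndromes S0 = e and S1 = e delayed by i bits, so the erroneous packet is located by matching the delay.

```python
# Hypothetical sketch (our reconstruction, not the procedure from [10]) of
# single-packet syndrome decoding for a (k+2, k, 3) shift-operator code.
# recv holds k info packets followed by two parity packets built with
# delays z*i for z = 0, 1; syndromes S0, S1 expose the error pattern.

def delay(bits, s):
    return [0] * s + bits[:len(bits) - s]

def xor(a, b):
    return [x ^ y for x, y in zip(a, b)]

def syndromes(recv, k):
    s0, s1 = recv[k], recv[k + 1]
    for i in range(k):
        s0 = xor(s0, recv[i])
        s1 = xor(s1, delay(recv[i], i))
    return s0, s1

def correct_single_packet(recv, k):
    s0, s1 = syndromes(recv, k)
    if any(s0) and any(s1):              # an information packet is hit
        for i in range(k):
            if delay(s0, i) == s1:       # S1 = x^i . S0  =>  packet i in error
                recv[i] = xor(recv[i], s0)   # flip the erroneous bits
                break
    return recv                          # parity-only errors need no fix

# Usage: k = 3 info packets of 3 bits, padded by (r-1)(k-1) = 2 zeros.
pkts = [[1, 0, 1, 0, 0], [1, 1, 0, 0, 0], [0, 1, 1, 0, 0]]
par0, par1 = [0] * 5, [0] * 5
for i, p in enumerate(pkts):
    par0, par1 = xor(par0, p), xor(par1, delay(p, i))
recv = [list(p) for p in pkts] + [par0, par1]
recv[1][0] ^= 1                          # corrupt one bit of packet 1
fixed = correct_single_packet(recv, k=3)
assert fixed[:3] == pkts                 # erroneous packet restored
```

Only shift comparisons and XORs are needed, mirroring the complexity argument made for the encoder.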
Because this hybrid model is not directly suitable for throughput analysis in cross-layer protocols, the procedure described in [6], [11] is followed in order to decompose the hybrid channel into an erasure-dependent BSC.
Fig. 1. The error-erasure channel model.
As in the conventional approach, packets are composed of two parts: a header and a payload. The header contains two checksums, one for the header information only and the other for the payload. First, the parameters that quantify packet erasures and bit errors in a packet in CLD protocols are introduced. The probability that at least a single bit is in error in the header and/or in the payload is denoted by δ. Therefore, in the conventional system, the probability that a packet is dropped due to the failure of at least one of the checksums is also δ. As a result, δ is the raw packet drop probability. The probability that at least a single bit is in error in the header only is represented by λ. If it is assumed that the header must be error-free in both the conventional and CLD systems, then the probability of a packet drop in both the CLD and conventional systems due to the failure of the header checksum is λ. Thus, for the CLD system, δ − λ represents the probability of a delivered but corrupted packet. Now, if ε represents the probability of error of a random bit selected from the corrupted packet, the overall probability of bit error in the CLD system is (δ − λ) · ε. Finally, the conditional probability ρ_conditional, that a bit in error is part of an unerased packet, is given by:

$$\rho_{conditional} = \frac{(\delta - \lambda) \cdot \varepsilon}{1 - \lambda} \qquad (9)$$

In other words, the conditional crossover probability ρ_conditional converts the composite EEC into a conditional BSC with only bit errors, which can be visualized by setting μ = 0 in Fig. 1 (note: ρ = ρ_conditional). This conditional BSC is simulated to analyze the throughput of the system utilizing Vandermonde-based packet recovery codes when the packets are affected by both errors and erasures.
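Equation (9) can be checked numerically with a one-line helper (a minimal sketch; the value ε = 0.01 matches the setting used for Fig. 5):

```python
# Sketch of the channel decomposition in (9): convert the (delta, lambda,
# epsilon) description of the error-erasure channel into the conditional
# BSC crossover probability used in the throughput simulations.

def rho_conditional(delta, lam, eps):
    """P(bit error | packet not erased) = (delta - lam) * eps / (1 - lam)."""
    return (delta - lam) * eps / (1.0 - lam)

# Example with delta = 0.2, lambda = 0.1, eps = 0.01 (the Fig. 6 setting):
print(rho_conditional(0.2, 0.1, 0.01))   # ≈ 0.001111
```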
B. Packet Recovery Approaches Based on Vandermonde Parity Matrix Joint error and erasure recovery schemes start with a powerful code, where part of its capability is reserved for error correction, t, and the other part is reserved for erasure recovery, e, provided that 2t+e < dmin . A procedure [12] to recover from both errors and erasures using one code works as follows. First, the erased positions of the received codeword are filled with zeros and the resulting codeword is decoded normally (error decoding). The Hamming distance is measured between the codeword filled with zeros and the decoded codeword.
Next, the erased positions of the received codeword are filled with ones and the resulting codeword is decoded normally. The Hamming distance is measured between the codeword filled with ones and the decoded codeword. Then, the decoded codeword having the lowest measured Hamming distance is chosen. For example, the code (8, 4, 5) with dmin = 5 is considered to demonstrate the properties of the proposed design. Such a code can be used to recover from two erasures and to correct a single erroneous packet simultaneously. At the receiver, it is necessary to recover lost packets first, and then correct for errors. This approach is not suitable for the proposed codes, since recovering lost packets by using corrupted received packets will introduce errors in the recovered packets. This increases the errors in the codeword, and correcting them becomes more difficult since it is more probable that the code capability is exceeded. Therefore, after erasure recovery, there is no benefit in using the code for error correction in the packets. In addition, using such powerful codes increases the difficulty and complexity of encoding and decoding. A feasible method of recovering from both errors and erasures is to use two codes: one to recover from erasures and another to correct errors. Three approaches are presented and discussed: the concatenation approach (Approach 1), erasure-error recovery approach (Approach 2), and error-erasure recovery approach (Approach 3), where the name in the latter two approaches implies the order of performing recovery. In Approach 1, two codes are used: an outer code and an inner code. The information packets are encoded first by the outer code, to generate the first set of parity packets. The information packets together with the first set of parity packets are then encoded again with the inner code to generate the second set of parity packets. 
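The fill-and-compare procedure of [12] described above can be illustrated with a toy (3, 1, 3) binary repetition code (our choice for brevity; the paper applies the idea at the packet level):

```python
# Toy illustration of the joint error-erasure procedure from [12]:
# fill erased positions with 0s, decode, measure the Hamming distance;
# repeat with 1s; keep the decoded word with the smaller distance.
# Decoder here: majority vote for a (3, 1, 3) repetition code.

def decode_with_fill(received, fill):
    word = [fill if b is None else b for b in received]   # None marks erasure
    bit = 1 if sum(word) >= 2 else 0                      # majority decoding
    dist = sum(a != b for a, b in zip(word, [bit] * 3))   # Hamming distance
    return bit, dist

def joint_decode(received):
    return min((decode_with_fill(received, f) for f in (0, 1)),
               key=lambda t: t[1])[0]

# 111 was sent; the first bit was erased, the rest arrived intact.
print(joint_decode([None, 1, 1]))   # → 1
```

Filling with ones yields distance 0 here, so the fill-with-ones decoding is kept, exactly as the selection rule prescribes.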
Figure 2 shows the encoding process in the concatenation approach, using the (6, 4, 3) systematic code as the outer code and the (8, 6, 3) systematic code as the inner code. Initially, there are four information packets (represented by blue rectangles), each with a size of 1000 bits. Each packet is padded with three zeros (represented by red rectangles). Then the first (outer) code is used to generate two more packets (represented by green rectangles). The resultant six packets (blue/red and green rectangles) are padded with five zeros (brown rectangles) and then encoded using the second (inner) code to generate two more packets (yellow rectangles). In this approach, the overall rate equals (4/6) · (1000/1003) · (6/8) · (1003/1008) = 0.4960 ≈ 0.5. At the receiver, the decoding process starts with the outer code and then applies the inner code. The outer code is used to recover from erasures, and then the inner code is used to correct errors. Concatenated codes are thus associated with the same problem as described for the traditional approach using only one powerful code, since correcting erasures first causes errors to be introduced into the recovered packets. As a result, there is no benefit in using the second code to recover from errors unless that code is powerful enough to recover many erroneous packets. Other approaches can be tested by stacking k_1 packets and applying two codes: one along the stacked packets and the
Fig. 2. The encoding process in the concatenation approach (Approach 1).
other one within each packet. This coding structure can be implemented by means of two different approaches: Approach 2 and Approach 3. To demonstrate, by examples, the operation of these two approaches, the (7, 5, 3) systematic code is used; with d_min = 3, it permits recovery of two lost packets or correction of one erroneous packet. In Approach 2, the decoder starts by recovering from erasures and then corrects errors. Therefore, this is referred to as an erasure-error recovery approach. An example of this approach is shown in Figure 3. Initially, there are five information packets (represented by blue rectangles), each with a size of 1000 bits. Each information packet is divided into five smaller units, each with a size of 200 bits. Each smaller unit is padded with four zeros (red rectangles). The resultant padded small units are then encoded using the (7, 5, 3) systematic code to generate two more units (yellow rectangles), making the total packet size 1428 bits. This process is repeated for each of the five information packets. The resultant five packets are padded with four zeros (brown rectangles) and encoded again using the (7, 5, 3) systematic code to generate two more packets (green rectangles). The overall rate for this second scheme equals (5/7) · (200/204) · (5/7) · (1428/1432) = 0.4988 ≈ 0.5. At the receiver, of the seven coded packets, some will be lost, some will be in error, and some will be received correctly. In the first stage of the decoding process, if two or fewer packets were lost out of the seven coded packets, the missing packets will be recovered. The second decoding stage starts by removing the last four zeros from the recovered five information packets. Then, each packet of size 1428 bits is divided again into seven smaller units, each of size 204 bits. Among these seven smaller units, any single unit in error will be corrected. Following this, the last two smaller units of size 204 bits are removed from each recovered packet of size 1428 bits. The resultant packet size is 1020 bits. Then the last four bits are removed from each small unit of size 204 bits. This yields the original packet size of 1000 bits. The encoding process in Approach 3 is shown in Fig. 4. The decoder first corrects errors and then recovers from erasures. Therefore, this method is referred to as an error-erasure recovery approach. The first encoding process deals with erasures and the second with errors. The encoding process starts with five information packets (represented by blue rectangles), each with a size of 996 bits. Each packet is padded with four zeros (brown rectangles), making the total packet size 1000 bits. These five packets are encoded, generating two more packets (green rectangles). Each coded packet is divided into five smaller units, each of size 200 bits. Each smaller unit is then padded with four zeros (red rectangles) and encoded again
Fig. 3. The encoding process in the erasure-error approach (Approach 2).
using the (7, 5, 3) code within the packet to generate two more small units (yellow rectangles). This process is repeated for each of the seven coded packets. The overall rate for this scheme equals (5/7) · (996/1000) · (5/7) · (200/204) = 0.4982 ≈ 0.5. At the receiver, of the seven coded packets, some will be lost, some will be in error, and some will be received correctly. The first stage of the decoding process is applied to each received packet to correct errors. This stage starts by dividing each received packet of size 1428 bits into seven smaller units, each of size 204 bits. Any single unit in error among these seven smaller units will be corrected. Following this, the last two smaller units of size 204 bits are removed from each decoded packet of size 1428 bits. The resultant packet size is 1020 bits. The last four bits are then removed from each small unit of size 204 bits, yielding a packet size of 1000 bits. The second stage of the decoding process is then applied to the resultant packets. If any five packets are received out of seven, it is possible to recover the remaining two packets in this second decoding stage.
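The three overall rates quoted above can be verified directly from the packet counts and padded sizes (a simple arithmetic check):

```python
# Quick check of the overall code rates of the three approaches:
# information bits divided by transmitted bits.

# Approach 1: 4 packets of 1000 bits -> 8 packets of 1008 bits.
rate1 = (4 * 1000) / (8 * 1008)
# Approach 2: 5 packets of 1000 bits -> 7 packets of 1432 bits.
rate2 = (5 * 1000) / (7 * 1432)
# Approach 3: 5 packets of 996 bits -> 7 packets of 1428 bits.
rate3 = (5 * 996) / (7 * 1428)

print(round(rate1, 4), round(rate2, 4), round(rate3, 4))
# → 0.496 0.4988 0.4982
```

All three sit just below 0.5, so the throughput comparison in the next subsection is made at essentially equal redundancy.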
Fig. 4. The encoding process in the error-erasure approach (Approach 3).
The complexity of the three approaches can be compared by calculating the required number of encoding and decoding operations. Approach 1 requires only two encoding operations, using the two codes (6, 4, 3) and (8, 6, 3). Hence, two decoding operations are also needed at the receiver. Approach 2 requires six encoding operations, using the code (7, 5, 3), and hence six decoding operations at the receiver. Approach 3 requires eight encoding operations, using the code (7, 5, 3), and hence eight decoding operations at the receiver. A performance analysis of the approaches described above is presented below.
C. Simulation Results
Fig. 5. Throughput for a network subject to errors and erasures (ε = 0.01, λ = 0.1). The curves compare Approach 1 (concatenation), Approach 2 (erasure-error), Approach 3 (error-erasure), erasure-only recovery (d_min = 5), and the conventional uncoded system.
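The qualitative gap between erasure-only recovery and the combined schemes can be reproduced with a simplified Monte Carlo sketch (our abstraction, not the paper's actual encoders/decoders): blocks of n = 7 packets with an erasure budget of 2 and, optionally, correction of one corrupted packet, mirroring the (7, 5, 3) example.

```python
# Simplified Monte Carlo sketch of the throughput comparison. Each block
# of n packets sees independent packet erasures (prob. p_era) and packet
# corruptions (prob. p_err). The erasure-only scheme fails whenever a
# delivered packet is corrupted; the error-erasure scheme additionally
# corrects one corrupted packet per block (t = 1).

import random

def run(p_era, p_err, fix_errors, n=7, budget=2, blocks=20000, seed=1):
    rng = random.Random(seed)
    ok = 0
    for _ in range(blocks):
        era = sum(rng.random() < p_era for _ in range(n))
        err = sum(rng.random() < p_err for _ in range(n - era))
        if fix_errors and err <= 1:
            err = 0                      # one erroneous packet corrected
        ok += (era <= budget and err == 0)
    return ok / blocks

t_erasure_only = run(0.05, 0.02, fix_errors=False)
t_error_erasure = run(0.05, 0.02, fix_errors=True)
# With the same seed, correcting errors inside the block can only raise
# the block success rate under this toy model.
```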
Now, we present the simulation results for the approaches discussed in Section IV-B. The throughput results for the three approaches are demonstrated with respect to two parameters: the raw packet drop probability, δ, and the conditional crossover probability, ρ_conditional, as formulated in Section IV-A. The parameter λ is set to 0.1 in the simulations. The throughput, η, is defined as the ratio of the number of correctly decoded packets N_d to the total number of transmitted packets N_t, i.e., η = N_d/N_t. The throughput results of the conventional uncoded and erasure-only recovery systems are also included for comparison. The throughput results were obtained by utilizing the actual encoding and decoding algorithms. The performance is independent of the actual packet size; however, the latter determines the percentage of overhead associated with padding the packets with zeros. Throughput simulation results versus the raw packet drop probability δ are shown in Fig. 5. As expected, the throughput decreases with an increase in the raw packet drop probability δ. The figure also demonstrates the advantage of the combined error-erasure approaches (Approach 2 and Approach 3) over the erasure-only recovery systems. Even when the error rate is high, Approach 2 and Approach 3 still outperform the other schemes. This can be attributed to the fact that errors are not propagated. The throughput performance of the concatenated scheme (Approach 1) is poor due to error propagation and the low (t = 1) error recovery capability of the outer code. The erasure-only system exhibits poorer performance with an increase in δ, as this parameter also affects the bit errors introduced in the payload. This is more evident when the throughput is plotted against the conditional crossover probability ρ_conditional, as shown in Fig. 6.

Fig. 6. Throughput for a network subject to errors and erasures (δ = 0.2, λ = 0.1).

V. CONCLUSION

We presented packet-level FEC approaches to recover packets affected by both errors and erasures, a scenario encountered
in cross-layer protocol designs. It was shown that erasure-only recovery and the concatenation approach perform poorly in such situations because of the high packet drops and the error propagation, respectively. Applying error correction within the packets yielded very good throughput performance with the erasure-error and error-erasure approaches. From the results, it is also recommended to design the recovery approaches to start decoding first for errors and then for erasures, as in the error-erasure approach, especially when the error probability is relatively high.

REFERENCES

[1] G. Dimic, N. D. Sidiropoulos, and R. Zhang, "Medium access control - physical cross-layer design," IEEE Signal Processing Mag., pp. 40-50, Sept. 2004.
[2] A. Shiozaki, "Adaptive type-II hybrid broadcast ARQ system," IEEE Trans. Commun., vol. 44, no. 4, pp. 420-422, Apr. 1996.
[3] D. J. Costello, J. Hagenauer, H. Imai, and S. B. Wicker, "Applications of error-control coding," IEEE Trans. Inform. Theory, vol. 44, no. 6, pp. 2531-2560, Oct. 1998.
[4] E. Fujiwara and D. K. Pradhan, "Error-control coding in computers," IEEE Computer, pp. 63-72, July 1990.
[5] V. Srivastava and M. Motani, "Cross-layer design: a survey and the road ahead," IEEE Commun. Mag., pp. 112-119, Dec. 2005.
[6] S. S. Karande and H. Radha, "Does relay of corrupted packets increase capacity?" in Proc. IEEE Wireless Commun. and Netw. Conf. (WCNC'05), vol. 4, 2005, pp. 2182-2187.
[7] A. A. Al-Shaikhi, "Innovative designs and deployments of erasure codes in communication systems," Ph.D. dissertation, Dalhousie Univ., 2007.
[8] A. Al-Shaikhi, J. Ilow, and X. Liao, "An adaptive FEC-based packet loss recovery scheme using RZ turbo codes," in Proc. IEEE 5th Annual Conference on Communication Networks and Services Research (CNSR'07), Fredericton, Canada, May 2007.
[9] J. Lacan and J. Fimes, "Systematic MDS erasure codes based on Vandermonde matrices," IEEE Commun. Lett., vol. 8, no. 9, pp. 570-572, Sept. 2004.
[10] A. Al-Shaikhi and J. Ilow, "Packet oriented error correcting codes using Vandermonde matrices and shift operators," in Proc. IEEE Wireless Communications and Networking Conference (WCNC'08), Las Vegas, Nevada, Mar. 2008, accepted.
[11] S. S. Karande and H. Radha, "The utility of hybrid error-erasure LDPC (HEEL) codes for wireless multimedia," in Proc. IEEE Int. Conf. on Commun. (ICC'05), vol. 2, May 2005, pp. 1209-1213.
[12] L. H. C. Lee, Error-Control Block Codes for Communications Engineers. Norwood, MA: Artech House, Inc., 2000.