New High-Rate Convolutional Codes for Concatenated Schemes

A. Graell i Amat, G. Montorsi and S. Benedetto¹
Politecnico di Torino, Corso Duca degli Abruzzi 24, 10129 Torino, Italy


Abstract— This paper considers the use of the best high-rate k/(k+1) convolutional codes obtained using the new construction technique described in [1] in a concatenated scheme. Simulation results for an AWGN channel and for a realistic magnetic recording channel are reported. It is shown that these codes, endowed with a decoding algorithm working on the dual code, yield performance improvements over the best known high-rate punctured codes with the same rate and memory in terms of both bit error probability and computational decoding complexity. For both the AWGN channel and the magnetic recording channel the new codes significantly lower the error floor with respect to known turbo-like code structures.

¹ This work has been partially supported by ST Microelectronics and Qualcomm Inc.

I. INTRODUCTION

Applications requiring high data transmission rates call for powerful high-rate codes. Among them, convolutional codes offer, over block codes, the advantage of simpler soft decoding, and thus an inherently higher coding gain. On the other hand, the decoding complexity of such high-rate convolutional codes under sequence maximum-likelihood (ML) decoding (such as the Viterbi algorithm) or symbol-by-symbol maximum a posteriori (MAP) decoding increases linearly with the number of edges in each trellis section, which, in turn, increases exponentially with the code rate. The solution, so far, has been to puncture a low-rate (typically, rate 1/2) mother convolutional code, thus obtaining a good compromise between performance and decoding complexity.

The error-detection and error-correction capabilities of a convolutional code are directly related to the distance properties of the encoded sequences. Indeed, asymptotically in the signal-to-noise ratio, the free distance and the number of nearest neighbors determine the error probability performance. Hard puncturing of the mother convolutional code to achieve very high-rate codes leads to codes with a significantly worse distance spectrum than unpunctured codes of the same rate.

In [1] a new construction method was presented that allows one to obtain high-rate convolutional codes endowed with a moderately complex decoding algorithm. The codes obtained improve the distance spectrum with respect to the best known high-rate punctured convolutional codes. Simulation results for an optimally decoded single code were given in [1].
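The puncturing mentioned above can be sketched in a few lines. This is an illustrative sketch only: the puncturing pattern below is hypothetical, chosen to show the mechanism rather than any pattern used in the paper.

```python
# Illustrative sketch: puncturing the output of a rate-1/2 mother code.
# The pattern below is hypothetical, chosen only to show the mechanism.

def puncture(coded_bits, pattern):
    """Keep only the coded bits whose position in the repeating
    pattern is marked with 1; this raises the effective code rate."""
    reps = pattern * (len(coded_bits) // len(pattern) + 1)
    return [b for b, keep in zip(coded_bits, reps) if keep]

# A rate-1/2 encoder emits 2 coded bits per input bit; to reach rate 3/4,
# keep 4 of every 6 coded bits (3 input bits -> 4 transmitted bits).
pattern_3_4 = [1, 1, 0, 1, 1, 0]                  # hypothetical pattern
coded = [0, 1, 1, 0, 1, 1, 0, 0, 1, 0, 1, 1]      # 12 coded bits = 6 input bits
sent = puncture(coded, pattern_3_4)
print(len(coded), len(sent))                       # 12 -> 8, i.e. rate 6/8 = 3/4
```

Deleting bits this way shortens the codewords without changing the mother trellis, which is why the decoder stays simple but the distance spectrum degrades, as discussed above.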

Fig. 1. The structure of the encoder.

In this paper, we embed the high-rate convolutional codes so obtained as constituent encoders in serially concatenated codes with interleavers. A salient feature of this paper is the use of such codes as outer encoders in a realistic model of the magnetic recording channel that accounts for the presence of colored noise as well as signal-dependent transition noise.

A suitable encoder-decoder scheme for magnetic storage systems must rely on high-rate codes because of the more severe code rate penalty in magnetic recording. Because the inputs to the recording channel are restricted to binary symbols, the only way to accommodate the redundancy is to increase the recording density, which grows with the inverse of the code rate. The partial response magnetic channel can be viewed as a rate-1 convolutional encoder, acting as the inner encoder of a serially concatenated coding scheme. On this subject, considerable work has been done recently in [2][3][4], focusing on the use of a single convolutional code as outer encoder. To obtain high-rate codes, these previous works have considered the puncturing of a rate-1/2 mother encoder, which generally leads to a significant error floor due to its distance spectrum. We will show that the codes proposed in [1] improve this performance thanks to their better distance spectrum.

II. THE CONSTRUCTION TECHNIQUE

The construction method proposed in [1] is based on the scheme of Fig. 1. The basic idea is to build an overall convolutional encoder composed of two constituent encoders: a block encoder associated with the parallel edges, and a low-to-moderate-rate feed-forward convolutional encoder that defines the non-parallel edges, i.e., the dynamical part of the trellis. The resulting encoder is a convolutional encoder, and can therefore

0-7803-7400-2/02/$17.00 © 2002 IEEE


TABLE I
WEIGHT SPECTRA FOR k/(k + 1) ENCODERS

Rate  | ν | x | Encoder, z                                           | df | a_df, a_df+1
6/7   | 3 | 3 | [16,14,11,12,17,15,13]                               | 3  | 12, 124
6/7   | 4 | 3 | [25,37,31,33,35,27,23]                               | 4  | 43, 351
7/8   | 3 | 3 | [17,12,16,11,7,13,15,3,0]                            | 3  | 28, 274
7/8   | 4 | 3 | [33,35,31,16,27,37,12,23,0]                          | 4  | 78, 784
8/9   | 3 | 3 | [17,15,16,11,13,2,12,14,5,7]                         | 2  | 1, 40
8/9   | 4 | 3 | [33,21,31,27,35,12,37,23,4,17]                       | 3  | 18, 211
9/10  | 3 | 3 | [6,2,17,16,3,7,12,13,5,11,15]                        | 2  | 2, 55
9/10  | 4 | 3 | [35,31,27,25,33,37,21,23,6,12,17]                    | 3  | 21, 293
10/11 | 3 | 3 | [5,7,12,16,2,17,13,4,3,6,11,15]                      | 2  | 3, 81
10/11 | 4 | 3 | [6,27,31,35,21,37,33,4,23,25,12,13]                  | 3  | 24, 414
11/12 | 3 | 3 | [11,15,7,1,3,5,17,13]                                | 2  | 4, 124
11/12 | 4 | 3 | [37,33,27,21,23,25,31,35]                            | 3  | 45, 731
16/17 | 5 | 4 | [57,26,12,63,53,36,73,32,14,51,55,75,43,61,41,45,71] | 3  | 55, 1510

be decoded using a simple a posteriori probability (APP) algorithm. The encoder works as follows. Consider an (n, k) code C with k = n − 1 and the structure of Fig. 1. Let us express the information and code sequences as

u = [u_0, u_1, u_2, ...] = [u_0^(0) u_0^(1) ... u_0^(k−1), u_1^(0) u_1^(1) ... u_1^(k−1), ...]    (1)

and

c = [c_0, c_1, c_2, ...] = [c_0^(0) c_0^(1) ... c_0^(n−1), c_1^(0) c_1^(1) ... c_1^(n−1), ...]    (2)

where u_i = [u_i^(0) u_i^(1) ... u_i^(k−1)] is the input vector containing k bits and c_i = [c_i^(0) c_i^(1) ... c_i^(n−1)] is the output vector of n bits. Let us also define the integer x, ranging from 1 to n − 2, as a parameter of the construction technique (see Section III in [1]). The first n − x − 1 information bits of the vector u_i, u'_i = u_i^(0) u_i^(1) ... u_i^(n−x−2), are encoded by a rate (n − x − 1)/n block encoder, generating a codeword c_BC of n bits. At the same time, the last x information bits of u_i, u''_i = u_i^(n−x−1) u_i^(n−x) ... u_i^(n−2), are encoded by a suitable rate x/(x + 1) convolutional encoder, generating a codeword c_CC of length x + 1. Finally, n − x − 1 leading zeros are added to c_CC, and the codeword c is obtained by adding (modulo 2), bit by bit, the two codewords c_BC and the modified c_CC. The relationship between u and c can also be expressed in compact form as

c = uG    (3)

where G is a polynomial generator matrix of the form

G = [ G_BC      ]
    [ X    G_CC ]    (4)
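The per-step encoding just described can be sketched as follows. This is a toy sketch of the structure only: the 3×7 block generator matrix and the rate-3/4 convolutional step below are hypothetical placeholders, not the codes of Table I.

```python
# Toy sketch of the Fig. 1 construction for n = 7 (k = 6), x = 3.
# The generator matrix and the rate-3/4 convolutional step are hypothetical,
# chosen only to illustrate the structure, not the codes of Table I.

def block_encode(u1, G_bc):
    # rate (n-x-1)/n block encoder: 3 info bits -> one n-bit codeword c_BC
    return [sum(u * g for u, g in zip(u1, col)) % 2 for col in zip(*G_bc)]

def conv_step(u2, state):
    # toy feed-forward rate-3/4 convolutional step (memory = previous u2)
    out = [u2[0] ^ state[0], u2[1] ^ state[1], u2[2] ^ state[2],
           u2[0] ^ u2[1] ^ u2[2]]
    return out, list(u2)

def encode_step(u, state, G_bc, x=3):
    n = len(G_bc[0])
    u1, u2 = u[:n - x - 1], u[n - x - 1:]
    c_bc = block_encode(u1, G_bc)
    c_cc, state = conv_step(u2, state)
    c_cc = [0] * (n - x - 1) + c_cc            # n-x-1 leading zeros
    return [a ^ b for a, b in zip(c_bc, c_cc)], state   # bit-by-bit mod-2 sum

G_bc = [[1, 0, 0, 1, 1, 0, 1],   # hypothetical 3x7 binary generator matrix
        [0, 1, 0, 1, 0, 1, 1],
        [0, 0, 1, 0, 1, 1, 1]]
c, state = encode_step([1, 0, 1, 1, 1, 0], [0, 0, 0], G_bc)
print(c)   # one 7-bit codeword per 6 information bits
```

Note how the block encoder contributes the "parallel edge" part of the trellis while the small convolutional encoder carries all the state, which is what keeps the trellis of the overall rate-6/7 code manageable.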


G_BC is the (n − x − 1) × n generator matrix of the block code, X is a zero matrix, and G_CC is the generator matrix of the constituent convolutional code.

In [1], the best high-rate feed-forward convolutional codes of the form of Fig. 1 are reported. However, the high complexity associated with a high rate calls for a decoding algorithm working on the dual code, which, in turn, requires the encoder to be in systematic form [8]. An equivalent systematic encoder is obtained directly from the generator matrix G of the non-systematic encoder by permuting and linearly combining matrix rows. Thus, the equivalent systematic code generator matrix can be expressed as the product of G with the inverse of the left k × k submatrix A_{k×k} of G:

G_sys = A_{k×k}^(−1) · G    (5)

Without loss of generality, the systematic generator matrix of an encoder with rate k/(k + 1) can be written as

G_sys = [ I_k | v ]    (6)

where I_k is a k × k identity matrix and v = [v_0, ..., v_{k−1}] is a vector of k elements whose entries may, in general, be rational functions with the same denominator, i.e., the encoder may turn out to be recursive. For simplicity, to describe the systematic encoder we use a single vector z = [z', z_r]. The z' elements describe the numerator polynomials of v, and z_r describes the common denominator polynomial of all elements of v. Each polynomial is given in octal notation with the lowest-degree term on the right. The new high-rate codes are reported in Table I.

III. DECODING HIGH-RATE CONVOLUTIONAL CODES

Since iterative (turbo) decoding of concatenated codes yields high coding gain with moderate complexity, the low-complexity symbol-by-symbol a posteriori probability (APP) decoders needed in the iterative decoding procedure have attracted increasing interest [5]. Unfortunately, a straightforward application of APP decoding to high-rate codes becomes impractical very rapidly as the code rate increases. A significant breakthrough in complexity reduction occurs if we consider a decoding algorithm working on the dual code. As shown in [6], it is possible to derive the MAP decision for a code by working on its dual. The advantage of this approach is a reduction of the decoding complexity when k > n − k, since the number of codewords to consider is decreased. In [7] the algorithm of [6] was modified to provide soft information for the decoded bits in the form of log-likelihood ratios, which are commonly used in the iterative decoding of binary codes. More recently, in [8] it was shown that a proper modification


TABLE II
NUMBER OF VISITED EDGES PER DECODED BIT AND PER STATE FOR DIFFERENT TYPES OF DECODERS OF A RATE k/(k + 1) CODE.

k  | SISO    | Punct. 1/2 SISO | DSISO
2  | 1.33    | 1.33            | 0.67
4  | 3.20    | 1.60            | 0.40
6  | 9.14    | 1.71            | 0.29
8  | 28.44   | 1.78            | 0.22
16 | 3855.06 | 1.88            | 0.12

Fig. 2. Serially concatenated systems. (Top: SCCC architecture, AWGN channel; bottom: SCC architecture, magnetic recording channel.)

of the bit soft information yields a MAP decoding algorithm working on the dual code that is perfectly equivalent to the one operating on the original code (we call this algorithm DSISO).

A. Complexity of the dual code algorithm

An appropriate measure of the decoding complexity of a convolutional code is the number of visited edges per decoded bit.¹ In Table II we compare the decoding complexity of different types of decoders for rate k/(k + 1) codes. When a symbol-by-symbol MAP algorithm is used (such as the BCJR or SISO algorithm [5]), the decoding complexity D grows exponentially with k and the encoder memory ν. For large k the complexity can be mitigated using punctured convolutional codes:

D = 2^(k+ν) / (k + 1),    D_punc = k · 2^(ν+1) / (k + 1)    (7)

The decoding algorithm working on the dual code turns out to be the simplest one, especially for very high-rate codes, since its decoding complexity decreases with k. Indeed, in this case the trellis on which the decoding algorithm works is that of a rate 1/(k + 1) convolutional code, so that the decoding complexity becomes

D_dual = 2^(ν+1) / (k + 1)    (8)

IV. THE CONCATENATED CODING SCHEMES

As shown in Fig. 2, we consider serially concatenated codes where the inner encoder can be either a convolutional encoder or a rate-1 magnetic channel. In the case of a high-rate inner convolutional encoder, the transformations from likelihood ratios to reflection coefficients (see (11) below) and vice versa would be moved before the inner SISO decoder, which, in turn, becomes a Dual-SISO decoder. The corresponding decoder consists of a soft-input soft-output (SISO) inner decoder and a dual-SISO (DSISO) decoder matched to the outer code. The SISO detector computes the a posteriori probability of each transmitted bit, assuming independent and identically distributed (i.i.d.) Gaussian noise samples. In particular, it accepts at its input the sequence of total likelihood ratios (LR), defined as

L_i(I) ≜ p(y_i | c_i = 1) / p(y_i | c_i = 0)    (9)

where y is the received sequence of noisy samples and c is the encoded sequence, and outputs the sequence of a posteriori probabilities required by the iterative procedure,

L_i(O) = p(x_i = 1 | y) / p(x_i = 0 | y)    (10)

where x_i is the transmitted symbol. The DSISO detector is formally equivalent to the SISO algorithm, provided that the sequence of LRs is replaced by the sequence of so-called Reflection Coefficients (RC) [8], defined as

R_i(·) ≜ (1 − L_i(·)) / (1 + L_i(·))    (11)

¹ Measured in this way, the DSISO algorithm achieves a complexity reduction by a factor of k with respect to the algorithm working on the punctured trellis. This reduction would be mitigated by also considering the complexity of computing the metric of each visited edge; this refinement is not considered here.
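The per-state complexity figures of (7)-(8) and the LR-to-RC mapping of (11) can be checked numerically. The sketch below divides (7)-(8) by the 2^ν states to reproduce the per-bit, per-state entries of Table II:

```python
# Numerical check of the per-state complexity figures implied by (7)-(8)
# (after dividing by the 2^v trellis states), plus the LR <-> RC mapping (11).

def siso_edges(k):     # full SISO, per state: 2^k / (k+1)
    return 2 ** k / (k + 1)

def punct_edges(k):    # punctured rate-1/2 mother code, per state: 2k / (k+1)
    return 2 * k / (k + 1)

def dsiso_edges(k):    # dual-code DSISO, per state: 2 / (k+1)
    return 2 / (k + 1)

for k in (2, 4, 6, 8, 16):
    print(k, round(siso_edges(k), 2), round(punct_edges(k), 2),
          round(dsiso_edges(k), 2))

def lr_to_rc(L):       # (11): R = (1 - L) / (1 + L)
    return (1 - L) / (1 + L)

# the mapping is an involution, so applying it twice recovers the LR:
print(lr_to_rc(lr_to_rc(0.25)))   # back to 0.25 (up to rounding)
```

The printed rows match Table II, including the dramatic 3855.06 vs. 0.12 gap between SISO and DSISO at k = 16, and the involution property of (11) is why the same transformation block appears on both sides of the outer DSISO in Fig. 2.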

Thus, as pointed out in Fig. 2, a double transformation is required at the input and at the output of the outer decoder to recast the RCs into LRs.

V. SIMULATION RESULTS OVER THE AWGN CHANNEL

In this section we present simulation results demonstrating the effectiveness of the proposed new high-rate convolutional codes used as constituent encoders in a concatenated architecture. Fig. 3 shows the performance of the best rate 7/8 and 11/12 unpunctured codes of Table I acting as outer encoders. A simple 2-state, rate-1 differential encoder is used as the inner encoder. For comparison, the curves for the best equivalent (in terms of rate and memory) punctured convolutional codes [12] are also plotted. For all codes, the curves refer to a symbol-by-symbol APP decoder, working on the dual code for the unpunctured ones. Only the 10th iteration is plotted, and an information block length of N = 4000 is considered.


It is shown that the unpunctured codes significantly outperform the equivalent punctured ones. The most important gain is obtained at high SNR: while the punctured codes exhibit an error floor at a BER above 10⁻⁶, the unpunctured ones reduce it significantly. The presence of an error floor is a common characteristic of concatenated coding systems and is related to the free distance of the outer code. Thus, the better performance of the schemes based on unpunctured codes in the error floor region is explained by their better distance properties with respect to the punctured ones (free distance df = 3 instead of 2). Moreover, as explained in the complexity subsection, the use of the dual MAP algorithm to decode the unpunctured codes reduces the decoding complexity by a factor of k = 7 and k = 11, respectively, with respect to the equivalent punctured codes. Thus, both performance and decoding complexity are improved.

In Fig. 4 we report the curves for the same concatenated scheme based on various unpunctured codes. For all codes a block length of N = 4000 is considered. A curve for the rate 7/8 code with a block length of N = 8000 is also plotted: increasing the block length leads to a significant lowering of the error floor. Finally, in the same figure we report the performance of a rate 9/11 code obtained by concatenating two unpunctured codes of rates 10/11 and 9/10, respectively.

Fig. 4. Performance of various unpunctured codes acting as outer codes in an SCCC scheme. AWGN channel.

VI. SIMULATION RESULTS OVER THE MAGNETIC RECORDING CHANNEL

The serially concatenated coding scheme for a PR magnetic recording channel is shown in Fig. 5. The input data sequence u is fed to a high-rate convolutional encoder, which generates the encoded sequence c. The coded sequence is then interleaved by an s-random interleaver [9], π, and passed through the rate-1 inner encoder, i.e., the precoded EPR4 channel with equalization target polynomial (1 + D − D² − D³). The precoder 1/(1 ⊕ D²) is employed to turn the overall channel into a recursive one, as this has been shown to improve bit error performance.
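The rate-1 inner "encoder" just described can be sketched as follows. This is a minimal noiseless sketch under stated assumptions: the precoder recursion a_k = c_k ⊕ a_{k−2}, the EPR4 target 1 + D − D² − D³, an all-zero initial state, and the usual 0/1 → −1/+1 channel mapping.

```python
# Sketch of the rate-1 inner encoder: the 1/(1 XOR D^2) precoder followed
# by the EPR4 partial-response target 1 + D - D^2 - D^3 (noiseless,
# all-zero initial state, binary inputs mapped to -1/+1). Illustrative only.

def precode(bits):
    # a_k = c_k XOR a_{k-2}  (the 1/(1 XOR D^2) recursion over GF(2))
    a, mem = [], [0, 0]          # mem = [a_{k-1}, a_{k-2}]
    for c in bits:
        ak = c ^ mem[1]
        a.append(ak)
        mem = [ak, mem[0]]
    return a

def epr4(bits):
    # map 0/1 -> -1/+1, then y_k = x_k + x_{k-1} - x_{k-2} - x_{k-3}
    x = [2 * b - 1 for b in bits]
    xp = [-1, -1, -1] + x        # all-zero (i.e. -1) past symbols
    return [xp[i + 3] + xp[i + 2] - xp[i + 1] - xp[i] for i in range(len(x))]

c = [1, 0, 1, 1, 0, 0, 1, 0]
y = epr4(precode(c))
print(y)   # noiseless EPR4 channel outputs, all in {-4, -2, 0, 2, 4}
```

Because the precoder is recursive, the overall channel behaves like a recursive rate-1 inner encoder, which is the property exploited by the serial concatenation.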

A. Channel Models

The performance of the new high-rate convolutional codes is investigated over two magnetic recording channels. The first is a simple discrete-time model of a digital magnetic recording channel subject to ISI, which assumes a partial response polynomial of the form (1 − D)(1 + D)² (EPR4), followed by an AWGN channel. The second is a more realistic channel model that incorporates the actual write/read and equalization process as well as the effect of signal-dependent noise, which we refer to as media noise. Media noise is a signal-dependent noise arising from the stochastic "zigzag" nature of transitions in thin-film media [10]. Fig. 6 depicts the discrete-time model of the magnetic recording channel with media noise. This approach assumes that media noise can be represented as a (non-stationary) Gaussian process, hence completely specified by its non-stationary autocorrelation function. Estimation of the higher-order moments of the noise process through the Caroselli-Wolf microtrack model [11] suggests this is an adequate representation, especially at high input signal-to-noise ratios, as intuitively justified by the central limit theorem.

Both inductive and magnetoresistive heads sense transitions in the direction of magnetization, which correspond to a step signal in the write current. The magnetic recording channel response is thus characterized by the step response s(t). Here we model this step response with the Karlquist head model, described by

ĥ(x) = arctan((g/2 + x)/d) + arctan((g/2 − x)/d)    (12)


Fig. 5. Encoding system.

Fig. 3. Outer code comparison for the AWGN channel.


where g is the head gap and d is the effective flying height. The average transition response is obtained by convolving s(t) with the average magnetization pattern described by the Williams-Comstock model,

m(x) = tanh(2x/(πa))    (13)


a being the transition parameter. Thus,

h(x) = (dm(x)/dx) ∗ ĥ(x)    (14)
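Equations (12)-(14) can be checked with a direct numerical convolution. The sketch below is illustrative only: the parameter values a, g, d and the integration grid are arbitrary choices, not the values used in the paper's simulations.

```python
# Numerical sketch of (12)-(14): the Karlquist step response, the
# Williams-Comstock magnetization pattern, and their convolution into the
# average transition response. Parameter values are arbitrary illustrative
# choices, not the ones used in the paper.
import math

a, g, d, dx = 0.3, 1.0, 0.2, 0.01
ts = [i * dx for i in range(-500, 501)]

def h_head(x):
    # Karlquist step response (12); g = head gap, d = flying height
    return math.atan((g / 2 + x) / d) + math.atan((g / 2 - x) / d)

def dm_dx(t):
    # derivative of the Williams-Comstock pattern m(x) = tanh(2x/(pi*a)) (13)
    return (2 / (math.pi * a)) / math.cosh(2 * t / (math.pi * a)) ** 2

def h(x):
    # transition response (14) by direct numerical convolution
    return sum(dm_dx(t) * h_head(x - t) for t in ts) * dx

h0, h1 = h(0.0), h(1.0)
print(h0 > h1 > 0)   # the response is even and peaks at the transition center
```

The broadening of h(x) relative to ĥ(x) is controlled by the transition parameter a, which is where the channel bit density enters the model.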

The noise described here can be summarized as a multivariate Gaussian noise with joint distribution

f(x̄) = 1/((2π)^(N/2) (det R)^(1/2)) · exp(−(1/2) x̄ R⁻¹ x̄′)    (15)

where x̄ = x_1, x_2, ..., x_N is a collection of N points along the magnetization direction and R is the (N × N) cross-correlation matrix of the noise at these points. R is a positive definite covariance matrix that can be diagonalized via a linear transformation K,

K⁻¹ R (K⁻¹)′ = I    (16)

Given any set of N independent Gaussian variables ḡ_n (σ = 1), a noise sample of the magnetization pattern can be generated as

n_m(x̄)′ = K ḡ_n′    (17)

Finally, the last input in Fig. 6 is the white noise, a combination of head noise and thermal noise. The noise sequence is then filtered by the discrete-time FIR filter to generate the correctly correlated noise applied to the detector.

Fig. 6. Media noise channel model.

B. SNR Definition

For simulations using the more realistic channel model, the signal-to-noise ratio (SNR) is defined as

SNR = 10 log₁₀(V_p² / σ_n²)

where V_p is the base-to-peak value of the isolated transition response of the channel (normalized to 1) and σ_n is the rms value of the noise at the channel input for an uncoded transmission. Uncoded transmission assumes a code rate R of 1 and an average number of transitions of 0.5. This fixes a common base for comparing different coding techniques:

SNR_media = SNR + 10 log₁₀(R / mix)
SNR_white = SNR + 10 log₁₀(R / (1 − mix))    (18)

In these definitions the noise power is assumed to increase linearly with the code rate. The factor mix (0 ≤ mix ≤ 1) specifies the desired fraction of media noise at the channel input.

Fig. 7. Performance of various codes for the EPR4 white noise channel.

C. Performance results

In Fig. 7 we report simulation results for the high-rate convolutional codes of rates 8/9, 9/10, 10/11, 11/12 and 16/17 over the EPR4 white noise channel. All encoders have four memory elements, except the last one, which has five. All simulations used an information block size of N = 4000 and 10 decoding iterations. A gain of approximately 5 dB over the uncoded curve is observed at a BER of 10⁻⁵ for the rate 8/9 code. This gain grows to 6 dB at a BER of 10⁻⁶, since the error floor region has not been reached yet. Passing from rate 8/9 to rate 16/17 introduces a loss of approximately 1 dB, but the curve slope is maintained and no performance degradation is observed in the floor region. For comparison, the curve for the best known 16-state, rate 11/12, punctured code is also plotted: it shows an error floor at a BER of 10⁻⁶, while the unpunctured code of the same rate does not, owing to the better distance spectrum of the unpunctured code.

Simulation results for the more realistic channel model, using a 20-80 ratio of AWGN to media noise, are reported in Fig. 8 for a channel user bit density of 2.6. This corresponds to a channel bit density of 2.93, 2.84 and 2.76 for the three considered codes, respectively. The rate 8/9 code offers a gain of over 6 dB at a BER of 10⁻⁶ over the uncoded system, and no error floor is observed down to a BER of 10⁻⁸.
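The coloring step of (16)-(17) amounts to factoring the covariance as R = K K′ and multiplying white Gaussian samples by K. A minimal sketch, assuming an arbitrary illustrative 3×3 covariance (the real model uses the N-point media-noise covariance estimated from the microtrack model):

```python
# Sketch of (16)-(17): generating position-dependent noise samples with a
# prescribed covariance R by factoring R = K K' (Cholesky) and coloring
# white Gaussian noise. R below is an arbitrary illustrative 3x3 covariance.
import math, random

def cholesky(R):
    # lower-triangular K with K K' = R, for positive definite R
    n = len(R)
    K = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(K[i][m] * K[j][m] for m in range(j))
            K[i][j] = math.sqrt(R[i][i] - s) if i == j else (R[i][j] - s) / K[j][j]
    return K

R = [[1.0, 0.6, 0.2],
     [0.6, 1.0, 0.6],
     [0.2, 0.6, 1.0]]
K = cholesky(R)

random.seed(1)
def noise_sample():
    # n_m' = K g', with g a vector of i.i.d. N(0, 1) variables, as in (17)
    g = [random.gauss(0, 1) for _ in range(3)]
    return [sum(K[i][m] * g[m] for m in range(3)) for i in range(3)]

# The empirical covariance of many samples should approach R:
N = 20000
samples = [noise_sample() for _ in range(N)]
cov01 = sum(s[0] * s[1] for s in samples) / N
print(round(cov01, 2))   # close to R[0][1] = 0.6
```

Any factorization K with K K′ = R works here; Cholesky is simply the cheapest to compute for a positive definite covariance.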



Fig. 8. Performance of various codes for the real EPR4 channel. UBD=2.6.

The new high-rate codes lead to a significant gain in the error floor region with respect to the best known punctured codes and to state-of-the-art turbo-like magnetic recording systems [3] (error floor just below 10⁻⁶), on both the EPR4 AWGN channel and the more realistic magnetic channel.

VII. CONCLUSIONS

New high-rate convolutional codes have been studied as constituent encoders for two serially concatenated schemes. In the first, the high-rate convolutional code is used as the outer encoder of an SCCC encoding system transmitted over an AWGN channel. In the second architecture, a magnetic recording channel is viewed as the inner encoder, while the high-rate codes act as the outer code. Thanks to their better distance spectrum compared with the corresponding punctured convolutional codes, these codes offer a simple way to improve BER performance. Moreover, if used in conjunction with a decoding algorithm working on the dual code, they permit a reduction of the decoding complexity. Such codes are especially attractive for magnetic recording systems, where high-rate codes are necessary. A gain of approximately 6 dB was observed at a BER of 10⁻⁶ for the AWGN white noise channel, and this performance was maintained when a more realistic channel model was introduced.

The main gain over similar schemes using punctured convolutional codes is obtained in the error floor region. The substantial advantages over high-rate punctured codes open the way to powerful, yet practical, implementations of high-rate, moderate-complexity coding schemes for applications requiring high rates. Further investigation of the use of such codes over the magnetic recording channel is needed. Future research directions include their study over higher-order partial response channels, and the improvement of the decoding algorithm by exploiting the correlation of the noise samples at the input of the channel detector.

REFERENCES

[1] A. Graell i Amat, G. Montorsi and S. Benedetto, "A New Approach to the Construction of High-Rate Convolutional Codes," IEEE Communications Letters, vol. 5, no. 11, pp. 453-455, Nov. 2001.
[2] L. L. McPheters and S. W. McLaughlin, "Precoded PRML, serial concatenation, and iterative (turbo) decoding for digital magnetic recording," IEEE Trans. Magn., vol. 35, no. 5, pp. 2325-2327, Sept. 1999.
[3] T. Souvignier, M. Öberg, P. H. Siegel, R. E. Swanson, and J. K. Wolf, "Turbo Decoding for Partial Response Channels," IEEE Trans. Commun., vol. 48, no. 8, pp. 1297-1308, Aug. 2000.
[4] W. E. Ryan, "Concatenated Codes for Class IV Partial Response Channels," IEEE Trans. Commun., vol. 49, no. 3, pp. 445-454, Mar. 2001.
[5] S. Benedetto et al., "Soft-input Soft-output Modules for the Construction and Iterative Decoding of Code Networks," European Trans. on Telecomm., vol. 9, no. 2, pp. 155-172, Mar./Apr. 1998.
[6] C. R. Hartmann and L. D. Rudolph, "An Optimum Symbol-by-Symbol Decoding Rule for Linear Codes," IEEE Trans. Inform. Theory, vol. IT-22, pp. 514-517, 1976.
[7] S. Riedel, "MAP Decoding of Convolutional Codes Using Reciprocal Dual Codes," IEEE Trans. Inform. Theory, vol. 44, no. 3, pp. 1176-1187, May 1998.
[8] G. Montorsi and S. Benedetto, "An Additive Version of the SISO Algorithm for the Dual Code," Proc. ISIT 2001, Washington, DC, June 2001.
[9] D. Divsalar and F. Pollara, "Turbo codes for PCS applications," Proc. IEEE Int. Conf. Communications, pp. 54-59, Seattle, WA, June 1995.
[10] N. R. Belk, P. K. George and G. S. Mowry, "Noise in High Performance Thin-Film Longitudinal Media," IEEE Trans. Magn., vol. 19, pp. 1350-1355, Mar. 1985.
[11] J. Caroselli and J. K. Wolf, "A New Model for Media Noise in Thin Film Magnetic Recording Media," Proc. SPIE Int. Symp. Voice, Video, and Data Communications, vol. 2605, pp. 29-38, Oct. 1995.
[12] Y. Bian, A. Popplewell and J. J. O'Reilly, "New Very High Rate Punctured Convolutional Codes," Electronics Letters, vol. 30, no. 14, pp. 1119-1120, July 1994.
