On Low-Rate Convolutional Codes for Code-Spread Code Division Multiple Access

Johan Lassing, Anders Persson, Tony Ottosson and Erik Ström
Dept. of Signals and Systems, Chalmers University of Technology
S-412 96 Göteborg, Sweden
[email protected]

Abstract— The performance in terms of bit-error probability of low-rate convolutional codes is addressed. The Heller bound on the free distance is revised and an alternative formulation is given. A discussion of the key parameters determining the code performance and of how to obtain good codes is followed by a comparison of codes of various rates for a fixed constraint length. It is concluded that simply lowering the rate does not give a great performance gain; rather, the encoder complexity must be increased to increase the coding gain. This observation is used to propose modifications to a previously suggested code-spread CDMA system.

Keywords— low-rate convolutional codes, code-spread CDMA, Heller bound, distance spectrum, code performance.

I. Introduction

The class of convolutional error-correcting codes has been used extensively over the years to provide reliable communication over channels having a low signal-to-noise ratio. Among the advantages of convolutional codes over, e.g., block codes is the availability of a maximum-likelihood (ML) decoding algorithm, applicable to the entire class of convolutional codes, that can exploit soft information in the decoding. Among the disadvantages is that realizing a convolutional code comparable in performance to a block code of large block length (> 100) requires a prohibitively complex ML decoder.

Among the recent applications of convolutional codes in wireless communication is the interesting idea of using low-rate (that is, few bits in, many bits out) convolutional codes to encode individual mobile users and then transmitting the signals of several users simultaneously to the common base station over the same channel. This scheme, known as code-spread code-division multiple access (CS-CDMA), has been considered in, e.g., [1], [2], [3]. In this paper we address issues relating to the performance of low-rate convolutional codes in general, and to their application to CS-CDMA in particular.

II. Convolutional code performance

In this paper only the performance of convolutional codes of rate 1/n is considered. Due to the limited space, standard convolutional code notation and jargon will be used without necessarily explaining all details; instead we refer to textbooks such as, e.g., [4, ch. 4] or [5, ch. 2]. The bit-error probability of a convolutional code is usually found by evaluating a union bound on the true error probability of the code [6]. This union bound is

well-known and given by [4, ch. 4]

$$P_b \le C_0 \left.\frac{\partial T(D,W)}{\partial W}\right|_{D=D_0,\,W=1} = C_0 \sum_{d=d_f}^{\infty} c[d]\, D_0^d \qquad (1)$$
where T(D, W) is the transfer function of the encoder (which depends on the particular code) as a function of the enumerators D and W, enumerating the output-sequence Hamming distances and the input-sequence bit weights, respectively. The constant C_0 (which can be taken to be 1) depends on both the code and the particular channel of interest, while the constant D_0 is the Bhattacharyya bound [4, p. 63] and depends only on the channel. The constant d_f is the free distance of the code; together with the distance spectrum c[d] it has a large impact on the code performance, as we will see in the next sections.

III. The free distance

An important property of a convolutional code is its free distance d_f. The general statement that if the free distance of the code is large, the performance of the code will be good, is valid for most channels of practical interest. A tight upper bound on the free distance in terms of the inverse code rate n and the constraint length K can be obtained as a consequence of the Plotkin bound for block codes and is usually referred to as the Heller bound [7]

$$d_h = \min_{l \ge 1} \left\lfloor \frac{2^{l-1}}{2^l - 1}\,(K + l - 1)\,n \right\rfloor \qquad (2)$$

where the minimization is over all l ≥ 1 and the floor operator ⌊x⌋ denotes the largest integer less than or equal to x. This bound is tight in the sense that for a large range of values of n and K, codes can be found that achieve d_f = d_h. If we treat l as a continuous variable, differentiate the expression within the floor operator in (2) with respect to l, and set the derivative to zero, we obtain the equation

$$2^{l_{\min}} - l_{\min} \ln 2 = C \qquad (3)$$

where C = K ln 2 + 1 − ln 2. This equation has no closed-form solution, but we may obtain a tight upper bound on the location of the minimum, l_min. To find this minimum, we assume that l_min ln 2 ≤ C and insert this into (3), which gives the solution l_min ≤ log₂ 2C. Since this relation also fulfills the assumption that l_min ln 2 ≤ C for all C > 0, we see that it is a valid solution. Therefore, we may restate the Heller bound as (where it is understood that l is an integer)

$$d_h = \min_{1 \le l \le \log_2 2(K+1)} \left\lfloor \frac{2^{l-1}}{2^l - 1}\,(K + l - 1)\,n \right\rfloor \qquad (4)$$

where we have used the simple result C < K + 1. The result in (4) implies that very few terms are actually needed to obtain the Heller bound, which is not obvious from the formulation in (2). For example, the location of the minimum l_min for a K = 1000 convolutional code is at l = 9.

Since it is clear that the free distance increases (approximately) linearly with n, the free distance alone is not a good measure when comparing the performance of codes of different rates. More interesting is the normalized free distance (or, as in [8], the asymptotic coding gain) given by

$$\delta_f = R \cdot d_f = \frac{d_f}{n} \qquad (5)$$

This way of comparing codes of different rates (for the same constraint length) will show the same performance for a parent code of rate 1/n and all repetition codes of rate 1/mn, m = 2, 3, . . ., formed from this code, although their free distances may differ considerably. This is intuitively satisfying, since we cannot expect to obtain a coding gain by simply repeating the encoded bits.

IV. The distance spectrum

Referring to (1), we see that the free distance determines the starting point of the sum in (1). Since, in general, for good codes and practical communication channels the terms c[d]D_0^d decrease quickly with increasing d, the first term contributes most to the resulting error bound. This is the reason for the importance of the free distance. From this first term we can also see why δ_f is a good comparative measure between various codes. To see this, we use, as an example, the Bhattacharyya bound for the binary-input Gaussian channel, D_0 = e^{-γ_s}, where γ_s is the symbol-energy-to-noise density ratio, giving the first term of the sum on the right-hand side of (1) as

$$c[d_f]\, e^{-\gamma_s d_f} = c[d_f]\, e^{-\gamma_b d_f/n} = c[d_f]\, e^{-\gamma_b \delta_f} \qquad (6)$$

so that the normalized distance δ_f = d_f/n is the parameter indicating the amount of coding gain provided by the code used. Here γ_b denotes the bit-energy-to-noise density ratio at the decoder input. In the limit as γ_b → ∞, the performance of the code is determined exclusively by δ_f and c[d_f], whence the name asymptotic coding gain for δ_f. If the signal-to-noise ratio is not very large, several of the initial terms in the sum of (1) will contribute significantly to the error bound. Therefore, good codes should have a slowly increasing distance spectrum c[d].

V. How to obtain good codes

In general, by a good code we mean a code that achieves some criterion of optimality, usually taken to be the minimization of some probability of error. This may be the probability of bit error, which is the normal criterion for convolutional codes, but it may also be the probability of block error, which is of interest for packet transmission. Note that this error-probability criterion of optimality is by no means the only relevant criterion one can think of. For example, we could say that a good code is a code that achieves a fixed error probability on a given channel with as small a delay as possible. However, in this paper we will assume that a minimal bit-error probability is desired.

The problem with this criterion is that there is no simple relation between the encoder structure (i.e., the code generator vectors) and the bit-error probability of the code. Therefore, all possible choices of generator vectors must be tested to find the optimal code, which is exceedingly time-consuming. To further complicate the situation, the optimal code (w.r.t. minimizing the bit-error probability) varies with the channel. To address these issues, the standard approach is to turn to equation (1) and observe that the transfer function and the distance spectrum, together with the Bhattacharyya bound for the channel, provide upper bounds on the bit-error probability. What we must remember in doing this is that we have now changed our criterion of optimality: we now attack an upper bound on the bit-error probability, and not the bit-error probability itself, which is indeed a different thing. Nevertheless, the bound in (1) reduces the time required for searching for the optimal code (now w.r.t. the upper bound) considerably. Still, however, there are no simple relations between the code generators and either the transfer function or the distance profile, so a full search is still required. Searching for codes that perform well with respect to reducing the magnitude of the union bound of (1) has been considered in [9]. Note that these codes are optimized for a certain channel.
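Such a search repeatedly evaluates candidate encoders, and its inner step is the computation of the free distance of a given set of generators. The sketch below, a minimal illustration and not the fast spectrum algorithm of [10], computes d_f for a feedforward rate-1/n encoder by a shortest-path (Dijkstra) search over the state trellis; the function name and the octal generator-mask convention are assumptions made for this example.

```python
import heapq

def free_distance(generators, K):
    """Free distance of a rate-1/len(generators) feedforward
    convolutional code, found as the minimum output Hamming weight
    over all paths that diverge from and remerge with the zero state."""
    def step(state, u):
        # Shift register: current input bit u in the MSB position,
        # followed by the K-1 previous input bits (the state).
        reg = (u << (K - 1)) | state
        out_weight = sum(bin(reg & g).count("1") & 1 for g in generators)
        return reg >> 1, out_weight

    # Diverge from the all-zero path with an input 1.
    start, w0 = step(0, 1)
    heap = [(w0, start)]
    best = {start: w0}
    while heap:
        w, s = heapq.heappop(heap)
        if s == 0:
            return w  # first remerge found by Dijkstra is minimal
        if w > best.get(s, float("inf")):
            continue  # stale heap entry
        for u in (0, 1):
            t, dw = step(s, u)
            if w + dw < best.get(t, float("inf")):
                best[t] = w + dw
                heapq.heappush(heap, (w + dw, t))
    return None
```

For the (5,7)₈ code with K = 3 this gives d_f = 5, and for the standard (133,171)₈ code with K = 7 it gives d_f = 10; both values meet the Heller bound (2), illustrating its tightness.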
To further reduce the required search time, it is also common to drop the formulation of the weighted sum in (1) and directly consider the distance spectrum c[d]. The reason for doing this is that there exist fast algorithms for obtaining the first few terms of the distance spectrum [10]. In doing this, however, we take an even bigger step away from the original optimality criterion, since we now define an optimum code as a code having an optimum distance spectrum [11]. Hopefully, the correlation between codes having good distance spectra and codes giving a low probability of error is high.

VI. Using the union bound to compare codes

In [2] an extensive search for optimum distance spectrum codes is presented, and convolutional codes of constraint lengths K = 7, 8 and 9 and rates ranging from 1/4 to 1/512 are tabulated. These low-rate codes have been used to provide the error correction in CS-CDMA systems, where the separation of users in the receiver relies exclusively on the error-correcting capabilities of the codes and not at all on orthogonality between users as in traditional DS-CDMA systems [2], [3].

However, as is noted in [8] and [2], the normalized free distance suffers from a saturation effect, which means that as the rates become lower, no additional coding gain is obtained; the effect of lowering the code rate becomes comparable to that of repetition, which provides no coding gain. The only way to obtain a larger coding gain is to increase the complexity of the encoder, i.e., to increase K. This is clearly seen in figure 1, which shows the normalized version of (4) as a function of n for various K. It is also obvious from this figure that the saturation occurs already for low values of n, so that not very much seems to be gained by searching for new generator vectors to add instead of simply repeating the already present generator vectors.
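The restricted search range in (4) and the saturation effect are easy to check numerically. A minimal sketch (the function name is illustrative), using exact integer arithmetic for the floor in (2):

```python
import math

def heller_bound(K: int, n: int) -> int:
    """Heller upper bound (2) on the free distance of a rate-1/n
    convolutional code of constraint length K. By the reformulation
    in (4), it suffices to check 1 <= l <= log2(2(K+1))."""
    l_max = math.ceil(math.log2(2 * (K + 1)))
    return min(
        (2 ** (l - 1) * (K + l - 1) * n) // (2 ** l - 1)  # floor of (2)
        for l in range(1, l_max + 1)
    )

# The normalized bound delta = d_h / n for K = 7 saturates quickly in n:
deltas = {n: heller_bound(7, n) / n for n in (2, 4, 8, 32, 100)}
print(deltas)  # stays close to 5 for all n from 2 to 100
```

For K = 7 this gives d_h = 10 at n = 2 (attained, e.g., by the standard (133,171)₈ code), and the normalized values change only marginally as n grows, which is exactly the saturation visible in figure 1.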

[Figure 1: plots of the normalized Heller bound δ_f = d_h/n versus n for K = 3, 4, 5, 6, 7 and 8.]

Fig. 1. Normalized Heller bound as a function of the inverse code rate, n, for various constraint lengths. Note that each plot starts at n = 2.

To be able to compare the performance of the different codes, we will use the union bound of (1). Since the distance spectrum is easily accessible, we will evaluate the sum on the right-hand side, rather than using the transfer function. For the comparison we will use a Gaussian channel and the relation

$$P_b \approx \tilde{P}_b = \sum_{d=d_f}^{M} c[d]\, Q\!\left(\sqrt{2\gamma_b d/n}\right) \le \sum_{d=d_f}^{M} c[d]\, D_0^d \qquad (7)$$

where the soft inequality is necessary since the bound is now truncated and not necessarily an upper bound anymore (see [6] for a discussion of this truncation). The Q-function is defined, e.g., in [4, p. 62]. To do a fair comparison between codes of different rates, we need a way of deciding the number of terms, M, to incorporate in the sum when evaluating the performance of the various codes. Since we want the performance of any code obtained from repetition of a parent code to be identical to that of the parent code, it is obvious that M depends on the rate of the code evaluated. To see this, we note that if the distance spectrum of the parent code is c_0[d], the distance spectrum c_1[d] of a code obtained by repeating every generator vector of the parent code once is given by

$$c_1[2d] = c_0[d], \quad c_1[2d+1] = 0, \qquad d = 0, 1, \ldots$$

where the last equality is valid for rate 1/n codes, so that the (unnormalized) free distance of the lower-rate code is twice that of the parent code. Due to this "upsampling" of the distance spectrum, for (7) to evaluate to the same P̃_b for both codes we would require M_1 = 2M_0. In general, if the rate of code 0 is R_0 and the rate of code 1 is R_1, where R_0 > R_1, the relation between the numbers of terms M_0 and M_1 used in (7) for the two codes is

$$M_1 = \frac{R_0}{R_1}\, M_0 = \frac{n_1}{n_0}\, M_0 \qquad (8)$$

VII. The normalized distance spectrum

From the discussion in the previous sections, it is clear that the normalized distance spectrum, defined as

$$\tilde{c}(d/n) = c[d], \qquad d = 0, 1, \ldots \qquad (9)$$

provides a relevant means of comparing the performance of convolutional codes of various rates. This means that the performance of different codes can be compared on a fair basis by fixing a maximum normalized distance, δ_max, and evaluating the union bound of the codes up to this maximum distance. The procedure is made clear by a figure like figure 2, where the normalized distance spectra of three K = 7 codes of rates R = 1/2, R = 1/4 and R = 1/8 are compared. From this figure it can be seen that all three codes have the same asymptotic coding gain, δ_f, but with different multiplicities c̃(δ_f). Therefore, the performance of these codes is similar. The effect of lowering the rate is also apparent in this figure: the lower-rate codes sample the baseline more frequently, and the multiplicities can be placed with fewer constraints, which results in a small performance improvement for these lower-rate codes. When the approximation in (7) is evaluated, the normalized distance spectrum is weighted by the factors Q(√(2γ_b d/n)) and summed up to δ_max.

VIII. Low-rate code performance

In figure 3 the performance of a set of optimum distance spectrum K = 7 convolutional codes presented in [2] is shown for rates from 1/4 down to 1/32 for fixed γ_b of 2, 4, 6 and 8 dB. These codes have been used for spreading in CS-CDMA systems in, e.g., [2] and [3], where the code of rate 1/32 is used to obtain the multiple access. However, as can be seen from both figure 3 and the zoom in figure 4, the performance difference between the codes of rate 1/4 and 1/32 is very small.

[Figure 2: normalized distance spectra, c̃(d/n) (path multiplicity) versus δ (normalized distance), for three K = 7 codes.]

Fig. 2. Distance spectra of constraint length K = 7 convolutional codes of rates R = 1/2 (•), R = 1/4 () and R = 1/8 ().

For the application to CS-CDMA systems, another important issue is that of synchronization. In a traditional DS-CDMA system the known spreading waveforms imposed on the transmitted signals simplify the synchronization algorithms, whereas no such waveforms are present in the CS-CDMA systems. However, in [12] the problem of delay estimation in a CS-CDMA system is addressed and the importance of having few code generators is identified. The K = 7 code of rate 1/32 uses 12 different generator vectors, while the R = 1/4 code uses only 4 generators, and if the rate-1/32 code is replaced by the rate-1/4 code, the remaining factor-8 spreading can be achieved by simple repetition with almost no loss in performance. The synchronization algorithms, however, will benefit from this repetition, and an improvement can be expected from this observation. It should be noted that this does not imply that a DS-CDMA system performs as well as a CS-CDMA system, since the CS-CDMA system uses interleaving of the coded bits to achieve a significant performance gain in an uplink scenario, as was shown in [2]. However, a DS-CDMA system using a K = 7, R = 1/4 convolutional code followed by direct-sequence spreading by a factor of 8 and perfect interleaving of the coded bits will match the performance of the CS-CDMA system using a K = 7, R = 1/32 code.
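The fair-comparison rule of (7) and (8) can be sketched numerically. As an illustration we use the first spectrum terms c[d] = 1, 4, 12, 32, 80 for d = 5, …, 9, a textbook result for the K = 3, rate-1/2 code (5,7)₈ (this small code is an assumption of the example, not one of the codes from [2]); repeating its generators to rate 1/4 and choosing M₁ = 2M₀ reproduces P̃_b exactly:

```python
import math

def q_func(x: float) -> float:
    # Gaussian tail probability Q(x)
    return 0.5 * math.erfc(x / math.sqrt(2))

def pb_tilde(spectrum: dict, n: int, gamma_b: float, m: int) -> float:
    # Truncated union bound (7): sum of c[d] * Q(sqrt(2*gamma_b*d/n))
    # over d_f <= d <= M
    return sum(c * q_func(math.sqrt(2 * gamma_b * d / n))
               for d, c in spectrum.items() if d <= m)

# First terms of c[d] for the (5,7) code (rate 1/2, d_f = 5)
c0 = {5: 1, 6: 4, 7: 12, 8: 32, 9: 80}
# Repeating each generator once gives a rate-1/4 code with
# c1[2d] = c0[d], c1[2d+1] = 0
c1 = {2 * d: c for d, c in c0.items()}

gamma_b = 10 ** (4 / 10)               # gamma_b = 4 dB
p0 = pb_tilde(c0, 2, gamma_b, m=9)     # parent code, M0 = 9
p1 = pb_tilde(c1, 4, gamma_b, m=18)    # repeated code, M1 = 2 * M0
```

Term by term, c₁[2d]·Q(√(2γ_b·2d/4)) equals c₀[d]·Q(√(2γ_b·d/2)), so p0 and p1 coincide, reflecting the statement above that repetition yields no coding gain under this comparison.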


Fig. 3. P˜b of equation (7) evaluated for convolutional codes of constraint length K = 7 as a function of the inverse code rate, n, for constant values of γb = 2, 4, 6 and 8 dB.


Fig. 4. Zoom of figure 3 of the curve for γb = 4 dB.

IX. Conclusions

The performance of convolutional codes of various rates has been evaluated using a union bound approximation. The application in mind for the codes is a CS-CDMA system, where the low-rate codes are used for multiple access. It is shown that a code of rather high rate combined with a simple repetition encoder can perform almost as well as a code of much lower rate. This observation is important for the design of efficient synchronization algorithms and also for efficient decoder implementations.

References

[1] A. J. Viterbi, "Very low rate convolutional codes for maximum theoretical performance of spread-spectrum multiple-access channels," IEEE Journal on Selected Areas in Communications, vol. 8, no. 4, pp. 641–649, May 1990.
[2] P. Frenger, P. Orten, and T. Ottosson, "Code-spread CDMA using maximum free distance low-rate convolutional codes," IEEE Transactions on Communications, vol. 48, no. 1, pp. 135–144, Jan. 2000.
[3] A. Persson, J. Lassing, T. Ottosson, and E. Ström, "On the differences between uplink and downlink transmission in code-spread CDMA systems," in Proc. IEEE Vehicular Technology Conference Spring, Rhodes, Greece, May 2001.
[4] A. Viterbi and J. K. Omura, Principles of Digital Communication and Coding, McGraw-Hill, New York, 1979.
[5] L. H. Charles Lee, Convolutional Coding: Fundamentals and Applications, Artech House, 1997.
[6] J. Lassing, T. Ottosson, and E. Ström, "On the union bound applied to convolutional codes," in Proc. IEEE Vehicular Technology Conference Fall, Atlantic City, USA, Oct. 2001.
[7] R. Johannesson and K. Sh. Zigangirov, Fundamentals of Convolutional Coding, John Wiley & Sons, 1999.
[8] P. D. Papadimitriou and C. N. Georghiades, "On asymptotically optimum rate 1/n convolutional codes for a given constraint length," IEEE Communications Letters, vol. 5, no. 1, pp. 25–27, Jan. 2001.
[9] P. J. Lee, "New short constraint length rate 1/N convolutional codes which minimize the required SNR for given desired bit error rates," IEEE Transactions on Communications, vol. COM-33, no. 2, pp. 171–177, Feb. 1985.
[10] M. Cedervall and R. Johannesson, "A fast algorithm for computing distance spectrum of convolutional codes," IEEE Transactions on Information Theory, vol. 35, no. 6, pp. 1146–1159, Nov. 1989.
[11] P. Frenger, P. Orten, and T. Ottosson, "Convolutional codes with optimum distance spectrum," IEEE Communications Letters, vol. 3, no. 11, pp. 317–319, Nov. 1999.
[12] F. Malmsten, T. Ottosson, and E. G. Ström, "Delay estimation of code-spread CDMA systems," in Proc. IEEE Vehicular Technology Conference Spring, Tokyo, Japan, May 2000.
