Low Complexity Algorithm for the Decoding of Convolutional Codes of Any Rate

J-C. Dany, J. Antoine, L. Husson, A. Wautier
Radio Department, SUPELEC, Plateau de Moulon, 91192 Gif/Yvette, France
[email protected]

N. Paul, J. Brouet
Alcatel Research and Innovation, ALCATEL, Route de Nozay, 91461 Marcoussis cedex, France
[email protected], [email protected]

Abstract— It is well known that convolutional codes can be optimally decoded using the Viterbi Algorithm (VA). We propose a decoding technique in which the VA is applied to identify the error vector rather than the information message. We previously focused on convolutional coders of rate ½ [4][5]. Here we generalize the method to codes of any rate. We show that, with the proposed type of decoding, the exhaustive computation of the vast majority of state-to-state iterations is unnecessary. Hence, performance close to optimum is achievable with a significant reduction of complexity. The higher the SNR, the greater the reduction in complexity. For instance, for SNR greater than 3 dB, a fivefold reduction in the computation of ACS (Add Compare Select) operations is achieved.

Keywords: low complexity, Viterbi algorithm, convolutional codes.

I. INTRODUCTION

Maximum likelihood decoding of convolutionally coded sequences is generally achieved by applying the Viterbi Algorithm (VA) over the complete received message [1][2][3]. In certain channel conditions, i.e. when the number of errors is small, simpler techniques should be sufficient to produce similar results. In this paper, we propose a two-step algorithm of this kind. The first step is an algebraic procedure that detects the presence of detectable errors and gives an approximate location of such potential errors. In the second step, a VA with an appropriate metric is applied in the identified zones. A significant reduction in complexity is thus achieved with no (resp. very low) performance degradation compared to classical hard (resp. soft) decoding using the VA. Convolutional coders of rate ½ have previously been investigated [4][5]. In this paper, we extend the results to codes of any rate.

II. CONVOLUTIONAL CODES

We consider convolutional coders of rate K/N denoted by C(N,K,M), where K is the number of information bits, N the number of coded bits and M the constraint length. From each set of K information bits, these coders produce N coded bits which are linear combinations of the K current information bits and the M−1 previous sets of K bits.

Let {g_ij(x)} denote the K·N generating polynomials of length M describing the coder, {m_i(x)} the K information sequences, {C_j(x)} the N coded sequences produced by each convolutional block, m(x) the information message and C(x) the coded sequence at the output of the coder. The coded sequences resulting from each block are expressed by

$$C_j(x) = \sum_{i=1}^{K} m_i(x)\, g_{ij}(x) \qquad (1)$$

The code is defined by the following system:

$$C(x) = G^t(x)\, m(x) \qquad (2)$$

where

$$C(x) = \begin{pmatrix} C_1(x) & C_2(x) & \ldots & C_N(x) \end{pmatrix}^t, \qquad m(x) = \begin{pmatrix} m_1(x) & m_2(x) & \ldots & m_K(x) \end{pmatrix}^t$$

$$G^t(x) = \begin{pmatrix} g_{11}(x) & g_{21}(x) & \ldots & g_{K1}(x) \\ g_{12}(x) & g_{22}(x) & \ldots & g_{K2}(x) \\ \vdots & \vdots & & \vdots \\ g_{1N}(x) & g_{2N}(x) & \ldots & g_{KN}(x) \end{pmatrix}$$
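To make equations (1)-(2) concrete, here is a minimal encoding sketch. It assumes GF(2) polynomials stored as Python ints (bit i holding the coefficient of x^i), so that polynomial addition is XOR; the helper names are ours, not the paper's.

```python
# Encoding per Eqs. (1)-(2): C_j(x) = sum_i m_i(x).g_ij(x) over GF(2).
# Polynomials are ints (bit i = coefficient of x^i); addition is XOR.

def gf2_mul(a: int, b: int) -> int:
    """Carry-less product of two GF(2)[x] polynomials."""
    out = 0
    while b:
        if b & 1:
            out ^= a          # add a shifted copy whenever the bit is set
        a <<= 1
        b >>= 1
    return out

def encode(m: list[int], G: list[list[int]]) -> list[int]:
    """m[i] = m_i(x); G[i][j] = g_ij(x); returns [C_1(x), ..., C_N(x)]."""
    N = len(G[0])
    C = []
    for j in range(N):
        cj = 0
        for i, mi in enumerate(m):
            cj ^= gf2_mul(mi, G[i][j])   # Eq. (1)
        C.append(cj)
    return C

# The C(3,1,3) coder used in Section IV: g_1 = 1+x+x^2, g_2 = 1+x^2, g_3 = x^2
print([bin(c) for c in encode([0b1001], [[0b111, 0b101, 0b100]])])
# ['0b111111', '0b101101', '0b100100']  i.e. the C(x) of the example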

Property 1: For each i, the polynomials {g_ij(x); 1 ≤ j ≤ N} have no common factor.

In this paper, we consider codes with minimal memory, i.e. whose generating polynomials satisfy Property 1. If this property is not verified for a row i of the matrix G(x), a code with minimal memory can be obtained by dividing this row by the HCF (Highest Common Factor) of the polynomials {g_ij(x); 1 ≤ j ≤ N}.

The information message m(x) is the solution of the system (2) of N equations with K unknowns m_i(x). As N is greater than K, this system has a solution if and only if N−K control equations are verified. These control equations can be expressed in the following way:

$$H(x)\, C(x) = 0 \qquad \text{with} \qquad H(x) = \begin{pmatrix} h_{11}(x) & h_{12}(x) & \ldots & h_{1N}(x) \\ \vdots & \vdots & & \vdots \\ h_{N-K,1}(x) & h_{N-K,2}(x) & \ldots & h_{N-K,N}(x) \end{pmatrix}$$

The degree of the polynomial h_kj(x) is denoted q_kj.

Property 2: When Property 1 is verified, it can be shown that there exists a set of polynomials {h_kj} such that

$$\sum_{k=1}^{N-K} q_k = M - 1 \qquad \text{with} \qquad q_k = \max_{1 \le j \le N}(q_{kj})$$

These polynomials are found by searching for the polynomials h_kj(x) of minimal degree which verify the system H(x)·G^t(x) = 0.
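As a sanity check of this construction, the following sketch verifies H(x)·G^t(x) = 0 for the C(3,1,3) coder used later in Section IV; the bit-mask polynomial representation and names are ours.

```python
# Verify the control system H(x).G^t(x) = 0 over GF(2) for C(3,1,3).

def gf2_mul(a: int, b: int) -> int:
    out = 0
    while b:
        if b & 1:
            out ^= a
        a <<= 1
        b >>= 1
    return out

G = [[0b111, 0b101, 0b100]]        # g_1j: 1+x+x^2, 1+x^2, x^2  (K = 1)
H = [[0b01, 0b11, 0b10],           # h_1j: 1, 1+x, x
     [0b11, 0b01, 0b11]]           # h_2j: 1+x, 1, 1+x

for row in H:                      # every control row annihilates every generator row
    for gi in G:
        acc = 0
        for h, g in zip(row, gi):
            acc ^= gf2_mul(h, g)
        assert acc == 0
print("H(x).G^t(x) = 0")           # degrees q_1 = q_2 = 1 sum to M-1 = 2, as Property 2 requires
```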

III. DECODING

The decoding is done in two successive steps. First, an algebraic procedure determines the erroneous zones of the received message. Then these errors are corrected in the maximum likelihood sense by using a VA with an appropriate metric.

A. Algebraic procedure – error detection

Let us denote by R(x) the word received by the decoder and by r(x) the corresponding word resulting from a hard decision applied to the received samples. r(x) is the sum of the emitted codeword C(x) and of an error sequence e(x):

$$r(x) = C(x) + e(x) \qquad (3)$$

where

$$e(x) = \begin{pmatrix} e_1(x) & \ldots & e_N(x) \end{pmatrix}^t \qquad \text{with} \qquad e_j(x) = \sum_l e_{j,l}\, x^l$$

Consequently,

$$H(x)\, r(x) = H(x)\, C(x) + H(x)\, e(x) = H(x)\, e(x) = V(x) \qquad (4)$$

The quantity V(x) is a syndrome:

$$V(x) = \begin{pmatrix} V_1(x) \\ \vdots \\ V_{N-K}(x) \end{pmatrix} = \begin{pmatrix} h_{11} r_1 + h_{12} r_2 + \ldots + h_{1N} r_N \\ \vdots \\ h_{N-K,1} r_1 + h_{N-K,2} r_2 + \ldots + h_{N-K,N} r_N \end{pmatrix}$$

A received word is a codeword if and only if the syndrome V(x) is null. The proposed decoding algorithm is based on this property.
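A minimal sketch of this detection step, reproducing the syndrome of the Section IV example; polynomials are again ints and the helper names are ours.

```python
# Syndrome computation V(x) = H(x).r(x) (Eq. 4): null iff r(x) is a codeword.

def gf2_mul(a: int, b: int) -> int:
    out = 0
    while b:
        if b & 1:
            out ^= a
        a <<= 1
        b >>= 1
    return out

def syndrome(H: list[list[int]], r: list[int]) -> list[int]:
    """V_k(x) = sum_j h_kj(x).r_j(x) over GF(2)."""
    V = []
    for row in H:
        vk = 0
        for h, rj in zip(row, r):
            vk ^= gf2_mul(h, rj)
        V.append(vk)
    return V

H = [[0b01, 0b11, 0b10], [0b11, 0b01, 0b11]]   # C(3,1,3) of Section IV
r = [0b111111, 0b101111, 0b100100]             # hard decisions with one bit error
print([bin(v) for v in syndrome(H, r)])        # ['0b110', '0b10'] -> V = (x+x^2, x): error detected
```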

B. Proposed error correction procedure

In the classical decoding procedure the VA is used to find the information sequence. Here we propose a procedure in which the VA is applied to find the error vector. Correcting the information message in the maximum likelihood sense [6][7][8] corresponds to the determination of the sequence e'(x) of minimum weight such that

$$H(x)\,\big(r(x) + e'(x)\big) = 0 \qquad (5)$$

This is equivalent to the identification of the sequence e'(x) which fulfills the condition

$$H(x)\, e'(x) = H(x)\, r(x) \qquad (6)$$

The sequence e'(x) can be determined by using the VA to reach a state such that the N−K vectors E_k(x) defined by (7) are null vectors:

$$E_k(x) = V_k(x) + \sum_{j=1}^{N} h_{kj}(x)\, e'_j(x) = \sum_j E_{k,j}\, x^j \qquad (7)$$

In the following, we denote by E(x) = (E_1(x), ..., E_{N−K}(x))^t the set of polynomials {E_k(x)}.

For a detailed explanation of the error correction procedure, the conventional terminology of the VA is applied. The trellis used in the VA is made up of nodes (corresponding to distinct states) connected by branches (transitions between states), each characterized by a branch metric. A succession of branches is called a path, with which a path metric is associated.

i) States: A matrix E^{Sk}(x) is associated with each state Sk. The value of the state Sk at instant n is given by the following M−1 elements of the matrix E^{Sk}(x):

$$\big\{ E^{Sk}_{1,n}, E^{Sk}_{1,n+1}, \ldots, E^{Sk}_{1,n+q_1-1},\ E^{Sk}_{2,n}, \ldots, E^{Sk}_{2,n+q_2-1},\ \ldots,\ E^{Sk}_{N-K,n}, \ldots, E^{Sk}_{N-K,n+q_{N-K}-1} \big\}$$

If the coder satisfies Property 2, there are 2^{M−1} states. The initial state Sk0 is associated with the matrix E^{Sk0}(x) = V(x); Sk0 consists of the following M−1 elements of V(x):

$$\big\{ V_{1,n}, V_{1,n+1}, \ldots, V_{1,n+q_1-1},\ V_{2,n}, \ldots, V_{2,n+q_2-1},\ \ldots,\ V_{N-K,n}, \ldots, V_{N-K,n+q_{N-K}-1} \big\}$$

ii) Branches: A branch represents the transition between a state at instant n (current state) and a state at instant n+1 (following state). A set of values {e'_{1,n}, e'_{2,n}, ..., e'_{N,n}} is associated with each branch. Let Sk be the current state at time n and E^{Sk}(x) its associated matrix.

One builds the set {Z^i(x)} of sequences

$$Z^i(x) = \begin{pmatrix} Z^i_1(x) & \ldots & Z^i_{N-K}(x) \end{pmatrix}^t \qquad \text{with} \qquad Z^i_k(x) = \Big( \sum_{j=1}^{N} e'_{j,n}\, h_{kj}(x) \Big)\, x^n = \sum_{j=0}^{M-1} Z^i_{k,j}\, x^{n+j}$$

where the N−K first coefficients (Z^i_{1,0} ... Z^i_{N−K,0})^t satisfy the condition

$$Z^i_{k,0} = E^{Sk}_{k,n} \qquad (8)$$

For each such sequence, the corresponding following state Si at instant n+1 is determined by the M−1 elements

$$\big\{ E^{Sk}_{1,n+1} + Z^i_{1,1}, \ldots, E^{Sk}_{1,n+q_1} + Z^i_{1,q_1},\ E^{Sk}_{2,n+1} + Z^i_{2,1}, \ldots,\ E^{Sk}_{N-K,n+q_{N-K}} + Z^i_{N-K,q_{N-K}} \big\}$$

The matrix associated with this state is computed as E^{Si}(x) = E^{Sk}(x) + Z^i(x) (see Eq. 7). Condition (8) ensures that, for every state Sk at instant n, the matrix E^{Sk}(x) has its n+1 first columns equal to zero. This procedure iteratively leads to a null matrix E^{Sk}(x).
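The branch set {Z^i(x)} can be pre-tabulated once per coder; the sketch below regenerates what Table I of Section IV lists for C(3,1,3), printing coefficient pairs (Z^i_{k,0}, Z^i_{k,1}). The loop structure and names are ours.

```python
# Tabulate Z^i_k(x) = sum_j e'_j h_kj(x) for every error pattern e'.
# Condition (8) then selects, at run time, which branches may leave a state.
from itertools import product

H = [[0b01, 0b11, 0b10],   # h_1j for C(3,1,3)
     [0b11, 0b01, 0b11]]   # h_2j

def eval_z(row: list[int], e: tuple) -> int:
    z = 0
    for ej, h in zip(e, row):
        if ej:
            z ^= h         # GF(2) sum of the h_kj selected by the non-zero e'_j
    return z

def branch_table(H: list[list[int]], N: int) -> dict:
    return {e: [eval_z(row, e) for row in H] for e in product((0, 1), repeat=N)}

for e, (z1, z2) in branch_table(H, 3).items():
    fmt = lambda z: f"({z & 1} {(z >> 1) & 1})"   # low-degree coefficient first
    print(e, fmt(z1), fmt(z2))                    # matches Table I row by row
```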

iii) Metrics: The principle of the proposed decoding technique is to find the sequence e'(x) of minimum weight which cancels the matrix E(x) (Eq. 7). Thus, the branch metric at instant n is defined as follows.

Hard decoding:

$$\delta(n) = \sum_{j=1}^{N} e'_{j,n} \qquad (9)$$


Soft decoding:

$$\delta(n) = \sum_{j=1}^{N} R_{j,n}\, e'_{j,n} \qquad (10)$$

The path metric ∆_k(n) is the accumulation of the branch metrics. When several paths arrive at the same state, the survivor is the one with the smallest path metric.

iv) End of the procedure: At the end of the procedure, the survivor path is the one terminating in the state S0 = {0 0 ... 0}, which is the unique state for which all the elements of E^{Sk}(x) are null. The sets {e'_{1,n}, e'_{2,n}, ..., e'_{N,n}} associated with the branches of the survivor constitute the decoded error vector e'(x):

$$e'(x) = \Big( \sum_n e'_{1,n}\, x^n \ \ \ldots \ \ \sum_n e'_{N,n}\, x^n \Big)^t \qquad (11)$$

The decoded information sequence m'(x) is then obtained by an algebraic computation derived from system (2).

C. Comparison with the classical error correction procedure with VA

Assuming that the coder satisfies Property 2, the number of states is equal to 2^{M−1} and it can be proved that the number of branches leaving a state is equal to 2^K, as in the classical VA. Besides, it can be shown that the differences between the path metrics of two paths arriving at the same state are equivalent for both algorithms. It follows that the performance in terms of BER of the proposed algorithm is equal to that of classical decoding.

IV. ILLUSTRATION OF THE ALGORITHM

A. Example

In this part, the new algorithm is illustrated, with soft input decoding, on the code C(3,1,3) specified by the following generator polynomials:

$$g_1(x) = 1 + x + x^2; \qquad g_2(x) = 1 + x^2; \qquad g_3(x) = x^2$$

The corresponding matrix H(x) is:

$$H(x) = \begin{pmatrix} 1 & 1+x & x \\ 1+x & 1 & 1+x \end{pmatrix}$$

Let us, for instance, consider the information sequence m(x) = 1 + x^3. The coded sequence C(x) is

$$C(x) = \begin{pmatrix} 1 + x + x^2 + x^3 + x^4 + x^5 \\ 1 + x^2 + x^3 + x^5 \\ x^2 + x^5 \end{pmatrix}$$

i.e. C = (1 1 0 1 0 0 1 1 1 1 1 0 1 0 0 1 1 1). For instance, in the case of BPSK modulation, the received samples can be

R = (0.9 0.8 −1.2 0.7 0.9 −0.9 0.9 0.3 1 0.9 0.7 −0.9 0.8 ….. −1.1 −0.8 0.7 0.9 0.6)

The received sequence resulting from a hard decision on R is

r = (1 1 0 1 1 0 1 1 1 1 1 0 1 0 0 1 1 1)

$$r(x) = \begin{pmatrix} 1 + x + x^2 + x^3 + x^4 + x^5 \\ 1 + x + x^2 + x^3 + x^5 \\ x^2 + x^5 \end{pmatrix} \qquad \text{which leads to} \qquad H(x)\, r(x) = V(x) = \begin{pmatrix} x + x^2 \\ x \end{pmatrix}$$

The states and the branches of the trellis are described in Table I. The initial state is obtained from the first column of the matrix E, which is here (0,0). Therefore the algorithm begins in this case in state S0; its associated matrix is

$$E^{S0} = V = \begin{pmatrix} 0 & 1 & 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 \end{pmatrix}$$

TABLE I. STATES AND BRANCHES

  States:
    {E^Sk_1,n, E^Sk_2,n}:   (0,0)   (0,1)   (1,0)   (1,1)
    state:                   S0      S1      S2      S3

  Branches (the pairs give the coefficients (Z^i_k,0, Z^i_k,1)):
    {e'_1,n, e'_2,n, e'_3,n}   Z^i_1(x)   Z^i_2(x)
    (0,0,0)                    (0 0)      (0 0)
    (1,0,0)                    (1 0)      (1 1)
    (0,1,0)                    (1 1)      (1 0)
    (1,1,0)                    (0 1)      (0 1)
    (0,0,1)                    (0 1)      (1 1)
    (1,0,1)                    (1 1)      (0 0)
    (0,1,1)                    (1 0)      (0 1)
    (1,1,1)                    (0 0)      (1 0)

Figure 1. Trellis obtained with the proposed algorithm for code C(3,1,3)

The corresponding trellis is given in Fig. 1 and its first three iterations are detailed in Table II, which shows how the following states (FS) are derived from the current states (CS). When several paths reach a same state, the one with the smallest metric is kept and the others are discarded (marked as discarded in the table). At the end of the procedure the survivor is the path terminating in S0. It corresponds to the following sequence of triplets {e'_1,n, e'_2,n, e'_3,n}: { {0,0,0}, {0,1,0}, {0,0,0}, {0,0,0}, ..., {0,0,0} }
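For this small example the whole procedure fits in a few lines. The sketch below runs the three iterations of Table II with the soft metric (10) and returns the survivor reaching S0; a state is the pair (E_1,n, E_2,n), polynomials are ints, and the variable names and loop structure are ours.

```python
# Error-trellis VA for the C(3,1,3) example with the soft metric of Eq. (10).
from itertools import product

H = [[0b01, 0b11, 0b10], [0b11, 0b01, 0b11]]

def decode(V, R, steps):
    # state -> (path metric, E matrix, branch history); start from the state read off V
    paths = {tuple(v & 1 for v in V): (0.0, list(V), [])}
    for n in range(steps):
        nxt = {}
        for state, (metric, E, hist) in paths.items():
            for e in product((0, 1), repeat=3):
                Z = [0, 0]
                for k in range(2):
                    for j in range(3):
                        if e[j]:
                            Z[k] ^= H[k][j]
                if tuple(z & 1 for z in Z) != state:         # condition (8)
                    continue
                E2 = [E[k] ^ (Z[k] << n) for k in range(2)]  # E <- E + Z.x^n
                s2 = tuple((E2[k] >> (n + 1)) & 1 for k in range(2))
                m2 = metric + sum(R[j][n] for j in range(3) if e[j])  # Eq. (10)
                if s2 not in nxt or m2 < nxt[s2][0]:         # ACS: keep the survivor
                    nxt[s2] = (m2, E2, hist + [e])
        paths = nxt
    return paths[(0, 0)]                                     # path terminating in S0

V = [0b110, 0b010]                  # V = (x + x^2, x)
R = [[0.9, 0.7, 0.9],               # R_j,n for the first three blocks of the example
     [0.8, 0.9, 0.3],
     [-1.2, -0.9, 1.0]]
metric, _, errors = decode(V, R, steps=3)
print(metric, errors)               # 0.9 [(0, 0, 0), (0, 1, 0), (0, 0, 0)], as in Table II
```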


The estimated error vector is then e'(x) = (0  x  0)^t and the decoded information sequence, obtained by m'(x) = C'_2(x) + C'_3(x), is m'(x) = 1 + x^3 = m(x).
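For this coder the final algebraic step is one line: since g_2(x) + g_3(x) = 1, system (2) gives m'(x) = C'_2(x) + C'_3(x) directly. A sketch on the example values (ints as GF(2) polynomials):

```python
# Recover m'(x) from the corrected codeword C'(x) = r(x) + e'(x).
r = [0b111111, 0b101111, 0b100100]   # hard decisions r_j(x)
e = [0b000000, 0b000010, 0b000000]   # decoded error vector e'(x) = (0, x, 0)^t

C2 = r[1] ^ e[1]                     # C'_2(x) = 1 + x^2 + x^3 + x^5
C3 = r[2] ^ e[2]                     # C'_3(x) = x^2 + x^5
m = C2 ^ C3                          # g_2 + g_3 = 1, so C'_2 + C'_3 = m'(x)
print(bin(m))                        # 0b1001 -> m'(x) = 1 + x^3 = m(x)
```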

TABLE II. FIRST THREE ITERATIONS OF THE PROPOSED ALGORITHM

  Iteration 1:
    CS(1)  {e'_1,1, e'_2,1, e'_3,1}   FS(1)  E^Sk           ∆k(1)
    S0     (0,0,0)                    S3     0110… 0100…    0
    S0     (1,1,0)                    S0     0010… 0000…    1.7

  Iteration 2:
    CS(2)  {e'_1,2, e'_2,2, e'_3,2}   FS(2)  E^Sk           ∆k(2)
    S0     (0,0,0)                    S2     0010… 0000…    1.7 + 0   = 1.7
    S0     (1,1,0)                    S1     0000… 0010…    1.7 + 1.6 = 3.3
    S3     (1,0,0)                    S3     0010… 0010…    0   + 0.7 = 0.7
    S3     (0,1,0)                    S0     0000… 0000…    0   + 0.9 = 0.9

  Iteration 3:
    CS(3)  {e'_1,3, e'_2,3, e'_3,3}   FS(3)  E^Sk             ∆k(3)
    S0     (0,0,0)                    S0     00000… 00000…    0.9 + 0   = 0.9
    S0     (1,1,0)                    S3     00010… 00010…    0.9 + 1.2 = 2.1
    S1     (0,0,1)                    S3     00010… 00010…    3.3 + 1   = 4.3  (discarded)
    S1     (1,1,1)                    S0     00000… 00000…    3.3 + 2.2 = 5.5  (discarded)
    S2     (1,0,1)                    S2     00010… 00000…    1.7 + 1.9 = 3.6  (discarded)
    S2     (0,1,1)                    S1     00000… 00010…    1.7 + 1.3 = 3.0  (discarded)
    S3     (1,0,0)                    S1     00000… 00010…    0.7 + 0.9 = 1.6
    S3     (0,1,0)                    S2     00010… 00000…    0.7 + 0.3 = 1.0

B. Useful comments

In this example, consider the path reaching state S0 at iteration n = 2: ∆_k(2) ≥ ∆_0(2) for k ≠ 0, and E^{S0}(x) = 0. After this iteration (for n > 2), ∆_0(n) does not evolve anymore and ∆_k(n) ≥ ∆_0(2) for k ≠ 0. Therefore it would be possible to stop the procedure at iteration 2. In the next section, a low complexity algorithm based on this observation is described.

V. LOW COMPLEXITY DECODING

Sufficient conditions for the applicability of a reduced complexity decoding algorithm are described in this section. Consider the two conditions

$$E^{S0}(x) = 0 \qquad (12)$$

$$\Delta_k(n_0) \ge \Delta_0(n_0) \quad \text{for } k \ne 0 \qquad (13)$$

Lemma 1: If at some step n0 the conditions (12) and (13) are verified, then it is unnecessary to carry on the VA; the survivor path is the one which reaches the state S0 = {0 0 ... 0} at iteration n0 and which remains in this state S0 after this node.

Proof: Assuming that E^{S0}(x) = 0, the metric of the path which remains in the state S0 keeps the value ∆_0(n0), because all the branches of this path are associated with the set {e'_{1,n}, e'_{2,n}, ..., e'_{N,n}} = {0, ..., 0} for time n ≥ n0. Since the branch metric is a positive number, we get ∆_k(n) ≥ ∆_0(n) at any instant n ≥ n0.

Property 3: Let us assume that, at time n, condition (13) is satisfied and that the matrix E^{S0} can be expressed in the following way:

$$E^{S0} = \begin{pmatrix} \underbrace{0 \ \ldots \ 0}_{n} & \underbrace{0 \ \ldots \ 0}_{L} & E^{S0}_{1,n+L+1} & \times \ \ldots \ \times \\ \vdots & \vdots & \vdots & \vdots \\ 0 \ \ldots \ 0 & 0 \ \ldots \ 0 & E^{S0}_{N-K,n+L+1} & \times \ \ldots \ \times \end{pmatrix} \qquad (14)$$

where at least one coefficient E^{S0}_{k,n+L+1} (1 ≤ k ≤ N−K) is equal to 1. This configuration can occur if there are at least L consecutive null columns in the matrix V(x).

Hard decoding: In the case of hard decoding, for a given coder, it is possible to compute the trellis from iteration n to iteration n+L for different values of L. There exists a value L0 such that, if L ≥ L0, then all survivor paths at iteration n+L remain in state S0 between step n and step n+L−B, with B = M−1 [4]. Consequently it is unnecessary to carry on the VA between step n and step n+L−B; the VA is restarted at step n+L−B+1 with S0 as initial state.

Soft decoding: In the case of soft decoding, the path metrics depend on the received samples. Therefore, the values of L0 and B are different for each received message. It is possible to compute the value of B through a reverse VA as described in [5]. For each state Sk at time n+L, a reverse VA is computed from step n+L toward step n until condition (13) is satisfied. At this time, denoted n + L − B^{Sk}, the survivor path which reaches state Sk at time n+L necessarily comes from the state S0 at time n. The value of B is the maximal value of all values B^{Sk}. Determining B can be computationally intensive (2^{M−1} reverse VAs from step n+L to step n + L − B^{Sk}). To avoid this increase in complexity, a fixed value of the parameter B is selected for each implementation of a given coder. Hence, all computations between iterations n and n+L−B are avoided.
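One way to exploit Lemma 1 and Property 3 in an implementation is to pre-scan the syndrome for long null runs and simply skip the ACS recursion there. The sketch below is our loose interpretation under the fixed-B strategy just described; the exact guard placement is an assumption, not the paper's rule.

```python
# Locate runs of at least L consecutive null syndrome columns; inside such a
# run the survivor provably stays in S0, so ACS steps there can be skipped,
# keeping B guard iterations near the end of the run before restarting the VA.

def skip_zones(V_cols, L, B):
    """V_cols[n]: n-th syndrome column as a tuple of N-K bits.
    Returns (start, stop) index pairs where the VA may be bypassed."""
    zones, start = [], None
    for n, col in enumerate(list(V_cols) + [(1,)]):   # sentinel closes a trailing run
        if not any(col):
            if start is None:
                start = n
        else:
            if start is not None and n - start >= L:
                zones.append((start, n - B))          # keep B steps as a guard
            start = None
    return zones

# toy syndrome: errors around columns 2 and 11, a long null run in between
cols = [(0, 0)] * 2 + [(1, 0)] + [(0, 0)] * 8 + [(0, 1)] + [(0, 0)] * 3
print(skip_zones(cols, L=6, B=2))   # [(3, 9)] -> ACS skipped on that stretch
```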

VI. SIMULATIONS

In order to check both the decoding operation and the complexity gain, we performed simulations of BPSK transmissions over AWGN channels for various SNR. These simulations were carried out using the Monte Carlo method with 50000 frames of 100 information bits for the four coders given in the Appendix. Fig. 2 shows that the BER obtained with our low complexity algorithm is close to the BER of a classical VA decoder in the case of coder 3.

Figure 2. BER of coder 3 (see Appendix) – AWGN channel – BPSK – 50000 frames of 100 information bits – B = 1, 2, 3, 4

The degradation for the four coders is given in Table III for a BER of 10^{-4}. Table III shows that the degradation is negligible when B is equal to (or greater than) the constraint length of the coder.

TABLE III. SNR DEGRADATION FOR B = M

  Coder               1     2     3     4
  B                   3     5     4     2
  Degradation (dB)    0.2   0.2   0     0.1
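For reference, here is a cut-down Monte Carlo harness of the kind described above (BPSK over AWGN). The coded chain is replaced by an uncoded hard-decision reference and the frame count is reduced; all parameters and names are ours.

```python
# Monte Carlo BER estimation for BPSK over AWGN at a given Es/N0.
import math
import random

def ber_estimate(snr_db: float, frames: int = 1000, bits_per_frame: int = 100) -> float:
    sigma = math.sqrt(0.5 * 10 ** (-snr_db / 10))     # noise std for unit symbol energy
    errors = 0
    for _ in range(frames):
        bits = [random.randint(0, 1) for _ in range(bits_per_frame)]
        rx = [(2 * b - 1) + random.gauss(0.0, sigma) for b in bits]   # BPSK + AWGN
        hard = [1 if s > 0 else 0 for s in rx]        # hard decisions (uncoded reference)
        errors += sum(b != h for b, h in zip(bits, hard))
    return errors / (frames * bits_per_frame)

print(ber_estimate(3.0))   # roughly Q(sqrt(2 Es/N0)) ~ 2e-2 at 3 dB
```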

Figure 3 shows the percentage of effectively calculated ACS (Add Compare Select) operations compared to the classical VA decoder. The complexity gain is nearly independent of the coder; indeed, it is mostly related to the length of the error-free zones. At SNR = 3 dB, with B = M, more than 80% of the ACS computations can be saved with less than 0.2 dB performance degradation (Table III).

Figure 3. Proportion of computed ACS compared to the classical VA decoder – AWGN channel – BPSK – 50000 frames of 100 information bits

VII. CONCLUSIONS

We have previously proposed a novel low complexity algorithm for the hard and soft decoding of convolutional codes of rate ½. In this paper, we extend this algorithm to convolutional codes of any rate. A significant reduction in complexity is achievable without noticeable loss in performance. For example, for SNR greater than 3 dB, less than 20% of the ACS operations are required for the computation of the iterations of the VA. Applicability of this new approach to soft output decoding of turbo codes is under investigation.

VIII. APPENDIX

Coder 1: C(2,1,3)
$$G^t = \begin{pmatrix} 1+x+x^2 & 1+x^2 \end{pmatrix}; \qquad H = \begin{pmatrix} 1+x^2 & 1+x+x^2 \end{pmatrix}$$

Coder 2: C(2,1,5)
$$G^t = \begin{pmatrix} 1+x^3+x^4 & 1+x+x^3+x^4 \end{pmatrix}; \qquad H = \begin{pmatrix} 1+x+x^3+x^4 & 1+x^3+x^4 \end{pmatrix}$$

Coder 3: C(3,1,4)
$$G^t = \begin{pmatrix} 1+x^2+x^3 & 1+x+x^3 & 1+x+x^2+x^3 \end{pmatrix}; \qquad H = \begin{pmatrix} x & 1 & 1+x \\ 1+x^2 & 1+x^2 & x \end{pmatrix}$$

Coder 4: C(3,2,2)
$$G^t = \begin{pmatrix} 1+x & x \\ x & 1 \\ 1+x & 1 \end{pmatrix}; \qquad H = \begin{pmatrix} 1 & 1+x^2 & 1+x+x^2 \end{pmatrix}$$

REFERENCES

[1] G. D. Forney, Jr., "The Viterbi Algorithm", Proc. IEEE, vol. 61, pp. 268-278, 1973.
[2] G. D. Forney, Jr., "Convolutional Codes I: Algebraic Structure", IEEE Trans. on Information Theory, vol. IT-16, no. 6, November 1970.
[3] A. J. Viterbi, "Convolutional Codes and their Performance in Communication Systems", IEEE Trans. on Communications Technology, vol. COM-19, no. 5, October 1971.
[4] J.-C. Dany et al., "Low Complexity Algorithm for Optimal Hard Decoding of Convolutional Codes", EPMCC'03, April 2003.
[5] J.-C. Dany et al., "Low Complexity Algorithm for Soft Decoding of Convolutional Codes", PIMRC'03, 2003.
[6] S. Lin and D. J. Costello, Jr., Error Control Coding: Fundamentals and Applications, Prentice-Hall, 1983.
[7] W. W. Peterson and E. J. Weldon, Jr., Error-Correcting Codes, 2nd edition, MIT Press, Cambridge, Mass., 1972.
[8] J.-C. Dany, "Théorie de l'information et codage", SUPELEC, Gif-sur-Yvette, France, February 2002.
