New Optimal Hard Decoding of Convolutional Codes with Reduced Complexity

J.-C. Dany°, J. Antoine°, L. Husson°, N. Paul*, A. Wautier°, J. Brouet*
°Ecole Supérieure d'Electricité, Radio Dpt., Plateau de Moulon, 91192 Gif-sur-Yvette, France
*Alcatel Research and Innovation, Route de Nozay, 91461 Marcoussis cedex, France

Abstract – It is well known that convolutional codes can be optimally decoded by the Viterbi Algorithm (VA). We propose an optimal hard-decoding technique in which the VA is applied to identify the error vector rather than the information message. In this paper, we show that, with this type of decoding, the exhaustive computation of the vast majority of state-to-state iterations is unnecessary. Hence, under certain channel conditions, optimum performance is achievable with a significant reduction in complexity.

I. INTRODUCTION

For convolutional codes transmitted over binary symmetric channels, applying the Viterbi algorithm (VA) to the whole received message achieves error correction in the maximum-likelihood sense [1][2][3]. Under certain channel conditions, i.e. when the number of errors is small or nearly zero, simpler techniques should be sufficient to produce the same results. In this paper, we propose a two-step algorithm of this kind. The first step consists of an algebraic decoding which detects the presence of errors that are detectable in the maximum-likelihood sense; it also gives an approximate location of such potential errors. In the second step, a derivation of the VA is applied in the identified zones. A significant reduction in complexity is thus achieved without any performance degradation compared to the classical VA¹.

The scope of this paper is limited to convolutional coders of rate ½ with hard decision. Our method can be extended to continuous-output channels; moreover, the proposed method can be easily generalized (like the VA itself) to include soft channel outputs.

¹ A similar idea has been proposed by Sudhakar et al. in [4]. This previous work was not known by the authors before the review. Our work gives theoretical developments and leads to a further reduction of complexity.

II. CONVOLUTIONAL CODER OF RATE ½

We consider convolutional coders of rate ½, denoted C(2,1,M), where M is the constraint length. From each information bit, the coder produces two coded bits which are linear combinations of the current information bit and the M-1 preceding bits. Let g1(x) and g2(x) denote the two generating polynomials of length M describing the coder, m(x) the information sequence, C1(x) and C2(x) the coded sequences produced by each convolutional block, and C(x) the coded sequence at the output of the coder. In this paper, g1(x) and g2(x) are chosen to be mutually prime. The coded sequences resulting from each block are expressed by:

    C1(x) = m(x).g1(x)
    C2(x) = m(x).g2(x)                                        (1)

The coded sequence C(x) is defined by:

    C(x) = C1(x^2) + x.C2(x^2) = m(x^2).(g1(x^2) + x.g2(x^2))  (2)

which can be written as

    C(x) = m(x^2).G(x)                                        (3a)

where

    G(x) = g1(x^2) + x.g2(x^2)                                (3b)

Lemma 1: If C(x) is a code word, then all the odd powers of the polynomial C(x).G(x) have null coefficients.

Proof: Let C(x) be a code word; then

    ∃ m(x) / C(x) = m(x^2).G(x)                               (4)

which implies that

    ∃ m(x) / C(x).G(x) = m(x^2).G(x)^2 = m(x^2).G(x^2)        (5)

because G(x) has its coefficients in {0, 1}, so that G(x)^2 = G(x^2) over GF(2); the polynomial m(x^2).G(x^2) contains only even powers of x. The proposed decoding algorithm is based on this lemma.
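As a concrete check of Lemma 1, the construction of G(x) and the code-word property can be sketched with polynomials over GF(2) represented as Python integers, where bit i holds the coefficient of x^i. The helper names and the test message are ours, not the paper's; the generator pair is the C(2,1,3) example used later in section IV.

```python
# Sketch (names ours): GF(2) polynomials as Python ints, bit i = coeff of x^i.

def gf2_mul(a, b):
    """Carry-less product of two GF(2) polynomials."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def interleave(p):
    """Map p(x) to p(x^2): move bit i to bit 2i."""
    r, i = 0, 0
    while p:
        if p & 1:
            r |= 1 << (2 * i)
        p >>= 1
        i += 1
    return r

# Example coder C(2,1,3): g1(x) = 1 + x + x^2, g2(x) = 1 + x^2
g1, g2 = 0b111, 0b101
G = interleave(g1) ^ (interleave(g2) << 1)   # G(x) = g1(x^2) + x.g2(x^2)
assert G == 0b110111                          # 1 + x + x^2 + x^4 + x^5

# Lemma 1: C(x) = m(x^2).G(x) implies C(x).G(x) = m(x^2).G(x^2),
# so every odd-power coefficient of C(x).G(x) is null.
m = 0b110001                                  # m(x) = 1 + x^4 + x^5
C = gf2_mul(interleave(m), G)
CG = gf2_mul(C, G)
odd_coeffs = [(CG >> i) & 1 for i in range(1, CG.bit_length(), 2)]
assert not any(odd_coeffs)
```

The same check passes for any message m, since squaring is a ring homomorphism in characteristic 2.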

III. DECODING

The hard decoding is performed in two successive steps. First, an algebraic decoding determines the erroneous zones of the received message. Then these errors are corrected in the maximum-likelihood sense by using the VA to find the error vector.

A. Algebraic decoding – error detection

Let r(x) be the word resulting from a hard decision applied to the received samples. r(x) is the sum of a transmitted code word C(x) and of an error sequence noted e(x):

    r(x) = C(x) + e(x),  where  e(x) = Σ_j e_j x^j            (6)

Consequently,

    r(x).G(x) = m(x^2).G(x^2) + e(x).G(x)                     (7)

Considering the odd and even parts of the error sequence, we define e(x) = e_even(x^2) + x.e_odd(x^2). Equation (7) can then be reformulated as:

    r(x).G(x) = m(x^2).G(x^2) + E_even(x^2) + x.E_odd(x^2)    (8a)

where

    E_even(x^2) = e_even(x^2).g1(x^2) + x^2.e_odd(x^2).g2(x^2)
    E_odd(x^2)  = e_even(x^2).g2(x^2) + e_odd(x^2).g1(x^2)    (8b)

Lemma 2: If x.E_odd(x^2) is null, then the error e(x) is null or undetectable.

Proof: If e_even(x^2).g2(x^2) + e_odd(x^2).g1(x^2) = 0, Bezout's theorem implies that e_even(x^2) is divisible by g1(x^2) and that e_odd(x^2) is divisible by g2(x^2), as g1(x^2) and g2(x^2) are mutually prime. Consequently, e(x) becomes:

    e(x) = (e_even(x^2)/g1(x^2)).(g1(x^2) + x.g2(x^2)) = (e_even(x^2)/g1(x^2)).G(x)   (9)

The error vector e(x), being a multiple of G(x), is then a code word, i.e. either an undetectable error or the null element.

From Lemma 1 and Lemma 2, we derive the following proposition:

Proposition 1: The received word r(x) is a code word if and only if x.E_odd(x^2) is null. Moreover, the polynomial E_odd(x) = Σ_j E_odd,j x^j enables the detectable errors to be located.
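The detection step of Proposition 1 thus amounts to one polynomial product and reading off the odd-power coefficients. A minimal sketch (helper names ours), checked against the worked example of section IV:

```python
# Sketch of the detection step (Proposition 1); names are ours.

def gf2_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def odd_part(p):
    """Return E_odd(x) such that the odd part of p(x) equals x.E_odd(x^2)."""
    r, i = 0, 0
    p >>= 1                    # bit 0 now holds the coefficient of x^1
    while p:
        if p & 1:
            r |= 1 << i        # coefficient of x^(2i+1)
        p >>= 2                # jump to the next odd power
        i += 1
    return r

G = 0b110111                   # 1+x+x^2+x^4+x^5, the coder of section IV
C = 0b1110101100110111         # code word C(x) of section IV
r = C ^ (1 << 5) ^ (1 << 7)    # received word: error e(x) = x^5 + x^7

E_odd = odd_part(gf2_mul(r, G))
assert E_odd == 0b100100       # E_odd(x) = x^2 + x^5, as found in section IV
assert odd_part(gf2_mul(C, G)) == 0   # an error-free word passes the test
```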

B. Proposed Error Correction Procedure

Unlike in the classical decoding procedure, where the VA is used to find the information sequence, we propose a new procedure where the VA is applied to find the error vector. Correcting the information message in the maximum-likelihood sense [5-7] corresponds to the determination of the sequence e'(x) of minimum weight such that:

    ∃ m'(x) / r(x).G(x) + e'(x).G(x) = m'(x^2).G(x^2)         (10)

This is equivalent to the identification of the sequence e'(x) = e'_even(x^2) + x.e'_odd(x^2) which fulfills the condition:

    E_odd(x^2) = e'_even(x^2).g2(x^2) + e'_odd(x^2).g1(x^2)   (11)

The sequence e'(x) can be determined by using the VA to reach a state such that the vector E(x) = Σ_j E_j x^j defined by

    E(x) = E_odd(x) + e'_even(x).g2(x) + e'_odd(x).g1(x)      (12)

is a null vector.
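Condition (11) can be verified directly for the error sequence found in the worked example of section IV: e'(x) = x^5 + x^7 gives e'_even = 0 and e'_odd(x) = x^2 + x^3. A small check (our own helper, not the paper's code), written in the "half-rate" variable, i.e. before substituting x -> x^2:

```python
# Check (ours) that the example error satisfies condition (11):
# E_odd = e'_even.g2 + e'_odd.g1 over GF(2), ints as polynomials.

def gf2_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

g1, g2 = 0b111, 0b101          # 1+x+x^2 and 1+x^2
e_even = 0                     # e'(x) = x^5 + x^7 has no even-power terms
e_odd = 0b1100                 # e'_odd(x) = x^2 + x^3

lhs = gf2_mul(e_even, g2) ^ gf2_mul(e_odd, g1)
assert lhs == 0b100100         # E_odd(x) = x^2 + x^5, matching section IV
```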

For a detailed explanation of this procedure, the conventional terminology of the VA is used. The trellis is made up of nodes (corresponding to distinct states) connected by branches (transitions between states). A succession of branches is called a path, which is characterized by a metric.

i) States: There are 2^(M-1) states. A polynomial E^Sk(x) = Σ_j E_j^Sk x^j is associated with each state Sk. The value of the state Sk at instant n is given by the M-1 elements {E_n^Sk, E_{n+1}^Sk, ..., E_{n+M-2}^Sk} of the polynomial E^Sk(x). The initial state Sk0 is associated with the polynomial E^Sk0(x) = E_odd(x); Sk0 consists of the M-1 first elements of E_odd(x): {E_0^odd, E_1^odd, ..., E_{M-2}^odd}.

ii) Branches: A branch represents the transition between a state at instant n (current state) and a state at instant n+1 (following state). A pair of values {e'_even,n, e'_odd,n} is associated with each branch. Let Sk be the current state at instant n, and E^Sk(x) its associated vector. One builds the set {Z_i(x)} of sequences

    Z_i(x) = (e'_even,n.g2(x) + e'_odd,n.g1(x)).x^n = Σ_{j=0}^{M-1} Z_j^i x^{n+j}

where the first coefficient Z_0^i satisfies the condition:

    Z_0^i = E_n^Sk                                            (13)

For each sequence, the corresponding following state Si at instant n+1 is determined by the vector {E_{n+1}^Sk + Z_1^i, E_{n+2}^Sk + Z_2^i, ..., E_{n+M-1}^Sk + Z_{M-1}^i}. The polynomial associated with this state is computed by E^Si(x) = E^Sk(x) + Z_i(x) (see Eq. 12). Condition (13) ensures that, for all states Sk at instant n, the polynomial E^Sk(x) has its n+1 first coefficients equal to zero. This procedure leads iteratively to a null polynomial E^Sk(x).

iii) Metrics: The principle of the proposed decoding technique is to find the sequence e'(x) of minimum weight which cancels the vector E(x) (Eq. 12). With the considerations in (11) and (12), the branch metric at instant n is defined by:

    Δ(n) = e'_even,n + e'_odd,n                               (14)

The path metric mk(n) is the accumulation of the branch metrics. When several paths come to the same state, the survivor is the one with the smallest path metric. At the end of the procedure, the survivor path is the one terminating in the state S0 = {0 0 ... 0}, which is the unique state with all the elements of E^Sk(x) null. All the pairs {e'_even,n, e'_odd,n} associated with the branches of the survivor constitute the decoded error vector e'(x):

    e'(x) = Σ_n [e'_even,n + x.e'_odd,n].x^(2n)               (15)

The decoded information sequence m'(x) is then obtained by:

    m'(x) = (r(x) + e'(x)) / G(x)                             (16)

C. Comparison with the Classical Error Correction Procedure with VA

The number of states (2^(M-1)) and the number of branches leaving a state (2) are the same in the trellises of both algorithms. At time n, the branch metric in the classical decoding procedure is:

    Δ_VA = (r_2n + Ĉ_2n) + (r_{2n+1} + Ĉ_{2n+1})              (17)

where Ĉ_2n, Ĉ_{2n+1} are the estimated coded bits. Let Ĉ_i(x) be the estimated code word associated with a given path Pi; then:

    r(x) - Ĉ_i(x) = e_i(x)                                    (18)

where e_i(x) is the estimated error vector associated with Ĉ_i(x). It follows that the path metrics defined in Eqs. (14) and (17) are identical. The performance in terms of BER of the proposed algorithm is thus equal to that of the classical VA.

IV. ILLUSTRATION OF THE ALGORITHM

A. Example

In this part, the proposed algorithm is illustrated on the well-known code C(2,1,3) depicted in Fig. 1 and specified by the following generator polynomials: g1(x) = 1 + x + x^2 and g2(x) = 1 + x^2.

[Figure 1: structure of the coder C(2,1,3) – m(x) feeds two delay elements; C1(x) and C2(x) are the outputs of the two generator blocks]

The corresponding polynomial G(x) is thus:

    G(x) = 1 + x + x^2 + x^4 + x^5

Let us, for instance, consider the information sequence

    m(x) = 1 + x^4 + x^5

The coded sequence C(x) is:

    C(x) = 1 + x + x^2 + x^4 + x^5 + x^8 + x^9 + x^11 + x^13 + x^14 + x^15

Assuming that the error sequence is e(x) = x^5 + x^7, the received sequence is expressed by:

    r(x) = 1 + x + x^2 + x^4 + x^7 + x^8 + x^9 + x^11 + x^13 + x^14 + x^15

which leads to

    r(x).G(x) = 1 + x^2 + x^4 + x^5 + x^6 + x^8 + x^11 + x^12 + x^14 + x^16 + x^20

and the polynomial E_odd(x) is defined by:

    x.E_odd(x^2) = x^5 + x^11,  i.e.  E_odd(x) = x^2 + x^5

The states and the branches of the trellis are described in Table 1.

    state | {E_n^Sk, E_{n+1}^Sk}      {e'_even,n, e'_odd,n} | Z_i(x)
    S0    | (0,0)                     (0,0)                 | (0 0 0)
    S1    | (0,1)                     (0,1)                 | (1 1 1)
    S2    | (1,0)                     (1,0)                 | (1 0 1)
    S3    | (1,1)                     (1,1)                 | (0 1 0)

    Table 1: states (left) and branches (right)

The initial state is obtained from the two first coefficients of the polynomial E_odd(x), which are here (0,0). Therefore the algorithm begins in state S0; its associated vector is:

    E^S0(x) = E_odd(x) = x^2 + x^5 = (0 0 1 0 0 1 0 0 0 0 0 0)

The corresponding trellis is given in Fig. 2, and its first three iterations are detailed in Table 2, which shows how the following states (FS) are derived from the current states (CS). When several paths reach a same state, the one with the smallest metric is kept and the others are discarded.
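Putting sections III and IV together, the whole error-trellis search can be sketched in a few lines for the C(2,1,3) example (M = 3). This is our own compact reading of the procedure, not the authors' code: the state bookkeeping, helper names, and iteration count are ours. On the worked example it recovers e'(x) = x^5 + x^7 and m'(x) = 1 + x^4 + x^5.

```python
# Sketch (ours) of the proposed decoder for C(2,1,3): the VA searches for the
# minimum-weight e'(x) cancelling E(x), then m'(x) = (r(x)+e'(x))/G(x).

def gf2_mul(a, b):
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        b >>= 1
    return r

def gf2_div(a, b):
    q = 0
    while a and a.bit_length() >= b.bit_length():
        s = a.bit_length() - b.bit_length()
        q ^= 1 << s
        a ^= b << s
    return q                     # exact division assumed here

def interleave(p):               # p(x) -> p(x^2)
    r, i = 0, 0
    while p:
        if p & 1:
            r |= 1 << (2 * i)
        p >>= 1
        i += 1
    return r

def deinterleave(p):             # keep even-power coefficients, bit 2i -> bit i
    r, i = 0, 0
    while p:
        if p & 1:
            r |= 1 << i
        p >>= 2
        i += 1
    return r

def odd_part(p):                 # odd part of p(x) equals x.E_odd(x^2)
    r, i = 0, 0
    p >>= 1
    while p:
        if p & 1:
            r |= 1 << i
        p >>= 2
        i += 1
    return r

def decode(r, g1=0b111, g2=0b101, n_iter=10):
    G = interleave(g1) ^ (interleave(g2) << 1)
    E0 = odd_part(gf2_mul(r, G))
    # survivors: state value {E_{n+1}, E_{n+2}} -> (E-polynomial, metric, e')
    survivors = {None: (E0, 0, 0)}          # single starting path
    for n in range(n_iter):
        nxt = {}
        for poly, metric, err in survivors.values():
            bit = (poly >> n) & 1
            for ee in (0, 1):
                for eo in (0, 1):
                    Z = (g2 if ee else 0) ^ (g1 if eo else 0)
                    if (Z & 1) != bit:      # condition (13)
                        continue
                    p2 = poly ^ (Z << n)    # E <- E + Z_i, as in Eq. (12)
                    st = (p2 >> (n + 1)) & 0b11
                    m2 = metric + ee + eo   # branch metric (14)
                    e2 = err | (ee << 2 * n) | (eo << (2 * n + 1))
                    if st not in nxt or m2 < nxt[st][1]:
                        nxt[st] = (p2, m2, e2)
        survivors = nxt
    poly, metric, err = survivors[0]        # survivor terminating in S0
    assert poly == 0                        # E(x) fully cancelled
    return err, deinterleave(gf2_div(r ^ err, G))

# Worked example of section IV: r(x) = C(x) + x^5 + x^7
r = 0b1110101100110111 ^ (1 << 5) ^ (1 << 7)
e_hat, m_hat = decode(r)
assert e_hat == (1 << 5) | (1 << 7)         # e'(x) = x^5 + x^7
assert m_hat == 0b110001                    # m'(x) = 1 + x^4 + x^5
```

The path metrics produced at each iteration match those of Table 2 (0 and 2 at iteration 1; 0, 2, 3, 3 at iteration 2; and so on).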

[Figure 2: trellis obtained with the proposed algorithm for code C(2,1,3) – states S0 to S3 and path metrics over 10 iterations]

[Figure 3: trellis obtained with the classical VA decoder for code C(2,1,3)]

    iteration 1:
    CS(1) | {e'_even,1, e'_odd,1} | FS(1) | E^Sk(x)  | mk(1)
    S0    | (0,0)                 | S1    | 0010010… | 0
    S0    | (1,1)                 | S3    | 0110010… | 2

    iteration 2:
    CS(2) | {e'_even,2, e'_odd,2} | FS(2) | E^Sk(x)  | mk(2)
    S1    | (0,0)                 | S2    | 0010010… | 0+0=0
    S1    | (1,1)                 | S0    | 0000010… | 0+2=2
    S3    | (1,0)                 | S3    | 0011010… | 2+1=3
    S3    | (0,1)                 | S1    | 0001010… | 2+1=3

    iteration 3:
    CS(3) | {e'_even,3, e'_odd,3} | FS(3) | E^Sk(x)  | mk(3)
    S0    | (0,0)                 | S0    | 0000010… | 2+0=2
    S0    | (1,1)                 | S2    | 0001010… | 2+2=4 (discarded)
    S1    | (0,0)                 | S2    | 0001010… | 3+0=3
    S1    | (1,1)                 | S0    | 0000010… | 3+2=5 (discarded)
    S2    | (1,0)                 | S1    | 0000110… | 0+1=1
    S2    | (0,1)                 | S3    | 0001110… | 0+1=1
    S3    | (1,0)                 | S3    | 0001110… | 3+1=4 (discarded)
    S3    | (0,1)                 | S1    | 0000110… | 3+1=4 (discarded)

    Table 2: first three iterations of the proposed algorithm

At the end of the procedure, the survivor is the path terminating in S0. It corresponds to the following sequence of pairs {e'_even,n, e'_odd,n}:

    {(0,0), (0,0), (0,1), (0,1), (0,0), (0,0), (0,0), (0,0), (0,0), (0,0)}

The estimated error vector is then e'(x) = x^5 + x^7, and the decoded information sequence obtained by Eq. (16) is m'(x) = 1 + x^4 + x^5.

B. Useful comments

i) One can check that the path metrics are identical to those obtained with the classical decoding procedure, illustrated by the trellis of Fig. 3.

ii) One can see in the trellis (Fig. 2) that, at iteration 5, mk(5) ≥ m0(5) for k ≠ 0 and E^S0(x) = 0. After this iteration (for n > 5), m0(n) does not evolve anymore and mk(n) ≥ mk(5) for k ≠ 0. Therefore it would be possible to stop the procedure at iteration 5. In the next section, a low-complexity algorithm based on this observation is described.

V. LOW COMPLEXITY DECODING

As seen in the previous section, there exist situations where a significant part of the computations can be avoided. Sufficient conditions for applying this principle of complexity reduction are described in this section.

Lemma 3: If at one step n0 both properties (19) and (20) are satisfied, then it is unnecessary to carry on the VA; the survivor path is the one which reaches the state S0 = {0 0 ... 0} at iteration n0 and remains in this state S0 after this node.

    E^S0(x) = {0 0 ... 0}                                     (19)
    mk(n0) ≥ m0(n0) for k ≠ 0                                 (20)

Proof: Assuming that E^S0(x) = {0 0 ... 0}, the metric of the path which remains in the state S0 keeps the value m0(n0), because all the branches of this path are associated with the pairs {e'_even,n, e'_odd,n} = {0,0} for n ≥ n0. Since branch metrics are non-negative, mk(n) ≥ mk(n0) ≥ m0(n0) = m0(n) for any instant n ≥ n0.

Property 1: Let us assume that, at instant n-N, condition (20) is satisfied and that all the vectors E^Sk(x) are such that:

    {E_0^Sk, E_1^Sk, ..., E_{n-N-1}^Sk} = {0, 0, ..., 0}
    {E_{n-N}^Sk, E_{n-N+1}^Sk, ..., E_{n-N+M-2}^Sk} = {state Sk}   (21)
    {E_{n-N+M-1}^Sk, E_{n-N+M}^Sk, ..., E_n^Sk} = {0, 0, ..., 0}
    E_{n+1}^Sk = 1

This configuration can occur if there are at least N consecutive zeros in the polynomial E_odd(x). For a given coder, it is possible to compute the trellis from iteration n-N to iteration n for different values of N. There exists a value N0 such that if N ≥ N0, then all survivor paths at iteration n remain in state S0 between step n-N and step n-M+1. Consequently it is unnecessary to carry on the VA between steps n-N and n-M+1: the VA is restarted at step n-M+1 with S0 as initial state. For the coder defined in section IV, N0 equals 4. The trellis obtained for any N ≥ 4 is illustrated in Fig. 4.
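One plausible reading of Property 1 in code: scan E_odd(x) for runs of at least N0 consecutive zero coefficients, skip the VA iterations inside each run, and restart M-1 steps before the run ends. This sketch is our own interpretation, not the authors' implementation; the function name and interface are hypothetical.

```python
# Hypothetical sketch (ours) of the skipping rule suggested by Property 1.

def skippable_iterations(e_odd_coeffs, n0, m):
    """Return the set of VA iteration indices that need not be computed:
    inside each zero run of length >= n0, all steps except the last m-1."""
    skipped, run_start = set(), None
    for i, c in enumerate(e_odd_coeffs + [1]):   # sentinel 1 closes last run
        if c == 0:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= n0:
                # skip the run, but restart the VA m-1 steps early
                skipped.update(range(run_start, i - (m - 1)))
            run_start = None
    return skipped

# E_odd(x) = x^2 + x^5 over a 10-step frame, N0 = 4, M = 3:
# only the tail run of zeros is long enough to be skipped.
coeffs = [0, 0, 1, 0, 0, 1, 0, 0, 0, 0]
print(skippable_iterations(coeffs, 4, 3))
```

For the section IV example, only iterations 6 and 7 of the tail run qualify; the two short zero runs at the start are below N0 and must still be computed.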

On the basis of the principles described in section III, we propose an algorithm with optimal performance. The reduction in computational complexity is achieved by avoiding all unnecessary computations between iterations n-N and n-M+1.

[Figure 4: example of trellis for N = 5 for code C(2,1,3)]

[Figure 5: BER vs SNR – AWGN channel – BPSK – 50000 frames of 100 information bits (curves: without code, classical VA algorithm, new algorithm)]

[Figure 6: percentage of effective iterations and ACS compared to the classical VA decoder – AWGN channel – BPSK – 50000 frames of 100 information bits]

VI. SIMULATIONS

In order to check both the decoding operation and the complexity gain, we performed simulations of a BPSK transmission over an AWGN channel for various SNR values. These simulations were carried out using the Monte Carlo method with 50000 frames of 100 information bits. As expected, Fig. 5 shows that the BER (Bit Error Rate) obtained with our low-complexity algorithm is the same as that of a classical VA decoder. Figure 6 shows the percentage of effectively calculated iterations and ACS (Add-Compare-Select) operations compared to the classical VA decoder. At SNR = 3 dB, 83% of the iterations are found to be unnecessary, and up to 88% of the ACS computations can be saved.

VII. CONCLUSIONS

In this paper, we have investigated a new low-complexity algorithm for hard-input decoding of convolutional codes of rate ½. Optimal decoding performance along with a significant reduction in complexity is achievable (e.g. less than 12% of the ACS operations are required for the computation of the VA iterations). Further investigations are under way to evaluate this new approach for soft-input and soft-output decoders, for applicability to turbo codes. The complexity is expected to decrease over the successive iterations of the turbo decoding.

REFERENCES

[1] G. D. Forney, Jr., "The Viterbi Algorithm".
[2] G. D. Forney, Jr., "Convolutional Codes I: Algebraic Structure", IEEE Trans. on Information Theory, vol. IT-16, no. 6, November 1970.
[3] A. J. Viterbi, "Convolutional Codes and Their Performance in Communication Systems", IEEE Trans. on Communications Technology, vol. COM-19, no. 5, October 1971.
[4] R. Sudhakar et al., "Low-complexity error selective Viterbi decoder", Electron. Lett., vol. 36, no. 2, Jan. 2000, pp. 147-148.
[5] S. Lin and D. J. Costello, Jr., Error Control Coding: Fundamentals and Applications, Prentice-Hall, 1983.
[6] W. W. Peterson and E. J. Weldon, Jr., Error-Correcting Codes, 2nd edition, MIT Press, Cambridge, Mass., 1972.
[7] J.-C. Dany, "Théorie de l'information et codage", SUPELEC, Gif-sur-Yvette, France, February 2002.
