Architectural Issues of Soft-Decision Iterative Decoders for Binary Cyclic Codes

Robert H. Morelos-Zaragoza
Advanced Telecommunications Laboratory (ATL)
SONY Computer Science Laboratories, Inc.
Shinagawa-ku, Tokyo 141-0022 Japan
E-mail: [email protected]

August 9, 2000

Abstract

The Tanner graph associated with an extended parity-check (EPC) matrix of a cyclic code is shown to be useful in effectively implementing soft-decision iterative decoding procedures based on belief propagation. Decoding with an EPC matrix has the advantage that it is universal, in the sense that it does not depend on the specific family of cyclic codes being used. It is shown that there is no need to store the complete EPC matrix, i.e., the structure of the Tanner graph over which iterative decoding is implemented. The length, dimension and parity-check polynomial are all that is needed as input parameters to the decoder. Iterative soft-decision decoding can be implemented with a pair of processing elements that pass messages between nodes in the graph, with edges specified by the parity-check polynomial. By identifying received word positions with high reliability, using a designed threshold, decoding complexity can be reduced drastically while maintaining good error performance.

Keywords — Iterative decoding, cyclic codes, Tanner graphs.


1 Introduction

Iterative decoding based on belief propagation (BP) on bipartite graphs [1] has been successfully applied in decoding low-density parity-check (LDPC) codes [2], as well as in helping to explain the outstanding performance of turbo decoding [3]. Based on finite-geometry concepts, Kou et al. [4] re-discovered a class of cyclic codes that can be iteratively decoded. These cyclic codes have low-density parity-check matrices, in the sense that the number of nonzero entries in their parity-check matrix, divided by the total number of entries, is small. Because of their algebraic structure, low-density cyclic codes have the key advantage that encoding can be done systematically and implemented with simple linear feedback shift registers. This is in contrast with general LDPC codes, for which encoding is nonsystematic.

In this paper, by adding rows to the parity-check matrices of cyclic codes, their Tanner graphs are constructed. We call these matrices extended parity-check matrices. In general, the extended parity-check matrix of a cyclic code does not have a low density. However, examples have been found that achieve good error performance, in the sense that iterative decoding converges rapidly to ML decoding as the number of iterations increases.

The rest of the paper is organized as follows. In section 2, a method to extend the parity-check matrix of a cyclic code is presented and the structure of the associated Tanner graph analyzed. Based on this structure, architectural issues in the design of iterative decoders for cyclic codes are discussed in section 3. A method to reduce the computational effort is presented: by performing preliminary hard decisions on channel symbols with high reliability values, it is shown that 50% of the computations can be saved at a cost of only a few tenths of a dB in required signal-to-noise ratio. Section 4 presents simulation results for a number of cyclic codes. Finally, in section 5, conclusions of this work are drawn.


2 The extended parity-check (EPC) matrix of a cyclic code

2.1 Systematic encoding

A key property of cyclic codes is that encoding can be performed efficiently and systematically. In this section, encoding information with a cyclic code is briefly reviewed. Let $C$ denote a cyclic $(n,k)$ code with generator polynomial

  $g(x) = g_0 + g_1 x + \cdots + g_{n-k} x^{n-k}$

and parity-check polynomial

  $h(x) = h_0 + h_1 x + \cdots + h_k x^k$,

where $g(x) h(x) = x^n - 1$. A parity-check matrix for $C$ is the $(n-k) \times n$ matrix

        [ h_k  h_{k-1}  ...   h_0    0     ...    0   ]
  H  =  [  0     h_k   h_{k-1} ...  h_0    ...    0   ]        (1)
        [ ...                                    ...  ]
        [  0     ...     0     h_k  h_{k-1} ...  h_0  ]

Note that the elements in the $j$-th row of (1), $0 \le j < n-k$, are the coefficients of the $j$-th cyclic shift of the reciprocal parity-check polynomial $\tilde{h}(x) = x^k h(x^{-1})$; here, the $j$-th cyclic shift of a polynomial $a(x)$ is $x^j a(x) \bmod (x^n - 1)$, as in [5].

Systematic encoding can be achieved as follows [5]. Suppose first that the code rate $k/n \ge 1/2$. Let

  $\bar{u}(x) = u_0 + u_1 x + \cdots + u_{k-1} x^{k-1}$

denote a polynomial of degree less than $k$, whose coefficients $u_i$ are the message symbols to be encoded, and let $\bar{v}(x) = v_0 + v_1 x + \cdots + v_{n-1} x^{n-1}$ denote the code polynomial corresponding to the information polynomial $\bar{u}(x)$. In the first step, $v_i = u_i$, for $0 \le i < k$. It follows from the cyclic structure of the code that the redundant symbols $v_m$, $k \le m < n$, are obtained recursively via the parity-check relations

  $v_m = \sum_{i=1}^{k} h_i \, v_{m-i}, \qquad m = k, k+1, \ldots, n-1,$        (2)

where addition is over GF(2) (recall that $h_0 = h_k = 1$). The relation defining $v_m$ is precisely the inner product of the codeword with the $(m-k)$-th row of matrix (1).

For cyclic codes with $k/n < 1/2$, encoding by division of $x^{n-k}\bar{u}(x)$ by $g(x)$ is more efficient. Either way, the coefficients of the code polynomials are in systematic form, so that the first $k$ coefficients are the message symbols, and the remaining $n-k$ coefficients constitute the redundant symbols.

Example 1 Consider the binary cyclic (7,4) Hamming code with parity-check polynomial $h(x) = 1 + x + x^2 + x^4$ and parity-check matrix

        [ 1 0 1 1 1 0 0 ]
  H  =  [ 0 1 0 1 1 1 0 ]
        [ 0 0 1 0 1 1 1 ]

A message $\bar{u} = (u_0, u_1, u_2, u_3)$ is encoded as a codeword $\bar{v} = (v_0, v_1, \ldots, v_6)$ by first letting $v_i = u_i$, $0 \le i \le 3$, and then solving for the redundant positions:

  $v_4 = v_0 + v_2 + v_3$,
  $v_5 = v_1 + v_3 + v_4$,
  $v_6 = v_2 + v_4 + v_5$.
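The recursion (2) is exactly a linear feedback shift register with taps given by the coefficients of $h(x)$. As a minimal sketch (the function name and list-based representation are illustrative, not from the paper), the systematic encoder of Example 1 can be written as:

```python
# Systematic encoding of a binary cyclic (n, k) code via the parity-check
# recursion (2): v_m = h_1*v_{m-1} + ... + h_k*v_{m-k}  (mod 2).
# 'h' lists the coefficients h_0, ..., h_k of the parity-check polynomial.

def encode_systematic(u, h, n):
    """Encode k message bits u into a length-n codeword, message first."""
    k = len(h) - 1
    assert len(u) == k
    v = list(u) + [0] * (n - k)      # first k positions carry the message
    for m in range(k, n):            # redundant symbols via eq. (2)
        v[m] = sum(h[i] * v[m - i] for i in range(1, k + 1)) % 2
    return v

# (7,4) Hamming code of Example 1: h(x) = 1 + x + x^2 + x^4
codeword = encode_systematic([1, 0, 1, 1], [1, 1, 1, 0, 1], 7)
```

For the message $(1, 0, 1, 1)$ this produces the codeword $(1, 0, 1, 1, 1, 0, 0)$, which satisfies all three rows of the matrix $H$ of Example 1.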

2.2 The extended parity-check matrix

One problem with the parity-check matrix (1) is that some of its columns have very low Hamming weights. For example, the first and last columns of (1) have only one nonzero entry. This means that iterative decoding based on belief propagation will not work on the associated Tanner graph. The following idea is based on the construction of finite-geometry low-density parity-check codes presented in [4]. The parity-check matrix of a cyclic $(n,k)$ code, eq. (1), is extended by $k$ rows. The entries of each additional row are the coefficients of one of the following cyclic shifts of the reciprocal parity-check polynomial:

  $x^{n-k}\tilde{h}(x), \; x^{n-k+1}\tilde{h}(x), \; \ldots, \; x^{n-1}\tilde{h}(x)$,

taken modulo $x^n - 1$. Therefore, the extended parity-check (EPC) matrix, denoted $H_e$, is an $n$-by-$n$ matrix over GF(2). The rows of $H_e$ have as entries the coefficients of all the $n$ distinct cyclic shifts of $\tilde{h}(x)$.

Note that the additional $k$ rows of $H_e$ are linearly dependent on the first $n-k$ rows, so that the dimension of the code remains the same. The main purpose of introducing these $k$ additional rows is to increase the Hamming weight of the columns, without increasing the density of the parity-check matrix. Increasing the weight of the columns means that code symbols are involved in more check equations. As a result, iterative decoding based on belief propagation performs better on the graph associated with $H_e$ than with $H$.





Example 2 Fig. 1 shows simulation results of iterative decoding of the (7,4) Hamming code, with 4 iterations, using the Tanner graphs associated with $H$ and $H_e$. As a reference, the figure also shows the union bound (UB) on the bit error rate of this code with maximum-likelihood decoding (MLD).

The improvement achieved by extending the parity-check matrix comes at the cost of increasing the number of short-length cycles, which may result in a degradation of the error rate and of the convergence to MLD as a function of the number of iterations.

For a set $S$, let $|S|$ denote the number of elements in $S$. It can be shown that the EPC matrix of a cyclic code has the Hamming weight of each row, $\rho$, and of each column, $\gamma$, equal to the Hamming weight $w_h$ of the parity-check polynomial,

  $w_h = |\{\, i : h_i = 1, \; 0 \le i \le k \,\}|$.

(In the notation of Gallager [2], $\rho = \gamma = w_h$.)

Example 3 The EPC matrix for the binary cyclic (7,4) Hamming code is

         [ 1 0 1 1 1 0 0 ]
         [ 0 1 0 1 1 1 0 ]
         [ 0 0 1 0 1 1 1 ]
  H_e =  [ 1 0 0 1 0 1 1 ]
         [ 1 1 0 0 1 0 1 ]
         [ 1 1 1 0 0 1 0 ]
         [ 0 1 1 1 0 0 1 ]

and $\rho = \gamma = w_h = 4$.
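The EPC matrix of Example 3 is a circulant, so it never needs to be tabulated by hand. A minimal Python sketch (function and variable names are illustrative, not from the paper) generates it from the coefficients of $h(x)$ as the $n$ cyclic shifts of the first row of (1):

```python
# Build the n-by-n EPC matrix of a binary cyclic (n, k) code as the n
# distinct cyclic shifts of the reversed parity-check polynomial
# (i.e., of the first row of the matrix in (1)).

def epc_matrix(h, n):
    k = len(h) - 1
    row0 = [0] * n
    for i, hi in enumerate(h):
        row0[k - i] = hi          # coefficients of x^k h(x^{-1})
    # the s-th row is row0 cyclically shifted by s positions
    return [[row0[(j - s) % n] for j in range(n)] for s in range(n)]

# (7,4) Hamming code of Example 1: h(x) = 1 + x + x^2 + x^4
He = epc_matrix([1, 1, 1, 0, 1], 7)
row_weights = [sum(row) for row in He]
col_weights = [sum(col) for col in zip(*He)]
```

Both weight lists come out as all 4s, matching $\rho = \gamma = w_h = 4$ in Example 3.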

2.3 Tanner graphs based on EPC matrices

A desirable feature of cyclic codes is that encoding can be implemented with simple linear feedback shift registers (LFSR). One would therefore expect cyclic codes to also have simple soft-decision decoding procedures. In some hard-decision decoding algorithms for cyclic codes, the cyclic structure of the code is used in such a way that, once an error pattern is "trapped" in a certain code position, error patterns in other positions are corrected by cyclic shifts of the LFSR in the decoding circuit. The Meggitt decoder is an example [5].

The EPC matrix of a cyclic code can be thought of as a way to construct an iterative soft-decision decoder with cyclic structure. To see this, let $C$ be a cyclic $(n,k)$ code and consider the Tanner graph associated with its EPC matrix $H_e$. Recall that the Tanner graph is a bipartite graph with two sets of nodes: code nodes, associated with codeword positions, and check nodes, associated with parity-check equations. Code nodes are connected to a check node if and only if they participate in the corresponding parity-check equation. It is shown now that, up to code positions, the structure of the check nodes in the graph is identical. The same holds true for the code nodes, up to the check equations involved.

Let a check node $z_j$, $0 \le j < n$, be connected to code nodes $\{v_{i_1}, v_{i_2}, \ldots, v_{i_{w_h}}\}$. It follows from the cyclic structure of $H_e$ that check node $z_{j+l}$ is connected to code nodes $\{v_{i_1+l}, v_{i_2+l}, \ldots, v_{i_{w_h}+l}\}$, where the indexes are taken modulo $n$. As a result, the structure of the connections between code nodes and a check node is isomorphic, up to a cyclic relabeling of the positions.

The generic structure of the tree connecting a check node with its parent code nodes can thus be specified uniquely by any one of the rows of the parity-check matrix. In particular, the coefficients of the parity-check polynomial $h(x)$ can be used to specify the generic connections between a check node and its parent code nodes. The EPC matrix of a cyclic code also has the property that its columns are the coefficients of all distinct cyclic shifts of the coefficients of $h(x)$ in reverse order. Consider a code node $v_i$ connected to check nodes $\{z_{j_1}, z_{j_2}, \ldots, z_{j_{w_h}}\}$. Then, proceeding as above, it can be shown that code node $v_{i+l}$ is connected to check nodes $\{z_{j_1+l}, z_{j_2+l}, \ldots, z_{j_{w_h}+l}\}$, again with indexes taken modulo $n$. In this case, the connections between a code node and its children check nodes are specified uniquely by the coefficients of the polynomial $x^k h(x^{-1})$.

Example 4 The Tanner graphs based on the PC and EPC matrices of the binary cyclic (7,4) Hamming code are shown in Figs. 2 and 3, respectively.
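The shift-isomorphism described above means a decoder can compute any node's neighborhood on the fly from the support of a single row, instead of storing the graph. A sketch under these assumptions (the helper names are hypothetical):

```python
# Neighborhoods in the Tanner graph of an EPC matrix, computed from the
# support (nonzero positions) of row 0 alone, using cyclic shifts mod n.

def check_node_neighbors(j, support, n):
    """Code nodes connected to check node z_j."""
    return sorted((p + j) % n for p in support)

def code_node_neighbors(i, support, n):
    """Check nodes connected to code node v_i (the transpose relation)."""
    return sorted((i - p) % n for p in support)

# (7,4) Hamming code: row 0 of H_e is 1 0 1 1 1 0 0
support = [0, 2, 3, 4]
```

For example, check node $z_0$ connects to code nodes $\{v_0, v_2, v_3, v_4\}$, in agreement with row 0 of the EPC matrix in Example 3, and the two functions are mutually consistent: $v_i$ is a neighbor of $z_j$ exactly when $z_j$ is a neighbor of $v_i$.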


3 Architectural issues in implementing an iterative decoder of cyclic codes

In this section, it is shown that the structure of the Tanner graph based on an EPC matrix of a cyclic code allows for simplifications in the architecture of an iterative decoder based on belief propagation.

3.1 The basic iterative decoding algorithm

For the description of the iterative decoding algorithm, the following notation is used. It is assumed that binary transmission with energy per symbol $E_s$ (i.e., "0" and "1" are transmitted as $+\sqrt{E_s}$ and $-\sqrt{E_s}$, respectively) takes place over an additive white Gaussian noise (AWGN) channel with one-sided power spectral density $N_0$.

The basic algorithm operates with the logarithms of the likelihood ratios (the log-likelihood ratios, or LLRs) of the a-posteriori probabilities, $\log[\Pr(v_i = 0 \mid r_i) / \Pr(v_i = 1 \mid r_i)]$, at both the code nodes (called $q$-metrics) and the check nodes (called $\sigma$-metrics). This LLR version is based on the basic iterative decoding algorithm that appeared in Gallager's paper [2] of 1962.

  $n$                               Code length
  $h(x)$                            Parity-check polynomial of the code
  $w_h$                             Number of nonzero coefficients of $h(x)$
  $r_i$                             Received (soft) symbols from the channel
  $\lambda_i$                       Log-likelihood ratio of received symbols
  $zc(j,l)$                         An integer representing a link from check node $z_j$ to a code node
  $cz(i,l)$                         An integer representing a link from code node $v_i$ to a check node
  $q_v(i,l)$, $\sigma_v(i,l)$       Metrics $q$ and $\sigma$ of a code node
  $q_z(j,l)$, $\sigma_z(j,l)$       Metrics $q$ and $\sigma$ of a check node
  $I_q(\cdot)$, $I_\sigma(\cdot)$   Index sets used to determine destination nodes in message passing
  $\Lambda_i$                       A posteriori log-likelihood ratio of a code symbol ("soft output")
  $\hat{v}_i$                       Estimated code bit

3.1.1 Input

The input to the iterative decoding algorithm for cyclic codes consists of the length $n$, the values (or positions, in the binary case) of the nonzero coefficients of the parity-check polynomial $h(x)$, and (optionally) its Hamming weight $w_h$. The total number of input parameters is thus about $w_h + 1$ integers, or approximately $(w_h + 1)\log_2 n$ bits.

3.1.2 Cyclic shift matrices

Upon receiving the input parameters of the particular code structure that is to be used in the decoder, the connections between code nodes and check nodes are determined. To this end, the rows and columns of the extended parity-check matrix $H_e$ are re-created by taking the cyclic shifts $x^j h(x)$ and $x^{n-j} h(x^{-1})$, respectively, modulo $x^n - 1$. Alternatively, the structure of the matrix $H_e$ (i.e., the link tables $zc$ and $cz$) can be directly input to the decoder. This requires on the order of $2 n w_h$ integers, or approximately $2 n w_h \log_2 n$ bits.

3.1.3 Initialization

For $i \in \{0, 1, \ldots, n-1\}$, compute the LLR of the received symbols,

  $\lambda_i = (4\sqrt{E_s}/N_0)\, r_i$.

For $i \in \{0, 1, \ldots, n-1\}$ and $l \in \{0, 1, \ldots, w_h - 1\}$, initialize the metrics

  $q_v(i,l) = \lambda_i$ and $\sigma_z(i,l) = 0$.
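The initialization step can be sketched as follows, assuming BPSK amplitudes $\pm\sqrt{E_s}$ over the AWGN channel so that $4\sqrt{E_s}/N_0$ is the standard channel-LLR scaling (function and variable names are illustrative, not from the paper):

```python
import math

# Initialization (Sec. 3.1.3): channel LLRs, then every code-node metric
# starts at the channel LLR and every check-node metric starts at zero.

def initialize(r, w_h, Es=1.0, N0=1.0):
    n = len(r)
    lam = [4.0 * math.sqrt(Es) / N0 * ri for ri in r]   # lambda_i
    q_v = [[lam[i]] * w_h for i in range(n)]            # q_v(i, l) = lambda_i
    sigma_z = [[0.0] * w_h for _ in range(n)]           # sigma_z(i, l) = 0
    return lam, q_v, sigma_z
```

For instance, with $E_s = N_0 = 1$ a received value $r_i = 0.5$ yields $\lambda_i = 2.0$, replicated into all $w_h$ code-node metric slots.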

3.1.4 Iterative step 1: Message passing

For $i \in \{0, 1, \ldots, n-1\}$, initialize the index sets $I_q(i) = I_\sigma(i) = 0$.

For $i \in \{0, 1, \ldots, n-1\}$, propagate the metrics:

1. Top-down propagation. For $l \in \{0, 1, \ldots, w_h - 1\}$, load the $q$-metrics into check node memory:

  $q_z\big(cz(i,l),\, I_q(cz(i,l))\big) = q_v(i,l)$, and $I_q(cz(i,l)) = I_q(cz(i,l)) + 1$.

2. Bottom-up propagation. For $l \in \{0, 1, \ldots, w_h - 1\}$, load the $\sigma$-metrics into code node memory:

  $\sigma_v\big(zc(i,l),\, I_\sigma(zc(i,l))\big) = \sigma_z(i,l)$, and $I_\sigma(zc(i,l)) = I_\sigma(zc(i,l)) + 1$.
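A minimal sketch of the two propagation passes, with the index sets realized as write counters and the link tables generated from the cyclic-shift structure rather than a stored matrix (all names here are illustrative, not the paper's):

```python
# Message passing (Sec. 3.1.4): counters I_q and I_sigma act as write
# pointers, so each node's incoming messages fill its local memory slots
# in arrival order. Link tables come from the cyclic shifts of h(x).

def make_links(support, n):
    """cz[i]: checks of code node v_i; zc[j]: code nodes of check z_j."""
    cz = [sorted((i - p) % n for p in support) for i in range(n)]
    zc = [sorted((p + j) % n for p in support) for j in range(n)]
    return cz, zc

def pass_messages(q_v, sigma_z, cz, zc, n, w_h):
    """One pass of top-down (q) and bottom-up (sigma) propagation."""
    q_z = [[0.0] * w_h for _ in range(n)]       # q-metrics at check nodes
    sigma_v = [[0.0] * w_h for _ in range(n)]   # sigma-metrics at code nodes
    I_q = [0] * n
    I_s = [0] * n
    for i in range(n):
        for l in range(w_h):
            j = cz[i][l]                        # top-down propagation
            q_z[j][I_q[j]] = q_v[i][l]
            I_q[j] += 1
            j = zc[i][l]                        # bottom-up propagation
            sigma_v[j][I_s[j]] = sigma_z[i][l]
            I_s[j] += 1
    return q_z, sigma_v
```

Because every row and column of the EPC matrix has weight $w_h$, each counter ends exactly at $w_h$, so the fixed-size memories are filled and never overflow.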

3.1.5 Iterative step 2: Metric computation

For $i \in \{0, 1, \ldots, n-1\}$, compute the metrics:

1. Z-processor ($\sigma$-metric computer)

(a) Compute the total sum

Suggest Documents