ON SURVIVOR ERROR PATTERNS FOR MAXIMUM LIKELIHOOD SOFT DECODING

Jakov Snyders
Department of Electrical Engineering - Systems, Tel Aviv University, Ramat Aviv 69978, Israel

Summary

Ordinarily a large number of error patterns is encountered in maximum likelihood soft decision syndrome decoding of a binary linear block code. However, many of them can be eliminated (without scoring) on the basis of the sorted confidence values of the hard-detected bits. We present some properties that an error pattern should possess in order to survive such an elimination procedure. By applying these necessary conditions for survivability the family of error patterns is substantially reducible, and soft decision decoding is facilitated.

Consider an (n,k,d) binary linear block code specified by a parity-check matrix H. Assume that codewords are transmitted with equal probability through a memoryless channel. Let y be the bit-by-bit hard-detected version of the received word, and assume that z, the corresponding syndrome, is nonzero. A set of l linearly independent columns of H that adds up to z will be called an l-pattern. Thus l does not exceed m, where m = n - k is the number of check bits. A pattern is an l-pattern with unspecified l. Assume, without essential loss of generality, that d ≥ 3. Let Γ be the set of columns of H; then |Γ| = n due to the aforementioned assumption. The weight of a column h of H is defined to be the confidence value (magnitude of the log-likelihood ratio) of the bit associated with h. The weight of a subset Φ of Γ is the sum of the weights of the elements of Φ.

An algorithm [1] for carrying out maximum likelihood decoding may now be phrased as follows: 1) among all the l-patterns, with l ranging up to m, find (the usually unique) one with least weight; then 2) complement y at the locations identified by the pattern that was found. Since the population of patterns is rather vast (for most codes), adequate procedures have to be devised for efficient implementation of this conceptually clear multiple-check generalization of the single-check Wagner rule [2]. The two methods of [1] are efficient enough only for rather small values of m.

In [3] an approach is presented for the preparation of substantially reduced lists of patterns. The lists are based on the knowledge of all or part of γ_1, γ_2, ..., γ_m, where γ_1 is the least weighing column of H and γ_j, for each j = 2, 3, ..., m, is the least weighing member of Γ - Ls{γ_1, γ_2, ..., γ_{j-1}}, with Ls standing for linear span. Let R_r = {γ_1, γ_2, ..., γ_r}, where r ≤ m. A pattern Φ is said to be R_r-eliminated if there exists a pattern Δ, Δ ≠ Φ, such that a) Δ - Φ ⊆ R_r, and b) there is a one-to-one mapping ψ from Δ - Φ into Φ - Δ such that ψ(γ_j) ∈ Γ - Ls(R_{j-1}) for each γ_j ∈ Δ - Φ. A pattern Δ for which conditions a) and b) are fulfilled is called an R_r-eliminator of Φ. A pattern that has no R_r-eliminator is an R_r-survivor. The definition implies that maximum likelihood decoding may be accomplished by scoring only the R_r-survivor patterns; the other patterns may be deleted on the basis of the knowledge of R_r only. The number of R_r-survivors for an r close to m is usually a rather small fraction of the total number of patterns. For some codes it is possible to specify, with a few predetermined lists, the set of survivors for all the syndromes. In particular, for a Hamming code a single list of R_{m-1}-survivors is essentially sufficient. Given the syndrome z, the most likely pattern is then obtained by scoring the entries of one such list, the one associated with z. This procedure allows, particularly for some high rate codes, significant savings in computation.

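For illustration only, the two-step procedure phrased above can be carried out by exhaustive search. The following Python sketch does so directly; it is not the reduced-list algorithm of [1] or [3], the function and variable names are ours, and the enumeration is exponential, so it is practical only for very small codes.

import itertools
import numpy as np

def ml_syndrome_decode(H, llr):
    # Illustrative brute-force version of the two-step (generalized Wagner) rule.
    # H   : (m, n) binary parity-check matrix as a 0/1 integer array.
    # llr : length-n log-likelihood ratios; negative LLR is hard-detected as 1.
    m, n = H.shape
    llr = np.asarray(llr, dtype=float)
    y = (llr < 0).astype(int)              # bit-by-bit hard decisions
    w = np.abs(llr)                        # confidence values = column weights
    z = H.dot(y) % 2                       # syndrome of the hard-detected word
    if not z.any():
        return y                           # zero syndrome: y is already a codeword
    best_cost, best_pattern = float("inf"), None
    # Step 1: among all l-patterns (sets of columns of H adding up to z),
    # l = 1, ..., m, find one of least weight.  Restricting l to at most m is
    # safe: a dependent set summing to z contains a lighter subset that also
    # sums to z.
    for l in range(1, m + 1):
        for cols in itertools.combinations(range(n), l):
            cols = list(cols)
            if np.array_equal(H[:, cols].sum(axis=1) % 2, z):
                cost = w[cols].sum()
                if cost < best_cost:
                    best_cost, best_pattern = cost, cols
    # Step 2: complement y at the locations identified by the pattern found.
    y[best_pattern] ^= 1
    return y
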
For example, in decoding the (31,26,3) code a gain factor exceeding 5 was achieved with respect to all previous decoding methods, e.g. [1], [4], [5], [6]. Also, a gain factor close to two was obtained by applying coset decoding, implemented with reduced lists of patterns, to the (32,16,8) Reed-Muller code.

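Before stating the necessary conditions, it may help to make the ordering γ_1, ..., γ_m concrete. The sketch below is one possible greedy construction following the definition given above (γ_1 a least-weight column, γ_j a least-weight column outside Ls{γ_1, ..., γ_{j-1}}); the GF(2) span bookkeeping and the names are ours, not necessarily those of [3].

import numpy as np

def gamma_ordering(H, llr):
    # Greedy selection of (the column indices of) gamma_1, ..., gamma_m.
    # Returns a list `chosen`; R_r corresponds to its first r entries.
    m, n = H.shape
    w = np.abs(np.asarray(llr, dtype=float))   # column weights (confidence values)
    order = np.argsort(w)                      # columns by increasing weight; ties arbitrary
    basis = []                                 # reduced GF(2) basis of the current span
    chosen = []

    def reduce_mod_basis(v):
        # Reduce v against the basis; a nonzero result means v is outside the span.
        v = v.copy()
        for b in basis:
            pivot = int(np.argmax(b))          # position of the leading 1 of b
            if v[pivot]:
                v = (v + b) % 2
        return v

    for idx in order:
        v = reduce_mod_basis(H[:, idx] % 2)
        if v.any():                            # outside Ls{gamma_1, ..., gamma_{j-1}}
            chosen.append(int(idx))
            basis.append(v)
            if len(chosen) == m:
                break
    return chosen
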

Complete characterization of the survivors in general, by straightforward description or by explicit necessary and sufficient conditions, appears to be rather cumbersome. Nonetheless, the following simple necessary condition for survivability is applicable.

I. An R_r-survivor (m-j)-pattern, where 0 ≤ j < m/2, contains at least m-2j elements of R_r.

By this result many, usually the vast majority, of the patterns with cardinality exceeding m/2 are purged. Many additional patterns, even those with small cardinality, may be deleted by applying other necessary conditions, two of which now follow.

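A direct way to use condition I as a filter, assuming a pattern and R_r are represented as Python sets of column indices (a representation chosen here for illustration), is sketched below.

def passes_condition_I(pattern, R_r, m):
    # Condition I: a surviving (m-j)-pattern with 0 <= j < m/2 contains at
    # least m-2j elements of R_r.  Returns False only when the pattern is
    # certainly not an R_r-survivor.
    j = m - len(pattern)                 # the pattern is an (m-j)-pattern
    if 0 <= j < m / 2:
        return len(pattern & R_r) >= m - 2 * j
    return True                          # the condition is silent on smaller patterns

A candidate list can then be pruned with, for instance, [p for p in candidates if passes_condition_I(p, R_r, m)].
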

II. Let Φ be an R_r-survivor with cardinality of at least 3. Then no two or more elements of Φ sum up to any γ_j, j ≤ r.

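Condition II can likewise be checked by brute force over the subsets of a candidate pattern. In the sketch below, the γ ordering is assumed to be supplied as a list of column indices (gamma[0] standing for γ_1); the test is exponential in the pattern size and is meant only to illustrate the statement.

import itertools
import numpy as np

def passes_condition_II(pattern, H, gamma, r):
    # Condition II: a surviving pattern of cardinality >= 3 has no subset of
    # two or more elements whose GF(2) sum equals some gamma_j with j <= r.
    if len(pattern) < 3:
        return True
    targets = {tuple(map(int, H[:, g] % 2)) for g in gamma[:r]}
    for size in range(2, len(pattern) + 1):
        for subset in itertools.combinations(pattern, size):
            s = tuple(map(int, H[:, list(subset)].sum(axis=1) % 2))
            if s in targets:
                return False             # eliminated: cannot be an R_r-survivor
    return True
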

III. Let an R_r-survivor Φ contain h = γ_a + γ_b + Σ_{j∈T} γ_j, where γ_a and γ_b are distinct members of R_r - Φ and {γ_j : j ∈ T} ⊆ R_r ∩ Φ. Then j < min(a,b) for all j ∈ T (T is possibly empty).

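For patterns that pass such necessary conditions, survivability can still be settled directly from the definition of an R_r-eliminator given earlier, by searching over competing patterns for the same syndrome. The following brute-force sketch does this; the argument in_span_upto is a hypothetical helper, assumed to be supplied by the caller, that tests whether a given column lies in Ls(R_{j-1}).

import itertools

def is_R_r_eliminated(phi, candidates, gamma, in_span_upto):
    # phi          : set of column indices (the pattern under test).
    # candidates   : iterable of patterns (sets of column indices) for the same syndrome.
    # gamma        : list of column indices gamma_1, ..., gamma_r (gamma[0] is gamma_1).
    # in_span_upto : hypothetical helper; in_span_upto(col, j) is True when
    #                column `col` lies in Ls(R_{j-1}) = Ls{gamma_1, ..., gamma_{j-1}}.
    R_r = set(gamma)
    phi = frozenset(phi)
    for delta in candidates:
        delta = frozenset(delta)
        if delta == phi:
            continue
        d_minus_p = delta - phi
        p_minus_d = phi - delta
        if not d_minus_p <= R_r:                       # condition a) fails
            continue
        sources = sorted(d_minus_p)
        if len(sources) > len(p_minus_d):              # no one-to-one map possible
            continue
        # Condition b): look for a one-to-one map psi with psi(gamma_j) outside Ls(R_{j-1}).
        for images in itertools.permutations(sorted(p_minus_d), len(sources)):
            if all(not in_span_upto(target, gamma.index(col) + 1)
                   for col, target in zip(sources, images)):
                return True                            # delta is an R_r-eliminator of phi
    return False                                       # phi survives w.r.t. the candidates
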

With the aid of such necessary conditions one may delete many of those patterns that are not survivors. The elimination may then be completed, if desired, by checking each remaining entry for survivability. Of course, the elimination procedure has to be carried out only once for each code, when the decoder is being devised. Based on the necessary conditions and on additional checks, the essentially single list of survivors for the (63,57,3) code was readily obtained. A quite brief representation of that relatively lengthy list is made possible with the aid of permutations. For a still longer Hamming code the list of survivors is of course obtainable in the same manner, and similar permutation rules for its concise representation apply. With the aid of the list of survivors, maximum likelihood decoding of the (63,57,3) code is performable by no more than 800 and sometimes by only 62 real additions (both extremes are rather unlikely for a moderately noisy channel). In contrast, all previously published decoding methods require several thousand additions in any case. For large mid-rate codes knowledge of the lists of survivors may provide a means for devising suboptimal decoders with improved performance relative to limited-distance decoders (see e.g. [7], [8]).

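Once a survivor list associated with the syndrome at hand is available (however it was prepared and represented off line), the on-line part of the decoder reduces to scoring the entries of that list, as in the following sketch. Here patterns_for_z is a hypothetical precomputed list of candidate patterns, each given as column indices, for the (nonzero) syndrome of the hard-detected word.

import numpy as np

def decode_with_list(llr, patterns_for_z):
    # Score each candidate pattern by the sum of the confidence values of its
    # positions and complement the hard decisions at the least-weight entry.
    llr = np.asarray(llr, dtype=float)
    y = (llr < 0).astype(int)                        # hard decisions
    w = np.abs(llr)                                  # confidence values
    best = min(patterns_for_z, key=lambda p: w[list(p)].sum())
    y[list(best)] ^= 1
    return y

Scoring an entry takes only a handful of real additions, which is consistent with the addition counts quoted above.
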

References

[1] J. Snyders and Y. Be'ery, "Maximum likelihood soft decoding of binary block codes and decoders for the Golay codes," IEEE Trans. Inform. Theory, vol. IT-35, pp. 963-975, 1989.
[2] R.A. Silverman and M. Balser, "Coding for constant-data-rate systems," IRE Trans. Inform. Theory, vol. PGIT-4, pp. 50-63, 1954.
[3] J. Snyders, "Reduced lists of error patterns for maximum likelihood soft decoding," IEEE Trans. Inform. Theory, to be published.
[4] J.K. Wolf, "Efficient maximum likelihood decoding of linear block codes using a trellis," IEEE Trans. Inform. Theory, vol. IT-24, pp. 76-80, 1978.
[5] J.H. Conway and N.J.A. Sloane, "Soft decoding techniques for codes and lattices, including the Golay code and the Leech lattice," IEEE Trans. Inform. Theory, vol. IT-32, pp. 41-50, 1986.
[6] G.D. Forney, Jr., "Coset codes II: Binary lattices and related codes," IEEE Trans. Inform. Theory, vol. IT-34, pp. 1152-1187, 1988.
[7] E.R. Berlekamp, "The technology of error-correcting codes," Proc. IEEE, vol. 68, pp. 564-592, 1980.
[8] E.R. Berlekamp, "The construction of fast, high-rate, soft decision block decoders," IEEE Trans. Inform. Theory, vol. IT-29, pp. 372-377, 1983.
