Soft-Decision Decoding using Ordered Recodings on the Most Reliable Basis∗

Yingquan Wu and Christoforos N. Hadjicostis
Coordinated Science Laboratory and Department of Electrical and Computer Engineering
University of Illinois at Urbana-Champaign

March 31, 2005
Abstract — In this correspondence we investigate soft-decision decoding of binary linear block codes using ordered recodings of test error patterns (TEPs) on the so-called "most reliable basis." We demonstrate the optimality of the most reliable basis, among all possible bases, from a probabilistic viewpoint. We then propose a suboptimal algorithm which utilizes the reprocessing ordering. In particular, the proposed algorithm incorporates three techniques that render it computationally very efficient: (i) an iterative reference recoding technique which simplifies the recoding operation required for each TEP; (ii) an adaptive skipping rule which can significantly reduce the average number of recodings; (iii) a preprocessing rule that significantly reduces the number of additions in the evaluation of the likelihood. Simulation results with codes of relatively large length show that the proposed heuristic algorithm is computationally very efficient in comparison with existing algorithms in the literature.
∗ Part of this work was presented at the 2001 IEEE Global Telecom. Conf., San Antonio, USA, Nov. 2001. This work has been supported in part by NSF Career Award 0092696 and in part by the Motorola Research Center at the University of Illinois at Urbana-Champaign.
I. Introduction
Maximum-likelihood (ML) soft-decision decoding provides an asymptotic 3 dB gain in comparison to bounded-distance hard-decision decoding; however, ML soft-decision decoding has been shown to be NP-hard, whereas, for many existing good codes, bounded-distance hard-decision decoding can be effectively achieved with quadratic complexity [1]. Finding computationally efficient and practically implementable soft-decision decoding algorithms is a topic that has been investigated extensively and remains an open and challenging problem [2]–[13]. In [2], it is shown that all linear block codes have a trellis structure and thus the Viterbi algorithm can be utilized for ML decoding. However, in order to achieve ML performance, the Viterbi algorithm employs (average) computational complexity and space (memory) complexity that are exponential in the code length. Thus, the Viterbi algorithm is only applicable to codes with small redundancy or to codes with a small number of codewords. Another approach for performing soft-decision decoding is successive algebraic decoding [3], [4]. In [3], Forney provided for the first time a soft-decision decoding approach that utilizes the concept of a generalized minimum distance (GMD). In [4], Chase presented an algorithm which searches for codewords by successively applying algebraic decoding to candidate test error patterns (TEPs) corresponding to certain least reliable bit positions. This technique was shown to be asymptotically optimal as the SNR goes to infinity in the AWGN channel.

Recently there has been interest in algorithms which perform soft-decision decoding by utilizing recodings of TEPs on the so-called "most reliable basis" [5]–[10]. These algorithms first determine the most reliable basis (MRB) for the received word and construct a new, systematic generator matrix G̃ = [I P̃] that is associated with the MRB. Then, TEPs (associated with the MRB) are generated iteratively and recoded into codewords using G̃. In [5], TEPs were ordered according to decreasing likelihood in j-ary output channels. In [6, 7], TEPs were ordered in increasing Hamming weight, up to a pre-determined maximum weight. The algorithm was shown to be practically optimal when the maximum Hamming weight of TEPs is min{⌈d/4 − 1⌉, k}, where d denotes the minimum Hamming distance of the code and k denotes the information length. A pure ML decoding approach based on an efficient breadth-first search was presented by Gazelle and Snyders in [10]. Another class of algorithms which also utilize the most reliable basis was studied in [11, 12]. These algorithms implement soft-decision decoding through the use of a generalized Dijkstra graph-searching technique.

In this paper we investigate ordered recodings on the MRB given a constraint on the maximum number of recodings. We show that the likelihood ordering proposed in [5], when constrained by the maximum number of TEPs, minimizes the list decoding error probability. We then justify the use of the MRB by showing that, for recodings of TEPs in any "plausible" ordering (as will be discussed in context), the MRB achieves smaller list error probability than other bases. We proceed to present an efficient sub-optimal algorithm which compromises the optimal ordering in order to eliminate large memory and complex sorting, and furthermore, to eliminate unpromising TEPs dynamically. More specifically, the algorithm utilizes the fixed reprocessing ordering, which is independent of specific bit reliabilities, as introduced in [6, 10], while incorporating three novel and efficient tactics, namely, a probabilistic skipping rule to eliminate unpromising TEPs, a preprocessing rule to discard unpromising TEPs by computing only a partial WHD, and a reference recoding scheme to simplify the recoding operation.

The rest of the paper is organized as follows. In Section II we present background information. In Section III we carry out ordering analysis, and in Section IV we present an improved order reprocessing algorithm. Simulation studies are included in Section V, and conclusions are drawn in Section VI.
II. Preliminaries
Let C(N, K) be a binary linear block code of length N and dimension K that is used for error control over the additive white Gaussian noise (AWGN) channel, under binary-phase-shift-keying (BPSK) signaling of unit energy. More specifically, a bipolar version of a codeword c = [c1 c2 . . . cN], i.e., [(−1)^c1 (−1)^c2 . . . (−1)^cN], is used for transmission. At the output of the demodulator, the received unquantized word r = [r1 r2 . . . rN] takes the form ri = (−1)^ci + ni, i = 1, 2, . . . , N, where the ni are independent and identically distributed Gaussian random variables with zero mean and variance N0/2. We adopt the common assumption that codewords are equally probable, i.e., Pr(c) = 2^−K for any c ∈ C. Under this model, the ith-bit log-likelihood ratio, which is formally defined as

δi = ln [ Pr(ci = 0 | ri) / Pr(ci = 1 | ri) ],

can be simplified to δi = 4ri/N0, i.e., the received symbol ri is simply a scaled log-likelihood ratio. Bit hard-decision sets yi to 0 if ri > 0 and to 1 otherwise; the reliability of the corresponding bit decision, which is defined as a scaled version of the magnitude of the log-likelihood ratio, is conveniently represented by αi = |ri|.

Suppose we are given a binary block code C(N, K) with generator matrix G. Upon receiving r (in the form explained above), one can construct a new systematic generator matrix G̃ associated with the most reliable (information) basis (MRB), via a transformation from the original generator matrix G. The following greedy search algorithm can be used to obtain the MRB [6].

Greedy Algorithm for Obtaining the MRB
1. Sort bit indices in decreasing order of reliability, i1, i2, . . . , iN (a permutation of 1, 2, . . . , N).
2. Set χ = ∅.
3. For l = 1, 2, . . ., till |χ| = K, do: check whether the il-th column vector of G is independent of the column vectors of G that are associated with the indices in χ. If so, add il to χ; otherwise discard il.
4. Permute (column-wise) the original matrix G in accordance with χ and then systematize the resulting matrix.
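The greedy step above can be sketched in code. The following is a minimal illustration (our own sketch, not the authors' implementation; all function and variable names are ours): independence testing is done by on-line Gaussian elimination over GF(2), after which the chosen columns are moved to the front and the matrix is systematized by row reduction.

```python
import numpy as np

def most_reliable_basis(G, alpha):
    """Greedy MRB selection: scan positions in decreasing reliability,
    keep the first K linearly independent columns of G (over GF(2)),
    then permute them to the front and systematize."""
    K, N = G.shape
    order = sorted(range(N), key=lambda i: -alpha[i])   # i_1, ..., i_N
    pivots, chi = {}, []        # pivot row -> reduced column; MRB indices
    for i in order:
        col = G[:, i].copy()
        while col.any():        # reduce against the pivots found so far
            p = int(np.flatnonzero(col)[0])
            if p not in pivots:
                pivots[p] = col             # independent: keep position i
                chi.append(i)
                break
            col = col ^ pivots[p]
        if len(chi) == K:
            break
    perm = chi + [j for j in range(N) if j not in chi]
    Gp = G[:, perm].copy()      # MRB columns first
    for r in range(K):          # row-reduce to [I_K | P] over GF(2)
        piv = r + int(np.flatnonzero(Gp[r:, r])[0])
        Gp[[r, piv]] = Gp[[piv, r]]
        for rr in range(K):
            if rr != r and Gp[rr, r]:
                Gp[rr] ^= Gp[r]
    return chi, Gp, perm
```

On a received word r one would take alpha = |r|, as described above.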
Note that when K/N > 1/2, the above procedure (Steps 3 and 4 in particular) can be made computationally more efficient by operating on the parity-check matrix instead of the generator matrix [6]. Note that the re-ordered reliabilities in the MRB and in the redundancy part are each in decreasing order, i.e.,

α̃1 ≥ α̃2 ≥ . . . ≥ α̃K and α̃K+1 ≥ α̃K+2 ≥ . . . ≥ α̃N.   (1)
In the sequel, "∼" stands for the ordering associated with the MRB; subscript "B" stands for the index set {1, 2, 3, . . . , K} associated with the basis and "R" stands for the index set {K + 1, K + 2, . . . , N} associated with the redundancy part. Since matrix G̃ is in systematic form, we will also write it as G̃ = [IK P̃], where IK denotes the K × K identity matrix and P̃ reflects the dependency of the (unreliable) redundancy bits on the (reliable) information bits. Having obtained the reordered G̃, ỹ and α̃ associated with the MRB, one can immediately generate the first candidate codeword by recoding the information bits ỹB:

c̃0 = ỹB G̃ = [ỹB  ỹB P̃].   (2)
Thus, if no errors occur in the MRB, then the above recoding operation successfully retrieves the transmitted codeword. When only a few errors are present in the MRB, one can develop search strategies that iteratively flip bits of ỹB (the information bits corresponding to the MRB), each time recoding the resulting information vector into a codeword (using G̃) and evaluating its likelihood. The action of flipping bits in ỹB is equivalent to adding to ỹB a binary test error pattern (TEP) e = [e1 e2 . . . eK] and performing a recoding operation with respect to the TEP e as

c̃e = (ỹB ⊕ e) G̃ = [ỹB ⊕ e  (ỹB ⊕ e) P̃],   (3)
where ⊕ denotes the bit-wise XOR operator and ỹB denotes the information word corresponding to the MRB. The weighted Hamming distance (WHD) with respect to a TEP e, denoted by D(e), is defined as the sum of the reliabilities corresponding to the entries in which the recoded codeword c̃e differs from ỹ, i.e.,

D(e) = Σ_{1≤i≤N : c̃e,i ≠ ỹi} α̃i.   (4)
It is well known that maximum-likelihood (ML) soft-decision decoding is equivalent to searching for the TEP that minimizes the WHD D(·). It can be verified that when α̃i = 1, i = 1, 2, . . . , N, the above formula evaluates to the Hamming distance between c̃e and ỹ. The WHD D(e) can be naturally decomposed into two parts, the TEP likelihood and the redundancy WHD, defined respectively as follows:

L(e) = Σ_{1≤i≤K : ei=1} α̃i,    DR(e) = Σ_{K+1≤i≤N : c̃e,i ≠ ỹi} α̃i.   (5)
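As a concrete illustration of (3)–(5), the following sketch (our own notation, not the paper's implementation) recodes a TEP and evaluates the WHD together with its decomposition. On the basis positions, c̃e differs from ỹ exactly where ei = 1, so D(e) = L(e) + DR(e) always holds.

```python
import numpy as np

def recode_and_whd(P, y, alpha, e):
    """Recode TEP e via (3) and evaluate the WHD (4) together with its
    decomposition (5) into TEP likelihood L(e) and redundancy WHD D_R(e).
    P: (K, N-K) part of the systematic G = [I_K P]; y: MRB-ordered
    hard decisions; alpha: MRB-ordered reliabilities."""
    K = P.shape[0]
    info = y[:K] ^ e                              # flip the TEP bits
    c = np.concatenate([info, info @ P % 2])      # c_e = (y_B xor e) G
    L_e = alpha[:K][e == 1].sum()                 # basis part of the WHD
    D_R = alpha[K:][c[K:] != y[K:]].sum()         # redundancy part
    D = alpha[c != y].sum()                       # full WHD (4)
    return c, L_e, D_R, D
```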
III. A Probabilistic View of the Most Reliable Basis
In [9], it is shown that the probability that a TEP e truly reflects the error pattern over the MRB is given by

Pr(c̃B^tr = e ⊕ ỹB | α̃) = exp(−L(e)) / Π_{i=1}^{K} (1 + exp(−α̃i)),   (6)
where c^tr denotes the transmitted codeword and Pr(e | α̃) denotes the conditional probability that c̃B^tr ⊕ ỹB = e conditioned on the reliability vector α̃. Clearly, the optimal ordering of TEPs in the sense of decreasing probability of retrieving the transmitted codeword, i.e.,

Pr(e^(0) | α̃) ≥ Pr(e^(1) | α̃) ≥ Pr(e^(2) | α̃) ≥ . . . ≥ Pr(e^(2^K−1) | α̃),   (7)

satisfies

L(e^(0)) ≤ L(e^(1)) ≤ L(e^(2)) ≤ . . . ≤ L(e^(2^K−1)).   (8)
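The equivalence between the probability ordering (7) and the likelihood ordering (8) is easy to check numerically. A small sketch (toy reliabilities of our choosing; code and names are ours):

```python
import itertools
import math

def tep_prob(e, alpha):
    """Pr(c_B^tr = e xor y_B | alpha) per (6)."""
    L = sum(a for a, ei in zip(alpha, e) if ei)   # TEP likelihood L(e)
    Z = 1.0
    for a in alpha:
        Z *= 1.0 + math.exp(-a)                   # normalizing product
    return math.exp(-L) / Z

alpha = [2.0, 1.5, 0.7, 0.3]                      # toy MRB reliabilities
teps = list(itertools.product([0, 1], repeat=len(alpha)))
by_prob = sorted(teps, key=lambda e: -tep_prob(e, alpha))
by_L = sorted(teps, key=lambda e: sum(a for a, ei in zip(alpha, e) if ei))
assert by_prob == by_L    # decreasing Pr  <=>  increasing L(e)
```

Since exp(−L(e)) is strictly decreasing in L(e) and the normalizer is constant, the two sort orders coincide; the probabilities in (6) also sum to 1 over all 2^K TEPs.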
Ordered recoding has been extensively studied in soft-decision decoding [5]–[10]. We next establish the superiority of the MRB over other bases in light of two characteristics: (i) the list error probability when TEPs are likelihood-ordered; (ii) the list error probability when TEPs are heuristically ordered in a "plausible" manner (to be defined shortly). We first present the following lemmas.

Lemma 1 Let α̃1 ≤ α̃2 ≤ . . . ≤ α̃K be the reliabilities associated with the MRB (here relabeled in increasing order). Let ᾱ1 ≤ ᾱ2 ≤ . . . ≤ ᾱK be the reliabilities associated with any other basis. Then, α̃i ≥ ᾱi, i = 1, 2, . . . , K. The proof of the lemma is by straightforward contradiction (see, e.g., [9]).
Lemma 2 Let a = {a1, a2, . . . , aK} and b = {b1, b2, . . . , bK} be two sets such that 0 ≤ bi ≤ ai ≤ 1, i = 1, 2, . . . , K. Let ∆ be a set of subsets of the index set {1, 2, 3, . . . , K} satisfying: if χ ∈ ∆ then all subsets of χ are also contained in ∆. Define

ηs(∆) = Σ_{χ∈∆} Π_{i∈χ} si / Π_{j=1}^{K} (1 + sj),

where s = {s1, s2, . . . , sK}. We have

ηa(∆) ≤ ηb(∆).

Proof: If ai = bi, i = 1, 2, . . . , K, then the result is trivial. Let al ≠ bl, i.e., al > bl. We define a′ = {a1, . . . , al−1, bl, al+1, . . . , aK}. For any χ ∈ ∆ such that l ∈ χ, we observe χ − {l} ∈ ∆. If we define two subsets of ∆, ∆1 = {χ ∈ ∆ : l ∈ χ} and ∆2 = {χ − {l} : χ ∈ ∆1}, then we can decompose ηa(∆) in the following manner:

ηa(∆) = Σ_{χ∈∆} Π_{i∈χ} ai / Π_{i=1}^{K} (1 + ai)
      = [ Σ_{χ∈∆1∪∆2} Π_{i∈χ} ai + Σ_{χ∈∆−∆1−∆2} Π_{i∈χ} ai ] / Π_{i=1}^{K} (1 + ai)
      = [ (1 + al) Σ_{χ∈∆2} Π_{i∈χ} ai + Σ_{χ∈∆−∆1−∆2} Π_{i∈χ} ai ] / Π_{i=1}^{K} (1 + ai)
      = Σ_{χ∈∆2} Π_{i∈χ} ai / Π_{1≤i≤K, i≠l} (1 + ai) + Σ_{χ∈∆−∆1−∆2} Π_{i∈χ} ai / Π_{i=1}^{K} (1 + ai).
We observe that in the last equality al does not appear in the first term and appears only in the denominator of the second term. This indicates ηa(∆) ≤ ηa′(∆).
Therefore, if we define the sequence a^(i) = {b1, . . . , bi, ai+1, . . . , aK}, i = 1, 2, 3, . . . , K, then we can iteratively apply the above to obtain ηa(∆) ≤ ηa^(1)(∆) ≤ ηa^(2)(∆) ≤ . . . ≤ ηa^(K)(∆) = ηb(∆). This concludes the proof of the lemma. □
By establishing the maps ai = exp(−ᾱi), bi = exp(−α̃i), and ∆ = {S(e^i) : i = 0, 1, 2, . . . , M − 1} for an arbitrary TEP sequence {e^i}, where S(·) denotes the set of indices of nonzero positions in a vector, we have Σ_{i=0}^{M−1} Pr(e^i | α̃) = ηb(∆). Thus, we obtain

Σ_{i=0}^{M−1} Pr(e^i | ᾱ) = ηa(∆) ≤ ηb(∆) = Σ_{i=0}^{M−1} Pr(e^i | α̃),   (9)
for any TEP sequence {e^i}_{i=0}^{M−1} such that the corresponding ∆ satisfies the condition in Lemma 2. Let ∆ and ∆̄ be, respectively, the sets corresponding to the likelihood-ordered TEPs on the MRB and on another basis, such that ∆ = {S(e^(i)) : i = 0, 1, 2, . . . , M − 1} and ∆̄ = {S(ē^(i)) : i = 0, 1, 2, . . . , M − 1}. Then Lemma 2 is invoked to conclude

Σ_{i=0}^{M−1} Pr(ē^(i) | ᾱ) ≤ Σ_{i=0}^{M−1} Pr(ē^(i) | α̃) ≤ Σ_{i=0}^{M−1} Pr(e^(i) | α̃).   (10)
Note that the condition on ∆ plays a significant role in the superiority of the MRB. In terms of TEPs, this condition means that if a TEP e is a candidate, then all its sub-TEPs ē such that S(ē) ⊂ S(e) must be candidates as well. This general model is often used as a foundation of analysis (see, e.g., [9], [14]). As a matter of fact, all orderings in the literature (e.g., [5]–[10]) satisfy the condition on ∆. We summarize the above characterizations in the following theorem.

Theorem 1 (i) When the optimal ordering in (7) is applied for recodings of TEPs, the use of the MRB minimizes the list error probability, as indicated by (10). (ii) For any ordering of TEPs where a TEP ē has higher priority than a TEP e whenever S(ē) ⊂ S(e), the use of the MRB minimizes the list error probability, as indicated by (9).
IV. An Improved Order Reprocessing Algorithm
In this section, we first review the order−w reprocessing proposed in [6], and then improve its efficiency in three respects, namely, an adaptive skipping rule to eliminate unpromising TEPs, a
Table 1: Order−w reprocessing of TEPs (weight 1 → weight 2 → . . . → weight w; TEPs are generated one weight class at a time, in dictionary order within each class).

  00. . .0000
  00. . .0001   00. . .0011   . . .   00. . .0 11. . .1 (w ones)
  00. . .0010   00. . .0101   . . .   00. . .010 1. . .1 (w−1 ones)
  00. . .0100   00. . .0110   . . .   . . .
     ...           ...                   ...
  100. . .000   110. . .000   . . .   11. . .1 (w ones) 0. . .00
preprocessing rule to discard unpromising TEPs by computing only a partial WHD, and a reference recoding scheme to simplify the recoding operation. Define

N(e) = 2^{K·w(e)} Σ_{i=1}^{K} 2^{−i} ei,   (11)
where w(·) denotes the (Hamming) weight. Order−w reprocessing sorts TEPs up to weight w in increasing order of this numerical function N, as shown in Table 1. Notice that N exhibits a two-level hierarchy: (i) weight dominates the ordering, i.e., if w(e1) < w(e2) then N(e1) < N(e2); (ii) when two TEPs have equal weight, dictionary ordering applies, e.g., N([0 1 0 0 0 1 1 1]) > N([0 0 1 1 1 1 0 0]).

We propose an auxiliary rule to help rule out TEPs that are not promising without explicitly performing the recoding: for a given TEP e, we estimate the redundancy WHD associated with its codeword c̃e from below by

D̂R(e) = L(e) λ Σ_{j=K+1}^{N} α̃j / Σ_{i=1}^{K} α̃i,   (12)
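The ordering function (11) can be sketched directly in code. The following is our own illustration (we scale N by 2^K so that the arithmetic is exact integer arithmetic; the ordering is unchanged); it reproduces the two-level hierarchy, the dictionary-order example from the text, and the sequence of Table 1 by sorting. The paper later generates this sequence iteratively, via the evolution (14)–(15), instead of sorting.

```python
from itertools import combinations

def N_func(e):
    """Integer-scaled version of (11): 2^{K w(e)} * sum_i 2^{K-i} e_i.
    Weight dominates; equal weights fall back to dictionary order."""
    K, w = len(e), sum(e)
    return (1 << (K * w)) * sum(ei << (K - 1 - i) for i, ei in enumerate(e))

def order_w_teps(K, w):
    """All TEPs of weight <= w, sorted into order-w reprocessing order."""
    teps = []
    for wt in range(w + 1):
        for pos in combinations(range(K), wt):
            e = [0] * K
            for p in pos:
                e[p] = 1
            teps.append(tuple(e))
    return sorted(teps, key=N_func)

# weight hierarchy and the dictionary tie-break example from the text
assert N_func((0,1,0,0,0,1,1,1)) > N_func((0,0,1,1,1,1,0,0))
```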
where λ > 0 is a parameter to be determined. We discard e if D̂(e) = L(e) + D̂R(e) is greater than D(e*), where e* denotes the current most likely TEP (i.e., the one corresponding to the codeword that minimizes the WHD D(·) so far). A simpler implementation is to define the threshold

L* = ρ D(e*),    ρ = Σ_{i=1}^{K} α̃i / ( Σ_{i=1}^{K} α̃i + λ Σ_{j=K+1}^{N} α̃j ),   (13)

and discard e if L(e) ≥ L*. This rule is adaptive, depending on the current most likely TEP e*. Our simulations suggest that this adaptive skipping rule can be extremely efficient in ruling out TEPs that are not promising, resulting in a significantly lower average complexity.
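A minimal sketch of the threshold computation in (13) (function and variable names are ours); note that the threshold tightens every time a better e* is found:

```python
def skip_threshold(alpha, K, lam, D_best):
    """rho = sum_B / (sum_B + lam * sum_R), per (13); a TEP e is
    discarded whenever L(e) >= L* = rho * D(e*)."""
    sum_B = sum(alpha[:K])       # reliabilities over the MRB
    sum_R = sum(alpha[K:])       # reliabilities over the redundancy part
    rho = sum_B / (sum_B + lam * sum_R)
    return rho * D_best

alpha = [0.9, 0.8, 0.7, 0.3, 0.2, 0.1]   # toy reliabilities, K = 3
L_star = skip_threshold(alpha, K=3, lam=2.0, D_best=1.5)
```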
We also propose a preprocessing rule to relieve the computational burden of fully computing the WHD. More specifically, if the WHD over the MRB and the τ most reliable positions of the redundancy part, denoted by D[1, K+τ](e) (following the order in (1)), exceeds a threshold Θ, the TEP e is discarded without further consideration. Note that D[1, K+τ](e) = L(e) + D[K+1, K+τ](e). As will be shown shortly, the evaluation of the TEP likelihood L(e) requires only one addition. Also, we attempt to choose τ ≪ N − K. Thus, the computation of D[1, K+τ](e) requires far fewer real additions than that of D(e).

We next develop an explicit evolution function from a TEP to the next one. Let I(e) denote the array of the nonzero indices of a TEP e, such that I1 > I2 > . . . > Iw(e). Let e^(κ) denote the TEP corresponding to the index array [I1, . . . , Iκ−1, Iκ − 1, Iκ+1, . . . , Iw(e)]. Let ē denote the preceding TEP (herein we abuse the notation "¯" to mean the preceding one). We observe that the regular evolution can be explicitly expressed in the form

I = [K, K − 1, . . . , K − κ + 2, Īκ − 1, Īκ+1, . . . , Īw(ē)],   (14)
where κ denotes the key index, defined as

κ = min{k : Īk+1 < Īk − 1}.   (15)
We next discuss a TEP skipping criterion given that the preceding TEP ē is invalid in the sense that L(ē) ≥ L*. The essential idea is to avoid generating TEPs e whose associated reliabilities increase bitwise, i.e., to avoid the case Īi ≥ Ii, i = 1, 2, . . . , w(ē) (recall that α̃1 ≥ α̃2 ≥ . . . ≥ α̃K). As we can see, this kind of skipping only rules out invalid TEPs and is independent of specific reliability values. In the context of dictionary ordering, the new TEP e is associated with

I = [K, K − 1, . . . , K − κ + 2, Īκ − 1, Īκ+1, . . . , Īw(ē)],   (16)
where κ denotes the key index, defined as

κ = min{k : Īk+1 < Īk − 1, Īk−1 < K − k + 2}.   (17)
Note that the first condition indicates that Īκ can be decreased by 1 (corresponding to increasing reliability), whereas the second condition indicates that Īκ−1 is not at its maximum and thus can be increased (corresponding to decreasing reliability). We note that the skipping evolution expression is otherwise identical to the regular one, except for this second condition. This can be easily justified: to make the next TEP potentially valid, there must be an index m such that Īm < Im. To obtain the next TEP with the smallest order, the corresponding positions Im should be the largest. The following examples
shed some light on typical skipping evolutions:

ē = [0 0 1 0 1 1 1 0 0] → e = [0 0 1 1 0 0 0 1 1],
ē = [0 1 0 0 1 0 0 0 0] → e = [1 0 0 0 0 0 0 0 1],
ē = [0 0 1 1 1 0 0 1 1] → e = [0 1 0 0 0 1 1 1 1],
ē = [0 0 0 1 0 0 0 1 1] → e = [0 0 0 0 0 1 1 1 1],

where the underscored bit indicates the key index. Note that in the last example the key index does not exist. We observe that in both evolutions the essential idea is to reduce the key index by one while setting the larger indices to their maximum values. This insightful observation leads to an efficient reference recoding operation in which a TEP is treated virtually as a unit vector (i.e., involving only a single bit flip). We present the complete algorithm in the following, and justify the reference recoding thereafter.

Improved Order Reprocessing Algorithm

• Input: λ, τ, Θ; P̃, ỹ, α̃ satisfying α̃1 ≥ α̃2 ≥ . . . ≥ α̃K−1 ≥ α̃K and α̃K+1 ≥ α̃K+2 ≥ . . . ≥ α̃N
• Precomputation:
  Compute ρ = Σ_{i=1}^{K} α̃i / ( Σ_{i=1}^{K} α̃i + λ Σ_{j=K+1}^{N} α̃j )
  Compute c̃0,R = ỹB P̃ and D(0)
  Set qi = p̃i ⊕ p̃i+1 and βi = α̃i − α̃i+1, for i = 1, 2, . . . , K − 1 (p̃i denotes the i-th row of P̃)
  Compute c̃uK,R = c̃0,R ⊕ p̃K and L(uK) = α̃K
  Compute c̃uK⊕uK−1,R = c̃uK,R ⊕ p̃K−1 and L(uK ⊕ uK−1) = α̃K + α̃K−1; save the two terms in the (K + 1)-th memory block
• Initialization:
  Set m = 0, e = uK−1, κ = K − 1, c̃* = arg min{D(0), D(uκ)}, L* = ρ D(e*)
• While m++ < M, do:
  – Case 1: κ = 1
    ∗ Compute c̃e,R = c̃ē,R ⊕ qI1 and L(e) = L(ē) + βI1.
  – Case 2: κ > 1 and Iκ = K − κ + 1
    ∗ Fetch c̃e,R and L(e) from the (K + 1)-th memory block.
    ∗ If L(e) ≥ L*, then terminate the process.
    ∗ Else compute c̃e^(0),R = c̃e,R ⊕ p̃K−κ−1 and L(e^(0)) = L(e) + α̃K−κ−1; save into the (K + 1)-th memory block.
  – Case 3: κ > 1 and Iκ = Iκ−1 − 2
    ∗ Compute c̃e,R = c̃ē,R ⊕ qIκ and L(e) = L(ē) + βIκ.
    ∗ If Iκ+1 < Iκ − 1 and L(e) < L*, then compute c̃e^(κ),R = c̃e,R ⊕ qIκ−1 and L(e^(κ)) = L(e) + βIκ−1; save into the (Iκ − 1)-th memory block.
  – Case 4: κ > 1 and Iκ < Iκ−1 − 2
    ∗ Fetch c̃e,R and L(e) from the Iκ-th memory block.
    ∗ If Iκ+1 < Iκ − 1 and L(e) < L*, then compute c̃e^(κ),R = c̃e,R ⊕ qIκ−1 and L(e^(κ)) = L(e) + βIκ−1; save into the (Iκ − 1)-th memory block.
  – Compute D[1, K+τ](e) = L(e) + D[K+1, K+τ](e).
  – If D[1, K+τ](e) < Θ, then
    ∗ Evaluate D(e) = D[1, K+τ](e) + D[K+τ+1, N](e).
    ∗ If D(e) < D(e*), then set c̃* ← c̃e and L* ← ρ D(e).
  – Generate the next TEP e and determine the key index κ via
    ∗ the skipping evolution (16)–(17) if L(e) ≥ L*,
    ∗ the regular evolution (14)–(15) otherwise.
  endwhile
• Output: the most likely codeword c̃*
Proposition 1 (i) The proposed reference recoding truly recodes the given TEP. (ii) The average number of reference recodings is precisely 1.

Proof: (i) When κ = 1, the evolution follows directly from (14). When κ > 1 and Iκ = Iκ−1 − 2, we observe Iκ−1 = K − κ + 2. Thus, the preceding index Īκ = K − κ + 1, which further indicates that Ī1, Ī2, . . . , Īκ−1 are maximum positions and remain unchanged during the evolution. This settles the third case. When κ > 1 and Iκ < Iκ−1 − 2, we observe that the TEP corresponding to [K, K − 1, . . . , K − κ + 2, Iκ + 1, Iκ+1, . . . , Iw(e)] is bitwise less reliable than e and thus has higher order. On the other hand, I1, . . . , Iκ−1 are the maximum indices, hence Iκ + 1 is the key index of that TEP. Therefore, the information of e was indeed stored in the memory. We next show that this information is not overwritten by another TEP. This is because, for another key index κ′ ≠ κ such that Iκ′ = Iκ, the corresponding TEP has either higher order than the stored one (when κ′ > κ), and thus was overwritten, or lower order than the current one (when κ′ < κ), and thus has not been recoded yet. When κ > 1 and Iκ = K − κ + 1, the TEP has the index array [K, K − 1, . . . , K − κ + 1]. Clearly this TEP has one more "1" than the preceding TEP (i.e., the process transits from one weight class to the next) and thus its recoding information was constructed during the last transition. Finally, when κ > 1 and Iκ+1 = Iκ − 1, e^(κ) does not exist and thus no recoding is needed.

(ii) follows from the fact that each piece of stored recoding information is fetched precisely once. □
We observe that the memory complexity is O(K × (N − K)), which is of minimum order since this much memory is necessary for the storage of the permuted generator matrix G̃ (or, essentially, P̃).
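To summarize Sections II and IV, the following is a drastically simplified reference decoder (our own sketch, not the authors' implementation): it builds the MRB, exhaustively recodes every TEP of weight ≤ w, and keeps the codeword of minimum WHD. It deliberately omits the skipping rule, the preprocessing rule, the memory blocks, and the reference recoding, so it reproduces only the input/output behavior of order-w reprocessing, not its efficiency.

```python
import numpy as np
from itertools import combinations

def osd_decode(G, r, w):
    """Order-w reprocessing, brute-force reference version: MRB +
    exhaustive recoding of TEPs of weight <= w, minimizing the WHD (4)."""
    K, N = G.shape
    alpha = np.abs(r)
    y = (r < 0).astype(int)                    # hard decisions
    # greedy MRB (Section II) via on-line GF(2) elimination
    pivots, chi = {}, []
    for i in np.argsort(-alpha):
        col = G[:, i].copy()
        while col.any():
            p = int(np.flatnonzero(col)[0])
            if p not in pivots:
                pivots[p] = col
                chi.append(int(i))
                break
            col = col ^ pivots[p]
        if len(chi) == K:
            break
    perm = chi + [j for j in range(N) if j not in chi]
    Gp = G[:, perm].copy()                     # systematize: Gp -> [I_K P]
    for rr in range(K):
        piv = rr + int(np.flatnonzero(Gp[rr:, rr])[0])
        Gp[[rr, piv]] = Gp[[piv, rr]]
        for r2 in range(K):
            if r2 != rr and Gp[r2, rr]:
                Gp[r2] ^= Gp[rr]
    yt, at = y[perm], alpha[perm]
    best_c, best_D = None, np.inf
    for wt in range(w + 1):                    # order-w reprocessing
        for pos in combinations(range(K), wt):
            info = yt[:K].copy()
            for p in pos:
                info[p] ^= 1                   # apply the TEP
            c = info @ Gp % 2                  # recode via (3)
            D = at[c != yt].sum()              # WHD (4)
            if D < best_D:
                best_D, best_c = D, c
    out = np.empty(N, dtype=int)
    out[perm] = best_c                         # undo the MRB permutation
    return out

# example: the (7,4) Hamming code in systematic form, all-zero codeword
# sent, a single error on the least reliable position
G = np.array([[1,0,0,0,1,1,0],
              [0,1,0,0,1,0,1],
              [0,0,1,0,0,1,1],
              [0,0,0,1,1,1,1]])
r = np.array([0.9, 0.8, 1.1, -0.2, 0.7, 1.0, 0.6])
decoded = osd_decode(G, r, 1)
```

Because the erroneous position is the least reliable one, it falls outside the MRB and even order-0 recoding already corrects it here.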
V. Simulation Results and Comparisons
We measure performance in terms of the false decoding rate (block error rate) and complexity in terms of the number of recodings, recording 1000 false decodings per simulation point. Our first case study focuses on soft-decision decoding of the (128, 64, 22) extended BCH code. We present simulations for the performance of the naturally-ordered algorithm and of the likelihood-ordered algorithm. We set the maximum number of recodings to M = 30,000 for both algorithms. Figure ?? shows that the naturally-ordered algorithm performs very closely to the likelihood-ordered algorithm, while effectively avoiding the cumbersome requirements of the latter. For the naturally-ordered algorithm, we choose the parameters θ = 2.82, λ = 3.50, wb = 4 for all SNRs. To compare the naturally-ordered algorithm to order-i reprocessing [6], we set M = Σ_{j=0}^{4} C(64, j) = 679,121
(i.e., the number of recodings performed by order-4 reprocessing). Figure ?? (a) shows that the naturally-ordered algorithm performs better than, but very closely to, order-4 reprocessing. However, as shown in Figure ?? (b), the naturally-ordered algorithm exhibits significant savings in average computation when compared to order-4 reprocessing; this is due to the probabilistic skipping rule. For the naturally-ordered algorithm, θ is chosen to be 3.82, wb is set to 5, and λ is chosen to be 2.5 based on training. We do not address the complexity of the likelihood-ordered algorithm, since the recoding operation does not dominate its computational cost, due to the sorting of TEPs that is involved. The performance of the proposed naturally-ordered algorithm is within 0.05 dB of the lower bound on strict ML decoding obtained using the approach in [5], i.e., by counting only those best-effort codewords whose discrepancy is lower than that of the transmitted codeword while the naturally-ordered algorithm runs. Note that pure ML decoding of this code has never been achieved with the known algorithms (e.g., [2, 13]). The maximum number of codewords constructed is approximately 14 orders of magnitude smaller than the total number of codewords. As we can see from Figure 2 (b), the gain over hard-decision decoding is about 3.0 dB.

Our second case study involves the (256, 131, 38) extended BCH code. In this case, our purpose is to compare the proposed method to order-3 reprocessing in terms of performance and computational complexity under an identical (maximum) number of recodings M = Σ_{l=0}^{3} C(131, l) = 374,792. As shown in Figure ?? (where
the parameters are chosen as θ = 2.88, λ = 9.5, wb = 4 for the naturally-ordered algorithm), the proposed method achieves better performance with notably less computational cost. In particular, the average number of recodings is reduced by roughly a factor of 40 when γb = 3.0 dB. This is attributed to the probabilistic skipping rule. The performance of hard-decision decoding and of the proposed likelihood-ordered algorithm is also compared in Figure ?? (a). It shows that the naturally-ordered algorithm performs very closely to the likelihood-ordered one.
VI. Conclusions
In this correspondence we have investigated soft-decision decoding for binary linear block codes using ordered recodings on the most reliable basis (MRB). We demonstrated the optimality of the MRB in the sense that it minimizes the list decoding error probability. We also presented an efficient sub-optimal decoding algorithm.
More specifically, the proposed algorithm utilizes the reprocessing ordering while incorporating three novel techniques, namely, an adaptive skipping rule to eliminate unpromising test error patterns (TEPs), a preprocessing rule to discard unpromising TEPs by computing only a partial WHD, and a reference recoding that treats a TEP virtually as a single-bit pattern. It is not hard, though tedious, to write out explicitly the list error probability of the proposed preprocessing rule, and thereafter to compute numerical results, using the analytical approach developed in [15]. In future work, the authors wish to analyze the list error probability of the adaptive skipping rule.
References [1] S. G. Wilson, Digital Modulation and Coding. Upper Saddle River, NJ: Prentice-Hall, 1996. [2] J. K. Wolf, “Efficient maximum likelihood decoding of linear block codes using a trellis,” IEEE Trans. Inform. Theory, vol. 24, pp. 76–80, Jan. 1978. [3] G. D. Forney, Jr., “Generalized minimum distance decoding,” IEEE Trans. Inform. Theory, vol. 12, pp. 125–131, Apr. 1966. [4] D. Chase, “A class of algorithms for decoding block codes with channel measurement information,” IEEE Trans. Inform. Theory, vol. 18, pp. 170–181, Jan. 1972. [5] B. G. Dorsch, “A decoding algorithm for binary block codes and J-ary output channels,” IEEE Trans. Inform. Theory, vol. 20, pp. 391–394, May 1974. [6] M. P. C. Fossorier and S. Lin, “Soft-decision decoding of linear block codes based on ordered statistics,” IEEE Trans. Inform. Theory, vol. 41, pp. 1379-1396, Sept. 1995. [7]
——, “Computationally efficient soft decision decoding of linear block codes based on ordered statistics,” IEEE Trans. Inform. Theory, vol. 42, pp. 738–750, May 1996.
[8] ——, “Complementary reliability-based decodings of binary linear block codes,” IEEE Trans. Inform. Theory, vol. 43, pp. 1667–1672, Sept. 1997.
[9] A. Valembois and M. Fossorier, “A comparison between ‘most reliable basis reprocessing’ strategies,” IEICE Tran. Fundamentals, vol. E85–A, pp. 1727–1741, July 2002. [10] D. Gazelle and J. Snyders, “Reliability-based code-search algorithms for maximum-likelihood decoding of block codes,” IEEE Trans. Inform. Theory, vol. 43, pp. 239–249, Jan. 1997. [11] Y. S. Han, C. R. P. Hartmann, and C.-C. Chen, “Efficient priority-first search maximum-likelihood soft-decision decoding of linear block codes,” IEEE Trans. Inform. Theory, vol. 39, pp. 1514–1523, Sept. 1993. [12] C.-C. Shih, C. R. Wulff, C. R. P. Hartmann, and C. K. Mohan, “Efficient heuristic search algorithms for soft-decision decoding of linear block codes,” IEEE Trans. Inform. Theory, vol. 44, pp. 3023–3038, Nov. 1998. [13] A. Vardy and Y. Be’ery, ”Maximum-likelihood soft decision decoding of BCH codes,” IEEE Trans. Inform. Theory, vol. 40, pp. 546–554, March 1994. [14] Y. Wu and D. Pados, “An adaptive two-stage algorithm for ML and sub-ML decoding of binary linear block codes,” IEEE Trans. Inform. Theory, vol. 49, pp. 261-269, Jan. 2003. [15] D. Agrawal and A. Vardy, “Generalized minimum distance decoding in Euclidean-space: Performance analysis,” IEEE Trans. Inform. Theory, vol. 46, pp. 60–83, Jan. 2000.