Low Complexity Soft-Input Soft-Output Block Decision Feedback Equalization
Jingxian Wu, Member, IEEE, and Yahong R. Zheng, Senior Member, IEEE
Abstract—A low complexity soft-input soft-output (SISO) block decision feedback equalizer (BDFE) is presented for turbo equalization. The proposed method employs a sub-optimum sequence-based detection, where the soft output of the equalizer is calculated by evaluating an approximation of the sequence-based a posteriori probability (APP) of the data symbols. The sequence-based APP approximation is enabled by the adoption of both soft a priori information and soft decision feedback, and it leads to better performance and faster convergence compared to the symbol-based detection used by most other low complexity equalizers. The performance and convergence properties of the proposed algorithm are analyzed with the extrinsic information transfer (EXIT) chart. Both analytical and simulation results show that the new equalizer achieves a performance similar to that of trellis-based equalization algorithms, with a complexity similar to that of linear SISO minimum mean square error equalizers.

Index Terms—Turbo equalization, sequence-based detection, BDFE, soft decision feedback, EXIT.
I. INTRODUCTION
TURBO equalization is a joint equalization and decoding scheme used in communication systems where coded data are transmitted over inter-symbol interference (ISI) channels [1], [2]. Turbo equalizers achieve performance improvements over conventional equalizers and decoders by iteratively exchanging soft extrinsic information between a soft-input soft-output (SISO) equalizer and a SISO decoder. Trellis-based soft decoding algorithms, such as the Bahl-Cocke-Jelinek-Raviv (BCJR) algorithm [3] or the soft output Viterbi algorithm (SOVA) [1], [4], are usually employed in the SISO equalizer/decoder to maximize the sequence-based symbol a posteriori probability (APP). For systems with high modulation levels and/or long channel memory, trellis-based SISO equalizers become prohibitively complex. Therefore, the design of low complexity SISO turbo equalizers has attracted considerable interest recently [5]-[14].
Current approaches to low complexity algorithms can be roughly classified into three categories. First, reducing the number of states in the trellis structure of an ISI channel leads to sub-optimum trellis-based equalizers [5], [6]. In [5], the BCJR algorithm is performed over a reduced-state trellis, which is formulated by considering only the dominant taps of the equivalent channel. The ISI introduced by non-dominant channel taps is canceled by hard decisions [5],
or soft decisions [6], from previous iterations. The complexity of such equalizers grows exponentially with the number of dominant channel taps. Second, linear minimum mean square error (MMSE) based SISO equalization algorithms [7]-[12] provide a reasonable trade-off between complexity and performance. The MMSE equalizers treat the data symbols as random variables with mean and variance computed from the a priori information. The complexity reduction in linear SISO MMSE algorithms is mainly due to symbol-by-symbol detection, i.e., the APP of each symbol is calculated individually based on a Gaussian assumption on the MMSE filter output. Third, non-linear SISO equalizers are designed by employing tentative hard decisions from the same iteration [13], or from the previous iteration [14], for ISI cancellation. To reduce the error propagation arising from tentative hard decisions, a soft decision feedback equalizer is proposed in [15], which employs the a posteriori mean of the data symbols for ISI cancellation. Similar to linear SISO equalizers, symbol-based detection is used in the existing non-linear equalizers.
Motivated by the respective advantages and limitations of the methods in the literature, we develop a new, non-linear, sub-optimum sequence-based block decision feedback equalizer (BDFE). Many of the performance-enhancing features of the existing methods are used as building blocks of the new algorithm. The BDFE filters are dynamically formulated with the help of the a priori mean and variance of the data symbols as in [7]-[12]; decision feedback is used for ISI cancellation as in [13]-[16]; specifically, soft decision feedback is employed to reduce error propagation as in [15]. The main contribution of the proposed method is the employment of a sub-optimum sequence-based detection. In the proposed equalizer, the APP of each data symbol is calculated by collecting information from all related samples at the receiver, whereas no trellis structure is used. The sequence-based detection is made possible by utilizing soft decisions from both the current iteration and the previous iteration, and it improves over the symbol-by-symbol processing adopted by most existing methods. The non-linearity of the proposed method arises from the decision feedback as well as the sub-optimum sequence-based detection. The combination of sequence-based detection with these proven effective equalizer features results in an equalizer with better performance and a faster convergence rate.
The performance of the proposed algorithm is analyzed by using the extrinsic information transfer (EXIT) chart [18], [19]. Both analytical and simulation results show that the proposed algorithm can achieve a performance similar
to that of trellis-based algorithms, with a complexity on the same order as that of linear MMSE equalizers. In addition, the new algorithm converges much faster than other equalization algorithms, which further reduces the overall complexity of the system by reducing the number of iterations.
The remainder of this paper is organized as follows. The system model, with a brief introduction of the turbo principle, is presented in Section II. The new SISO BDFE algorithm with sequence-based detection is developed in Section III. The performance of the proposed algorithm is analyzed in Section IV with the tool of the EXIT chart. Bit error rate (BER) results obtained from simulations under practical system configurations are given in Section V, and Section VI concludes the paper.

II. SYSTEM MODEL

Fig. 1. Block diagram of a communication system with turbo equalization.

The block diagram of a communication system employing turbo equalization is shown in Fig. 1. At the transmitter, the binary information, b_n ∈ {−1, +1}, is passed through a convolutional encoder followed by an interleaver; no trellis termination is used in the convolutional encoder. The output of the interleaver is divided into blocks of length K · N, where K = log₂ M, M is the modulation level, and N is the number of modulated symbols per block. The output of the interleaver can be denoted in vector form as x = [x_1, x_2, · · · , x_N]^T ∈ B^{KN×1}, where x_n = [x_{n,1}, · · · , x_{n,K}] ∈ B^{1×K}, and A^T denotes the transpose of the matrix A. The modulator maps the K-bit data, x_n, to one modulation symbol, s_n ∈ S, where S is the modulation constellation set with cardinality M. The modulated data symbols are distorted by the ISI channel and additive white Gaussian noise (AWGN). The discrete-time representation of the communication system is

    y_n = \sum_{l=0}^{L-1} h_l s_{n-l} + z_n, \quad n = 1, 2, \cdots, N,   (1)
where y_n is the symbol-spaced sample at the output of the receive filter, z_n is the zero-mean AWGN sample, and h_l is the equivalent discrete-time composite channel impulse response resulting from the cascade of the transmit filter, ISI channel, and
receive filter [17]. The system model described in (1) can be represented in matrix form as

    y = Hs + z,   (2)
where y = [y_1, · · · , y_N]^T ∈ C^{N×1}, s = [s_1, · · · , s_N]^T ∈ C^{N×1}, and z = [z_1, · · · , z_N]^T ∈ C^{N×1} are the received sample vector, the modulation symbol vector, and the noise vector, respectively, and the channel matrix H ∈ C^{N×N} is defined by

    H = \begin{pmatrix}
      h_0     & 0      & \cdots  & \cdots  & \cdots & 0      \\
      \vdots  & \ddots & \ddots  &         &        & \vdots \\
      h_{L-1} & \cdots & h_0     & \ddots  &        & \vdots \\
      0       & \ddots & \ddots  & \ddots  & \ddots & \vdots \\
      \vdots  & \ddots & \ddots  & \ddots  & \ddots & 0      \\
      0       & \cdots & 0       & h_{L-1} & \cdots & h_0
    \end{pmatrix}.   (3)

The system equation (2) and the channel matrix defined in (3) indicate that there is no inter-block interference (IBI). The IBI-free transmission is achieved by using a short guard interval between blocks: either zeros or known pilot symbols of length L can be transmitted during the guard interval. The guard interval results in a rate loss of L/(N + L) × 100%.
The turbo equalizer consists of a SISO equalizer and a SISO convolutional decoder, which are separated by an interleaver, Π(·), and a de-interleaver, Π^{−1}(·). During each iteration, the SISO equalizer calculates the a posteriori log-likelihood ratio (LLR) of each code bit x_n as

    \Lambda(x_n | y) = \ln \frac{P(x_n = +1 | y)}{P(x_n = -1 | y)}.   (4)
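As a concrete illustration of the block model in (1)-(3), the following minimal Python/numpy sketch builds the banded channel matrix H and generates one received block y = Hs + z. The block length, noise variance, and BPSK mapping used here are illustrative assumptions, not parameters taken from the paper.

```python
import numpy as np

def build_channel_matrix(h, N):
    """Banded Toeplitz channel matrix H of eq. (3): H[n, n-l] = h[l]."""
    L = len(h)
    H = np.zeros((N, N), dtype=complex)
    for n in range(N):
        for l in range(L):
            if n - l >= 0:
                H[n, n - l] = h[l]
    return H

# Example: the static test channel used later in the paper, with BPSK symbols.
h = np.array([0.227, 0.46, 0.688, 0.46, 0.227])
N, N0 = 512, 0.1                                      # block length and noise variance (assumed)
H = build_channel_matrix(h, N)
s = 2.0 * np.random.randint(0, 2, N) - 1.0            # BPSK symbols in {-1, +1}
z = np.sqrt(N0 / 2) * (np.random.randn(N) + 1j * np.random.randn(N))
y = H @ s + z                                         # received block, eq. (2)
```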
Using Bayes' rule, we can write (4) as

    \Lambda(x_n | y) = \ln \frac{\sum_{\forall x: x_n = +1} P(y | x) \prod_{n' \neq n} P(x_{n'})}{\sum_{\forall x: x_n = -1} P(y | x) \prod_{n' \neq n} P(x_{n'})} + L_I(x_n),

where the first term on the right-hand side is the extrinsic information, L_O(x_n), and L_I(x_n) = ln[P(x_n = +1)/P(x_n = −1)] is the a priori information of the bit x_n, which is obtained by interleaving the soft output of the convolutional decoder from the previous iteration. The a priori LLRs are assumed to be independent of each other. In the first iteration, no a priori information is available and L_I(x_n) = 0, ∀n.
The extrinsic LLR, L_O(x_n) = Λ(x_n | y) − L_I(x_n), is de-interleaved to L_I(c_n) = Π^{−1}(L_O(x_n)), which is used as
the a priori information, or soft input, of the convolutional decoder.
The convolutional decoder calculates the extrinsic LLR of each code bit by exploiting the code trellis structure as well as the a priori information L_I(c_n). The extrinsic LLR of the code bit c_n can be expressed as

    L_O(c_n) = \ln \frac{P(c_n = +1 \,|\, L_I(c_1), \cdots, L_I(c_{N_c}))}{P(c_n = -1 \,|\, L_I(c_1), \cdots, L_I(c_{N_c}))} - \ln \frac{P(c_n = +1)}{P(c_n = -1)},

where N_c = N/R is the number of code bits in each block, with R being the code rate and N the number of bits per block. The extrinsic LLR at the output of the convolutional decoder is interleaved as L_I(x_n) = Π(L_O(c_n)), which is fed back to the equalizer as the a priori information for the next iteration. At the final iteration, the SISO convolutional decoder estimates the original binary information, b_n ∈ {−1, +1}, as

    \hat{b}_n = \arg\max_{b_n \in \{-1, +1\}} P\big(b_n \,|\, L_I(c_1) + L_O(c_1), \cdots, L_I(c_{N_c}) + L_O(c_{N_c})\big).

The statistically independent extrinsic LLRs, L_O(x_n) and L_O(c_n), are fed back to each other iteratively as a priori information and lead to improvements in the BER performance. This essential feature realizes the turbo principle. In both the equalizer and the convolutional decoder, the exact a posteriori LLR can be calculated by using the BCJR or SOVA algorithm, which can effectively exploit the trellis structures of the ISI channel and the convolutional code. However, the complexity of trellis-based equalizers increases exponentially with the modulation level M and the channel length L, which makes it prohibitively expensive to implement the trellis-based BCJR or SOVA algorithm in the equalizer.

III. TURBO EQUALIZATION USING SISO MMSE-BDFE

In this section, a SISO minimum mean square error based block decision feedback equalizer (MMSE-BDFE) is proposed for low complexity turbo equalization.

A. Development of SISO MMSE-BDFE

The MMSE-BDFE algorithm was originally developed in [16] for hard-input hard-output equalization. In [16], the data symbols are assumed to be zero mean since no statistical information about the data is available at the input of the equalizer. With soft a priori information, a SISO MMSE-BDFE algorithm with non-zero mean data vectors and statistical filters is derived in this subsection. The block diagram of the SISO MMSE-BDFE is shown in Fig. 2.

Fig. 2. SISO block decision feedback equalizer.

The operation of the SISO MMSE-BDFE requires the a priori mean vector, s̄ = E[s], and the a priori covariance matrix, Φ = E[ss^H], of the data symbols, where E denotes mathematical expectation. The values of s̄ and Φ can be calculated from the bit a priori LLRs as [9]

    \bar{s}_n = \sum_{m=1}^{M} \chi_m P(s_n = \chi_m),
    \sigma_{s_n}^2 = \sum_{m=1}^{M} |\chi_m - \bar{s}_n|^2 P(s_n = \chi_m),   (5)

where χ_m ∈ S is the M-ary modulated symbol mapped to the binary vector b = [b_1, · · · , b_K], P(s_n = χ_m) = Π_{k=1}^{K} P(x_{n,k} = b_k) is the symbol a priori probability, and P(x_{n,k} = b_k) is the bit a priori probability, which can be derived from the bit a priori LLR, L_I(x_n), as

    P(x_n = -1) = \left[ 1 + e^{L_I(x_n)} \right]^{-1},   (6a)
    P(x_n = +1) = 1 - P(x_n = -1).   (6b)
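To make the mapping from decoder LLRs to symbol statistics concrete, here is a minimal Python/numpy sketch of (5)-(6). The helper name, the QPSK constellation, its Gray bit mapping, and the example LLR values are illustrative assumptions rather than quantities from the paper.

```python
import numpy as np

def symbol_priors(llr_bits, constellation, bit_map):
    """A priori symbol mean and variance from bit a priori LLRs, eqs. (5)-(6).

    llr_bits:      length-K array of a priori LLRs L_I(x_{n,1..K}) for one symbol
    constellation: length-M array of symbols chi_m
    bit_map:       M x K array of bits in {-1, +1} mapped to each symbol
    """
    p_plus = 1.0 - 1.0 / (1.0 + np.exp(llr_bits))        # P(x = +1), eqs. (6a)-(6b)
    p_bits = np.where(bit_map == 1, p_plus, 1.0 - p_plus)
    p_sym = np.prod(p_bits, axis=1)                      # P(s_n = chi_m)
    s_bar = np.sum(constellation * p_sym)                # a priori mean, eq. (5)
    var = np.sum(np.abs(constellation - s_bar) ** 2 * p_sym)
    return s_bar, var

# Example with QPSK and a Gray bit mapping (assumed for illustration).
const = np.array([1 + 1j, -1 + 1j, -1 - 1j, 1 - 1j]) / np.sqrt(2)
bmap = np.array([[+1, +1], [-1, +1], [-1, -1], [+1, -1]])
s_bar, var = symbol_priors(np.array([0.8, -0.3]), const, bmap)
```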
The modulated information symbols, s_n, for n = 1, · · · , N, can be assumed to be independent due to the bit interleaving performed before modulation. Therefore, the covariance matrix, Φ, is a diagonal matrix with σ_{s_n}² on its diagonal. To avoid the instability caused by positive feedback during the iterative operation, the a priori information of the symbol s_n should not be used during the detection of s_n itself. For this reason, during the equalization of s_n, we define the a priori mean vector, s̄_n, and the a priori covariance matrix, Φ_n, as

    \bar{s}_n = [\bar{s}_1, \cdots, \bar{s}_{n-1}, 0, \bar{s}_{n+1}, \cdots, \bar{s}_N]^T,
    \Phi_n = \mathrm{diag}[\sigma_{s_1}^2, \cdots, \sigma_{s_{n-1}}^2, 1, \sigma_{s_{n+1}}^2, \cdots, \sigma_{s_N}^2].   (7)

In this definition, the mean and variance of s_n, which is the symbol to be detected, are assumed to be 0 and 1, respectively [9]. The MMSE-BDFE performs detection of the nth symbol, s_n, through two block filters: a feedforward filter, W_n ∈ C^{N×N}, and a zero-diagonal feedback filter, B_n ∈ C^{N×N}. From Fig. 2, the decision vector before the LLR calculation can be written as

    \tilde{s} = W_n [y - H\bar{s}_n] - B_n [\hat{s} - \bar{s}_n] + \bar{s}_n,   (8)

where s̄_n is the a priori mean vector of s as defined in (7). The vector ŝ = [ŝ_1, ŝ_2, · · · , ŝ_N]^T ∈ C^{N×1} contains tentative soft decisions from both the current iteration and the previous iteration. The soft decision is defined as the a posteriori mean of the symbol [15], and it can be calculated as

    \hat{s}_n = \sum_{m=1}^{M} P(s_n = \chi_m | y)\, \chi_m,   (9)

where χ_m ∈ S is the modulation symbol, and P(s_n = χ_m | y) is the APP at the output of the equalizer. The adoption of the a posteriori mean based soft decision reduces the effects of error propagation, thus leading to better system performance.
The BDFE filters, W_n and B_n, are derived by following the MMSE criterion. With the common assumption of perfect
decision feedback, i.e., ŝ = s, the error vector of the BDFE can be written as

    e_n = W_n (y - H\bar{s}_n) - (B_n + I_N)(s - \bar{s}_n),   (10)
where I_N is the N × N identity matrix. The orthogonality principle, E[e_n (y − H s̄_n)^H] = 0, is used to minimize the mean square error, E[||e_n||²] = E[e_n^H e_n], with the expectation performed over both the data symbols and the noise, and the result is [16]

    W_n = G_n \Phi_n H^H \left( H \Phi_n H^H + N_0 I_N \right)^{-1},   (11)

where G_n = B_n + I_N, Φ_n is the a priori covariance matrix defined in (7), and N_0 is the AWGN variance. Combining (10) and (11), we can write the error vector as e_n = G_n ε_n, with

    \epsilon_n = \Phi_n H^H \left( H \Phi_n H^H + N_0 I_N \right)^{-1} (y - H\bar{s}_n) - (s - \bar{s}_n)   (12)

being the error vector of a linear MMSE equalizer. Its covariance matrix, Φ_εε = E[ε_n ε_n^H], is

    \Phi_{\epsilon\epsilon} = \Phi_n - \Phi_n H^H \left( H \Phi_n H^H + N_0 I_N \right)^{-1} H \Phi_n
                            = \left( \Phi_n^{-1} + \frac{1}{N_0} H^H H \right)^{-1},   (13)

where the matrix inversion identity A^{-1} + A^{-1} C (G − D A^{-1} C)^{-1} D A^{-1} = (A − C G^{-1} D)^{-1} is used. It should be noted that the inverse of the a priori covariance matrix, Φ_n, is used in (13). Since the uncertainty of each symbol decreases as the iterations evolve, the a priori symbol variances, which are on the diagonal of Φ_n, might tend to zero after several iterations. To avoid the numerical instability caused by inverting values close to zero, a small threshold, e.g., σ_0² = 10^{−5}, is used to replace the diagonal elements of Φ_n that are smaller than σ_0². This approximation does not appreciably affect the performance of the system because Φ_εε depends on the inverse of Φ_n^{−1} + (1/N_0) H^H H; therefore, a small enough σ_0 leads to an approximation error that is acceptable under practical numerical precision limits.
From (13), the covariance matrix Φ_ee = E[e_n e_n^H] can be expressed as

    \Phi_{ee} = G_n \left( \Phi_n^{-1} + \frac{1}{N_0} H^H H \right)^{-1} G_n^H.   (14)

Since E[||e_n||²] = trace(Φ_ee), the solution of the MMSE problem is equivalent to finding a unit-diagonal matrix G_n = B_n + I_N such that trace(Φ_ee) is minimized. In addition, sub-optimum decision feedback equalization requires the matrix G_n to be in its minimum phase form, i.e., the power of the matrix is concentrated on the elements close to the main diagonal. The solution satisfying the above conditions can be obtained from the Cholesky decomposition of Φ_n^{−1} + (1/N_0) H^H H as [16]

    \Phi_n^{-1} + \frac{1}{N_0} H^H H = L_n^H L_n,   (15)

where L_n is an upper triangular matrix. Normalizing L_n with respect to its main diagonal leads to

    L_n^H L_n = U_n^H D_n U_n,   (16)

where U_n ∈ C^{N×N} is an upper triangular matrix with unit diagonal elements, D_n ∈ R^{N×N} is a diagonal matrix, and L_n = \sqrt{D_n}\, U_n. With the Cholesky decomposition described in (15), the feedforward matrix, W_n, and the feedback matrix, B_n, are

    B_n = U_n - I_N,
    W_n = U_n \Phi_n H^H \left( H \Phi_n H^H + N_0 I_N \right)^{-1}.   (17)
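A minimal numpy sketch of the filter computation in (13)-(17) might look as follows. The function name and the variance-floor value are assumptions for illustration, and no attempt is made to reuse computations across symbol indices n.

```python
import numpy as np

def bdfe_filters(H, Phi_n, N0, var_floor=1e-5):
    """MMSE-BDFE filters of eqs. (13)-(17) for one a priori covariance Phi_n.

    H: N x N channel matrix; Phi_n: diagonal a priori covariance of eq. (7);
    N0: noise variance.  var_floor implements the sigma_0^2 threshold.
    Returns W_n, B_n, the unit-diagonal U_n, and the diagonal of D_n.
    """
    N = H.shape[0]
    d = np.maximum(np.real(np.diag(Phi_n)), var_floor)     # clip near-zero variances
    Phi = np.diag(d)
    A = np.diag(1.0 / d) + (H.conj().T @ H) / N0           # Phi_n^{-1} + H^H H / N0
    Ln = np.linalg.cholesky(A).conj().T                    # upper triangular, A = Ln^H Ln, eq. (15)
    Dn = np.abs(np.diag(Ln)) ** 2                          # diagonal of D_n, eq. (16)
    Un = Ln / np.diag(Ln)[:, None]                         # unit-diagonal U_n, Ln = sqrt(D_n) U_n
    Bn = Un - np.eye(N)                                    # feedback filter, eq. (17)
    Wn = Un @ Phi @ H.conj().T @ np.linalg.inv(H @ Phi @ H.conj().T + N0 * np.eye(N))
    return Wn, Bn, Un, Dn
```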
It should be noted that a Cholesky decomposition was also adopted in [12] for a linear extending-window MMSE method. The Cholesky decomposition in [12] was used to assist the recursive calculation of matrix inversions, and its function is quite different from that of the Cholesky decomposition used in this paper.
With the filters given in (17), the wireless communication system described in (2) is converted to an equivalent system as (c.f. (10))

    r = W_n (y - H\bar{s}_n) = G_n (s - \bar{s}_n) + e_n,   (18)
where r = [r_1, r_2, · · · , r_N]^T and e_n are the received sample vector and the noise vector of the equivalent system, respectively, and G_n = B_n + I_N is the equivalent channel matrix. The covariance matrix of the equivalent noise component is Φ_ee = D_n^{−1}. Since D_n is diagonal, the noise components are uncorrelated.

B. Sequence-based A Posteriori Probability Evaluation

The calculation of the soft output of the equalizer is described in this subsection. The equivalent system described in (18) enables a sequence-based detection, i.e., one symbol, s_n, can be detected by collecting information from the entire data block, r = W_n (y − H s̄_n). This is different from classical decision feedback equalizers, e.g., [15], [16], where each data symbol is detected by using only the current equalizer output.
From (18), define the sequence-based symbol APP, P(s_n^{(i)} | r), of the ith iteration as

    P(s_n^{(i)} | r) = \frac{P(r | s_n) P(s_n)}{p(r)}.   (19)
Since the sample sequence r depends on [s_1, · · · , s_N], the likelihood function, P(r | s_n), can be expressed as

    P(r | s_n) = P(r | \hat{s}_n^{(i)}),   (20)

where \hat{s}_n^{(i)} = [s_n, \hat{s}_1^{(i-1)}, \cdots, \hat{s}_{n-1}^{(i-1)}, \hat{s}_{n+1}^{(i)}, \cdots, \hat{s}_N^{(i)}], with [\hat{s}_{n+1}^{(i)}, \cdots, \hat{s}_N^{(i)}] being the soft decisions from the current iteration and [\hat{s}_1^{(i-1)}, \cdots, \hat{s}_{n-1}^{(i-1)}] the soft decisions from the previous iteration. The equation holds since all the soft decisions are deterministic.
In the equivalent system representation given in (18), the noise samples are uncorrelated, with a diagonal covariance matrix Φ_ee = D_n^{−1}. Based on the assumption that the equivalent noise vector, e_n, is Gaussian distributed, the likelihood function, P(r | \hat{s}_n^{(i)}), can be approximated as

    p(r | \hat{s}_n^{(i)}) \approx \prod_{k=1}^{N} \frac{1}{\pi \sigma_k^2} \exp\left( - \frac{|\rho_{n,k}(s_n)|^2}{\sigma_k^2} \right),   (21)
where σ_k² = d_{k,k}^{−1}, with d_{k,k} being the kth diagonal element of the diagonal matrix D_n, and ρ_{n,k}(s_n) is given by

    \rho_{n,k}(s_n) =
    \begin{cases}
      r_k - \sum_{l=k}^{N} g_{k,l}(n) \big( \hat{s}_l^{(i)} - \bar{s}_l \big), & n < k \le N, \\
      r_n - g_{n,n}(n)\, s_n - \sum_{l=n+1}^{N} g_{n,l}(n) \big( \hat{s}_l^{(i)} - \bar{s}_l \big), & k = n, \\
      r_k - g_{k,n}(n)\, s_n - \sum_{l=k}^{n-1} g_{k,l}(n) \big( \hat{s}_l^{(i-1)} - \bar{s}_l \big) - \sum_{l=n+1}^{N} g_{k,l}(n) \big( \hat{s}_l^{(i)} - \bar{s}_l \big), & 1 \le k < n,
    \end{cases}   (22)

where g_{k,l}(n) is the element in the kth row and lth column of the equivalent channel matrix G_n. From (19) and (21), the APP of the symbol s_n during the ith iteration can be calculated as

    P(s_n^{(i)} = \chi_m | r) = \prod_{k=1}^{N} \frac{1}{\pi \sigma_k^2} \exp\left( - \frac{|\rho_{n,k}(\chi_m)|^2}{\sigma_k^2} \right) \frac{P(s_n)}{p(r)},   (23)

where P(s_n) is the symbol a priori probability, and the value of p(r) can be easily obtained from the normalization \sum_{m=1}^{M} P(s_n^{(i)} = \chi_m | r) = 1. The sequence-based symbol APP, P(s_n^{(i)} = \chi_m | r) = P(s_n^{(i)} = \chi_m | y), is used to calculate the tentative soft decision as described in (9).
Eqn. (23) gives the APP of the M-ary modulated symbol. To obtain the APP of the binary code bit x_{n,k}, we define

    S_k^{(b)} \triangleq \{ \chi_m \,|\, \chi_m \in S : x_{n,k} = b \}, \quad k = 1, \cdots, \log_2 M,

where b ∈ {−1, +1}, and x_{n,k} is the kth bit of the binary vector x_n mapped to the symbol s_n. The APP of the bit x_{n,k} in a system with M-ary modulation during the ith iteration can be expressed as

    P(x_{n,k}^{(i)} = b | r) = \sum_{s_n \in S_k^{(b)}} p(r | s_n^{(i)})\, P(s_n) / p(r).   (24)

Combining (21) and (24), we can write the LLR of the bit x_{n,k} as

    \Lambda^{(i)}(x_{n,k} | r) = \ln \frac{\sum_{s_n \in S_k^{(+1)}} \exp\left[ - \sum_{j=1}^{n} \frac{1}{\sigma_j^2} |\rho_{n,j}(s_n)|^2 \right] P(s_n)}{\sum_{s_n \in S_k^{(-1)}} \exp\left[ - \sum_{j=1}^{n} \frac{1}{\sigma_j^2} |\rho_{n,j}(s_n)|^2 \right] P(s_n)}.   (25)

It should be noted that the terms ρ_{n,j}(s_n), for j > n, are independent of s_n, and they cancel during the LLR evaluation.
The extrinsic information, L_O^{(i)}(x_{n,k}), which is the soft output of the equalizer after the ith iteration, can then be calculated as

    L_O^{(i)}(x_{n,k}) = \Lambda^{(i)}(x_{n,k} | r) - L_I(x_{n,k}),   (26)

where L_I(x_{n,k}) is the a priori LLR for the ith iteration.
In the proposed algorithm, the soft output is calculated by collecting information from the entire sample sequence, r, as described in (22) and (25). The sequence-based equalization is enabled by tentative soft decisions from both the previous iteration, \hat{s}_n^{(i-1)}, and the current iteration, \hat{s}_n^{(i)}. Errors usually exist in the tentative decisions, especially during the first few
iterations. Inevitably, the performance of the algorithm will be negatively affected by error propagation, where an error in one tentative decision might lead to errors in subsequent decisions. The adopted soft decision reduces the effects of error propagation, thus leading to better system performance. In addition, the number of errors in the tentative decisions decreases as the iterations proceed, and this reduces the performance gap between the proposed algorithm and its optimum counterpart.

C. A Simplified Algorithm

A simplified version of the SISO MMSE-BDFE algorithm is presented in this subsection to reduce the computational complexity of the proposed equalizer.
Compared to classical symbol-based equalization methods, the proposed sequence-based detection incurs extra complexity during the calculation of the soft output. From (25), the calculation of each bit LLR requires the evaluation of nM metrics, |ρ_{n,k}(s_n)|², for k = 1, · · · , n and s_n ∈ S, while only M metrics are required by symbol-based algorithms. The complexity of the sequence-based soft output calculation can be reduced by utilizing the structure of the equivalent channel matrix G_n. The matrix G_n = U_n is obtained from the Cholesky decomposition, and the power of the matrix is concentrated on the elements close to its main diagonal. Fig. 3(a) shows the power distribution of the equivalent channel matrix G_n for a static channel with channel impulse response h = [0.227, 0.46, 0.688, 0.46, 0.227] [10]. In Fig. 3(a), the average power of the lth tap of the equivalent channel matrix is defined as

    \bar{p}_l = \frac{1}{N-l} \sum_{k=1}^{N-l} |g_{k,k+l}|^2, \quad l = 0, 1, \cdots, N-1.   (27)
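As a quick illustration of (27), the average tap power of the equivalent channel can be computed from G_n as in the short numpy sketch below; the matrix G is assumed to be available from the filter computation above, and the 25 dB truncation rule in the comment is only one possible reading of the discussion that follows.

```python
import numpy as np

def average_tap_power(G):
    """Average power of the l-th off-diagonal tap of G, eq. (27)."""
    N = G.shape[0]
    return np.array([np.mean(np.abs(np.diag(G, l)) ** 2) for l in range(N)])

# Taps more than ~25 dB below the main diagonal can be discarded (cf. Fig. 3):
# p_bar = average_tap_power(G)
# J = np.sum(10 * np.log10(p_bar / p_bar[0]) > -25)
```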
It is apparent from the figure that, for the equivalent channel matrix G_n, the power of the zeroth tap (the main diagonal elements) is significantly larger than the power of the remaining taps (the off-diagonal elements). In other words, the equivalent system is in its minimum phase state. Therefore, we can simplify the LLR calculation by discarding channel taps with negligible power without appreciably affecting system performance. The fifth-iteration bit error rate (BER) of the above system at E_b/N_0 = 11 dB is shown in Fig. 3(b) as a function of the number of equivalent channel taps, J, used during detection. From Fig. 3(b), we have three observations. First, the proposed sequence-based detection (J > 1) outperforms classical symbol-based detection (J = 1). Second, for this example, no additional performance gain is achieved when J > 4. Third, increasing J from 1 to 2 does not introduce an apparent performance gain. This can be intuitively explained by the fact that the tentative soft decisions used by the second
channel tap are of low quality because the third and fourth channel taps are neglected. Comparing the BER results in Fig. 3(b) with the channel tap powers in Fig. 3(a), we conclude that neglecting equivalent channel taps whose power is at least 25 dB below the dominant tap power renders almost the same performance as considering all the channel taps. Thus the algorithm can be simplified by considering only the dominant channel taps, without negatively affecting system performance. The complexities of the original method and the simplified method are analyzed in the next subsection.

Fig. 3. (a) Tap power of the equivalent system; (b) BER (BPSK) vs. the number of equivalent channel taps used during detection.

Another source of computational burden is the evaluation of the MMSE-BDFE filters, which has a complexity similar to that of the symbol-based methods. From (17), the formulation of each pair of W_n and G_n requires two matrix inversions and one Cholesky decomposition. Noting that the calculation of the matrices W_n and G_n requires only the second-order a priori information, Φ_n, we can reduce the computational complexity by using a fixed covariance matrix for all symbols. If Φ = diag[σ_{s_1}², · · · , σ_{s_N}²] is used for the formulation of W and G, then only two matrix inversions and one Cholesky decomposition are needed for all the symbols within one block. From (18), we then have an equivalent system with simplified calculations,

    r_n = W [y - H\bar{s}_n] = G [s - \bar{s}_n] + e.   (28)

In this simplified system representation, the fixed matrices W and G are used in combination with the symbol-dependent mean vectors, s̄_n. Analysis and simulation show that the performance loss due to replacing the dynamic filters with fixed filters is negligible for practical system configurations. On the other hand, as shown in the next subsection, a significant amount of computation is saved by using fixed filters for all data symbols within one block.
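To tie the pieces together, the self-contained numpy sketch below outlines one equalizer pass of the simplified algorithm for BPSK: fixed W and G computed once per block, symbols processed from n = N down to 1 so that current-pass soft decisions feed the rows below n, and only the J rows nearest n retained in (22). The function name, the J-row truncation rule, the variance floor, and the BPSK restriction are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def bdfe_pass_bpsk(y, H, llr_in, N0, s_hat_prev, J=4, var_floor=1e-5):
    """One equalizer pass of the simplified SISO BDFE for BPSK (illustrative sketch).

    y: received block of eq. (2); H: N x N channel matrix; llr_in: a priori LLRs
    L_I(x_n) (zeros in the first iteration); s_hat_prev: soft decisions of the
    previous iteration (zeros in the first iteration); J: dominant taps kept in (22).
    Returns the extrinsic LLRs L_O(x_n) and the updated soft decisions.
    """
    N = len(y)
    p1 = 1.0 - 1.0 / (1.0 + np.exp(np.clip(llr_in, -30.0, 30.0)))   # P(x_n = +1), eq. (6)
    s_bar = 2.0 * p1 - 1.0                                          # a priori means, eq. (5)
    var = np.maximum(1.0 - np.abs(s_bar) ** 2, var_floor)

    # Fixed per-block filters, eqs. (13)-(17), with Phi = diag(var) as in Sec. III-C.
    Phi = np.diag(var)
    A = np.diag(1.0 / var) + (H.conj().T @ H) / N0
    Ln = np.linalg.cholesky(A).conj().T                 # A = Ln^H Ln, eq. (15)
    sigma2 = 1.0 / np.abs(np.diag(Ln)) ** 2             # sigma_k^2 = d_{k,k}^{-1}
    G = Ln / np.diag(Ln)[:, None]                       # unit-diagonal U = B + I
    W = G @ Phi @ H.conj().T @ np.linalg.inv(H @ Phi @ H.conj().T + N0 * np.eye(N))

    llr_out = np.zeros(N)
    s_hat = np.asarray(s_hat_prev, dtype=float).copy()
    for n in range(N - 1, -1, -1):                      # detect s_N, ..., s_1
        sb = s_bar.copy(); sb[n] = 0.0                  # own prior excluded, eq. (7)
        r = W @ (y - H @ sb)                            # eq. (28)
        metric = {}
        for s in (+1.0, -1.0):                          # hypotheses for s_n
            fb = s_hat - s_bar                          # soft feedback (old and new decisions)
            fb[n] = s                                   # mean of s_n is zeroed in (7)
            m = 0.0
            for k in range(max(0, n - J + 1), n + 1):   # only rows within J taps of n
                rho = r[k] - G[k] @ fb                  # eq. (22); G is upper triangular
                m += np.abs(rho) ** 2 / sigma2[k]
            metric[s] = m
        llr_post = metric[-1.0] - metric[+1.0] + llr_in[n]   # eq. (25) for BPSK
        llr_out[n] = llr_post - llr_in[n]                    # extrinsic LLR, eq. (26)
        s_hat[n] = np.tanh(llr_post / 2.0)                   # soft decision, eq. (9)
    return llr_out, s_hat
```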
D. Complexity Analysis

The computational complexity of the proposed SISO equalizer is analyzed and compared to that of linear MMSE algorithms. The complexity of the equalizer is mainly incurred by two sources: the sequence-based calculation of the soft output, and the formulation of the MMSE-BDFE filters.
From (22), the calculation of each metric, |ρ_{n,k}(s_n)|², requires N − k + 2 complex multiplications. The original sequence-based method requires nM such metrics for the evaluation of each bit LLR. Therefore, for a block with N data symbols, the number of complex multiplications required in one iteration is on the order of

    M \log_2 M \sum_{n=1}^{N} \sum_{k=1}^{n} (N - k + 2) + O(1) = M \log_2 M \, \frac{N^3 + 3N^2 + 2N}{3} + O(1) = O\!\left( \frac{M N^3 \log_2 M}{3} \right).

For the simplified algorithm, only JM metrics are required for each bit LLR, with J