Performance of Adaptive Filtering Algorithms: A Comparative Study

Yuu-Seng Lau+, Zahir M. Hussain++ and Richard Harris+++
Centre for Advanced Technology in Telecommunications (CATT)
School of Electrical and Computer Engineering
RMIT University, Melbourne, Victoria, Australia.
Email: + [email protected], ++ [email protected], +++ [email protected]

Abstract— This paper presents a comparative performance study between the recently proposed time-varying LMS (TV-LMS) algorithm and two other main adaptive approaches: the least-mean-square (LMS) algorithm and the recursive least-squares (RLS) algorithm. Three performance criteria are utilized in this study: the algorithm execution time, the minimum mean-squared error (MSE), and the required filter order. The study showed that the selection of the filter order rests on a trade-off between the MSE performance and the algorithm execution time. Results also showed that the execution time of the RLS algorithm increases more rapidly with the filter order than that of the other algorithms.

I. INTRODUCTION

Adaptive algorithms have been extensively studied in the past few decades and have been widely used in many arenas including biomedical, image and speech processing, communication signal processing, and many other applications [1], [2]. The most popular adaptive algorithms are the least-mean-square (LMS) algorithm and the recursive least-squares (RLS) algorithm. In the communication industry, a large body of literature has proposed the use of the LMS and RLS algorithms for channel estimation, equalization, and demodulation [3], [4], [5]. The performance of these adaptive algorithms is highly dependent on their filter order and the signal conditions. Furthermore, the performance of the LMS algorithm also depends on the selected convergence parameter µ. As for the RLS algorithm, it also depends on λ (commonly known as the "forgetting factor" or "exponential weighting factor" [2]). In this work we specifically use the version of RLS known as the "growing window" RLS algorithm (with λ = 1) [2].

Recently, a new version of the LMS algorithm with a time-varying convergence parameter has been proposed [6]. The time-varying LMS (TV-LMS) algorithm has shown better performance than the conventional LMS algorithm in terms of faster convergence and lower MSE. The TV-LMS algorithm is based on a time-varying convergence parameter µn with a general power-decaying law. The basic idea of the TV-LMS algorithm is to exploit the fact that the LMS algorithm needs a larger convergence parameter value to speed up the convergence of the filter coefficients to their optimal values; after the coefficients converge, the convergence parameter should be small for better estimation accuracy [2]. In other words, we set the convergence parameter to a large value in the initial state in order to speed up the algorithm's convergence. As time passes, the parameter is adjusted to a smaller value so that the adaptive filter attains a smaller mean-squared error.

In this paper we conduct a comparative study of the conventional LMS algorithm, the TV-LMS algorithm, and the RLS algorithm in terms of their execution time, filter order, and MSE performance.

II. ADAPTIVE ALGORITHMS

A. The Adaptive LMS Algorithm

In the conventional adaptive LMS algorithm, the weight vector coefficients w(n) of the FIR filter are updated according to the formula [1], [2], [4], [5]:

w(n) = w(n − 1) + µ e(n) y(n)    (1)

where w(n) = [w0(n) w1(n) ... wM(n)] (M + 1 being the filter length), µ is the convergence parameter (sometimes referred to as the step-size), e(n) = d(n) − z(n) is the output error (z(n) being the filter output), and d(n) is the reference signal. Note that z(n) = w(n − 1) y^T(n) = x̂(n), where x̂(n) is an estimate of the original signal x(n) and y(n) = [y(n) y(n − 1) ... y(n − M)] is the input signal vector of the filter.

B. The Time-Varying LMS (TV-LMS) Algorithm

The time-varying LMS algorithm works in the same manner as the conventional LMS algorithm, except for a time-dependent convergence factor µn. For narrow-band signals, we must first define the optimal µo for the centre frequency fo. To do so, the conventional LMS algorithm is used (with a single tone of frequency fo) to find the optimal value of µ at that frequency. This optimal value µo is used to update the time-varying convergence parameter µn according to the following formula [6]:

µn = αn × µo    (2)

where αn is a decaying factor. We will consider the following decaying law [6]:

αn = C^(1/(1 + a·n^b))    (3)
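To make the decay law concrete, the following short pure-Python sketch tabulates αn and µn for a few values of n. The constants C = 2, a = 0.01, b = 0.7 are the values used in the simulations below, while µo = 10^−4 is an assumed placeholder, not a value from the paper:

```python
# Decaying factor of Eq. (3): alpha_n = C ** (1 / (1 + a * n**b)).
# It starts at alpha_0 = C and decays monotonically toward 1, so the
# step-size mu_n = alpha_n * mu_o of Eq. (2) shrinks from C * mu_o
# toward the optimal value mu_o found for the centre frequency.

def alpha(n, C=2.0, a=0.01, b=0.7):
    return C ** (1.0 / (1.0 + a * n ** b))

mu_o = 1e-4  # assumed placeholder for the optimal step-size
for n in (0, 10, 100, 1000, 10000):
    print(n, round(alpha(n), 4), alpha(n) * mu_o)
```

Note that with C = 1 the factor is identically 1 for all n, recovering the conventional fixed-µ LMS algorithm, as stated below.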

where C, a, and b are positive constants that determine the magnitude and the rate of decrease of αn. According to the above law, C has to be a positive number larger than 1. When C = 1, αn equals 1 and the new algorithm reduces to the conventional LMS algorithm. An outline of the TV-LMS algorithm is given by the following steps [6]:

z(n) = w(n − 1) y^T(n)    (4)

e(n) = d(n) − z(n)    (5)

αn = C^(1/(1 + a·n^b))    (6)

µn = αn × µo    (7)

w(n) = w(n − 1) + µn e(n) y(n)    (8)

where ^T indicates matrix transposition.

Fig. 1. Mean-squared error (MSE) performance of the LMS algorithm with different µ (SNR = 2 dB, single-tone at fo = 100 Hz); curves shown for filter orders M = 10, 25, 50, 75, 100.
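Steps (4)–(8) can be combined into a complete adaptive loop. The following pure-Python sketch is an illustration under stated assumptions, not the authors' implementation; the filter length, step-size, and test signal are chosen for demonstration only:

```python
import math

def tv_lms(d, y, M=10, mu_o=1e-4, C=2.0, a=0.01, b=0.7):
    """TV-LMS adaptive FIR filter following steps (4)-(8).

    d: reference signal d(n); y: filter input y(n).
    Returns the filter output z(n) for every sample.
    """
    w = [0.0] * (M + 1)                    # weight vector w(n)
    out = []
    for n in range(len(y)):
        # input vector y(n) = [y(n), y(n-1), ..., y(n-M)], zero-padded
        yv = [y[n - j] if n - j >= 0 else 0.0 for j in range(M + 1)]
        z = sum(wi * yi for wi, yi in zip(w, yv))        # (4) output
        e = d[n] - z                                     # (5) error
        alpha = C ** (1.0 / (1.0 + a * n ** b))          # (6) decay factor
        mu = alpha * mu_o                                # (7) mu_n
        w = [wi + mu * e * yi for wi, yi in zip(w, yv)]  # (8) update
        out.append(z)
    return out

# Example: adapt toward a clean single tone (illustrative parameters).
tone = [math.sin(2 * math.pi * 0.05 * t) for t in range(2000)]
z = tv_lms(tone, tone, M=10, mu_o=0.01)
```

Setting C = 1 makes αn = 1 for all n, so the same loop performs the conventional LMS update (1).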

C. The RLS Algorithm

Compared to the LMS algorithm, the RLS algorithm has the advantage of fast convergence, but this comes at the cost of increased complexity. Hence, the RLS algorithm requires a longer computation time (see Section III) and exhibits a higher sensitivity to numerical instability [1], [2], [3]. In this paper, the RLS algorithm takes the following form [1], [2], [5]:

z(n) = w(n − 1) y^T(n)    (9)

e(n) = d(n) − z(n)    (10)

k(n) = P(n − 1) y^H(n) / (λ + y(n) P(n − 1) y^H(n))    (11)

P(n) = [P(n − 1) − k(n) y(n) P(n − 1)] / λ    (12)

w(n) = w(n − 1) + e(n) k(n)    (13)

where ^H indicates the Hermitian (conjugate) transpose and P(n) is the inverse of the input correlation matrix.

Fig. 2. Mean-squared error (MSE) performance for the time-varying LMS (TV-LMS) algorithm with different µo (SNR = 2 dB, single-tone at fo = 100 Hz, C = 2, a = 0.01, b = 0.7); curves shown for filter orders M = 10, 25, 50, 75, 100.

III. SIMULATION RESULTS

In this simulation the input signal for all algorithms has the form y(t) = x(t) + n(t), where n(t) is white Gaussian noise with 2 dB power and x(t) is the original signal, assumed to be a single-tone sinusoid. Fig. 1 and Fig. 2 show the mean-squared error performance of the TV-LMS and the conventional LMS algorithms for different values of µ and different filter orders. The mean-squared error (MSE) for each convergence parameter is calculated as follows:

MSE = (1/N) Σ_{n=0}^{N} [x(n) − x̂(n)]²    (14)
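For completeness, the RLS recursion (9)-(13) together with the MSE measure (14) can be sketched in pure Python for the real-valued case (where the Hermitian transpose reduces to the ordinary transpose). Here the gain k(n) is driven by the input vector y(n), and the standard initialisation P(0) = δI with an assumed constant δ is used, since no initialisation is specified above:

```python
import math

def rls(d, y, M=10, lam=1.0, delta=100.0):
    """Growing-window RLS (lam = 1) following Eqs. (9)-(13), real-valued.

    d: reference signal; y: filter input. Returns the filter output z(n).
    delta sets the assumed initialisation P(0) = delta * I.
    """
    L = M + 1
    w = [0.0] * L
    P = [[delta if i == j else 0.0 for j in range(L)] for i in range(L)]
    out = []
    for n in range(len(y)):
        yv = [y[n - j] if n - j >= 0 else 0.0 for j in range(L)]
        z = sum(wi * yi for wi, yi in zip(w, yv))            # (9)  output
        e = d[n] - z                                         # (10) error
        Py = [sum(P[i][j] * yv[j] for j in range(L)) for i in range(L)]
        denom = lam + sum(yv[i] * Py[i] for i in range(L))
        k = [p / denom for p in Py]                          # (11) gain
        yP = [sum(yv[i] * P[i][j] for i in range(L)) for j in range(L)]
        P = [[(P[i][j] - k[i] * yP[j]) / lam                 # (12) update P
              for j in range(L)] for i in range(L)]
        w = [wi + e * ki for wi, ki in zip(w, k)]            # (13) update w
        out.append(z)
    return out

def mse(x, x_hat):
    """Mean-squared error between original x and estimate x_hat, Eq. (14)."""
    return sum((a - b) ** 2 for a, b in zip(x, x_hat)) / len(x)

# Example: track a clean single tone (illustrative parameters).
tone = [math.sin(2 * math.pi * 0.05 * t) for t in range(400)]
z = rls(tone, tone, M=10)
```

A large δ makes the initial P weakly regularised, so the filter adapts aggressively from the first samples; this is the usual behaviour expected of RLS and is why it converges much faster than LMS.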

where x(n) is the original signal and x̂(n) is the filter output, which represents an estimate of the original signal.

Fig. 1 and Fig. 2 show that both algorithms provide similar performance results. Their optimal µo lies in the smaller-µ region (< 0.4 × 10−3), and the performance improves as the filter order increases. Hence, for both the TV-LMS and the conventional LMS algorithm, a higher filter order should be used to obtain better MSE performance. Fig. 2 also shows that the overall performance of the TV-LMS algorithm is better than that of the conventional LMS algorithm.

Fig. 3 and Fig. 4 show the MSE performance of the TV-LMS algorithm and the conventional LMS algorithm with different filter orders. Again, both LMS algorithms provide similar performance results, although the TV-LMS algorithm performs much better than the conventional LMS algorithm in the low-filter-order region. Fig. 3 and Fig. 4 also show that both algorithms provide better MSE performance as the filter order increases.


Fig. 3. MSE performance of the LMS algorithm with different filter orders (SNR = 2 dB, single-tone at fo = 100 Hz); curves shown for µ = 0.00005, 0.0001, 0.0003.

Fig. 5. MSE for the RLS algorithm with different filter orders (SNR = 2 dB, single-tone at fo = 100 Hz, λ = 1).


Fig. 4. MSE for the time-varying LMS (TV-LMS) algorithm with different filter orders (SNR = 2 dB, single-tone at fo = 100 Hz, C = 2, a = 0.01, b = 0.7); curves shown for µo = 0.00005, 0.0001, 0.0003.

Fig. 6. Computation time for different adaptive algorithms with filter order M = 10.

Fig. 5 shows the MSE performance of the RLS algorithm (λ = 1) with different filter orders. As the RLS algorithm is highly sensitive to numerical instability [1], [2], [3], the filter order severely affects its performance. Fig. 5 shows that the RLS performance does not improve as the filter order increases; its optimal filter order in this case is around 50. Hence, a careful selection of the filter order is needed for optimal performance.

Fig. 6, Fig. 7, and Fig. 8 show the computation time of the different algorithms for different filter orders. The computation times of the conventional LMS algorithm and the TV-LMS algorithm are similar, and both are much less than that of the RLS algorithm. These figures also show that the RLS computation time increases rapidly and non-linearly with the filter order.
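This growth of RLS cost with filter order can be reproduced with a rough timing sketch. Absolute times are machine-dependent, and the per-sample routines below are simplified illustrations (an O(M) LMS update versus an O(M²) RLS update), not the timed code used for the figures:

```python
import time

def lms_update(w, yv, e, mu):
    # O(M) operations: one scaled vector addition per sample
    return [wi + mu * e * yi for wi, yi in zip(w, yv)]

def rls_update(P, w, yv, e, lam=1.0):
    # O(M^2) operations: gain vector plus rank-1 update of the
    # (M+1) x (M+1) inverse-correlation matrix P
    L = len(w)
    Py = [sum(P[i][j] * yv[j] for j in range(L)) for i in range(L)]
    denom = lam + sum(yv[i] * Py[i] for i in range(L))
    k = [p / denom for p in Py]
    yP = [sum(yv[i] * P[i][j] for i in range(L)) for j in range(L)]
    P = [[(P[i][j] - k[i] * yP[j]) / lam for j in range(L)] for i in range(L)]
    return P, [wi + e * ki for wi, ki in zip(w, k)]

def time_updates(M, iters=100):
    """Time `iters` LMS and RLS updates for filter order M."""
    L = M + 1
    w, yv = [0.0] * L, [1.0] * L
    P = [[1.0 if i == j else 0.0 for j in range(L)] for i in range(L)]
    t0 = time.perf_counter()
    for _ in range(iters):
        w = lms_update(w, yv, 0.1, 1e-4)
    t_lms = time.perf_counter() - t0
    t0 = time.perf_counter()
    for _ in range(iters):
        P, w = rls_update(P, w, yv, 0.1)
    return t_lms, time.perf_counter() - t0

for M in (10, 50, 100):
    t_lms, t_rls = time_updates(M)
    print(M, t_lms, t_rls)  # RLS time grows roughly quadratically with M
```

The LMS column grows linearly with M while the RLS column grows roughly quadratically, mirroring the non-linear growth reported in Fig. 6 through Fig. 8.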


Fig. 7. Computation time for different adaptive algorithms with filter order M = 50.


Fig. 8. Computation time for different adaptive algorithms with filter order M = 100.

IV. CONCLUSIONS

A new structure for the least-mean-square (LMS) algorithm with a decaying, time-varying law for the convergence parameter has recently been proposed and shown to outperform the conventional LMS algorithm in terms of faster convergence and lower mean-squared error. In this work we presented a comprehensive comparative study between this time-varying LMS (TV-LMS) algorithm and two other well-known algorithms: the conventional LMS algorithm and the recursive least-squares (RLS) algorithm. The study is based on three performance criteria: the execution time, the minimum mean-squared error (MSE), and the required filter order.

In a stationary white Gaussian noise environment with the filter order M set to a larger value (e.g., M = 100), simulations showed that the TV-LMS algorithm provides better MSE performance than the conventional LMS and RLS algorithms. However, when a smaller filter order is chosen (e.g., M = 10), the RLS algorithm provides the best MSE performance compared to the TV-LMS and conventional LMS algorithms.

In some applications in the communication industry, the complexity and the computation time of the algorithm are important considerations. When the computation time is vital to the application, the TV-LMS and the conventional LMS algorithms are a better choice than the RLS algorithm, since both require less computation time. In narrow-band applications, such as single-tone noise reduction, the RLS algorithm does not provide any further benefit when the filter order is increased; on the contrary, it requires more computation time. According to our simulations, the best scenario is to use the TV-LMS algorithm with a larger filter order; this provides optimal MSE performance with a computation time close to that of the LMS algorithm. As a result, a careful selection of the filter order is needed, depending on the signal conditions and the trade-off between the algorithm's complexity and its computation time.

REFERENCES

[1] S. Haykin, Adaptive Filter Theory, Prentice Hall, 1986.
[2] M. H. Hayes, Statistical Digital Signal Processing and Modeling, John Wiley & Sons, 1996.
[3] H. Leung and J. Lam, "Design of demodulator for the chaotic modulation communication system," IEEE Transactions on Circuits and Systems I: Fundamental Theory and Applications, vol. 44, 1997.
[4] S. Nahm and W. Sung, "Time-domain equalization for orthogonal multi-carrier CDMA system," Global Telecommunications Conference, vol. 30, 1996.
[5] D. N. Kalofonos, M. Stojanovic, and J. G. Proakis, "On the performance of adaptive MMSE detectors for MC-CDMA system in fast fading Rayleigh channels," The Ninth IEEE International Symposium on Personal, Indoor and Mobile Radio Communications, vol. 3, 1998.
[6] Y.-S. Lau, Z. M. Hussain, and R. Harris, "A time-varying convergence parameter for the LMS algorithm in the presence of white Gaussian noise," submitted to the Australian Telecommunications, Networks and Applications Conference (ATNAC), Melbourne, 2003.
