A New Variable Step-size Block LMS Algorithm for Non-stationary Sparse Systems

Cemil Turan¹,², Mohammad Shukri Salman¹, Alaa Eleyan¹
¹ Electrical and Electronic Engineering Department, Mevlana (Rumi) University, Konya, Turkey
{cturan,mssalman,aeleyan}@mevlana.edu.tr
² Department of Computer Engineering, Suleyman Demirel University, Almaty, Kazakhstan
[email protected]
Abstract—The conventional LMS algorithm has been successfully used in adaptive filtering for the system identification (SI) problem. In telecommunications, acoustic echo SI problems usually have relatively large filter lengths that take a long time to be estimated. To overcome this problem, the block least-mean-square (BLMS) algorithm has been proposed, in which the filter coefficients are updated for blocks of input data instead of for each input sample. Exploiting this advantage, we propose a new block-LMS algorithm with a function controlled variable step-size LMS (FC-VSSLMS) for non-stationary sparse system identification. The performance of the proposed algorithm is compared to those of the original BLMS and the reweighted zero-attracting block LMS (RZA-BLMS) in terms of convergence rate and mean-square deviation (MSD), in additive white Gaussian noise (AWGN) and additive uniformly distributed noise (AUDN). Simulations show that the proposed algorithm has a better performance than the other algorithms.

Keywords—System Identification, Sparse Systems, Non-stationary Systems, Block LMS Algorithm.
I. INTRODUCTION

LMS-type algorithms are widely used in various applications of adaptive filtering due to their simplicity, ease of implementation, computational efficiency, and high performance under a variety of operating conditions [1]. However, because of their constant step-size, they usually suffer a trade-off between convergence speed and misadjustment [2]. To obtain a better performance from the LMS algorithm, several variable step-size LMS-type algorithms have been developed [3], [4], [5]. One of these is the function controlled variable step-size LMS (FC-VSSLMS) algorithm proposed in [6], which is based on selecting an appropriate function to control the step-size. System identification (SI) is one of the most frequent applications of adaptive filtering (see Fig. 1). In particular, echo cancellation in telephony networks is an important SI problem in which the system can be assumed sparse; that is, it contains only a few non-zero coefficients [7]. The sparsity of such systems can be employed to improve the performance of the adaptive filter [8].

Fig. 1. Block diagram of a system identification process.
A sparse function controlled variable step-size LMS (SFC-VSSLMS) algorithm was proposed in [9]. The algorithm combines the advantages of sparsity and a variable step-size, and has achieved remarkable results in sparse system identification. The conventional LMS algorithm has a high computing time in adaptive filtering applications with large filter lengths; for instance, active noise control, channel equalization and acoustic echo cancellation require a sufficiently high-order finite impulse response (FIR) filter. For such applications, the block-LMS (BLMS) algorithm has been proposed in [10], [11], [12] to improve the performance of the LMS algorithm. In this algorithm, the adaptive filter processes the data block-wise in order to gain computational advantages: whereas the conventional LMS algorithm updates the filter parameters for each data sample, the BLMS algorithm adjusts the weights once per block of data.
Fig. 2. Block diagram of a general BLMS algorithm.
In [13], a new algorithm that combines the advantages of sparsity, variable step-size and the block-LMS algorithm has been proposed. It is a block implementation of the SFC-VSSLMS algorithm, in which an approximate l0-norm penalty function is added to the cost function of the FC-VSSLMS algorithm. In this paper, we investigate the performance of this recently proposed sparse block function controlled variable step-size least-mean-square (SBFC-VSSLMS) algorithm [13] in non-stationary environments under different parameters and noise types.

This paper is organized as follows. In Section II, brief reviews of the BLMS and SBFC-VSSLMS algorithms are provided. In Section III, simulation results comparing the performance of the proposed algorithm to those of the other algorithms in a non-stationary system identification setting with additive white Gaussian noise (AWGN) and additive uniformly distributed noise (AUDN) are discussed. Finally, conclusions are drawn in Section IV.

II. THE PROPOSED ALGORITHM
In system identification, a linear system with input-tap vector x(n) = [x(n), x(n−1), ..., x(n−N+1)]^T has a desired output d(n) given by

    d(n) = h^T x(n) + v(n)

where h = [h_0, ..., h_{N−1}]^T is the unknown system coefficient vector of length N, (·)^T denotes transposition and v(n) is the observation noise. Consider the update equation of the well-known LMS algorithm,

    w(n+1) = w(n) + µ e(n) x(n)

where w(n) = [w_0(n), w_1(n), ..., w_{N−1}(n)]^T is the filter-tap weight vector and e(n) = d(n) − w^T(n) x(n) is the instantaneous error. In a BLMS algorithm (see Fig. 2), the input signal is subdivided into blocks of L samples to form the input data matrix

    X(k) = [x(kL), x(kL+1), ..., x(kL+L−1)]^T

and the filter-tap weights are updated once after each block of data samples has been collected, where k denotes the block index. The desired-response and error vectors are written, respectively, as

    d(k) = [d(kL), d(kL+1), ..., d(kL+L−1)]^T
    e(k) = [e(kL), e(kL+1), ..., e(kL+L−1)]^T

where e(k) = d(k) − X(k) w(k). The update equation of the BLMS algorithm can then be written as [14]:

    w(k+1) = w(k) + µ_B X^T(k) e(k).
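The BLMS formulation above can be sketched in a few lines of NumPy. The function name and interface below are illustrative, not from the paper; the update is exactly w(k+1) = w(k) + µ_B X^T(k) e(k):

```python
import numpy as np

def blms_identify(x, d, N, L, mu_B, num_blocks):
    """Estimate an N-tap FIR system from input x and desired output d
    with the block LMS update w(k+1) = w(k) + mu_B * X(k)^T e(k)."""
    w = np.zeros(N)
    for k in range(num_blocks):
        # Build the L x N data matrix X(k); row i holds the regressor
        # [x(kL+i), x(kL+i-1), ..., x(kL+i-N+1)] (zeros before time 0).
        X = np.array([[x[k * L + i - j] if k * L + i - j >= 0 else 0.0
                       for j in range(N)] for i in range(L)])
        e = d[k * L:k * L + L] - X @ w   # block error vector e(k)
        w = w + mu_B * X.T @ e           # one weight update per block
    return w
```

Note that each pass touches L samples but performs only one weight update, which is the source of the computational saving over sample-by-sample LMS.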
In most applications, the block length is naturally chosen equal to the filter length: when L is greater than N, the gradient estimate uses more information than the filter can exploit, resulting in redundant operations; when L is less than N, the filter is longer than the input block being processed, which wastes filter weights and hence computations (multiplications/divisions and/or additions/subtractions). We therefore use the same value for the block length and the filter length in our simulations. The update equation of the SFC-VSSLMS algorithm proposed in [9] has been modified and is given as:

    w(n+1) = w(n) + µ(n) e(n) x(n) − ρ(n) sgn[w(n)] e^(−λ|w(n)|).
Similarly to the block implementation of the LMS algorithm, the update equation of the proposed algorithm can be written as:

    w(k+1) = w(k) + µ_B(k) X^T(k) e(k) − ρ_B(k) sgn[w(k)] e^(−λ|w(k)|)

where ρ_B(k) is the block sparsity-aware parameter, which depends on the positive constant ρ, and µ_B(k) is the block variable step-size, given in [6] as

    µ_B(k+1) = α_B µ_B(k) + s_B f(k) ‖e(k)‖² / e²_ms(k)

where 0 < α_B < 1 and s_B > 0 are positive constants, ‖e(k)‖² is the squared norm of the error vector, and e²_ms(k) is the estimated mean-square error (MSE), defined as

    e²_ms(k) = β_B e²_ms(k−1) + (1 − β_B) ‖e(k)‖²

where β_B is a weighting factor with 0 < β_B < 1 and f(k) is the control function given in [6]. A summary of the algorithm is given in Table I.

TABLE I
SUMMARY OF THE SBFC-VSSLMS ALGORITHM

    define N, L, α_B, β_B, λ, s_B and ρ
    initialize w(0) = 0
    for k = 1, 2, ...
        w(k+1) = w(k) + µ_B(k) X^T(k) e(k) − ρ_B(k) sgn[w(k)] e^(−λ|w(k)|)
        where e(k) = d(k) − X(k) w(k) and
        µ_B(k+1) = α_B µ_B(k) + s_B f(k) ‖e(k)‖² / e²_ms(k)
        e²_ms(k) = β_B e²_ms(k−1) + (1 − β_B) ‖e(k)‖²
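One iteration of Table I can be sketched as follows. This is only an illustrative sketch: the exact control function f(k) is defined in [6] and is replaced here by a hypothetical placeholder, and the coupling ρ_B(k) = ρ·µ_B(k) is an assumption, not a form confirmed by the paper:

```python
import numpy as np

def sbfc_vsslms_step(w, X, d, mu_B, e_ms2, rho=5e-5, lam=8.0,
                     alpha_B=0.99, beta_B=0.99, s_B=5e-5):
    """One block iteration in the spirit of Table I (sketch only)."""
    e = d - X @ w                                  # block error e(k)
    e2 = np.dot(e, e)                              # ||e(k)||^2
    e_ms2 = beta_B * e_ms2 + (1.0 - beta_B) * e2   # estimated MSE recursion
    f_k = 1.0 - np.exp(-e2)                        # placeholder control function (true f(k) is in [6])
    rho_B = rho * mu_B                             # assumed sparsity-aware parameter
    # gradient term plus zero-attracting (approximate l0-norm) term
    w = (w + mu_B * X.T @ e
         - rho_B * np.sign(w) * np.exp(-lam * np.abs(w)))
    # variable step-size recursion
    mu_B = alpha_B * mu_B + s_B * f_k * e2 / max(e_ms2, 1e-12)
    return w, mu_B, e_ms2
```

The zero-attractor term pulls small (near-zero) coefficients toward zero while leaving large coefficients almost untouched, since e^(−λ|w|) decays quickly with |w|.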
III. SIMULATION RESULTS

In this section, the performance of the SBFC-VSSLMS algorithm is compared to those of the BLMS and RZA-BLMS algorithms in non-stationary sparse system identification settings in the presence of AWGN and AUDN sequences. All experiments are averaged over 200 independent runs. The system is assumed to change slowly in time, representing a time-varying unknown system defined in [15] as

    h(n) = ζ h(n−1) + √(1 − ζ²) s(n)

where ζ = 0.99999, h(n) = [h_0(n), h_1(n), ..., h_{N−1}(n)]^T, and s(n) = [s_0(n), s_1(n), ..., s_{N−1}(n)]^T is a random sequence with elements drawn from a normal distribution with zero mean and unit variance.

In the first experiment, the performance of the SBFC-VSSLMS algorithm is compared to those of the BLMS and RZA-BLMS algorithms with a filter length of 20 coefficients. The unknown system is assumed to be initially sparse, with 2 coefficients set to '1' and 18 coefficients set to '0' (90% sparsity), and then evolves according to the time-varying model above. The observation noise (v(n) in Fig. 1) and the input signal are assumed to be white Gaussian random sequences with a 10 dB signal-to-noise ratio (SNR). The performance measures used here are the mean-square deviation,

    MSD = E‖h − w(k)‖²,

and the convergence rate. Simulations are done with the following parameters. For the BLMS: µ = 0.001. For the RZA-BLMS: µ = 10⁻³ and ρ = 10⁻³. For the SBFC-VSSLMS algorithm: µ(0) = 0.001, α = 0.99, β = 0.99, γ = 0.0001, L = 400, λ = 8 and ρ = 5×10⁻⁵. The optimum ρ for each algorithm is found by extensive simulations and shown in Fig. 3 (there is no ρ parameter in the BLMS algorithm, hence its MSD graph is almost constant). Fig. 4 provides the MSD vs. iteration number of the three algorithms. It can be seen from the figure that although the SBFC-VSSLMS algorithm has the same convergence rate as the other algorithms, it has a much lower MSD. In addition, the same experiment is repeated with the same parameters but with different levels of sparsity. Table II shows that the SBFC-VSSLMS algorithm always outperforms the other algorithms.

Fig. 3. MSD vs. ρ of the SBFC-VSSLMS, BLMS and RZA-BLMS algorithms.
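The time-varying system model of [15] and the MSD measure used above can be sketched as follows; the function names are illustrative, and a single realization drops the expectation in the MSD (the paper averages over 200 independent runs):

```python
import numpy as np

def evolve_system(h, zeta=0.99999, rng=None):
    """Random-walk model for a slowly time-varying system [15]:
    h(n) = zeta*h(n-1) + sqrt(1 - zeta^2)*s(n), with s(n) ~ N(0, I)."""
    rng = np.random.default_rng() if rng is None else rng
    return zeta * h + np.sqrt(1.0 - zeta ** 2) * rng.standard_normal(h.size)

def msd_db(h, w):
    """Single-run mean-square deviation ||h - w||^2 expressed in dB."""
    return 10.0 * np.log10(np.sum((h - w) ** 2))
```

With ζ this close to 1, the per-sample perturbation has standard deviation √(1 − ζ²) ≈ 0.0045, so the system drifts slowly while keeping its coefficient power roughly constant.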
Fig. 4. Steady-state behavior of the SBFC-VSSLMS, BLMS and RZA-BLMS algorithms in AWGN with 90% sparsity.

TABLE II
CONVERGENCE RATE AND MSD COMPARISONS OF THE ALGORITHMS FOR DIFFERENT SPARSITY LEVELS IN AWGN

                   95% sparsity             75% sparsity             50% sparsity
    Algorithm      Conv. (itr.)  MSD (dB)   Conv. (itr.)  MSD (dB)   Conv. (itr.)  MSD (dB)
    BLMS           250           −29.8      280           −29.8      280           −29.8
    RZA-BLMS       250           −31.3      300           −30.5      320           −30.1
    SBFC-VSSLMS    250           −33.8      280           −32.9      290           −32.2

In the second experiment, the SBFC-VSSLMS algorithm is compared to the other algorithms with the same settings and parameters as in experiment 1, but with additive uniformly distributed random noise (AUDN). Fig. 5 gives the MSD vs. iteration number for the three algorithms. Although the MSD performance of all algorithms is worse than in the first experiment (due to the nature of the additive noise), the SBFC-VSSLMS algorithm still outperforms the other algorithms.

Fig. 5. Steady-state behavior of the SBFC-VSSLMS, BLMS and RZA-BLMS algorithms in AUDN with 90% sparsity.
IV. CONCLUSIONS

In this paper, the performance of the recently proposed SBFC-VSSLMS algorithm was investigated in non-stationary sparse system identification settings with AWGN and AUDN sequences. The SBFC-VSSLMS algorithm showed significant performance gains over the BLMS and RZA-BLMS algorithms in terms of convergence rate and MSD. The performances of all algorithms were studied and demonstrated by simulations. The results showed that although the proposed algorithm has approximately the same convergence rate, it has a much better MSD than those of the other algorithms in non-stationary sparse system identification settings.

REFERENCES

[1] D. G. Manolakis, V. K. Ingle and S. M. Kogon, Statistical and Adaptive Signal Processing, Artech House, London, 2005.
[2] D. Bismor, "Extension of LMS stability condition over a wide set of signals," International Journal of Adaptive Control and Signal Processing, 2014, DOI: 10.1002/acs.2500.
[3] W. Y. Chen and R. Haddad, "A variable step size LMS algorithm," Proceedings of the 33rd Midwest Symposium on Circuits and Systems, Calgary, vol. 1, pp. 423-426, August 1990.
[4] Y. K. Won, R. H. Park, J. H. Park and B. U. Lee, "Variable LMS algorithms using the time constant concept," IEEE Transactions on Consumer Electronics, vol. 40, no. 4, pp. 1083-1087, 1994.
[5] S. Zhang and J. Zhang, "A noise constrained VS-LMS algorithm," IEEE Information Systems for Enhanced Public Safety and Security, Munich, pp. 29-33, 2000.
[6] M. Li, L. Li and H.-M. Tai, "Variable step size LMS algorithm based on function control," Circuits, Systems, and Signal Processing, vol. 32, no. 6, pp. 3121-3130, 2013.
[7] P. A. Naylor, J. Cui and M. Brookes, "Adaptive algorithms for sparse echo cancellation," Signal Processing, vol. 86, no. 6, pp. 1182-1192, 2006.
[8] Y. Chen, Y. Gu and A. O. Hero, "Sparse LMS for system identification," IEEE International Conference on Acoustics, Speech and Signal Processing, Taipei, pp. 3125-3128, April 2009.
[9] C. Turan and M. S. Salman, "Zero-attracting function controlled VSSLMS algorithm with analysis," Circuits, Systems, and Signal Processing, 2015, DOI: 10.1007/s00034-015-9996-5.
[10] G. A. Clark, S. K. Mitra and S. R. Parker, "Block implementation of adaptive digital filters," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 29, no. 3, pp. 744-752, 1981.
[11] J. C. Lee and C. K. Un, "Block realization of multirate adaptive digital filters," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 34, no. 1, pp. 105-117, 1986.
[12] M. Lapointe, P. Fortier and H. T. Huynh, "A very fast digital realization of a time-domain block LMS filter," IEEE International Conference on Acoustics, Speech, and Signal Processing, Toronto, vol. 3, pp. 2101-2104, May 1991.
[13] C. Turan, M. S. Salman and A. Eleyan, "A block LMS-type algorithm with a function controlled variable step-size for sparse system identification," 57th International Symposium ELMAR-2015, Zadar, Croatia, September 2015.
[14] B. Farhang-Boroujeny, Adaptive Filters: Theory and Applications, Wiley, USA, 2013.
[15] P. Loganathan, "Sparseness Controlled Adaptive Algorithms for Supervised and Unsupervised System Identification," Ph.D. thesis, Imperial College London, 2011.