
IEEE SIGNAL PROCESSING LETTERS, VOL. 11, NO. 9, SEPTEMBER 2004

Improving Convergence of the NLMS Algorithm Using Constrained Subband Updates

Kong A. Lee and Woon S. Gan, Senior Member, IEEE

Abstract—In this letter, we propose a new design criterion for subband adaptive filters (SAFs). The proposed multiple-constraint optimization criterion is based on the principle of minimal disturbance, where the multiple constraints are imposed on the updated subband filter outputs. Compared to the classical fullband least-mean-square (LMS) algorithm, the subband adaptive filtering algorithm derived from the proposed criterion exhibits faster convergence under colored excitation. Furthermore, the recursive tap-weight adaptation can be expressed in a simple form comparable to that of the normalized LMS (NLMS) algorithm. We also show that the proposed multiple-constraint optimization criterion is related to another known weighted criterion. The efficacy of the proposed criterion and algorithm is examined and validated via mathematical analysis and simulation.

Index Terms—Acoustic echo cancellation, adaptive filtering, convergence, subband updates.

I. INTRODUCTION

Among various adaptive filtering algorithms, the LMS algorithm is the most popular and widely used because of its simplicity and robustness. However, the LMS algorithm suffers from slow convergence when the input signal is highly correlated. Adaptive filtering in subbands has been proposed to improve the convergence behavior of the LMS algorithm [1]–[6]. In subband adaptive filtering, the input signal and desired response are band-partitioned into almost mutually exclusive subband signals. This feature of the SAF permits the manipulation of each subband signal in such a way that their properties (e.g., variance) can be exploited [6], allowing each subband to converge almost separately for various modes [1] and thus improving the overall convergence behavior. Yet, band-edge effects limit the convergence rate of the conventional SAFs [5], [7]. The principle of minimum disturbance [6] states that, from one iteration to the next, the tap weights of an adaptive filter should be changed in a minimal manner, subject to a constraint imposed on the updated filter output. Applying this principle, we formulate a novel design criterion for the SAF as a constrained optimization problem involving multiple constraints imposed on the updated subband filter outputs. These multiple subband constraints force each of the subbands to converge almost independently. Furthermore, by virtue of critical sub-sampling, the computational load of the proposed algorithm remains almost

Manuscript received October 20, 2003; revised January 16, 2004. The associate editor coordinating the review of this manuscript and approving it for publication was Prof. Jonathon A. Chambers. The authors are with the Digital Signal Processing Laboratory, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798 (e-mail: [email protected]; [email protected]). Digital Object Identifier 10.1109/LSP.2004.833445

Fig. 1. Subband structure showing the subband desired responses, subband filter outputs, and subband error signals.

unchanged when the number of subbands is increased. Hence, the subband algorithm derived from this design criterion, called the normalized SAF (NSAF) algorithm, is expected to possess attractive convergence behavior while remaining computationally efficient. Note that the major issue considered in this letter is improving the convergence behavior of the LMS algorithm, rather than reducing its computational complexity as emphasized in the conventional SAF structure [3]–[5]. Another unique characteristic of the proposed NSAF algorithm is that the error signals are estimated in subbands, whereas the coefficients that are explicitly adapted are the fullband tap weights of the modeling filter. This adaptive weight-control mechanism is different from that in the conventional SAF structure, where each subband has its own subfilter and adaptation loop.

II. DECIMATED SUBBAND SIGNALS

Fig. 1 shows a subband structure where the desired response $d(n)$ and filter output $y(n)$ are partitioned into $N$ subbands by means of analysis filters $H_i(z)$, for $i = 0, 1, \ldots, N-1$. These subband signals are critically sub-sampled to a lower rate commensurate with their bandwidth. Note that we use the variable $n$ to index the original sequences and $k$ to index the decimated sequences. Fig. 2 shows an equivalent structure for the selected portion in Fig. 1. Assuming that the filter $\hat{\mathbf{w}}(k)$ is stationary (i.e., adaptation of its

1070-9908/04$20.00 © 2004 IEEE


Putting these equations into matrix form and solving for the Lagrange multipliers, we have

$$\boldsymbol{\lambda}(k) = 2\left[\mathbf{U}^{T}(k)\,\mathbf{U}(k)\right]^{-1}\mathbf{e}(k) \qquad (7)$$

where

Fig. 2. Input signal is split into subband signals before going through the filters $\hat{\mathbf{w}}(k)$ and decimators.

tap weights is frozen), we can transpose it to follow the analysis filter bank, as depicted in Fig. 2. Hence, the decimated filter output at each subband can be written as

$\boldsymbol{\lambda}(k) = [\lambda_0(k), \lambda_1(k), \ldots, \lambda_{N-1}(k)]^{T}$ is the Lagrange vector, $\mathbf{U}(k) = [\mathbf{u}_0(k), \mathbf{u}_1(k), \ldots, \mathbf{u}_{N-1}(k)]$ is the data matrix, and $\mathbf{e}(k) = [e_0(k), e_1(k), \ldots, e_{N-1}(k)]^{T}$ is the error vector. It is shown in the Appendix that if the frequency responses of the analysis filters do not significantly overlap, the off-diagonal elements of the matrix $\mathbf{U}^{T}(k)\mathbf{U}(k)$ are negligible. With this diagonal assumption, (7) essentially reduces to a simple form

$$y_i(k) = \mathbf{u}_i^{T}(k)\,\hat{\mathbf{w}}(k) \qquad (1)$$

where

$$\lambda_i(k) = \frac{2\,e_i(k)}{\mathbf{u}_i^{T}(k)\,\mathbf{u}_i(k)} \qquad (8)$$

Combining the results in (6) and (8), we obtain the recursive relation for updating the tap-weight vector

$$\mathbf{u}_i(k) = \left[u_i(kN),\, u_i(kN-1),\, \ldots,\, u_i(kN-M+1)\right]^{T} \qquad (2)$$

$$\hat{\mathbf{w}}(k+1) = \hat{\mathbf{w}}(k) + \mu \sum_{i=0}^{N-1} \frac{\mathbf{u}_i(k)\,e_i(k)}{\mathbf{u}_i^{T}(k)\,\mathbf{u}_i(k)} \qquad (9)$$

$\mathbf{u}_i(k)$ is the input data vector for the $i$th subband, $\hat{\mathbf{w}}(k)$ is the fullband adaptive tap-weight vector of length $M$, and the superscript $T$ denotes matrix transposition. The decimated subband error signal is then defined as $e_i(k) = d_i(k) - y_i(k)$, as shown in Fig. 1. Equations (1) and (2) indicate that, at every time instant $k$, each data vector $\mathbf{u}_i(k)$ is packed with $N$ new samples and $M - N$ old samples to produce a single sample of $y_i(k)$. These definitions of subband signals facilitate the formulation of the proposed criterion in the next section. In this and subsequent sections, we assume that all signals and filter coefficients are real.
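The packing of each data vector described above can be sketched in code. The following is an illustrative reading of (1) and (2), not the authors' implementation; all names (`subband_signal`, `w_hat`) are ours, and samples before time zero are taken as zero:

```python
# Illustrative sketch of eqs. (1)-(2): build the decimated subband data
# vector u_i(k) = [u_i(kN), u_i(kN-1), ..., u_i(kN-M+1)]^T and compute
# the decimated subband output y_i(k) = u_i^T(k) w_hat.

def data_vector(subband_signal, k, N, M):
    """Return [u_i(kN), u_i(kN-1), ..., u_i(kN-M+1)] as a list."""
    return [subband_signal[k * N - m] if k * N - m >= 0 else 0.0
            for m in range(M)]

def subband_output(subband_signal, w_hat, k, N):
    """Decimated filter output y_i(k) = u_i^T(k) w_hat, eq. (1)."""
    u = data_vector(subband_signal, k, N, len(w_hat))
    return sum(ui * wi for ui, wi in zip(u, w_hat))
```

Advancing $k$ by one slides the vector by $N$ positions, so $N$ new samples enter and $M - N$ old samples remain, as stated above.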

A positive step-size parameter $\mu$ is introduced in (9) to exercise control over the change in the tap weights. Clearly, this adaptation equation has a simple form comparable to that of the NLMS algorithm. Note that the constrained optimization criterion defined above involves $N$ equality constraints; thus, the number of subbands $N$ (i.e., the number of constraints) must be smaller than the length $M$ of the adaptive tap-weight vector. This requirement sets an upper limit on the number of allowable subbands in the NSAF algorithm.
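As a concrete reading of the recursion (9), here is a minimal sketch (ours, not the authors' code); `eps` plays the role of the small regularization constant discussed with the algorithm summary:

```python
# Sketch of the NSAF tap-weight recursion (9): one update per decimated
# instant k, given the N subband data vectors u_i(k) and subband errors
# e_i(k). With N = 1 this reduces to the familiar NLMS update.

def nsaf_update(w_hat, u_vectors, errors, mu=1.0, eps=1e-8):
    """Return w(k+1) = w(k) + mu * sum_i u_i(k) e_i(k) / (u_i^T u_i + eps)."""
    w_new = list(w_hat)
    for u_i, e_i in zip(u_vectors, errors):
        norm_sq = sum(x * x for x in u_i) + eps
        gain = mu * e_i / norm_sq
        for m, u_m in enumerate(u_i):
            w_new[m] += gain * u_m
    return w_new
```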

III. PROPOSED NSAF ALGORITHM

In this section, the convergence behavior of the proposed algorithm is analyzed based on the mean-square deviation

Based on the principle of minimum disturbance [6], we formulate the proposed criterion as a multiple-constraint optimization problem as follows. Minimize the squared Euclidean norm of the change in the tap-weight vector

$$\left\| \hat{\mathbf{w}}(k+1) - \hat{\mathbf{w}}(k) \right\|^{2} \qquad (3)$$

subject to the set of $N$

A. Convergence Analysis

$$\varepsilon(k) = E\left\{ \left\| \mathbf{w}^{o} - \hat{\mathbf{w}}(k) \right\|^{2} \right\} \qquad (10)$$

where $\mathbf{w}^{o}$ is the optimum tap-weight vector. To start the analysis, we subtract (9) from $\mathbf{w}^{o}$ and take the squared Euclidean norm of both sides. Rearranging terms and taking the expectation, the update equation can be represented in terms of the mean-square deviation

constraints imposed on the decimated filter output

$$d_i(k) = \mathbf{u}_i^{T}(k)\,\hat{\mathbf{w}}(k+1), \qquad i = 0, 1, \ldots, N-1 \qquad (4)$$

Applying the method of Lagrange multipliers, we combine (3) and (4), forming the Lagrangian function

$$J(k) = \left\| \hat{\mathbf{w}}(k+1) - \hat{\mathbf{w}}(k) \right\|^{2} + \sum_{i=0}^{N-1} \lambda_i(k)\left[ d_i(k) - \mathbf{u}_i^{T}(k)\,\hat{\mathbf{w}}(k+1) \right] \qquad (5)$$

In this quadratic function, the $\lambda_i(k)$ are the Lagrange multipliers pertaining to the multiple constraints described in (4). Taking the derivative of (5) with respect to the tap-weight vector $\hat{\mathbf{w}}(k+1)$ and setting this derivative equal to zero, we get

$$\hat{\mathbf{w}}(k+1) = \hat{\mathbf{w}}(k) + \frac{1}{2}\sum_{i=0}^{N-1} \lambda_i(k)\,\mathbf{u}_i(k) \qquad (6)$$

To solve for the unknown multipliers $\lambda_i(k)$, we substitute (6) into the constraints of (4), forming a linear system of equations.

$$\varepsilon(k+1) = \varepsilon(k) - 2\mu\sum_{i=0}^{N-1} E\left\{ \frac{e_{a,i}(k)\,e_i(k)}{\mathbf{u}_i^{T}(k)\,\mathbf{u}_i(k)} \right\} + \mu^{2}\sum_{i=0}^{N-1} E\left\{ \frac{e_i^{2}(k)}{\mathbf{u}_i^{T}(k)\,\mathbf{u}_i(k)} \right\} \qquad (11)$$

We have made the diagonal assumption again in the previous step. For the algorithm to be stable, the mean-square deviation must decrease iteratively, implying that $\varepsilon(k+1) < \varepsilon(k)$. Thus, the step size has to fulfill the condition

$$0 < \mu < \frac{2\sum_{i=0}^{N-1} E\left\{ e_{a,i}(k)\,e_i(k)\,/\,\mathbf{u}_i^{T}(k)\,\mathbf{u}_i(k) \right\}}{\sum_{i=0}^{N-1} E\left\{ e_i^{2}(k)\,/\,\mathbf{u}_i^{T}(k)\,\mathbf{u}_i(k) \right\}} \qquad (12)$$

In (12), $e_{a,i}(k)$ is the undisturbed error signal for the $i$th subband. If we consider the situation where the disturbance [the additive noise term in Fig. 1] is negligible, the undisturbed error signal $e_{a,i}(k)$ is equal to the decimated subband error signal $e_i(k)$.


TABLE I
SUMMARY OF THE NSAF ALGORITHM
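The table graphic itself is not reproduced in this text. As a stand-in, the following sketch runs a complete NSAF iteration on finite signals. It assumes the analysis filters are supplied as FIR impulse responses (filter-bank design is outside its scope), and every name in it is illustrative rather than the authors':

```python
# End-to-end sketch of the NSAF iteration: analysis filtering of the input
# and desired response, critical decimation by N, subband error estimation,
# and the normalized tap-weight update (9) with regularization eps.

def fir(signal, h):
    """Direct-form FIR filtering of a finite signal (zero initial state)."""
    return [sum(h[j] * (signal[n - j] if n - j >= 0 else 0.0)
                for j in range(len(h)))
            for n in range(len(signal))]

def nsaf(u, d, analysis_bank, M, mu=1.0, eps=1e-8):
    """Adapt an M-tap fullband model from fullband input u and desired d."""
    N = len(analysis_bank)                        # number of subbands
    u_sub = [fir(u, h) for h in analysis_bank]    # subband input signals
    d_sub = [fir(d, h) for h in analysis_bank]    # subband desired responses
    w = [0.0] * M
    for k in range(len(u) // N):                  # decimated time index
        delta = [0.0] * M
        for i in range(N):
            # data vector u_i(k) = [u_i(kN), ..., u_i(kN - M + 1)]^T
            u_vec = [u_sub[i][k * N - m] if k * N - m >= 0 else 0.0
                     for m in range(M)]
            y_i = sum(a * b for a, b in zip(u_vec, w))
            e_i = d_sub[i][k * N] - y_i           # decimated subband error
            gain = e_i / (sum(x * x for x in u_vec) + eps)
            delta = [dm + gain * um for dm, um in zip(delta, u_vec)]
        w = [wm + mu * dm for wm, dm in zip(w, delta)]
    return w
```

With $N = 1$ and a trivial analysis bank `[[1.0]]`, the loop degenerates to fullband NLMS, the baseline used in Section V.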

Hence, in the absence of the disturbance, the necessary and sufficient condition for convergence in the mean square is that the step-size parameter satisfy the double inequality

$$0 < \mu < 2 \qquad (13)$$

B. Computational Complexity

The proposed NSAF algorithm is summarized in Table I. Note that a small positive constant is introduced in the tap-weight adaptation equation to avoid numerical difficulties when the input signal level is too low. By virtue of critical sub-sampling, the tap-weight vector is adapted at a lower rate compared to the fullband sampling rate. Hence, the number of multiplications incurred for error estimation and tap-weight adaptation during a single sampling period is always $2M$ for an arbitrary number of subbands $N$. Apart from this, the NSAF requires an additional $3L$ multiplications for the analysis and synthesis filter banks, where $L$ is the length of the analysis filters, giving us a total of $2M + 3L$ multiplications during a single sampling period. Hence, compared to the fullband NLMS algorithm, the NSAF algorithm requires an additional $3L$ multiplications. In acoustic echo cancellation applications, $M$ is typically more than one thousand, which will be larger than $3L$ in most cases.

IV. SAF STRUCTURES

Recently, an innovative SAF structure has been proposed in [1], [2], where the subband algorithms are derived by optimizing a weighted criterion defined as

$$J_w(k) = \sum_{i=0}^{N-1} \alpha_i\, E\left\{ e_i^{2}(k) \right\} \qquad (14)$$

The scaling factors $\alpha_i$ are proportional to the inverse of the power of the subband input signals. The adaptive weight-control mechanisms proposed in [1], [2] are similar to that proposed in this letter, where the fullband tap weights of the modeling filter $\hat{\mathbf{w}}(k)$ are adapted instead of subfilters as in the conventional SAF structure (in [2], the fullband filter is decomposed into polyphase components). If we set the scaling factors $\alpha_i$ equal to the inverse of the variance estimate of the $i$th subband input signal, the tap-weight adaptation equation of

the NSAF coincides with that presented in [1], [2]. This implies that the multiple-constraint optimization criterion proposed in this letter is related to the weighted criterion (14). Both criteria bring insight into subband adaptive filtering from different perspectives. In particular, the proposed criterion reveals the necessary conditions (i.e., the subband input signals must be orthogonal at zero lag, and the number of subbands must be smaller than the filter length) to be imposed on the filter banks, whereas the weighted criterion shows that the error performance surface is characterized by a weighted correlation matrix [1], [2]. Our proposed criterion also reveals that the appropriate scaling factor is $1/\left[\mathbf{u}_i^{T}(k)\,\mathbf{u}_i(k)\right]$, so that the SAF is stable as long as the step size is bounded within (13). Now, we take a look at the conventional SAF structure [3]–[5]. For an oversampled scheme, the band edges of the analysis filters introduce small eigenvalues in each subband correlation matrix, thus degrading the overall convergence rate [5], [7]. For the critically sampled scheme, these band edges are indeed shaved down by the decimation operation. However, undesirable aliasing components are introduced into the subband signals. Adaptive cross-filters are necessary to cancel these aliasing components for correct modeling of the unknown system [3]–[5]. It has been concluded in [4] that the critically sampled scheme with cross-filters does not give better convergence performance than the fullband adaptive filter. In [5], it has been shown that the oversampled scheme does not provide satisfactory convergence improvement either. The NSAF algorithm does not encounter these structural problems, thus promising better convergence.

V. SIMULATIONS

Consider the system identification problem as shown in Fig. 1. The unknown system to be identified is modeled with the acoustic response of a room with 300-ms reverberation time, truncated to 2048 taps. The length of the adaptive tap-weight vector $\hat{\mathbf{w}}(k)$ is 1024 taps. The unmodeled tail of the acoustic response forms a disturbance to the system, which limits the final misalignment. This simulation setup is realistic for the context of acoustic echo cancellation. The adaptive identification system is excited with an AR(2) random process. We use cosine modulated filter banks for the subband structure. To maintain 60-dB stopband attenuation, the length $L$ of the prototype filter is 16, 32, and 64, respectively, for $N = 2$, 4, and 8 subbands. High stopband attenuation ensures that the cross-correlation between nonadjacent subbands can be completely neglected. The cross-correlation between adjacent subbands is not negligible. However, for a cosine modulated filter bank with a high-stopband-attenuation prototype filter, the cross-correlation at zero lag is essentially zero, as shown in Fig. 3. Recall that the diagonal matrix requirement is fulfilled if the subband signals are orthogonal at zero lag. Note that the cross-correlation may be large at nonzero lags. Fig. 4 shows the normalized misalignment (i.e., the norm of the weight-error vector normalized by the norm of the optimum tap-weight vector) learning curves of the NLMS and the proposed NSAF algorithms.

IEEE SIGNAL PROCESSING LETTERS, VOL. 11, NO. 9, SEPTEMBER 2004

739

be comparable to an alternative class of generalized NLMS (GNLMS) algorithms [8], which uses a set of time-domain constraints [6] instead of the multiple subband constraints proposed in this letter.

APPENDIX
THE DIAGONAL ASSUMPTION

Consider the subband structure in Fig. 2. For two arbitrary subband signals $u_i(k)$ and $u_j(k)$, where $i \neq j$, their cross-correlation at zero lag can be formulated as

Fig. 3. Normalized cross-correlation function of lag $l$ between the first and second subband signals, estimated by exciting the analysis filter bank with the AR(2) random signal. (a) $N = 2$ channels with $L = 16$. (b) $N = 4$ channels with $L = 32$. (c) $N = 8$ channels with $L = 64$.
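The zero-lag orthogonality illustrated in Fig. 3 can be checked numerically. The following is our own sketch (illustrative names) of a normalized cross-correlation estimate between two subband sequences at a nonnegative lag:

```python
# Sketch: time-average estimate of the normalized cross-correlation between
# two equal-length sequences x and y at lag l >= 0, i.e. an estimate of
# E{x(k) y(k - l)} divided by the geometric mean of the two signal powers.
import math

def normalized_cross_correlation(x, y, lag):
    n = len(x)
    r = sum(x[k] * y[k - lag] for k in range(lag, n)) / n
    power = math.sqrt((sum(v * v for v in x) / n) * (sum(v * v for v in y) / n))
    return r / power if power > 0 else 0.0
```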

Fig. 4. Normalized misalignment learning curves for the NLMS and NSAF (for $N = 2$, 4, and 8) algorithms. The step size is set at $\mu = 1.0$.
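For reference, the misalignment measure plotted in Fig. 4 — the weight-error norm over the optimum-weight norm, expressed in dB — can be sketched as follows (one common definition, consistent with the description in the text; names are ours):

```python
# Sketch: normalized misalignment in dB,
# 20 * log10( ||w_o - w_hat|| / ||w_o|| ), as used for learning curves.
import math

def normalized_misalignment_db(w_o, w_hat):
    num = math.sqrt(sum((a - b) ** 2 for a, b in zip(w_o, w_hat)))
    den = math.sqrt(sum(a * a for a in w_o))
    return 20.0 * math.log10(num / den)
```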

The learning rate for both algorithms is chosen to be at the middle of the stability bound (13), i.e., setting $\mu = 1.0$. This choice of step size enables a fair comparison between the NSAF and NLMS algorithms. From the learning curves, it can be noted that the NSAF algorithm provides faster convergence than the NLMS algorithm. Furthermore, with an increased number of subbands, the convergence rate improves considerably. Hence, the proposed algorithm is an improvement over the fullband NLMS algorithm.

VI. CONCLUSION

A novel multiple-constraint optimization criterion for the SAF is proposed in this letter. Compared to the fullband NLMS algorithm, the NSAF algorithm derived from this criterion exhibits faster convergence under colored excitation. Regarding computational complexity, the NSAF and NLMS algorithms require almost the same number of multiplications per sampling period. The efficacy of the proposed NSAF algorithm would

$$E\left\{ u_i(k)\,u_j(k) \right\} = \frac{1}{2\pi}\int_{-\pi}^{\pi} \left| H_i(e^{j\omega}) \right| \left| H_j(e^{j\omega}) \right| e^{j\theta_{ij}(\omega)}\,\Phi_{uu}(e^{j\omega})\,d\omega \qquad (15)$$

where $|H_i(e^{j\omega})|$ and $|H_j(e^{j\omega})|$ are the magnitude responses of the analysis filters, $\theta_{ij}(\omega)$ is their phase difference, and $\Phi_{uu}(e^{j\omega})$ is the power spectrum of the input signal $u(n)$, which is assumed to be stationary. For analytical simplicity, we further assume that $u(n)$ is white, giving us a flat spectrum. Assuming ergodicity, $E\{u_i(k)u_j(k)\}$ can be approximated with the time average of $u_i(k)u_j(k)$. From (15), it is clear that if the magnitude responses of the analysis filters do not significantly overlap, the cross-correlation $E\{u_i(k)u_j(k)\}$ is negligible compared to the auto-correlation $E\{u_i^2(k)\}$, which implies that the off-diagonal elements of the matrix $\mathbf{U}^{T}(k)\mathbf{U}(k)$ are much smaller than its diagonal elements. Hence, $\mathbf{U}^{T}(k)\mathbf{U}(k)$ can be well approximated by a diagonal matrix. This requirement can be achieved by restricting the analysis filters to have slightly overlapping frequency responses and high stopband rejection. A filter bank design example is presented in Section V.

ACKNOWLEDGMENT

The authors wish to express their thanks to the anonymous reviewers for providing many useful comments and suggestions.

REFERENCES

[1] M. De Courville and P. Duhamel, “Adaptive filtering in subbands using a weighted criterion,” IEEE Trans. Signal Processing, vol. 46, pp. 2359–2371, Sept. 1998.
[2] S. S. Pradhan and V. E. Reddy, “A new approach to subband adaptive filtering,” IEEE Trans. Signal Processing, vol. 47, pp. 655–664, Mar. 1999.
[3] J. J. Shynk, “Frequency domain and multirate adaptive filtering,” IEEE Signal Processing Mag., vol. 9, pp. 14–37, Jan. 1992.
[4] A. Gilloire and M. Vetterli, “Adaptive filtering in subbands with critical sampling: Analysis, experiments, and application to acoustic echo cancellation,” IEEE Trans. Signal Processing, vol. 40, pp. 1862–1875, Aug. 1992.
[5] P. L. De Leon and D. M. Etter, “Acoustic echo cancellation using subband adaptive filtering,” in Subband and Wavelet Transforms, A. N. Akansu and M. J. T. Smith, Eds. Boston, MA: Kluwer, 1996.
[6] S. Haykin, Adaptive Filter Theory. Englewood Cliffs, NJ: Prentice-Hall, 2002.
[7] D. R. Morgan, “Slow asymptotic convergence of LMS acoustic echo cancellers,” IEEE Trans. Speech Audio Processing, vol. 3, pp. 126–136, Mar. 1995.
[8] D. R. Morgan and S. G. Kratzer, “On a class of computationally efficient, rapidly converging, generalized NLMS algorithms,” IEEE Signal Processing Lett., vol. 3, pp. 245–247, Aug. 1996.
