IEEE SIGNAL PROCESSING LETTERS, VOL. 17, NO. 3, MARCH 2010
A Subband Adaptive Filtering Algorithm Employing Dynamic Selection of Subband Filters

Seong-Eun Kim, Student Member, IEEE, Young-Seok Choi, Member, IEEE, Moon-Kyu Song, and Woo-Jin Song, Member, IEEE
Abstract—We present a novel normalized subband adaptive filter (NSAF) that dynamically selects subband filters in order to reduce computational complexity while maintaining the convergence performance of the conventional NSAF. The selection is performed to achieve the largest decrease between successive mean square deviations at every iteration. As a result, an efficient and effective NSAF algorithm is derived. The experimental results show that the proposed NSAF algorithm gains an advantage over the conventional NSAF in that it achieves similar convergence performance with a substantial saving in overall computational burden.

Index Terms—Adaptive filters, dynamic selection of subband filters, subband adaptive filter (SAF).
I. INTRODUCTION
THE normalized least mean-squares (NLMS) algorithm is one of the most popular adaptive filtering algorithms due to its simple implementation and robustness. However, its poor convergence rate for correlated input signals remains a major drawback [1]–[3]. To address this problem, the recursive least squares (RLS) algorithm [1]–[3] and the affine projection algorithm (APA) [4] have been developed and used. Alternatively, another class of adaptive filtering to improve convergence speed has been presented, referred to as the subband adaptive filter (SAF) [5]–[8]. The SAF literature builds on the property that LMS-type adaptive filters converge faster for white input signals than for colored input signals [2], [3]. By performing a “pre-whitening” procedure on the input signals, the SAF improves convergence behavior. Despite this virtue, the initial SAF was hampered by structural problems such as aliasing and band-edge effects, since the adaptation was performed independently in each subband [5]. Subsequent SAF schemes have incorporated a fullband weight model that does not partition the adaptive filter weights into subbands, thereby coping with these structural problems [6], [7].
Most recently, incorporating a multiple-constraint optimization criterion into the formulation of a cost function has resulted in the normalized SAF (NSAF) [8], whose update equation is similar to those in [6] and [7]. By increasing the number of subband filters, the convergence speed of the NSAF algorithm can be accelerated while maintaining the same level of steady-state error [9]. However, it suffers from huge complexity when used to adapt an extremely long unknown system, such as in acoustic echo cancellation applications. Abadi and Husøy [10] proposed the simplified selective partial-update subband adaptive filter (SSPU-SAF) algorithm as a low-complexity SAF. To reduce computational complexity, the SSPU-SAF updates only a subset of the filter coefficients at each iteration.

In this letter, we propose a novel normalized subband adaptive filter that selects the subset of subband filters contributing most to convergence performance and uses only those in updating the adaptive filter weights. The proposed NSAF dynamically selects the subband filters so as to achieve the largest decrease of the successive mean square deviations (MSDs) at every iteration. Thus, the proposed algorithm is referred to as the dynamic selection NSAF (DS-NSAF). Consequently, the proposed structure can reduce the computational complexity of the conventional critically sampled SAF while maintaining its convergence performance. We demonstrate that the proposed DS-NSAF is comparable to the conventional NSAF in terms of convergence performance, while reducing computational complexity. Compared to the SSPU-SAF, the proposed DS-NSAF exhibits greater efficiency and superior performance.

This letter is organized as follows. In Section II, the NSAF is reviewed and the proposed DS-NSAF is formulated. Section III presents experimental results describing the convergence performance of the proposed algorithm and discusses the complexity issue. Finally, conclusions are presented in Section IV.

II. DYNAMIC SELECTION NSAF (DS-NSAF)

Consider a desired signal $d(n)$ that originates from an unknown linear system

$$ d(n) = \mathbf{u}(n)\,\mathbf{w}^{o} + v(n) \qquad (1) $$

where $\mathbf{w}^{o}$ is an unknown column vector of length $M$ to be identified with an adaptive filter, $v(n)$ corresponds to measurement noise with zero mean and variance $\sigma_v^{2}$, and $\mathbf{u}(n)$ denotes a row input (regressor) vector with length $M$ as follows:

$$ \mathbf{u}(n) = [\, u(n)\;\; u(n-1)\;\; \cdots\;\; u(n-M+1) \,]. $$
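For concreteness, a minimal Python sketch of the fullband model in (1) is given below; the function name generate_desired and the example parameters are illustrative choices of ours, not part of the letter.

```python
import numpy as np

def generate_desired(u, w_o, noise_var, rng=None):
    """Generate d(n) = u(n) w^o + v(n) for a length-M unknown system.

    u         : 1-D array of fullband input samples
    w_o       : length-M impulse response of the unknown system (column vector w^o)
    noise_var : variance sigma_v^2 of the zero-mean measurement noise v(n)
    """
    rng = np.random.default_rng() if rng is None else rng
    # y(n) = u(n) w^o with u(n) = [u(n), u(n-1), ..., u(n-M+1)] and zero initial conditions
    y = np.convolve(u, w_o)[: len(u)]
    v = rng.normal(0.0, np.sqrt(noise_var), size=len(u))
    return y + v

# Illustrative usage with a short white input and a random 16-tap "unknown" system.
rng = np.random.default_rng(0)
u = rng.standard_normal(1000)
w_o = rng.standard_normal(16)
d = generate_desired(u, w_o, noise_var=1e-3, rng=rng)
```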
A. NSAF

Fig. 1 shows the structure of the NSAF, where the desired signal $d(n)$ and the output signal $y(n)$ are partitioned into $N$ subbands by the analysis filters $H_i(z)$, $i = 0, 1, \ldots, N-1$. The resulting subband signals, $d_i(n)$ and $y_i(n)$ for $i = 0, 1, \ldots, N-1$, are critically decimated to a lower sampling rate commensurate with their bandwidth.

Fig. 1. Structure of the NSAF.

Here we use the variable $n$ to index the original sequences and $k$ to index the decimated sequences for all signals. Then, the decimated filter output signal at each subband is defined as $y_{i,D}(k) = \mathbf{u}_i(k)\mathbf{w}(k-1)$, where $\mathbf{u}_i(k)$ is a row vector with length $M$ such that

$$ \mathbf{u}_i(k) = [\, u_i(kN)\;\; u_i(kN-1)\;\; \cdots\;\; u_i(kN-M+1) \,] $$

and $\mathbf{w}(k-1)$ denotes an estimate for $\mathbf{w}^{o}$. Hence, the decimated subband error signal can be obtained by

$$ e_{i,D}(k) = d_{i,D}(k) - \mathbf{u}_i(k)\mathbf{w}(k-1) \qquad (2) $$

where $d_{i,D}(k)$ denotes the decimated desired signal at each subband. Then, the NSAF can be written as

$$ \mathbf{w}(k) = \mathbf{w}(k-1) + \mu \sum_{i=0}^{N-1} \frac{\mathbf{u}_i^{T}(k)}{\|\mathbf{u}_i(k)\|^{2}}\, e_{i,D}(k) \qquad (3) $$

where $\mu$ is a step-size.
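A minimal Python sketch of one NSAF iteration following (2)–(3) is shown below, assuming the subband regressors $\mathbf{u}_i(k)$ and decimated desired samples $d_{i,D}(k)$ have already been produced by the analysis filter bank and critical decimation; the function name nsaf_update, the regularizer eps, and the array layout are our own illustrative choices.

```python
import numpy as np

def nsaf_update(w, U, d_D, mu=1.0, eps=1e-8):
    """One NSAF iteration, following (2)-(3).

    w    : length-M weight estimate w(k-1)
    U    : (N, M) array whose i-th row is the decimated subband regressor u_i(k)
    d_D  : length-N array of decimated subband desired samples d_{i,D}(k)
    mu   : step-size (0 < mu < 2)
    eps  : small regularizer guarding against division by zero (implementation detail)
    """
    e_D = d_D - U @ w                      # subband errors e_{i,D}(k), eq. (2)
    norms = np.sum(U * U, axis=1) + eps    # ||u_i(k)||^2 for each subband
    w = w + mu * (U.T @ (e_D / norms))     # normalized sum over all N subbands, eq. (3)
    return w, e_D
```

In an actual implementation the regressors would be refreshed from the analysis filter bank once per $N$ fullband samples, reflecting the critical decimation.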
B. Selecting the Subband Filters

Our objective is to address the tradeoff of the conventional NSAF algorithm: reducing the computational complexity while achieving a convergence rate comparable to that of the NSAF. To this end, we present a new NSAF scheme that uses only a subset of the subband filters in updating the filter weights. The selection of this subset is carried out to achieve the largest decrease of the MSDs between successive iterations. Using the weight-error vector $\tilde{\mathbf{w}}(k) = \mathbf{w}^{o} - \mathbf{w}(k)$, (3) can be rewritten as

$$ \tilde{\mathbf{w}}(k) = \tilde{\mathbf{w}}(k-1) - \mu \sum_{i=0}^{N-1} \frac{\mathbf{u}_i^{T}(k)}{\|\mathbf{u}_i(k)\|^{2}}\, e_{i,D}(k). \qquad (4) $$

By squaring both sides, taking expectations, and using the diagonal assumption [8], we can derive an MSD difference that satisfies

$$ \Delta\mathrm{MSD}(k) = \sum_{i=0}^{N-1} E\!\left[ \frac{2\mu\, e_{a,i}(k)\, e_{i,D}(k) - \mu^{2} e_{i,D}^{2}(k)}{\|\mathbf{u}_i(k)\|^{2}} \right] \qquad (5) $$

where $\Delta\mathrm{MSD}(k)$ denotes the difference of MSDs between successive iterations, i.e., $\Delta\mathrm{MSD}(k) \triangleq E\|\tilde{\mathbf{w}}(k-1)\|^{2} - E\|\tilde{\mathbf{w}}(k)\|^{2}$, and $e_{a,i}(k) \triangleq \mathbf{u}_i(k)\tilde{\mathbf{w}}(k-1)$ is the a priori subband error. The maximum $\Delta\mathrm{MSD}(k)$ leads to the fastest convergence, because the MSD undergoes the largest decrease from iteration $k-1$ to iteration $k$. Here $v_{i,D}(k)$ is the $i$th subband signal of $v(n)$ after being partitioned and decimated, and $\sigma_{v,i}^{2}$ is the corresponding noise variance. Since $e_{i,D}(k) = e_{a,i}(k) + v_{i,D}(k)$, (5) can be rewritten as

$$ \Delta\mathrm{MSD}(k) \approx \sum_{i=0}^{N-1} \left\{ \mu(2-\mu)\, E\!\left[ \frac{e_{i,D}^{2}(k)}{\|\mathbf{u}_i(k)\|^{2}} \right] - 2\mu\,\sigma_{v,i}^{2}\, E\!\left[ \frac{1}{\|\mathbf{u}_i(k)\|^{2}} \right] \right\} \qquad (6) $$

where we assume that the noise signal is identically and independently distributed (i.i.d.) and statistically independent of the input data, and we ignore the dependency of $e_{a,i}(k)$ on past noise [11]. For a high-order adaptive filter, the fluctuations of $\|\mathbf{u}_i(k)\|^{2}$ from one iteration to the next can be assumed to be small, so the following approximations are acceptable [12]:

$$ E\!\left[ \frac{e_{i,D}^{2}(k)}{\|\mathbf{u}_i(k)\|^{2}} \right] \approx \frac{E[e_{i,D}^{2}(k)]}{E[\|\mathbf{u}_i(k)\|^{2}]} \quad \text{and} \quad E\!\left[ \frac{1}{\|\mathbf{u}_i(k)\|^{2}} \right] \approx \frac{1}{E[\|\mathbf{u}_i(k)\|^{2}]}. \qquad (7) $$

Then, the estimate of $\Delta\mathrm{MSD}(k)$ in (6) can be obtained by

$$ \widehat{\Delta\mathrm{MSD}}(k) = \sum_{i=0}^{N-1} \hat{\Delta}_i(k), \qquad \hat{\Delta}_i(k) = \frac{\mu(2-\mu)\, E[e_{i,D}^{2}(k)] - 2\mu\,\sigma_{v,i}^{2}}{E[\|\mathbf{u}_i(k)\|^{2}]}. \qquad (8) $$

Since the step-size of the NSAF satisfies $0 < \mu < 2$ [8], $\mu(2-\mu)$ is always positive. From (8), we can find the following facts. If $\hat{\Delta}_i(k) > 0$, then the corresponding subband input contributes to maximizing $\Delta\mathrm{MSD}(k)$. On the other hand, if $\hat{\Delta}_i(k) < 0$, then the $i$th subband results in a decrease of $\Delta\mathrm{MSD}(k)$. Along this line of thought, the proposed NSAF selects, at every iteration, the subband filters satisfying $\hat{\Delta}_i(k) > 0$ so as to achieve the largest decrease of the MSD. Accordingly, the number of subband filters used to update the weights at every iteration is smaller than in the conventional NSAF. For implementing the proposed algorithm, we must take account of the estimation of $\hat{\Delta}_i(k)$. Since it is not feasible to calculate the exact expected values, we replace them by instantaneous values as follows:

$$ \hat{\Delta}_i(k) \approx \frac{\mu(2-\mu)\, e_{i,D}^{2}(k) - 2\mu\,\sigma_{v,i}^{2}}{\|\mathbf{u}_i(k)\|^{2}}. \qquad (9) $$

In practice, the noise variance $\sigma_{v,i}^{2}$ can be easily estimated during silences [13], [14] or online [15]. Let $\mathcal{T}(k)$ denote a subset with $N(k)$ members of the set $\{0, 1, \ldots, N-1\}$, where $i_j \in \mathcal{T}(k)$ is the index of a selected subband filter and $N(k)$ is defined as the number of selected subband filters at iteration $k$. Finally, the proposed DS-NSAF becomes

$$ \mathbf{w}(k) = \mathbf{w}(k-1) + \mu \sum_{i=0}^{N-1} \alpha_i(k)\, \frac{\mathbf{u}_i^{T}(k)}{\|\mathbf{u}_i(k)\|^{2}}\, e_{i,D}(k), \qquad \alpha_i(k) = \begin{cases} 1, & i \in \mathcal{T}(k) \\ 0, & \text{otherwise} \end{cases} \qquad (10) $$

where $\mathcal{T}(k) = \{\, i : \hat{\Delta}_i(k) > 0 \,\}$ and $N(k) = \sum_{i=0}^{N-1} \alpha_i(k)$.

TABLE I. COMPUTATIONAL COMPLEXITY $(r(k) = N(k)/N)$.

Fig. 2. Measured room acoustic impulse response.
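Extending the NSAF sketch above, the DS-NSAF iteration can be written as follows, using the instantaneous selection test (9) and the masked update (10) as reconstructed here; the names ds_nsaf_update and noise_vars are illustrative.

```python
import numpy as np

def ds_nsaf_update(w, U, d_D, noise_vars, mu=1.0, eps=1e-8):
    """One DS-NSAF iteration with dynamic subband selection.

    noise_vars : length-N array of subband noise variances sigma_{v,i}^2
    Returns the updated weights, the subband errors, and the 0/1 selection
    indicators alpha_i(k); their sum is the number of selected subbands N(k).
    """
    e_D = d_D - U @ w                      # subband errors e_{i,D}(k)
    norms = np.sum(U * U, axis=1) + eps    # ||u_i(k)||^2
    # Instantaneous per-subband MSD-decrease estimate, eq. (9):
    # Delta_i(k) ~= (mu*(2 - mu)*e_{i,D}^2(k) - 2*mu*sigma_{v,i}^2) / ||u_i(k)||^2
    delta = (mu * (2.0 - mu) * e_D**2 - 2.0 * mu * noise_vars) / norms
    alpha = (delta > 0.0).astype(float)    # keep only subbands that decrease the MSD
    w = w + mu * (U.T @ (alpha * e_D / norms))   # masked update, eq. (10)
    return w, e_D, alpha
```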
C. Stability and Computational Complexity

In [8], the authors proved that the NSAF is stable if its step-size satisfies $0 < \mu < 2$, regardless of the number of subbands. Therefore, the proposed algorithm is also stable for step-sizes within this range. Table I shows the computational complexity per iteration of the conventional NSAF [8], the SSPU-SAF [10], and the proposed DS-NSAF. In the SSPU-SAF, the weight vector is partitioned into $P$ blocks, and the $S$ blocks selected from the $P$ blocks using the second criterion of [10] in each subband are updated at every iteration. As shown in Table I, the conventional NSAF requires a fixed number of multiplications per iteration determined by the filter length $M$ and the number of subbands $N$, whereas the multiplication counts of the SSPU-SAF and the proposed DS-NSAF scale with the number of updated blocks $S$ and the number of selected subbands $N(k)$, respectively. As the adaptive filter converges, the computational complexity of the proposed DS-NSAF is reduced at the ratio $r(k) = N(k)/N$ per iteration compared to the NSAF, reflecting the efficacy of the proposed algorithm over the SSPU-SAF. Therefore, it has a low computational complexity due to the use of fewer subband filters.
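Since the per-iteration saving is governed by $r(k) = N(k)/N$, the average saving over a run can be read directly from the recorded selection indicators; the small helper below is an illustrative utility of ours, not part of the algorithm.

```python
import numpy as np

def average_selection_ratio(alpha_history):
    """Average r(k) = N(k)/N over a run.

    alpha_history : (K, N) array of 0/1 selection indicators alpha_i(k),
                    e.g. collected from ds_nsaf_update() above.
    """
    N = alpha_history.shape[1]
    r = alpha_history.sum(axis=1) / N      # r(k) for each iteration k
    return r.mean()
```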
Fig. 3. Normalized MSD curves of the NSAF ($N = 4$ and 8) and the proposed DS-NSAF. The step-size $\mu = 1.0$ is used.
III. EXPERIMENTAL RESULTS

We demonstrate the performance of the proposed algorithm by carrying out experiments in the system identification configuration. The unknown system to be identified is the acoustic echo response of a room, truncated to 1024 taps at an 8-kHz sampling rate, as shown in Fig. 2. The adaptive filter is designed to have the same length as the unknown system, $M = 1024$. The input signal is obtained by filtering a white, zero-mean Gaussian random sequence through a first-order autoregressive system. The measurement noise $v(n)$ is added to the system output $y(n) = \mathbf{u}(n)\mathbf{w}^{o}$ to set the signal-to-noise ratio (SNR), calculated as $\mathrm{SNR} = 10\log_{10}\!\left( E[y^{2}(n)]/E[v^{2}(n)] \right)$. The normalized MSD, $E\|\mathbf{w}^{o}-\mathbf{w}(k)\|^{2}/\|\mathbf{w}^{o}\|^{2}$, is evaluated by ensemble averaging over 30 independent trials. Also, we assume that the noise variance is known a priori [13], [14]. Cosine-modulated filter banks [16] with $N = 4$ and 8 subbands are used in the experiments; the length of the prototype filter is 32.

Fig. 3 shows the normalized MSD curves for the proposed DS-NSAF and the conventional NSAF for $N = 4$ and 8 subbands. The step-size is set to $\mu = 1.0$. As can be seen, the proposed DS-NSAF has a convergence performance comparable to the conventional NSAF in terms of convergence speed and steady-state error, with reduced computational complexity.

Fig. 4. (a) Number of selected subbands over a single trial, and (b) average number of selected subbands over 30 independent trials, for the proposed DS-NSAF with $N = 8$ and $\mu = 1.0$.

Fig. 4(a) shows the number of selected subbands over a single trial for the proposed DS-NSAF with $N = 8$. The number of used subbands varies dynamically to achieve the largest MSD decrease, and the average number of selected subbands becomes small, as shown in Fig. 4(b). As a result, the proposed DS-NSAF has a low overall computational complexity because the number of used subbands becomes small.
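The normalized-MSD evaluation described above can be sketched as follows; the subband decomposition and per-trial adaptation loop are abstracted behind a caller-supplied run_trial callable (for example, a loop around ds_nsaf_update() fed by subband-decomposed signals), and all names are illustrative.

```python
import numpy as np

def normalized_msd(w_o, w_history):
    """Normalized MSD ||w_o - w(k)||^2 / ||w_o||^2 for each iteration k."""
    diff = w_history - w_o[None, :]
    return np.sum(diff**2, axis=1) / np.sum(w_o**2)

def ensemble_average_msd(run_trial, n_trials=30):
    """Average the normalized-MSD learning curve over independent trials.

    run_trial : callable returning (w_o, w_history) for one trial; all trials
                are assumed to produce curves of equal length.
    """
    curves = [normalized_msd(*run_trial()) for _ in range(n_trials)]
    return 10.0 * np.log10(np.mean(curves, axis=0))  # learning curve in dB
```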
Fig. 5. Normalized MSD curves of the SSPU-SAF ($P = 4$, $S = 3$, 2, 1), the proposed DS-NSAF ($N = 4$, 8), and the NLMS.

TABLE II. COMPUTATIONAL COMPLEXITY OF THE SSPU-SAF, DS-NSAF, AND NLMS UNTIL CONVERGENCE.

Fig. 5 shows the normalized MSD curves for the proposed DS-NSAF ($N = 4$, 8), the SSPU-SAF ($P = 4$, $S = 1$, 2, 3), and the NLMS for the same experimental scenario as in Fig. 3. To obtain a similar steady-state MSD, the step-sizes are set to 0.2, 0.2, 0.16, 0.11, 0.06, and 1.0 for the proposed DS-NSAF ($N = 4$, 8), the SSPU-SAF ($S = 3$, 2, 1), and the NLMS, respectively. Table II lists the numerical complexity required by each algorithm to achieve a similar steady-state error in the scenario shown in Fig. 5; for the NLMS, the standard per-iteration multiplication count [3] is used. From Fig. 5 and Table II, the SAFs have a faster convergence speed than the NLMS. In addition, we can observe that the proposed DS-NSAF is superior to the SSPU-SAF in that the DS-NSAF not only converges faster than the SSPU-SAF ($S = 1$, 2, 3) but is also more cost-effective.
Fig. 6. Normalized MSD curves of the SSPU-SAF ($P = 4$, $S = 3$, 2, 1) and the proposed DS-NSAF ($N = 4$, 8) for speech input signals and colored noise (Gaussian AR(1), pole at 0.9).

In Fig. 6, we carried out acoustic echo cancellation with speech input signals sampled at 8 kHz and colored noise (Gaussian AR(1), pole at 0.9). Fig. 6 shows the MSD curves of the proposed DS-NSAF ($N = 4$, 8) and the SSPU-SAF ($P = 4$, $S = 3$, 2, 1). To obtain a similar steady-state MSD, the step-sizes are set to 0.2, 0.2, 0.15, 0.1, and 0.05 for the proposed DS-NSAF ($N = 4$, 8) and the SSPU-SAF ($S = 3$, 2, 1), respectively. As shown in Fig. 6, the proposed DS-NSAF works well for nonstationary signals such as speech.

IV. CONCLUSIONS

We have presented a low-complexity subband adaptive filtering algorithm, named the dynamic selection NSAF (DS-NSAF), which dynamically selects a subset of the subband filters at every iteration. The selection is derived to maximize the difference of successive MSDs. By dynamically selecting effective subband filters, the proposed DS-NSAF not only achieves convergence behavior comparable with the conventional NSAF but also lessens the computational burden. In addition, the experimental results show that the proposed DS-NSAF exhibits performance superior to the SSPU-SAF algorithm.
REFERENCES

[1] B. Widrow and S. D. Stearns, Adaptive Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1985.
[2] S. Haykin, Adaptive Filter Theory, 4th ed. Upper Saddle River, NJ: Prentice-Hall, 2002.
[3] A. H. Sayed, Fundamentals of Adaptive Filtering. New York: Wiley, 2003.
[4] K. Ozeki and T. Umeda, “An adaptive filtering algorithm using an orthogonal projection to an affine subspace and its properties,” Electron. Commun. Jpn., vol. 67-A, no. 5, pp. 19–27, 1984.
[5] A. Gilloire and M. Vetterli, “Adaptive filtering in subbands with critical sampling: Analysis, experiments, and application to acoustic echo cancellation,” IEEE Trans. Signal Process., vol. 40, no. 8, pp. 1862–1875, Aug. 1992.
[6] M. D. Courville and P. Duhamel, “Adaptive filtering in subbands using a weighted criterion,” IEEE Trans. Signal Process., vol. 46, no. 9, pp. 2359–2371, Sep. 1998.
[7] S. S. Pradhan and V. U. Reddy, “A new approach to subband adaptive filtering,” IEEE Trans. Signal Process., vol. 47, no. 3, pp. 655–664, Mar. 1999.
[8] K. A. Lee and W. S. Gan, “Improving convergence of the NLMS algorithm using constrained subband updates,” IEEE Signal Process. Lett., vol. 11, no. 9, pp. 736–739, Sep. 2004.
[9] K. A. Lee and W. S. Gan, “Inherent decorrelating and least perturbation properties of the normalized subband adaptive filter,” IEEE Trans. Signal Process., vol. 54, no. 11, pp. 4475–4480, Nov. 2006.
[10] M. S. E. Abadi and J. H. Husøy, “Selective partial update and set-membership subband adaptive filters,” Signal Process., vol. 88, no. 10, pp. 2463–2471, Oct. 2008.
[11] H.-C. Shin and A. H. Sayed, “Mean-square performance of a family of affine projection algorithms,” IEEE Trans. Signal Process., vol. 52, no. 1, pp. 90–102, Jan. 2004.
[12] A. Tandon, M. N. S. Swamy, and M. O. Ahmad, “Partial-update L∞-norm based algorithms,” IEEE Trans. Circuits Syst. I, vol. 54, no. 2, pp. 411–419, Feb. 2007.
[13] N. R. Yousef and A. H. Sayed, “A unified approach to the steady-state and tracking analyses of adaptive filters,” IEEE Trans. Signal Process., vol. 49, no. 2, pp. 314–324, Feb. 2001.
[14] J. Benesty, H. Rey, L. R. Vega, and S. Tessens, “A nonparametric VSS NLMS algorithm,” IEEE Signal Process. Lett., vol. 13, no. 10, pp. 581–584, Oct. 2006.
[15] C. Paleologu, S. Ciochină, and J. Benesty, “Variable step-size NLMS algorithm for under-modeling acoustic echo cancellation,” IEEE Signal Process. Lett., vol. 15, pp. 5–8, 2008.
[16] P. P. Vaidyanathan, Multirate Systems and Filter Banks. Englewood Cliffs, NJ: Prentice-Hall, 1993.