LETTER
Partial-Update Normalized Sign LMS Algorithm Employing Sparse Updates

Seong-Eun KIM, Young-Seok CHOI, Jae-Woo LEE, and Woo-Jin SONG
SUMMARY This paper presents a novel normalized sign least-mean square (NSLMS) algorithm which updates only a part of the filter coefficients and simultaneously performs sparse updates in time, with the goal of reducing computational complexity. A combination of the partial-update scheme and the set-membership framework is incorporated into the context of $L_\infty$-norm adaptive filtering, yielding computational efficiency. To stabilize convergence, we formulate a robust update recursion by imposing an upper bound on the step size. Furthermore, we analyze the mean-square stability of the proposed algorithm for white input signals. Experimental results show that the proposed low-complexity NSLMS algorithm achieves convergence performance similar to that of the partial-update NSLMS at greatly reduced computational complexity, and is comparable to the set-membership partial-update NLMS.
key words: adaptive filter, normalized sign LMS (NSLMS), partial update, sparse updates, mean-square stability
1. Introduction
The normalized least-mean square (NLMS) algorithm has been widely used due to its robustness and ease of use. However, the usefulness of the NLMS algorithm may be diminished for a system with a large number of coefficients, which causes increasing complexity [1]. Recently, an $L_\infty$-norm based adaptive filtering algorithm, termed the normalized sign LMS (NSLMS), has been developed with a quantized update, resulting in reduced computation compared with the NLMS algorithm [2], [3]. However, the use of the NSLMS algorithm is still limited in identifying systems with a large number of filter coefficients, which are common in acoustic echo cancellation, speech processing, and so on.

As an alternative for reducing the computational burden, a family of partial-update schemes which utilize only a subset of the filter coefficients has been proposed [4]–[7]. These algorithms lead to efficient updating in the sense that only a subset of the filter coefficients is updated at every iteration. Within the $L_\infty$-norm adaptive filtering framework, the partial-update NSLMS (PU-NSLMS) algorithm has been presented as a low-complexity implementation of the NSLMS [8]. Another approach to reducing complexity uses sparse updating in time, which is advantageous in terms of average complexity [9]–[11]. For sparse updating in time, the framework of set-membership filtering (SMF) has been incorporated by imposing a bounded error constraint on the filter output. Sparse updating in time can provide substantial savings in computation because it enables an efficient use of processor capacity [10]. With this in mind, the NSLMS with sparse updates [12] has been proposed. Recent studies on combinations of selective partial updates with SMF have shown their effectiveness in reducing computational complexity [13], [14].

Considering this promising efficiency, here we present a low-complexity $L_\infty$-norm based adaptive filtering algorithm which executes both partial updates and sparse updates, referred to as the partial-update NSLMS employing sparse updates (PU-NSLMS-SU) algorithm. The additional contributions are the development of a novel updating scheme that guarantees stability and the derivation of a mean-square analysis that proves stability for white input signals. Through experimental studies, we demonstrate that the proposed PU-NSLMS-SU has performance similar to the PU-NSLMS [8] while reducing computational complexity, and is comparable to the partial-update NLMS algorithm with data-selective updating (SM-PU-NLMS) [13] in terms of convergence performance.

This letter is organized as follows: Sect. 2 describes the system model and reviews the PU-NSLMS algorithm. In Sect. 3, we develop the PU-NSLMS-SU algorithm and describe the step-size adjustment for stability. Section 4 contains the mean-square stability analysis of the PU-NSLMS-SU, and Sect. 5 summarizes the computational complexity. Section 6 presents the experimental results, which describe the convergence performance of the proposed algorithm. Finally, conclusions are drawn in Sect. 7.

2. Partial-Update Normalized Sign LMS (PU-NSLMS)
This section reviews the partial-update normalized sign LMS (PU-NSLMS) algorithm proposed in [7], [8]. Consider a desired signal $d(i)$ obtained from the linear system

$$d(i) = u_i^T w^o + v(i) \qquad (1)$$

where $w^o$ is an unknown column vector to be identified with an adaptive filter $w_i$, $v(i)$ is a measurement noise with zero mean and variance $\sigma_v^2$, and $u_i$ denotes the $N \times 1$ input vector

$$u_i = [u(i)\; u(i-1)\; \cdots\; u(i-N+1)]^T. \qquad (2)$$

The estimation error signal is then given by

$$e(i) = d(i) - u_i^T w_i. \qquad (3)$$

Let the $L$ coefficients ($L \le N$) to be updated at time instant $i$ be determined by an index set $T_L(i)$ defined by

$$T_L(i) = \{t_1(i), t_2(i), \cdots, t_L(i)\} \qquad (4)$$

where $\{t_k(i)\}_{k=1}^{L}$ is taken from the set $\{1, 2, \cdots, N\}$. We further define a coefficient selection matrix $S_{T_L(i)}$ as a diagonal matrix having $L$ elements equal to one in the positions indicated by $T_L(i)$ and zeros elsewhere. From the coefficient selection matrix $S_{T_L(i)}$ and the error control procedure developed in [3], two PU-NSLMS algorithms, the sequential PU-NSLMS and the M-Max PU-NSLMS, are derived in [8]. The update equation of the PU-NSLMS algorithm can be written as

$$w_{i+1} = w_i + \frac{\mu\, e(i)}{\|S_{T_L(i)} u_i\|_1 + \epsilon}\,\mathrm{sign}\{S_{T_L(i)} u_i\} \qquad (5)$$
where $\mu$ is a step-size parameter and $\epsilon$ is a regularization parameter to avoid division by zero. Stability of the PU-NSLMS algorithm is guaranteed if $0 < \mu < \mu_{\max}$, where [8]

$$\mu_{\max} = \frac{2L}{N}\,\frac{2}{\pi}\,\zeta^2 \qquad (6)$$

and

$$\zeta = \frac{L\, E\{\|S_{T_L(i)} u_i\|_1\}}{N\, E\{\|u_i\|_1\}}. \qquad (7)$$
For $\mu > \mu_{\max}/2$, the convergence rate does not improve, nor does the steady-state error decrease. Hence, only the range $0 < \mu \le \mu_{\max}/2$ is of practical interest.
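To make the recursion in (5) with the M-Max selection of (4) concrete, the following NumPy sketch performs one PU-NSLMS iteration. The function names, the regularization constant `eps`, and the use of `np.argpartition` for the coefficient selection are illustrative assumptions, not part of the letter.

```python
import numpy as np

def select_coefficients(u, L):
    """Index set T_L(i) of Eq. (4): positions of the L entries of |u_i|
    with the largest magnitude (M-Max selection)."""
    return np.argpartition(np.abs(u), -L)[-L:]

def pu_nslms_step(w, u, d, mu, L, eps=1e-8):
    """One M-Max PU-NSLMS iteration, Eq. (5): only the L selected
    coefficients are updated along sign{S_{T_L(i)} u_i}."""
    e = d - u @ w                      # estimation error, Eq. (3)
    idx = select_coefficients(u, L)
    w = w.copy()
    # ||S_{T_L(i)} u_i||_1 is the l1-norm over the selected entries only
    w[idx] += mu * e * np.sign(u[idx]) / (np.abs(u[idx]).sum() + eps)
    return w, e
```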
3. Partial-Update NSLMS Employing Sparse Updates (PU-NSLMS-SU)

This letter is aimed at developing a computationally efficient adaptive filtering algorithm that benefits from both the partial update and the sparse update.

3.1 NSLMS with Sparse Updates

In the SMF context, the filter coefficient vector $w$ is obtained so as to achieve a specified bound on the magnitude of the estimation error [9]. The feasibility set $\Theta$ is defined as the set of $w$ for which the estimation error is bounded by $\gamma$:

$$\Theta = \bigcap_{(u,d) \in S} \{w \in \mathbb{R}^N : |d - u^T w| \le \gamma\} \qquad (8)$$

where $S$ is a set containing all possible pairs $\{u, d\}$. In order to apply this concept to adaptive filters, the constraint set $H_i$ is defined as the set containing any vector $w$ consistent with the pair $\{u_i, d(i)\}$:

$$H_i = \{w \in \mathbb{R}^N : |d(i) - u_i^T w| \le \gamma\}. \qquad (9)$$
For sparse updates of the filter coefficient vector, the adaptive filter employs the constraint set to seek filter coefficients at the present instant that minimize the change from those of the previous iteration in the $L_\infty$-norm sense, i.e., $\|w_{i+1} - w_i\|_\infty$, subject to the constraint $w_{i+1} \in H_i$. The constrained minimization problem can then be written as

$$w_{i+1} = \arg\min_{w} \|w - w_i\|_\infty \qquad (10)$$

subject to $|d(i) - u_i^T w| \le \gamma$. By the procedure proposed in [3], the filter coefficient vector $w_{i+1}$ can be obtained by adjusting $w_i$ with a vector $\Delta_i$ so as to minimize the error signal $e(i)$, which results in the NSLMS with sparse updates algorithm (see [12] for a more detailed derivation):

$$w_{i+1} = w_i + \frac{\mu(i)\, e(i)}{\|u_i\|_1 + \epsilon}\,\mathrm{sign}\{u_i\} \qquad (11)$$

where

$$\mu(i) = \begin{cases} 1 - \gamma/|e(i)|, & |e(i)| > \gamma \\ 0, & \text{otherwise.} \end{cases} \qquad (12)$$
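The data-dependent step size (12) amounts to one comparison and one division per iteration; a minimal sketch (the function name is our own choice):

```python
def sm_step_size(e, gamma):
    """Step size of Eq. (12): nonzero, and hence an update, only when
    the error magnitude exceeds the bound gamma (sparse updates in time)."""
    return 1.0 - gamma / abs(e) if abs(e) > gamma else 0.0
```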
3.2 Proposed Algorithm Derivation

Our objective is to combine the partial-update scheme and the sparse-update scheme in the $L_\infty$-norm sense. To achieve this goal, we introduce the additional constraint of updating only $L$ coefficients, i.e., $\tilde{S}_{T_L(i)}(w - w_i) = 0$, where $\tilde{S}_{T_L(i)} = I - S_{T_L(i)}$ is a complementary matrix used to represent that only $L$ coefficients are updated. The proposed constrained minimization problem is then given by

$$w_{i+1} = \arg\min_{w} \|w - w_i\|_\infty \qquad (13)$$

subject to $|d(i) - u_i^T w| \le \gamma$ and $\tilde{S}_{T_L(i)}(w - w_i) = 0$. Following the error control procedure [3], we can obtain the filter coefficient vector $w_{i+1}$ by adjusting $w_i$ with a vector $S_{T_L(i)} \Delta_i$ so as to minimize the error signal $e(i)$. Let the a posteriori error signal $r(i)$ be defined as

$$r(i) = d(i) - (w_i + S_{T_L(i)} \Delta_i)^T u_i. \qquad (14)$$
In [3], to minimize the error signal, the relation between the estimation error signal $e(i)$ and the a posteriori error signal $r(i)$ is given by

$$r(i) = (1 - \mu(i))\, e(i) \qquad (15)$$

where $\mu(i)$ is a parameter to be selected in the interval $(0, 1)$. By (14) and (15), we arrive at

$$\mu(i)\, e(i) = \Delta_i^T S_{T_L(i)} u_i. \qquad (16)$$

The minimum $L_\infty$-norm solution vector of $\Delta_i$, i.e., $\min \|\Delta_i\|_\infty$ subject to (16), is given by

$$\Delta_i = \frac{\mu(i)\, e(i)}{\|S_{T_L(i)} u_i\|_1}\,\mathrm{sign}\{S_{T_L(i)} u_i\}. \qquad (17)$$

Then, the update equation is obtained as follows:

$$w_{i+1} = w_i + \frac{\mu(i)\, e(i)}{\|S_{T_L(i)} u_i\|_1 + \epsilon}\,\mathrm{sign}\{S_{T_L(i)} u_i\} \qquad (18)$$

where $\mu(i)$ is data dependent and given by

$$\mu(i) = \begin{cases} 1 - \gamma/|e(i)|, & |e(i)| > \gamma \\ 0, & \text{otherwise} \end{cases} \qquad (19)$$

and $T_L(i) = \{t_k(i)\}_{k=1}^{L}$ collects the indexes indicating the first $L$ largest maxima of $|u(i - k + 1)|$, $k = 1, 2, \cdots, N$, as used in the M-Max PU-NSLMS algorithm [8].
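Putting (18) and (19) together with the helpers sketched above gives the following tentative PU-NSLMS-SU iteration. As before, `eps` and the `np.argpartition`-based selection are illustrative choices; the step-size safeguard of the next subsection can be applied on top of this sketch.

```python
def pu_nslms_su_step(w, u, d, gamma, L, eps=1e-8):
    """One PU-NSLMS-SU iteration, Eqs. (18)-(19): update at most L
    coefficients, and only when |e(i)| exceeds the error bound gamma."""
    e = d - u @ w
    mu = sm_step_size(e, gamma)        # Eq. (19)
    if mu == 0.0:
        return w, e                    # sparse update in time: skip
    idx = select_coefficients(u, L)    # M-Max selection, Eq. (4)
    w = w.copy()
    w[idx] += mu * e * np.sign(u[idx]) / (np.abs(u[idx]).sum() + eps)
    return w, e
```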
3.3 Step-Size Adjustment for Stability

Further, we investigate the step-size parameter $\mu(i)$ in association with the stability of the proposed algorithm, since the step size $\mu$ of the PU-NSLMS algorithm is limited to $\mu_{\max}$ depending on $L$, as presented in Sect. 2. We consider that if $w_{i+1}$ is updated to be closest to $w_{\mathrm{NLMS},i+1}$, this guarantees the stability of the proposed algorithm and leads to an improved convergence rate. Motivated by this, we consider the following update strategy when $\epsilon$ is zero. Let $\theta$, shown in Fig. 1, denote the angle between the direction of $\mathrm{sign}\{S_{T_L(i)} u_i\}$ and $u_i$. Then $\cos\theta$ is given by

$$\cos\theta = \frac{\|w_{\mathrm{SM\text{-}NLMS},i+1} - w_i\|}{\|w_{i+1} - w_i\|} \qquad (20)$$

Fig. 1  Geometric interpretation of the proposed algorithm.

where $w_{\mathrm{SM\text{-}NLMS},i+1}$ is obtained by

$$w_{\mathrm{SM\text{-}NLMS},i+1} = w_i + \frac{\mu(i)\, e(i)\, u_i}{\|u_i\|^2} \qquad (21)$$

in the SM-NLMS algorithm [9]. We define $\theta_\perp$ as the angle $\theta$ in the case where $(w_{i+1} - w_i) \perp (w_{i+1} - w_{\mathrm{NLMS},i+1})$, where $w_{\mathrm{NLMS},i+1}$ is obtained by

$$w_{\mathrm{NLMS},i+1} = w_i + \frac{e(i)\, u_i}{\|u_i\|^2} \qquad (22)$$

in the case of a unit step size [1]. If $\theta$ is larger than $\theta_\perp$, the error bound $\gamma$ is temporarily increased to $\gamma_i$ at the $i$th iteration so that

$$(w'_{i+1} - w_i) \perp (w'_{i+1} - w_{\mathrm{NLMS},i+1}) \qquad (23)$$

as illustrated in Fig. 1. Then, $w'_{i+1}$ is given by

$$w'_{i+1} = w_i + \frac{\mu'(i)\, e(i)}{\|S_{T_L(i)} u_i\|_1}\,\mathrm{sign}\{S_{T_L(i)} u_i\} \qquad (24)$$

where $\mu'(i) = 1 - \gamma_i/|e(i)|$. In this regard, $\cos\theta$ can also be represented by

$$\cos\theta = \frac{\|w'_{i+1} - w_i\|}{\|w_{\mathrm{NLMS},i+1} - w_i\|}. \qquad (25)$$

From the equality between (20) and (25), the step size $\mu'(i)$ can be calculated using (18), (21), (22), and (24), which gives

$$\mu'(i) = \frac{\|S_{T_L(i)} u_i\|_1^2}{L\, \|u_i\|^2}. \qquad (26)$$
If $\mu(i) > \mu'(i)$, i.e., $\|S_{T_L(i)} u_i\|_1^2 < \mu(i)\, L\, \|u_i\|^2$, at a given iteration, then $\mu(i)$ is substituted with $\mu'(i)$ to guarantee stability.
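In code, this safeguard costs one extra comparison per update; a sketch under the same illustrative conventions as above:

```python
def safeguarded_step_size(mu, u, idx, L):
    """Cap mu(i) by mu'(i) of Eq. (26) whenever mu(i) > mu'(i),
    which keeps the update inside the stable region."""
    s1 = np.abs(u[idx]).sum()          # ||S_{T_L(i)} u_i||_1
    mu_prime = s1 ** 2 / (L * (u @ u)) # Eq. (26)
    return min(mu, mu_prime)
```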
4. Steady-State Performance Analysis
For tractable analysis, the input signal and the measurement noise are assumed as follows [1]:

Assumption I: The input $u(i)$ and the noise $v(i)$ are zero-mean white Gaussian with variances $\sigma_u^2$ and $\sigma_v^2$, respectively.

Assumption II: The noise $v(i)$ is statistically independent of the input $u(i)$.

Let the coefficient error be defined as $\tilde{w}_i = w_i - w^o$. The following is an often-realistic assumption [15]:

Assumption III: $\tilde{w}_i$ is independent of $u_i$.

In order to account for the sparse updates, we adopt the following assumption [8], [13]:

Assumption IV: The filter is updated with probability $P_{e,i} = P[|e(i)| > \gamma]$, and $P[e(i) > \gamma] = P[e(i) < -\gamma]$.
By using Assumption IV, (18) can be rewritten with $e(i) = -u_i^T \tilde{w}_i + v(i)$ as

$$w_{i+1} = w_i + \frac{\mu(i)\, P_{e,i}\, (-u_i^T \tilde{w}_i + v(i))}{\|S_{T_L(i)} u_i\|_1}\,\mathrm{sign}\{S_{T_L(i)} u_i\}. \qquad (27)$$

From (27), the coefficient error vector is given by

$$\tilde{w}_{i+1} = \left[I - P_{e,i}\,\mu(i)\,\frac{\mathrm{sign}\{S_{T_L(i)} u_i\}\, u_i^T}{\|S_{T_L(i)} u_i\|_1}\right]\tilde{w}_i + P_{e,i}\,\mu(i)\,\frac{v(i)\,\mathrm{sign}\{S_{T_L(i)} u_i\}}{\|S_{T_L(i)} u_i\|_1}. \qquad (28)$$

For convenience, we write $g_i$ instead of $\mathrm{sign}\{S_{T_L(i)} u_i\}$, i.e., $g_i \triangleq \mathrm{sign}\{S_{T_L(i)} u_i\}$. The instantaneous excess MSE, $\varepsilon_{i+1}$, for white Gaussian input is defined as [1]

$$\varepsilon_{i+1} = E[\tilde{w}_{i+1}^T R_u \tilde{w}_{i+1}] = \sigma_u^2\, E[\tilde{w}_{i+1}^T \tilde{w}_{i+1}] \qquad (29)$$

where $R_u = E[u_{i+1} u_{i+1}^T]$. From (28) and (29), the excess MSE is obtained as

$$\varepsilon_{i+1} = \varepsilon_i - \sigma_u^2\, E\!\left[\frac{P_{e,i}\,\mu(i)\,\tilde{w}_i^T (u_i g_i^T + g_i u_i^T)\, \tilde{w}_i}{\|S_{T_L(i)} u_i\|_1}\right] + \sigma_u^2\, E\!\left[\frac{P_{e,i}^2\,\mu^2(i)\,\tilde{w}_i^T u_i\, g_i^T g_i\, u_i^T \tilde{w}_i}{\|S_{T_L(i)} u_i\|_1^2}\right] + \sigma_u^2\, E\!\left[\frac{P_{e,i}^2\,\mu^2(i)\, v^2(i)\, g_i^T g_i}{\|S_{T_L(i)} u_i\|_1^2}\right] = \varepsilon_i + \sigma_u^2\, (-\rho_1 + \rho_2 + \rho_3). \qquad (30)$$

For further computation, we set $\mu(i) = \|S_{T_L(i)} u_i\|_1^2 / (L\, \|u_i\|^2)$ at every iteration as the worst case. Substituting the modified $\mu(i)$ into (30) and using $g_i^T g_i = L$, $\rho_1$, $\rho_2$ and $\rho_3$ are given by

$$\rho_1 = P_{e,i}\, E\!\left[\frac{\|S_{T_L(i)} u_i\|_1\, \tilde{w}_i^T (u_i g_i^T + g_i u_i^T)\, \tilde{w}_i}{L\, \|u_i\|^2}\right], \qquad (31)$$

$$\rho_2 = P_{e,i}^2\, E\!\left[\frac{\|S_{T_L(i)} u_i\|_1^2\, \tilde{w}_i^T u_i u_i^T \tilde{w}_i}{L\, \|u_i\|^4}\right], \qquad (32)$$

and

$$\rho_3 = P_{e,i}^2\, E\!\left[\frac{\|S_{T_L(i)} u_i\|_1^2\, v^2(i)}{L\, \|u_i\|^4}\right], \qquad (33)$$

respectively. Assuming that $N$ is large, $\|u_i\|^2$ can be considered a reasonable estimate of $N E\{u^2(i)\}$ [3], [13], so that we use the approximations $\|u_i\|^2 \approx N \sigma_u^2$ and $\|u_i\|^4 \approx N^2 \sigma_u^4$. By means of Assumption II and the above approximations, we thus rewrite $\rho_1$, $\rho_2$ and $\rho_3$ as follows:

$$\rho_1 \approx P_{e,i}\, \frac{E\{\|S_{T_L(i)} u_i\|_1\, \tilde{w}_i^T (u_i g_i^T + g_i u_i^T)\, \tilde{w}_i\}}{L N \sigma_u^2}, \qquad (34)$$

$$\rho_2 \approx P_{e,i}^2\, \frac{E\{\|S_{T_L(i)} u_i\|_1^2\, \tilde{w}_i^T u_i u_i^T \tilde{w}_i\}}{L N^2 \sigma_u^4}, \qquad (35)$$

$$\rho_3 \approx P_{e,i}^2\, \frac{E\{\|S_{T_L(i)} u_i\|_1^2\}}{L N^2 \sigma_u^4}\, \sigma_v^2. \qquad (36)$$

At the steady state, we can assume that $\|u_i\|^2$ is uncorrelated with $e_a^2(i)$, where $e_a(i) = u_i^T \tilde{w}_i$ [1]. This assumption allows us to separate the expectation $E\{\|u_i\|^2 e_a^2(i)\}$ into the product of two expectations:

$$E\{\|u_i\|^2 e_a^2(i)\} = E\{\|u_i\|^2\}\, E\{e_a^2(i)\} \qquad (37)$$

with $E\{e_a^2(i)\} = \sigma_u^2\, E\{\tilde{w}_i^T \tilde{w}_i\}$. From the separation assumption and Assumption III, $\rho_2$ can be rewritten as

$$\rho_2 \approx P_{e,i}^2\, \frac{E\{\|S_{T_L(i)} u_i\|_1^2\}}{L N^2 \sigma_u^2}\, E\{\tilde{w}_i^T \tilde{w}_i\}. \qquad (38)$$

To evaluate $\rho_1$, it is necessary to compute the elements of the matrix $A$ given by

$$A = E\{\|S_{T_L(i)} u_i\|_1\, (u_i g_i^T + g_i u_i^T)\}. \qquad (39)$$

Using the assumptions on $u(i)$, $\rho_1$ can be derived as (see Appendix)

$$\rho_1 \approx 2 P_{e,i}\, \frac{E\{\|S_{T_L(i)} u_i\|_1^2\}}{L N^2 \sigma_u^2}\, E\{\tilde{w}_i^T \tilde{w}_i\}. \qquad (40)$$

Substituting (40), (38), and (36) into (30) results in

$$\varepsilon_{i+1} = \left[1 - P_{e,i}(2 - P_{e,i})\, \frac{E\{\|S_{T_L(i)} u_i\|_1^2\}}{L N^2 \sigma_u^2}\right]\varepsilon_i + P_{e,i}^2\, \sigma_v^2\, \frac{E\{\|S_{T_L(i)} u_i\|_1^2\}}{L N^2 \sigma_u^2}. \qquad (41)$$

Since $P_{e,i}(2 - P_{e,i})\, E\{\|S_{T_L(i)} u_i\|_1^2\}/(L N^2 \sigma_u^2) < 1$, the proposed algorithm is always stable.
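The scalar recursion (41) can be iterated directly to predict the excess-MSE trajectory. The sketch below treats $P_{e,i}$ and $E\{\|S_{T_L(i)} u_i\|_1^2\}$ as constants supplied by the user (e.g., estimated by Monte Carlo), which is a simplifying assumption for illustration only.

```python
def excess_mse_prediction(eps0, pe, es2, L, N, sigma_u2, sigma_v2, iters):
    """Iterate Eq. (41) with a constant update probability pe and a
    constant es2 = E{||S_{T_L(i)} u_i||_1^2}; returns the predicted
    excess-MSE trajectory starting from eps0."""
    c = es2 / (L * N ** 2 * sigma_u2)
    a = pe * (2.0 - pe) * c            # contraction factor in Eq. (41)
    b = pe ** 2 * sigma_v2 * c         # noise-driven term in Eq. (41)
    eps, traj = eps0, []
    for _ in range(iters):
        eps = (1.0 - a) * eps + b
        traj.append(eps)
    return traj
```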
5. Computational Complexity
The computational complexities per iteration, in terms of the number of multiplications, additions, divisions, and comparisons, of the NLMS, NSLMS, PU-NSLMS, SM-NSLMS, SM-PU-NLMS, and the proposed PU-NSLMS-SU algorithms are shown in Table 1. The computational complexity of the proposed PU-NSLMS-SU algorithm depends on the number of coefficients to be updated and on the search technique for finding the $L$ elements of $u_i$ with the largest absolute value.

Table 1  Computational complexity.

| Algorithm   | Mult.     | Add.      | Div. | Comp.          |
|-------------|-----------|-----------|------|----------------|
| NLMS        | 2N + 4    | 2N + 4    | 1    | –              |
| NSLMS       | N + 2     | 2N + 2    | 1    | –              |
| PU-NSLMS    | N + 2     | N + L + 2 | 1    | 2 log2(N) + 2  |
| SM-NSLMS    | N + 2     | 2N + 3    | 2    | –              |
| SM-PU-NLMS  | N + L + 2 | N + L + 3 | 2    | 2 log2(N) + 2  |
| Proposed    | N + 2     | N + L + 3 | 2    | 2 log2(N) + 2  |

Although the PU-NSLMS and PU-NSLMS-SU algorithms have a similar complexity per iteration, the gain of applying the PU-NSLMS-SU algorithm comes through the reduced number of required updates, which cannot be accounted for a priori. The subset of the filter coefficients to be updated at each iteration is obtained by sorting the elements of $|u_i|$ in ascending order and then selecting the filter coefficients corresponding to the $L$ largest elements of that vector. Fast sorting algorithms such as the one in [16] require $2\log_2(N) + 2$ comparisons for sorting.
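For a quick numerical comparison at a given $(N, L)$, the entries of Table 1 can be encoded directly; the helper below (function name and dictionary layout are our own) evaluates each row.

```python
import math

def op_counts(N, L):
    """Per-iteration operation counts from Table 1; 'comp' uses the
    sorting bound 2*log2(N) + 2 of [16] where M-Max selection is needed."""
    comp = 2 * math.log2(N) + 2
    return {
        "NLMS":        dict(mult=2 * N + 4,   add=2 * N + 4,   div=1, comp=0),
        "NSLMS":       dict(mult=N + 2,       add=2 * N + 2,   div=1, comp=0),
        "PU-NSLMS":    dict(mult=N + 2,       add=N + L + 2,   div=1, comp=comp),
        "SM-NSLMS":    dict(mult=N + 2,       add=2 * N + 3,   div=2, comp=0),
        "SM-PU-NLMS":  dict(mult=N + L + 2,   add=N + L + 3,   div=2, comp=comp),
        "Proposed":    dict(mult=N + 2,       add=N + L + 3,   div=2, comp=comp),
    }
```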
6. Experimental Results
We demonstrate the performance of the proposed algorithm by carrying out experiments in a system identification configuration. The unknown system to be identified is an acoustic echo response of a room truncated to 256 taps at an 8-kHz sampling rate. The adaptive filter and the unknown system are assumed to have the same number of taps, i.e., $N = 256$. The input signal $u(i)$ is a white Gaussian signal with zero mean and unit variance. The signal-to-noise ratio (SNR) is calculated by $\mathrm{SNR} = 10\log_{10}\!\left(E[y^2(i)]/E[v^2(i)]\right)$, where $y(i) = u_i^T w^o$. The measurement noise $v(i)$ is added to $y(i)$ such that $\mathrm{SNR} = 30$ dB. We assume that the noise variance $\sigma_v^2$ is known, because it can easily be estimated during silences or online in many practical applications. The mean square error (MSE), $E\{|e(i)|^2\}$, is obtained by averaging over 1000 independent trials.

Figure 2 shows the MSE curves of the proposed PU-NSLMS-SU algorithm with $N = 256$ and $L = 32, 64, 128$. The error bound $\gamma$ is set to $\sqrt{5\sigma_v^2}$ [9], [11]. As shown in Fig. 2, the convergence performance of the proposed PU-NSLMS-SU degrades with decreasing values of $L$. In particular, while the update recursion described by (18) and (19) alone diverges for $L = 32$ (see curve (d) of Fig. 2), the proposed PU-NSLMS-SU converges well because its step sizes are adjusted not to exceed the limit for convergence [8].

Fig. 2  MSE curves of the proposed PU-NSLMS-SU for N = 256, L = 32, 64, 128, and the algorithm described in (18) for L = 32 [input: white Gaussian].
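The setup just described can be reproduced in outline with the sketches above. The randomly generated echo-path stand-in replaces the measured room response, which is not available here; the step-size safeguard of Sect. 3.3 is omitted for brevity; a single trial is run where the letter averages 1000; all names are illustrative.

```python
rng = np.random.default_rng(0)
N, L, iters = 256, 32, 10000
w_o = rng.standard_normal(N)                 # stand-in for the echo path
sigma_v2 = (w_o @ w_o) / 10 ** (30 / 10)     # noise power giving SNR = 30 dB
gamma = np.sqrt(5 * sigma_v2)                # error bound sqrt(5 sigma_v^2)

x = rng.standard_normal(iters + N)           # white Gaussian input, unit variance
w = np.zeros(N)
mse = np.empty(iters)
for i in range(iters):
    u = x[i:i + N][::-1]                     # u_i = [u(i) ... u(i-N+1)]^T
    d = u @ w_o + np.sqrt(sigma_v2) * rng.standard_normal()
    w, e = pu_nslms_su_step(w, u, d, gamma, L)
    mse[i] = e ** 2                          # one trial; average over many runs
```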
Fig. 3  MSE curves for the NSLMS (μ = 0.7), the PU-NSLMS (μ = 0.5), the SM-PU-NLMS, and the proposed PU-NSLMS-SU with N = 256 and L = 32 [input: white Gaussian].

Table 2  Number of updates and update ratio.

| Iteration  | PU-NSLMS-SU (L = 32) updates (ratio) | SM-PU-NLMS (L = 32) updates (ratio) |
|------------|--------------------------------------|-------------------------------------|
| 0–2000     | 1747 (87%)                           | 1698 (85%)                          |
| 2000–4000  | 759 (38%)                            | 770 (38%)                           |
| 4000–6000  | 424 (21%)                            | 388 (19%)                           |
| 6000–8000  | 348 (17%)                            | 358 (18%)                           |
| 8000–10000 | 311 (15%)                            | 332 (16%)                           |
Figure 3 compares the MSE curves of the NSLMS, the PU-NSLMS [8], the SM-PU-NLMS [13], and the proposed PU-NSLMS-SU with $N = 256$ and $L = 32$. To obtain a similar steady-state error level, we set the step sizes of the NSLMS and the PU-NSLMS to 0.7 and 0.5, respectively, and choose $\gamma = \sqrt{5\sigma_v^2}$. In Fig. 3 and Table 2, we can see that the proposed algorithm has better convergence performance than the PU-NSLMS at reduced computational complexity and is comparable to the SM-PU-NLMS algorithm. In addition, the number of times an update took place in 10000 iterations for the PU-NSLMS-SU and the SM-PU-NLMS with $L = 32$ was only 3589 and 3546, respectively, so the computational complexity was greatly reduced compared with the 10000 updates required by the PU-NSLMS.

7. Conclusion

We have presented a low-complexity $L_\infty$-norm based NSLMS adaptive algorithm which employs both partial updates of the filter coefficients and sparse updates in time. The accompanying step-size control prevents the adaptive filter from diverging, ensuring the stability of the proposed algorithm. Additionally, the mean-square stability of the proposed algorithm for a white Gaussian input has been provided. Experimental results demonstrate that the proposed PU-NSLMS-SU has convergence performance similar to the PU-NSLMS at much reduced computational complexity and is comparable to the SM-PU-NLMS.
References

[1] A.H. Sayed, Fundamentals of Adaptive Filtering, Prentice Hall, Englewood Cliffs, NJ, 2003.
[2] J. Nagumo and A. Noda, "A learning method for system identification," IEEE Trans. Autom. Control, vol.AC-12, no.3, pp.282–287, June 1967.
[3] S.H. Cho, Y.S. Kim, and J.A. Cadzow, "Adaptive FIR filtering based on minimum L∞-norm," Proc. IEEE Pacific Rim Conf. Commun., Comput., Signal Process., vol.2, pp.643–646, B.C., Canada, May 1991.
[4] S.C. Douglas, "Adaptive filters employing partial updates," IEEE Trans. Circuits Syst. II, Analog Digit. Signal Process., vol.44, no.3, pp.209–216, March 1997.
[5] K. Doğançay and O. Tanrıkulu, "Adaptive filtering algorithms with selective partial updates," IEEE Trans. Circuits Syst. II, Analog Digit. Signal Process., vol.48, no.8, pp.762–769, Aug. 2001.
[6] M. Godavarti and A.O. Hero, "Partial update LMS algorithms," IEEE Trans. Signal Process., vol.53, no.7, pp.2382–2399, July 2005.
[7] K. Doğançay, Partial-Update Adaptive Signal Processing: Design, Analysis and Implementation, Academic Press, Oxford, UK, 2008.
[8] A. Tandon, M.N.S. Swamy, and M.O. Ahmad, "Partial-update L∞-norm based algorithms," IEEE Trans. Circuits Syst. I, Regular Papers, vol.54, no.2, pp.411–419, Feb. 2007.
[9] S. Gollamudi, S. Nagaraj, S. Kapoor, and Y.-F. Huang, "Set-membership filtering and a set-membership normalized LMS algorithm with an adaptive step size," IEEE Signal Process. Lett., vol.5, no.5, pp.111–114, May 1998.
[10] S. Gollamudi, S. Nagaraj, S. Kapoor, and Y.-F. Huang, "Set-membership adaptive equalization and updator-shared implementation for multiple channel communications systems," IEEE Trans. Signal Process., vol.46, no.9, pp.2372–2384, Sept. 1998.
[11] P.S.R. Diniz and S. Werner, "Set-membership binormalized data-reusing LMS algorithm," IEEE Trans. Signal Process., vol.51, no.1, pp.124–134, Jan. 2003.
[12] J.E. Lee, Y.S. Choi, and W.J. Song, "A low-complexity L∞-norm adaptive filtering algorithm," IEEE Trans. Circuits Syst. II, Express Briefs, vol.54, no.12, pp.1092–1096, Dec. 2007.
[13] S. Werner, M.L.R. Campos, and P.S.R. Diniz, "Partial-update NLMS algorithms with data-selective updating," IEEE Trans. Signal Process., vol.52, no.4, pp.938–949, April 2004.
[14] M.S.E. Abadi and J.H. Husøy, "Selective partial update and set-membership subband adaptive filters," Signal Process., vol.88, no.10, pp.2463–2471, Oct. 2008.
[15] H.-C. Shin and A.H. Sayed, "Mean-square performance of a family of affine projection algorithms," IEEE Trans. Signal Process., vol.52, no.1, pp.90–102, Jan. 2004.
[16] I. Pitas, "Fast algorithms for running ordering and max/min calculation," IEEE Trans. Circuits Syst., vol.36, no.6, pp.795–804, June 1989.
Appendix

In this Appendix we show how to obtain $A = E\{\|S_{T_L(i)} u_i\|_1 B\}$, where the matrix $B$ is given by

$$B = u_i\, \mathrm{sign}\{S_{T_L(i)} u_i\}^T + \mathrm{sign}\{S_{T_L(i)} u_i\}\, u_i^T. \qquad (A\cdot 1)$$

For large values of $N$ and $L$, the fluctuation of $\|u_i\|_1$ due to $u(i)$ is small enough to assume that $\|u_i\|_1$ is uncorrelated with $u_i$. Thus, $A$ can be rewritten as

$$A = E\{\|S_{T_L(i)} u_i\|_1\}\, E[B]. \qquad (A\cdot 2)$$

Since the input signals are assumed to be i.i.d., the off-diagonal elements of $B$ average to zero. The diagonal is an average over the $L$ largest elements only. Let $p_m$ denote the probability that one of the $L$ largest components contributes to the $m$th element of the diagonal. Moreover, let $\{y_j\}_{j=1}^{N}$ denote the elements of the input vector $u_i$ sorted in magnitude such that $y_1 \le y_2 \le \cdots \le y_N$. For a given $L$, the diagonal elements of $B$ can be calculated as

$$B_{m,m} = 2 \sum_{k=0}^{L-1} E\{p_m\, |y_{N-k}|\} = \frac{2}{N}\, E\{\|S_{T_L(i)} u_i\|_1\} \qquad (A\cdot 3)$$

where $B_{m,m}$ denotes the $(m, m)$th element of the matrix $B$ and, since the input signals are i.i.d., $p_m = 1/N$. Thus, $\rho_1$ can be obtained as

$$\rho_1 = P_{e,i}\, E\!\left[\frac{\|S_{T_L(i)} u_i\|_1\, \tilde{w}_i^T B\, \tilde{w}_i}{L\, \|u_i\|^2}\right] \approx 2 P_{e,i}\, \frac{E\{\|S_{T_L(i)} u_i\|_1^2\}}{L N^2 \sigma_u^2}\, E\{\tilde{w}_i^T \tilde{w}_i\}. \qquad (A\cdot 4)$$