Hindawi Wireless Communications and Mobile Computing, Volume 2017, Article ID 8140702, 10 pages. https://doi.org/10.1155/2017/8140702
Research Article

Sparse Multipath Channel Estimation Using Norm Combination Constrained Set-Membership NLMS Algorithms

Yanyan Wang and Yingsong Li

College of Information and Communications Engineering, Harbin Engineering University, Harbin 150001, China

Correspondence should be addressed to Yingsong Li; [email protected]

Received 22 December 2016; Accepted 16 February 2017; Published 28 February 2017

Academic Editor: Daniele Pinchera

Copyright © 2017 Yanyan Wang and Yingsong Li. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

A norm combination penalized set-membership NLMS algorithm with independent $l_0$- and $l_1$-norm constraints, denoted as the $l_0$ and $l_1$ independently constrained set-membership (SM) NLMS (L0L1SM-NLMS) algorithm, is presented for sparse adaptive multipath channel estimation. The L0L1SM-NLMS algorithm achieves fast convergence and small estimation error by penalizing the channel coefficients independently: the large and small groups of channel coefficients are constrained by the $l_0$-norm and $l_1$-norm, respectively. Additionally, a further improved variant, denoted as the reweighted L0L1SM-NLMS (RL0L1SM-NLMS) algorithm, is presented by integrating a reweighting factor into the L0L1SM-NLMS algorithm to properly adjust its zero-attracting strength. The developed RL0L1SM-NLMS algorithm provides better estimation behavior than the L0L1SM-NLMS algorithm for sparse channels. The estimation performance of both algorithms is evaluated on sparse channel estimation tasks. The simulation results show that the L0L1SM- and RL0L1SM-NLMS algorithms are superior to the traditional LMS, NLMS, SM-NLMS, ZA-LMS, RZA-LMS, and ZA-, RZA-, ZASM-, and RZASM-NLMS algorithms in terms of convergence speed and steady-state performance.
1. Introduction

Broadband communication technology has attracted much attention in modern wireless communications, and signal transmission over broadband wireless channels usually suffers from frequency-selective fading [1–5]. Moreover, this frequency-selective behavior often gives broadband channels a sparse structure [1, 6]. Most of the unknown broadband wireless multipath channels are sparse [1–3]: only a few coefficients are large in magnitude. In a sparse channel, only a few channel taps are dominant, while the remaining taps are zero or close to zero. On the other hand, signals propagating through such frequency-selective sparse channels in noisy environments may suffer degraded communication quality. Therefore, accurate channel estimation methods are desired to recover the unknown channel, which is commonly modeled as a finite impulse response (FIR) filter [1–3]. Adaptive filtering techniques are regarded as useful channel estimation methods owing to their fast convergence speed and good estimation behavior [7–9]. Various adaptive channel estimation algorithms, such as the LMS and SM-NLMS algorithms [10–13], were presented to guarantee stable propagation and effective signal transmission [8–13]. Although these adaptive filtering algorithms achieve robust estimation performance, they do not handle sparse channel estimation well. To exploit the prior sparseness of multipath broadband channels, several sparse LMS algorithms, inspired by compressive sensing (CS) [19, 20], were reported by modifying the cost function of the conventional LMS algorithm [14–18]. In [14], a sparse LMS algorithm was obtained by adding an $l_1$-norm penalty to the cost function, resulting in the zero-attracting LMS (ZA-LMS) algorithm. The ZA-LMS provides good sparse adaptive channel estimation by exerting a zero attraction on all channel taps. However, the estimation behavior of the ZA-LMS may be degraded
because of its uniform penalty on all taps. A reweighted ZA-LMS (RZA-LMS) algorithm was then presented by using a log-sum function as the constraint on the channel coefficient vector [14]; the RZA-LMS improves the channel estimation behavior in terms of both convergence and estimation misalignment. Subsequently, the zero-attracting technique was realized with an $l_p$-norm [21], a smooth approximation of the $l_0$-norm [15], and nonuniform constraints [18] to further exploit the sparse properties of broadband channels. However, the estimation behavior of these algorithms may degrade because the LMS is sensitive to the scaling of the input signal. The zero-attracting methods were therefore extended to the NLMS [22], LMF [23], LMS/F [24–27], leaky LMS [28], and affine projection algorithms [29–33]. Some of these are computationally complex, and others need an extra balance parameter to adjust the combination of the LMF and LMS algorithms. Recently, zero-attraction methods and set-membership filtering theory have been combined to exploit channel sparseness while reducing computational complexity [10, 34, 35]. In [35], the ZA and RZA methods were applied to the SM-NLMS to develop the ZASM- and RZASM-NLMS algorithms, which provide better channel estimation behavior than the NLMS and its variants. However, these two sparse SM-NLMS adaptive filtering algorithms cannot effectively adjust the zero attraction according to the channel coefficients in real time.

We propose a norm combination penalized SM-NLMS algorithm with independent $l_0$- and $l_1$-norm constraints, realized by applying a constraint to each channel tap according to the magnitude of its coefficient; it is named the $l_0$ and $l_1$ independently constrained set-membership NLMS (L0L1SM-NLMS) algorithm. The L0L1SM-NLMS is devised by integrating the $l_0$-norm and $l_1$-norm into the SM-NLMS cost function to construct a zero-attraction term that separates the channel taps into small and large groups; the large and small groups are then attracted to zero by the $l_0$-norm and $l_1$-norm penalties, respectively. The proposed L0L1SM-NLMS algorithm is derived in detail. Also, a reweighting method is introduced into the zero-attraction term to enhance the robustness of the L0L1SM-NLMS algorithm, resulting in the reweighted L0L1SM-NLMS (RL0L1SM-NLMS) algorithm. We evaluate the developed L0L1SM-NLMS and RL0L1SM-NLMS algorithms over designated sparse channels. The results show that our L0L1SM- and RL0L1SM-NLMS algorithms outperform the LMS, NLMS, ZA-LMS, RZA-LMS, and ZA-, RZA-, SM-, ZASM-, and RZASM-NLMS algorithms in terms of convergence and estimation misalignment.

The rest of the paper is organized as follows. Section 2 reviews SM filtering theory and the conventional SM-NLMS, and also discusses the previously proposed ZASM-NLMS algorithm. In Section 3, the developed L0L1SM- and RL0L1SM-NLMS algorithms are derived thoroughly. In Section 4, the channel estimation behavior of the L0L1SM-NLMS and RL0L1SM-NLMS is investigated and discussed in detail. The last section concludes the paper.
2. Conventional SM-NLMS Algorithm

2.1. SM Filtering Theory. A training signal $\mathbf{x}(n) = [x(n), x(n-1), \ldots, x(n-N+1)]^T$ is considered in the discussion of the SM-NLMS. An AWGN signal $v(n)$ and a desired signal $d(n)$ are used to describe the SM filtering (SMF) theory and a typical adaptive channel estimation (ACE) system. The signal $\mathbf{x}(n)$ is conveyed through an unknown FIR wireless communication channel $\mathbf{w}$, and the output of the multipath fading channel is $y(n) = \mathbf{x}^T(n)\mathbf{w}$. At the receiver side, the desired signal $d(n)$ is contaminated by $v(n)$. The ACE aims to minimize the estimation error $e(n)$, the difference between the desired signal and the filter output, $e(n) = d(n) - \mathbf{x}^T(n)\hat{\mathbf{w}}(n)$. Typical adaptive filter (AF) algorithms estimate the unknown FIR channel $\mathbf{w}$ by minimizing an error function of $e(n)$; for instance, the LMS algorithm uses the second-order error $e^2(n)$, while the NLMS algorithm normalizes the update by the power of $\mathbf{x}(n)$ to improve the estimation behavior of the LMS. In SMF theory, a prescribed bound is imposed on $e(n)$ [10–13]. The SMF methods model $e(n)$ over a subspace of interest. Assume that a model space $S$ is composed of input-vector-desired-output pairs (IVDOPs). In SMF theory, an error criterion bounds the estimates over $S$: the parameter estimates are bounded by a parameter $\gamma$ for all the data in $S$. Thereby, an SMF algorithm chooses a set in the parameter space rather than a point estimate. According to the SMF principle, we have [10–13]

$$|e(n)|^2 \le \gamma^2 \quad \forall\, (d(n), \mathbf{x}(n)) \in S, \qquad (1)$$
where $(d(n), \mathbf{x}(n))$ denotes an IVDOP. For arbitrary $(d(n), \mathbf{x}(n)) \in S$, the set of feasible vectors $\hat{\mathbf{w}}$ is given by [10–13]

$$\Theta = \bigcap_{(d(n),\, \mathbf{x}(n)) \in S} \left\{\hat{\mathbf{w}} \in \mathbb{R}^N : \left|d(n) - \mathbf{x}^T(n)\hat{\mathbf{w}}(n)\right| \le \gamma\right\}, \qquad (2)$$
where $\mathbb{R}^N$ is the $N$-dimensional vector space. If $k$ IVDOPs $\{\mathbf{x}_i(n), d_i(n)\}_{i=1}^{k}$ are used for training the filter, the constraint set of the SMF algorithms can be written as [10–13]

$$H_k = \left\{\hat{\mathbf{w}} \in \mathbb{R}^N : \left|d_k(n) - \mathbf{x}_k^T(n)\hat{\mathbf{w}}_k(n)\right| \le \gamma\right\}. \qquad (3)$$
The SMF algorithm seeks solutions belonging to the exact membership set $\Psi_k$, which gathers the $k$ observed IVDOPs:

$$\Psi_k = \bigcap_{i=1}^{k} H_i. \qquad (4)$$

In fact, $\Theta$ is a subset of $\Psi_k$ at each iteration.
2.2. SM-NLMS Algorithm. Based on SMF theory and the NLMS algorithm, the SM-NLMS was developed by minimizing $\|\hat{\mathbf{w}}(n+1) - \hat{\mathbf{w}}(n)\|^2$ subject to $\hat{\mathbf{w}}(n+1) \in H_k$ [10]. For $\hat{\mathbf{w}}(n) \notin H_k$, the SM-NLMS solves the optimization

$$\min \; \|\hat{\mathbf{w}}(n+1) - \hat{\mathbf{w}}(n)\|^2 \quad \text{s.t.} \quad \left|d(n) - \mathbf{x}^T(n)\hat{\mathbf{w}}(n+1)\right| = \gamma. \qquad (5)$$
Using the Lagrange multiplier method to minimize (5), the SM-NLMS updating equation is

$$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \frac{\mathbf{x}(n)\,\alpha e(n)}{\mathbf{x}^T(n)\mathbf{x}(n) + \delta}, \qquad (6)$$
where

$$\alpha = \begin{cases} 1 - \dfrac{\gamma}{|e(n)|}, & |e(n)| > \gamma \\ 0, & \text{otherwise}, \end{cases} \qquad (7)$$

and $\delta > 0$ avoids division by zero. From (6), the SM-NLMS updating formula is similar to that of the basic NLMS algorithm, with $\alpha$ acting as an adaptive step size; in particular, no coefficient update is performed when $|e(n)| \le \gamma$, which makes the algorithm data-selective.
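To make the update in (6)-(7) concrete, the following is a minimal Python/NumPy sketch of one SM-NLMS iteration (the function and variable names are ours, not from the paper):

```python
import numpy as np

def sm_nlms_update(w_hat, x, d, gamma, delta=1e-2):
    """One SM-NLMS iteration following (6)-(7).

    w_hat : current channel estimate, shape (N,)
    x     : input (training) vector, shape (N,)
    d     : desired output sample
    gamma : error bound of the membership set
    delta : small regularizer avoiding division by zero
    """
    e = d - x @ w_hat                   # a priori estimation error e(n)
    if abs(e) > gamma:                  # update only when w_hat lies outside H_k
        alpha = 1.0 - gamma / abs(e)    # adaptive step size, eq. (7)
        w_hat = w_hat + alpha * e * x / (x @ x + delta)   # eq. (6)
    return w_hat                        # unchanged when |e(n)| <= gamma
```

The data-selective branch is what gives SM-type algorithms their reduced average complexity: once the estimate satisfies the error bound, the (relatively costly) coefficient update is skipped entirely.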
Inspired by the CS and ZA techniques, a sparse SM-NLMS algorithm denoted ZASM-NLMS was presented by exerting an $l_1$-norm constraint on the SM-NLMS cost function; the ZASM-NLMS therefore solves the following problem [35]:

$$\min \; \|\hat{\mathbf{w}}(n+1) - \hat{\mathbf{w}}(n)\|^2 + \gamma_1 \|\hat{\mathbf{w}}(n+1)\|_1 \quad \text{s.t.} \quad \left|d(n) - \mathbf{x}^T(n)\hat{\mathbf{w}}(n+1)\right| = \gamma, \qquad (8)$$
where $\gamma_1$ denotes a ZA parameter. Employing the Lagrange multiplier method again, the updating equation of the reported ZASM-NLMS is

$$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \frac{\mathbf{x}(n)}{\mathbf{x}^T(n)\mathbf{x}(n)}\{e(n) - \gamma\} + \frac{\gamma_1}{2}\mathbf{G}(n)\,\text{sgn}[\hat{\mathbf{w}}(n)], \qquad (9)$$
where $\mathbf{G}(n) = \mathbf{x}(n)\mathbf{x}^T(n)/[\mathbf{x}^T(n)\mathbf{x}(n)] - \mathbf{I}$. Since this updating equation is complex, a simpler method can be employed to solve (8) by recasting it as an unconstrained optimization. For compatibility with the traditional SM-NLMS, the solution of (8) can be obtained from [35]

$$\text{minimize} \quad \frac{1}{2}\|\hat{\mathbf{w}}(n+1) - \hat{\mathbf{w}}(n)\|^2 + \frac{1}{2}\|e(n) - \gamma_{\text{ZA}}\|^2_{\mathbf{R}(n)} + \gamma_1\|\hat{\mathbf{w}}(n+1)\|_1, \qquad (10)$$

where $\mathbf{R}(n) = [\mathbf{x}^T(n)\mathbf{x}(n)]^{-1}$ and $\|e(n) - \gamma_{\text{ZA}}\|^2_{\mathbf{R}(n)} = [e(n) - \gamma_{\text{ZA}}]^T\mathbf{R}(n)[e(n) - \gamma_{\text{ZA}}]$; setting this term to zero yields the ZASM-NLMS updating equation

$$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \frac{\mathbf{x}(n)\,\alpha e(n)}{\mathbf{x}^T(n)\mathbf{x}(n) + \delta} - \rho\,\text{sgn}[\hat{\mathbf{w}}(n)], \qquad (11)$$

where $\rho > 0$ denotes a ZA strength factor that balances the estimation misalignment against the sparsity constraint on $\hat{\mathbf{w}}(n)$, and $\text{sgn}[\cdot]$ represents the component-wise sign function [14]

$$\text{sgn}[x] = \begin{cases} \dfrac{x}{|x|}, & x \neq 0 \\ 0, & x = 0. \end{cases} \qquad (12)$$

In contrast to the update equation in (6), the ZASM-NLMS provides an extra term $-\rho\,\text{sgn}[\hat{\mathbf{w}}(n)]$ in its update equation, which rapidly attracts the small-magnitude coefficients to zero. The ZA strength is controlled by the parameter $\rho$, and the ZASM-NLMS applies the same zero attraction to all channel coefficients. As a result, the ZASM-NLMS cannot effectively distinguish the zero from the nonzero channel coefficients, which may degrade its performance for less sparse systems. To improve its performance, an improved sparse SM-NLMS, named the RZASM-NLMS algorithm, was presented by replacing the $l_1$-norm in the ZASM-NLMS with a sum-log function. Although the RZASM-NLMS improves the performance of the ZASM-NLMS, it also increases the computational complexity.
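For illustration, a minimal sketch (our naming, not the paper's) of the simplified ZASM-NLMS update in (11) follows; it differs from the SM-NLMS sketch above only by the uniform zero-attractor term:

```python
import numpy as np

def zasm_nlms_update(w_hat, x, d, gamma, rho, delta=1e-2):
    """One simplified ZASM-NLMS iteration following (11)-(12)."""
    e = d - x @ w_hat
    if abs(e) > gamma:
        alpha = 1.0 - gamma / abs(e)              # eq. (7)
        w_hat = w_hat + alpha * e * x / (x @ x + delta)
    # uniform zero attractor: every tap, large or small, is pulled toward
    # zero with the same strength rho -- the weakness discussed above
    return w_hat - rho * np.sign(w_hat)
```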
3. Proposed L0L1SM-NLMS Algorithms

The previously presented ZASM- and RZASM-NLMS algorithms estimate the sparse channel by using an $l_1$-norm and a sum-log function penalty, respectively, to achieve good channel estimation performance. However, they may neglect the in-nature sparse structure of the unknown channel in practical applications. Inspired by the CS and ZA techniques [14–20], we propose an adaption-norm penalized SM-NLMS algorithm obtained by exerting $l_0$-norm and $l_1$-norm penalties on the cost function of the basic SM-NLMS to construct a desired independent norm penalty. The proposed algorithms produce zero-attraction terms that divide the filter coefficients into large and small groups; by dynamically assigning the $l_0$-norm and $l_1$-norm penalties to different channel taps, the proposed algorithms attract the large and small groups of taps to zero via the $l_0$-norm and $l_1$-norm penalties, respectively. Here, the mixture of $l_0$-norm and $l_1$-norm penalties is implemented through an $l_p$-norm penalty, described as

$$\|\hat{\mathbf{w}}(n)\|_p^p = \sum_{i=0}^{N-1} |\hat{w}_i(n)|^p, \qquad (13)$$

where $0 \le p \le 1$. The $l_0$-norm and $l_1$-norm are then obtained as the limits

$$\lim_{p \to 0} \|\hat{\mathbf{w}}(n)\|_p^p = \|\hat{\mathbf{w}}(n)\|_0, \qquad (14)$$

$$\lim_{p \to 1} \|\hat{\mathbf{w}}(n)\|_p^p = \|\hat{\mathbf{w}}(n)\|_1. \qquad (15)$$

We note that $\|\hat{\mathbf{w}}(n+1)\|_0$ counts the nonzero channel coefficients of the sparse channel.
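A quick numerical check of the limits in (14)-(15); the example tap vector below is ours, purely for illustration:

```python
import numpy as np

w = np.array([0.9, -0.5, 0.0, 0.01])   # example tap vector with one exact zero

def lp_norm_p(w, p):
    """||w||_p^p = sum_i |w_i|^p as in (13); the term for w_i = 0 is taken as 0."""
    a = np.abs(w)
    return np.sum(np.where(a > 0, a ** p, 0.0))

print(lp_norm_p(w, 1e-6))  # ~3.0 -> the l0-"norm" (nonzero tap count), eq. (14)
print(lp_norm_p(w, 1.0))   # 1.41 -> the l1-norm 0.9 + 0.5 + 0.01, eq. (15)
```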
To utilize the advantages of the $l_0$-norm and $l_1$-norm, the proposed adaption-norm penalized SM-NLMS algorithm is given by

$$\min \; \|\hat{\mathbf{w}}(n+1) - \hat{\mathbf{w}}(n)\|^2 + \gamma_{\text{pro}}\|\hat{\mathbf{w}}(n+1)\|_p^p \quad \text{s.t.} \quad \left|d(n) - \mathbf{x}^T(n)\hat{\mathbf{w}}(n+1)\right| = \gamma, \qquad (16)$$

where $\gamma_{\text{pro}}$ is a ZA parameter used for controlling the sparsity penalty and the convergence rate. As a result, the modified cost function of our developed algorithm is

$$J_{\text{pro}}(n) = \|\hat{\mathbf{w}}(n+1) - \hat{\mathbf{w}}(n)\|^2 + \gamma_{\text{pro}}\|\hat{\mathbf{w}}(n+1)\|_p^p + \lambda\left(d(n) - \mathbf{x}^T(n)\hat{\mathbf{w}}(n+1) - \gamma\right). \qquad (17)$$

The Lagrange multiplier method is again adopted to minimize the cost function in (17). Setting the derivatives to zero gives

$$\frac{\partial J_{\text{pro}}(n)}{\partial \hat{\mathbf{w}}(n+1)} = \frac{\partial J_{\text{SM-NLMS}}(n)}{\partial \hat{\mathbf{w}}(n+1)} + \gamma_{\text{pro}}\frac{\partial \|\hat{\mathbf{w}}(n+1)\|_p^p}{\partial \hat{\mathbf{w}}(n+1)} = 0, \qquad \frac{\partial J_{\text{pro}}(n)}{\partial \lambda} = 0. \qquad (18)$$

From (18), we get

$$\hat{w}_i(n+1) = \hat{w}_i(n) + \frac{\lambda}{2}x_i(n) - \gamma_{\text{pro}}\frac{p\,\text{sgn}[\hat{w}_i(n+1)]}{2|\hat{w}_i(n+1)|^{1-p}}, \qquad (19)$$

$$x_i^T(n)\hat{w}_i(n+1) = d(n) - \gamma. \qquad (20)$$

By left-multiplying both sides of (19) by $x_i^T(n)$ and combining with (20), we have

$$\frac{\lambda}{2} = \left[x_i^T(n)x_i(n)\right]^{-1}\{e(n) - \gamma\} + \frac{\gamma_{\text{pro}}}{2}\left[x_i^T(n)x_i(n)\right]^{-1}x_i^T(n)\frac{p\,\text{sgn}[\hat{w}_i(n+1)]}{|\hat{w}_i(n+1)|^{1-p}}. \qquad (21)$$

Then, substituting (21) into (19), we can get

$$\hat{w}_i(n+1) = \hat{w}_i(n) + \frac{x_i(n)}{x_i^T(n)x_i(n)}\{e(n) - \gamma\} + \frac{\gamma_{\text{pro}}}{2}\frac{p\,\text{sgn}[\hat{w}_i(n+1)]}{|\hat{w}_i(n+1)|^{1-p}}G_i(n). \qquad (22)$$

In order to prevent division by zero, we add a very small constant $\delta > 0$, so the updating equation of our developed algorithm becomes

$$\hat{w}_i(n+1) = \hat{w}_i(n) + \frac{x_i(n)\{e(n) - \gamma\}}{x_i^T(n)x_i(n) + \delta} + \frac{\gamma_{\text{pro}}}{2}\frac{p\,\text{sgn}[\hat{w}_i(n+1)]}{|\hat{w}_i(n+1)|^{1-p} + \delta}G_i(n). \qquad (23)$$

However, we note that the ZA term in this update equation will inevitably introduce error into the sparse channel estimate. Fortunately, the parameter $p$ is a variable according to our previous definition. Thus, we can redefine the $l_p$-norm penalty further, per tap, as

$$\|\hat{\mathbf{w}}(n)\|_{p,N}^p = \sum_{i=1}^{N} |\hat{w}_i(n)|^{p_i}, \quad 0 \le p_i \le 1. \qquad (24)$$

By considering the definition of (24), (23) is updated to

$$\hat{w}_i(n+1) = \hat{w}_i(n) + \frac{x_i(n)\{e(n) - \gamma\}}{x_i^T(n)x_i(n) + \delta} + \frac{\gamma_{\text{pro}}}{2}\frac{p_i\,\text{sgn}[\hat{w}_i(n+1)]}{|\hat{w}_i(n+1)|^{1-p_i} + \delta}G_i(n). \qquad (25)$$

From (25), the zero-attractor term suggests that the values of $\hat{w}_i(n+1)$ can be divided into small and large groups. Herein, we define the expectation value

$$h(n) = E\left[|\hat{w}_i(n)|\right], \quad \forall\, 0 \le i < N. \qquad (26)$$

For the large group, we aim to make $p_i|\hat{w}_i(n)|^{p_i - 1} = 0$, subject to $|\hat{w}_i(n)| > h(n)$, while for the small group we use $p_i = 1$ to balance the solution [18]. Thereby, $p_i$ is assigned to $0$ or $1$ for $|\hat{w}_i(n)| > h(n)$ or $|\hat{w}_i(n)| < h(n)$, respectively. Thus, the proposed $p$-norm is separated into an $l_0$-norm and an $l_1$-norm by comparing each tap with the mean magnitude of the channel taps, and the proposed algorithm can be regarded as an $l_0$-norm penalized SM-NLMS for the large group and an $l_1$-norm penalized SM-NLMS for the small group. The updating equation (25) then becomes

$$\hat{w}_i(n+1) = \hat{w}_i(n) + \frac{x_i(n)\{e(n) - \gamma\}}{x_i^T(n)x_i(n) + \delta} + \rho_{\text{pro}}f_i\,\text{sgn}[\hat{w}_i(n+1)]\,G_i(n), \qquad (27)$$

where $G_i(n) = x_i(n)x_i^T(n)/[x_i^T(n)x_i(n)] - 1$, $\rho_{\text{pro}}$ is a ZA strength controlling factor used to balance convergence against sparseness, and

$$f_i = \frac{\text{sgn}\left[h(n) - |\hat{w}_i(n)|\right] + 1}{2}, \quad \forall\, 0 \le i < N. \qquad (28)$$

When the proposed algorithm is close to steady state, we have $\text{sgn}[\hat{w}_i(n+1)] = \text{sgn}[\hat{w}_i(n)]$, so (27) becomes

$$\hat{w}_i(n+1) = \hat{w}_i(n) + \frac{x_i(n)\,\alpha e(n)}{x_i^T(n)x_i(n) + \delta} + \rho_{\text{pro}}f_i\,\text{sgn}[\hat{w}_i(n)]\,G_i(n). \qquad (29)$$

In fact, we can use the diagonal matrix

$$\mathbf{F} = \text{diag}\left(f_0, f_1, \ldots, f_{N-1}\right) \qquad (30)$$
to define all $f_i$ in each iteration of (29). Then, the proposed algorithm can also be written in vector form as

$$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \frac{\mathbf{x}(n)\,\alpha e(n)}{\mathbf{x}^T(n)\mathbf{x}(n) + \delta} + \rho_{\text{pro}}\mathbf{F}\,\text{sgn}[\hat{\mathbf{w}}(n)]\,\mathbf{G}(n). \qquad (31)$$

As written, the proposed algorithm is complex because of the computation of $\mathbf{G}(n)$. A simpler method can be used to solve (16) by considering (24) and the separation of the $l_0$-norm and $l_1$-norm. The L0L1SM-NLMS algorithm then solves the unconstrained optimization problem

$$\min \; \frac{1}{2}\|\hat{\mathbf{w}}(n+1) - \hat{\mathbf{w}}(n)\|^2 + \frac{1}{2}\|e(n) - \gamma\|^2_{\mathbf{S}(n)} + \gamma_{\text{L0L1}}\|\hat{\mathbf{w}}(n+1)\|_p^p. \qquad (32)$$
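The gating coefficients $f_i$ of (28) are cheap to compute in each iteration; a minimal sketch (function name ours) is:

```python
import numpy as np

def group_gate(w_hat):
    """f_i of eq. (28): 1 for taps below the mean magnitude
    h(n) = E[|w_i(n)|] (small group, l1-penalized), 0 for taps
    above it (large group, l0-like treatment)."""
    h = np.mean(np.abs(w_hat))                   # eq. (26)
    return (np.sign(h - np.abs(w_hat)) + 1) / 2  # eq. (28)

# F of eq. (30) is just diag(f), so multiplying F by a vector reduces to an
# element-wise product: F @ np.sign(w_hat) == group_gate(w_hat) * np.sign(w_hat)
```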
Here, $\mathbf{S}(n) = [\mathbf{x}^T(n)\mathbf{x}(n)]^{-1}$ and $\|e(n) - \gamma\|^2_{\mathbf{S}(n)} = [e(n) - \gamma]^T\mathbf{S}(n)[e(n) - \gamma]$. The updating equation of our L0L1SM-NLMS is then obtained as

$$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \frac{\mathbf{x}(n)\,\alpha e(n)}{\mathbf{x}^T(n)\mathbf{x}(n) + \delta} - \rho_{\text{L0L1}}\mathbf{F}\,\text{sgn}[\hat{\mathbf{w}}(n)]. \qquad (33)$$

The developed L0L1SM-NLMS has a final zero-attractor term, which exerts $l_0$-norm or $l_1$-norm penalties on the channel taps according to the sparsity of the channel in practical applications; the penalty is assigned via the matrix $\mathbf{F}$ to the separated large and small groups. Thus, the L0L1SM-NLMS algorithm can enhance the convergence rate and reduce the estimation bias when estimating sparse channels. It synthesizes the advantages of the $l_0$-norm and $l_1$-norm penalties and serves as a dynamic adaptive norm constraint that applies an $l_0$-norm penalty to the large channel taps and an $l_1$-norm penalty to the small ones. Similar to the RZASM-NLMS, the proposed L0L1SM-NLMS algorithm can be further enhanced by introducing a reweighting factor into (33), which yields the reweighted L0L1SM-NLMS (RL0L1SM-NLMS) algorithm. The per-tap updating equation of the RL0L1SM-NLMS is

$$\hat{w}_i(n+1) = \hat{w}_i(n) + \frac{x_i(n)\,\alpha e(n)}{x_i^T(n)x_i(n) + \delta} - \rho_{\text{RL0L1}}f_i\frac{\text{sgn}[\hat{w}_i(n)]}{1 + \xi|\hat{w}_i(n)|}, \qquad (34)$$

where $\rho_{\text{RL0L1}}$ is a ZA parameter and $\xi$ is a positive parameter that adjusts the reweighting strength. Considering all the channel taps and the training signals, (34) becomes

$$\hat{\mathbf{w}}(n+1) = \hat{\mathbf{w}}(n) + \frac{\mathbf{x}(n)\,\alpha e(n)}{\mathbf{x}^T(n)\mathbf{x}(n) + \delta} - \rho_{\text{RL0L1}}\mathbf{F}\frac{\text{sgn}[\hat{\mathbf{w}}(n)]}{1 + \xi|\hat{\mathbf{w}}(n)|}. \qquad (35)$$

Here, the matrix $\mathbf{F}$ is the same as in (33). Compared with the L0L1SM-NLMS, the RL0L1SM-NLMS has the reweighting factor $1/(1 + \xi|\hat{\mathbf{w}}(n)|)$ in its update equation; by properly choosing the parameter $\xi$, this factor creates an attracting constraint on the small or large grouped channel coefficients, thereby effectively adjusting the $l_0$-norm and $l_1$-norm constraints, respectively.
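Putting (33)-(35) together, the following compact sketch implements one iteration of both proposed updates (our minimal reading of the equations; the names and the `reweighted` flag are ours, and `group_gate` is the helper sketched after (32)):

```python
import numpy as np

def l0l1_sm_nlms_update(w_hat, x, d, gamma, rho, delta=1e-2,
                        reweighted=False, xi=0.05):
    """One L0L1SM-NLMS iteration, eq. (33); with reweighted=True it
    becomes the RL0L1SM-NLMS iteration, eq. (35)."""
    e = d - x @ w_hat
    if abs(e) > gamma:                        # data-selective SM update, eq. (7)
        alpha = 1.0 - gamma / abs(e)
        w_hat = w_hat + alpha * e * x / (x @ x + delta)
    f = group_gate(w_hat)                     # per-tap gate f_i, eq. (28)
    za = f * np.sign(w_hat)                   # F sgn[w]: attraction on small taps only
    if reweighted:
        za = za / (1.0 + xi * np.abs(w_hat))  # reweighting factor of eq. (35)
    return w_hat - rho * za
```

Note how the gate $f_i$ realizes the norm separation: small taps ($f_i = 1$) receive the $l_1$-style pull toward zero, while large taps ($f_i = 0$) are left untouched, which is the $l_0$-like treatment of the dominant coefficients.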
4. Simulation Results

Here, we investigate the behavior of our presented L0L1SM-NLMS and RL0L1SM-NLMS algorithms. Specifically, the convergence properties and steady-state misalignment of the developed L0L1SM- and RL0L1SM-NLMS algorithms are assessed over sparse channels with different sparsity levels $K$, and we also examine the effects of the ZA parameters. The performance is obtained by computer simulation, and the behaviors of the L0L1SM- and RL0L1SM-NLMS algorithms are compared with the classic LMS, NLMS, and SM-NLMS algorithms and the popular ZA-LMS, RZA-LMS, and ZA-, RZA-, ZASM-, and RZASM-NLMS algorithms. We adopt the MSE criterion to evaluate the sparse channel estimators:

$$\text{MSE} = E\left\{\|\mathbf{w}(n) - \hat{\mathbf{w}}(n)\|^2\right\}. \qquad (36)$$
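For reproducibility, a bare-bones Monte Carlo harness for the MSE curves of (36) might look like the following, reusing `l0l1_sm_nlms_update` from the sketch above. The setup values ($N = 16$, $K = 1$, 500 iterations, noise power $10^{-2}$, $\gamma = \sqrt{2\sigma_v^2}$, $\rho = 10^{-3}$) mirror the text of this section, but the number of independent runs and the unit amplitude of the nonzero tap are our assumptions, as the paper does not state them:

```python
import numpy as np

N, K, runs, iters = 16, 1, 100, 500
rng = np.random.default_rng(0)
sigma_v2 = 1e-2                          # noise power 1e-2 (SNR = 20 dB)
gamma = np.sqrt(2 * sigma_v2)            # SM error bound used in the text

mse = np.zeros(iters)
for _ in range(runs):
    w = np.zeros(N)
    w[rng.choice(N, K, replace=False)] = 1.0   # K-sparse channel (amplitude assumed)
    w_hat, xbuf = np.zeros(N), np.zeros(N)
    for n in range(iters):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = rng.standard_normal()        # unit-power Gaussian input
        d = xbuf @ w + np.sqrt(sigma_v2) * rng.standard_normal()
        w_hat = l0l1_sm_nlms_update(w_hat, xbuf, d, gamma, rho=1e-3)
        mse[n] += np.sum((w - w_hat) ** 2)
mse /= runs                              # Monte Carlo estimate of eq. (36)
```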
In all the experiments, $\mathbf{x}(n)$ is a Gaussian random signal that is independent of $v(n)$. The powers of $\mathbf{x}(n)$ and of the white noise are $1$ and $1 \times 10^{-2}$, respectively, so the signal-to-noise ratio (SNR) is set to 20 dB. In the first experiment, we consider a channel of length $N = 16$ with only one nonzero tap ($K = 1$) to examine the effects of $\rho_{\text{L0L1}}$ and $\rho_{\text{RL0L1}}$ on the developed L0L1SM- and RL0L1SM-NLMS algorithms, respectively. Here, the simulation parameters of the L0L1SM-NLMS algorithm are $\delta = 0.01$ and $\gamma = \sqrt{2\sigma_v^2}$. Figure 1 gives the effects of $\rho_{\text{L0L1}}$ on our L0L1SM-NLMS algorithm. As we can see from Figure 1, the MSE is large for $\rho_{\text{L0L1}} = 1 \times 10^{-1}$. When $\rho_{\text{L0L1}}$ decreases from $5 \times 10^{-2}$ to $1 \times 10^{-3}$, the MSE of the developed L0L1SM-NLMS algorithm is reduced, indicating that it achieves a low estimation misalignment; if $\rho_{\text{L0L1}}$ continues to decrease, the MSE grows again. Next, the effect of $\rho_{\text{RL0L1}}$ is shown in Figure 2 with $\xi = 0.05$ and $\gamma = \sqrt{2\sigma_v^2}$. The MSE gradually decreases as $\rho_{\text{RL0L1}}$ is reduced from $1 \times 10^{-1}$ to $5 \times 10^{-3}$; when $\rho_{\text{RL0L1}}$ ranges from $1 \times 10^{-3}$ down to $5 \times 10^{-5}$, the MSE grows again with the decrement of $\rho_{\text{RL0L1}}$. Thus, $\rho_{\text{L0L1}}$ and $\rho_{\text{RL0L1}}$ should be properly selected to obtain the best behavior of the presented L0L1SM- and RL0L1SM-NLMS algorithms.

We then verify the estimation behaviors of the L0L1SM- and RL0L1SM-NLMS algorithms over a sparse channel with different sparsity levels and compare them with the LMS, NLMS, SM-NLMS, and their sparse forms. The convergence of the presented L0L1SM- and RL0L1SM-NLMS algorithms compared with the NLMS-type algorithms is shown in Figure 3.
Figure 1: Effects of $\rho_{\text{L0L1}}$ on our L0L1SM-NLMS algorithm (MSE versus iterations for $\rho_{\text{L0L1}}$ from $1 \times 10^{-1}$ down to $1 \times 10^{-6}$).

Figure 2: Effects of $\rho_{\text{RL0L1}}$ on our RL0L1SM-NLMS algorithm (MSE versus iterations for $\rho_{\text{RL0L1}}$ from $1 \times 10^{-1}$ down to $5 \times 10^{-5}$).

Figure 3: Convergence comparisons between the proposed algorithms and the NLMS-type algorithms for $K = 2$.
We observe that both the L0L1SM- and RL0L1SM-NLMS algorithms converge faster than the previously discussed LMS and NLMS algorithms, and it is worth pointing out that the developed RL0L1SM-NLMS algorithm provides the fastest convergence speed. The estimation performance in terms of steady-state misalignment is shown in Figures 4, 5, and 6 for $K = 2$, $K = 4$, and $K = 8$, respectively. In these experiments, the simulation parameters are $\mu_{\text{NLMS}} = 0.5$ for the NLMS algorithm and its sparse forms, $\gamma_{\text{SM}} = \gamma_{\text{ZASM}} = \gamma_{\text{RZASM}} = \gamma_{\text{L0L1}} = \gamma_{\text{RL0L1}} = \sqrt{2\sigma_v^2}$, $\rho_{\text{ZANLMS}} = \rho_{\text{RZASM}} = 5 \times 10^{-4}$, $\rho_{\text{ZASM}} = 4 \times 10^{-4}$, and $\xi = 0.05$ for our RL0L1SM-NLMS algorithm.
Figure 4: Steady-state behavior comparisons between the proposed algorithms and the NLMS-type algorithms for $K = 2$.
Herein, the parameter $\mu_{\text{NLMS}}$ denotes the step size of the NLMS algorithm and its sparse forms, while $\rho_{\text{ZASM}}$, $\rho_{\text{ZANLMS}}$, and $\rho_{\text{RZASM}}$ are the ZA parameters of the previously reported ZASM-, ZA-, and RZASM-NLMS algorithms, respectively.
Figure 5: Steady-state behavior comparisons between the proposed algorithms and the NLMS-type algorithms for $K = 4$.

Figure 6: Steady-state behavior comparisons between the proposed algorithms and the NLMS-type algorithms for $K = 8$.
The L0L1SM- and RL0L1SM-NLMS algorithms reach lower steady-state estimation error floors than the traditional NLMS and SM-NLMS and their recently developed sparse variants, because our proposed algorithms can adaptively adjust the zero attraction by dividing the sparse channel taps into small and large groups and exerting $l_1$- and $l_0$-norm penalties on each group, respectively. The proposed RL0L1SM-NLMS
algorithm provides the lowest estimation misalignment compared with the recently reported ZASM-NLMS and RZASM-NLMS algorithms when $K$ equals 2. When $K$ increases from 4 to 8, the steady-state misalignment becomes higher in comparison with that of $K = 2$; however, the steady-state error remains better than those of the sparse SM-NLMS and NLMS algorithms and their related sparse variants realized by ZA techniques.

The steady-state behaviors of our L0L1SM- and RL0L1SM-NLMS algorithms are also fully verified against the existing LMS-type algorithms, including the LMS, ZA-LMS, RZA-LMS, ZA-NLMS, and RZA-NLMS. Here, the parameters of the L0L1SM- and RL0L1SM-NLMS algorithms are the same as in the last experiment, while the extra simulation parameters are $\mu_{\text{LMS}} = 0.05$, $\rho_{\text{ZA-LMS}} = 5 \times 10^{-4}$, and $\rho_{\text{RZA-LMS}} = 9 \times 10^{-4}$, where $\mu_{\text{LMS}}$ is the step size of the LMS algorithm and $\rho_{\text{ZA-LMS}}$ and $\rho_{\text{RZA-LMS}}$ are the ZA parameters of the ZA- and RZA-LMS algorithms, respectively. The estimation behaviors for $K = 2$, $K = 4$, and $K = 8$ are described in Figures 7, 8, and 9, respectively. Our RL0L1SM-NLMS algorithm achieves fast convergence and possesses the lowest estimation misalignment, and the proposed L0L1SM-NLMS algorithm is also better than the mentioned algorithms in terms of MSE. When the number of dominant coefficients increases to 8, the steady-state misalignment of our developed algorithms increases; however, they still yield excellent steady-state behavior, implying that the presented L0L1SM- and RL0L1SM-NLMS algorithms are more powerful than the popular LMS and its sparse versions. In addition, the estimation behaviors of the constructed L0L1SM- and RL0L1SM-NLMS algorithms are compared in Figure 10; the estimation misalignment of the RL0L1SM-NLMS algorithm is lower than that of the L0L1SM-NLMS algorithm for every sparsity level $K$, indicating that the proposed RL0L1SM-NLMS algorithm is more robust.

Finally, the estimation behavior of our L0L1SM- and RL0L1SM-NLMS algorithms is studied on an echo channel, illustrated in Figure 11 as an example, with 16 nonzero taps ($K = 16$) and length 256 ($N = 256$).

Figure 11: A typical sparse echo channel.

In this experiment, the sparsity is measured as $\zeta_{12}(\mathbf{w}) = \frac{N}{N - \sqrt{N}}\left(1 - \frac{\|\mathbf{w}\|_1}{\sqrt{N}\,\|\mathbf{w}\|_2}\right)$ [28, 35]. The SNR is 30 dB, and the parameters used in this experiment are $\mu_{\text{NLMS}} = \mu_{\text{ZA-NLMS}} = 0.5$, $\rho_{\text{ZA-NLMS}} = 3 \times 10^{-6}$, $\rho_{\text{ZASM}} = 5 \times 10^{-6}$, $\rho_{\text{RZASM}} = 6 \times 10^{-6}$, $\rho_{\text{L0L1}} = 7 \times 10^{-6}$, and $\rho_{\text{RL0L1}} = 1 \times 10^{-5}$; the other parameters are the same as in the experiments above. We use $\zeta_{12}(\mathbf{w}) = 0.8222$ for the first 10000 iterations and $\zeta_{12}(\mathbf{w}) = 0.7362$ for the second 10000 iterations. Herein, only the NLMS and its sparse forms are used for comparison, since the NLMS algorithm is better than the LMS algorithm [22]. The computer simulation result is given in Figure 12.

Figure 12: Tracking behavior of our L0L1SM- and RL0L1SM-NLMS algorithms over an echo channel.

As described in Figure 12, the steady-state misalignment of our developed L0L1SM- and RL0L1SM-NLMS algorithms is superior to that of the previously developed RZASM-NLMS algorithm. After the first 10000 iterations, the steady-state error of our algorithms becomes larger; however, their estimation behaviors are still better than those of the mentioned channel estimation algorithms. Therefore, we conclude that the developed L0L1SM- and RL0L1SM-NLMS algorithms achieve excellent convergence and estimation behavior for sparse adaptive channel estimation.
Figure 7: Steady-state behavior comparisons between our algorithms and the LMS-type algorithms for $K = 2$.

Figure 8: Steady-state behavior comparisons between our algorithms and the LMS-type algorithms for $K = 4$.

Figure 9: Steady-state behavior comparisons between our algorithms and the LMS-type algorithms for $K = 8$.

Figure 10: Steady-state behavior comparisons of our L0L1SM- and RL0L1SM-NLMS algorithms with different sparsity levels.
5. Conclusion
An L0L1SM-NLMS algorithm and an RL0L1SM-NLMS algorithm have been proposed and their derivations introduced in detail. The proposed L0L1SM-NLMS algorithm is realized by integrating a $p$-norm into the SM-NLMS cost function and separating the $p$-norm into $l_0$- and $l_1$-norms via the mean magnitude of the channel taps, and the RL0L1SM-NLMS algorithm is implemented by adding a reweighting factor to the L0L1SM-NLMS algorithm to enhance the ZA strength. Both proposed algorithms provide a ZA-like attraction on the small channel coefficients and give rise to an $l_0$-norm penalized treatment of the large channel taps. The proposed L0L1SM- and RL0L1SM-NLMS algorithms have been evaluated over a sparse channel with different sparsity levels and over a designated echo channel. The computer simulations over these channels show that our RL0L1SM-NLMS algorithm has the best behavior with respect to convergence and steady-state channel estimation. Furthermore, both the proposed L0L1SM- and RL0L1SM-NLMS algorithms provide better estimation behavior than the traditional LMS, NLMS, and SM-NLMS algorithms and the previously reported popular sparse algorithms.
Competing Interests

The authors declare that there is no conflict of interests regarding the publication of this article.

Acknowledgments

This work was partially supported by the National Key Research and Development Program of China-Government Corporation Special Program (2016YFE0111100), the Science and Technology Innovative Talents Foundation of Harbin (2016RAXXJ044), the Foundational Research Funds for the Central Universities (HEUCFD1433, HEUCF160815), and Projects for the Selected Returned Overseas Chinese Scholars of MOHRSS and Heilongjiang Province of China.

References

[1] S. F. Cotter and B. D. Rao, "Sparse channel estimation via matching pursuit with application to equalization," IEEE Transactions on Communications, vol. 50, no. 3, pp. 374–377, 2002.
[2] Y. Li and M. Hamamura, "Zero-attracting variable-step-size least mean square algorithms for adaptive sparse channel estimation," International Journal of Adaptive Control and Signal Processing, vol. 29, no. 9, pp. 1189–1206, 2015.
[3] G. Gui and F. Adachi, "Improved least mean square algorithm with application to adaptive sparse channel estimation," EURASIP Journal on Wireless Communications and Networking, vol. 2013, no. 1, article 204, 2013.
[4] F. Adachi, D. Grag, S. Takaoka et al., "New direction of broadband CDMA techniques," Wireless Communications and Mobile Computing, vol. 7, no. 8, pp. 969–983, 2007.
[5] L. Dai, Z. Wang, and Z. Yang, "Next-generation digital television terrestrial broadcasting systems: key technologies and research trends," IEEE Communications Magazine, vol. 50, no. 6, pp. 150–158, 2012.
[6] L. Vuokko, V.-M. Kolmonen, J. Salo, and P. Vainikainen, "Measurement of large-scale cluster power characteristics for geometric channel models," IEEE Transactions on Antennas and Propagation, vol. 55, no. 11, pp. 3361–3365, 2007.
[7] B. Widrow and S. D. Stearns, Adaptive Signal Processing, Prentice Hall, Upper Saddle River, NJ, USA, 1985.
[8] S. Haykin, Adaptive Filter Theory, Prentice Hall, Upper Saddle River, NJ, USA, 1991.
[9] D. Schafhuber, G. Matz, and F. Hlawatsch, "Adaptive Wiener filters for time-varying channel estimation in wireless OFDM systems," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '03), pp. 688–691, IEEE, Hong Kong, April 2003.
[10] S. Gollamudi, S. Nagaraj, S. Kapoor, and Y.-F. Huang, "Set-membership filtering and a set-membership normalized LMS algorithm with an adaptive step size," IEEE Signal Processing Letters, vol. 5, no. 5, pp. 111–114, 1998.
[11] P. S. Diniz and S. Werner, "Set-membership binormalized data-reusing LMS algorithms," IEEE Transactions on Signal Processing, vol. 51, no. 1, pp. 124–134, 2003.
[12] W. Tang, Low-complexity signal processing algorithms for wireless sensor networks [Ph.D. thesis], University of York, York, UK, 2012.
[13] S. Zhang and J. Zhang, "Set-membership NLMS algorithm with robust error bound," IEEE Transactions on Circuits and Systems II: Express Briefs, vol. 61, no. 7, pp. 536–540, 2014.
[14] Y. Chen, Y. Gu, and A. O. Hero III, "Sparse LMS for system identification," in Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '09), pp. 3125–3128, Taipei, Taiwan, April 2009.
[15] Y. Gu, J. Jin, and S. Mei, "l0 norm constraint LMS algorithm for sparse system identification," IEEE Signal Processing Letters, vol. 16, no. 9, pp. 774–777, 2009.
[16] E. M. Eksioglu, "Sparsity regularised recursive least squares adaptive filtering," IET Signal Processing, vol. 5, no. 5, pp. 480–487, 2011.
[17] G. Gui, W. Peng, and F. Adachi, "Improved adaptive sparse channel estimation based on the least mean square algorithm," in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '13), pp. 3105–3109, April 2013.
[18] F. Y. Wu and F. Tong, "Non-uniform norm constraint LMS algorithm for sparse system identification," IEEE Communications Letters, vol. 17, no. 2, pp. 385–388, 2013.
[19] R. Tibshirani, "Regression shrinkage and selection via the lasso," Journal of the Royal Statistical Society. Series B. Methodological, vol. 58, no. 1, pp. 267–288, 1996.
[20] D. L. Donoho, "Compressed sensing," IEEE Transactions on Information Theory, vol. 52, no. 4, pp. 1289–1306, 2006.
[21] O. Taheri and S. A. Vorobyov, "Sparse channel estimation with lp-norm and reweighted l1-norm penalized least mean squares," in Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP '11), pp. 2864–2867, Prague, Czech Republic, May 2011.
[22] T. Zhang and G. Gui, "Convergence analysis of RL1-SLMS-based robust adaptive sparse channel estimation," in Proceedings of the 8th International Conference on Wireless Communications & Signal Processing (WCSP '16), pp. 1–5, Yangzhou, China, October 2016.
[23] G. Gui and F. Adachi, "Sparse least mean fourth algorithm for adaptive channel estimation in low signal-to-noise ratio region," International Journal of Communication Systems, vol. 27, no. 11, pp. 3147–3157, 2014.
[24] G. Gui, L. Xu, and S.-Y. Matsushita, "Improved adaptive sparse channel estimation using mixed square/fourth error criterion," Journal of the Franklin Institute, vol. 352, no. 10, pp. 4579–4594, 2015.
[25] Y. Li, Y. Wang, and T. Jiang, "Norm-adaption penalized least mean square/fourth algorithm for sparse channel estimation," Signal Processing, vol. 128, pp. 243–251, 2016.
[26] Y. Li, Y. Wang, and T. Jiang, "Low complexity norm-adaption least mean square/fourth algorithm and its applications for sparse channel estimation," in Proceedings of the IEEE Wireless Communications and Networking Conference (WCNC '16), pp. 1–6, Doha, Qatar, April 2016.
[27] Y. Li, Y. Wang, and F. Albu, "Sparse channel estimation based on a reweighted least-mean mixed-norm adaptive filter algorithm," in Proceedings of the 24th European Signal Processing Conference (EUSIPCO '16), Budapest, Hungary, 2016.
[28] M. S. Salman, "Sparse leaky-LMS algorithm for system identification and its convergence analysis," International Journal of Adaptive Control and Signal Processing, vol. 28, no. 10, pp. 1065–1072, 2014.
[29] R. Meng, R. C. De Lamare, and V. H. Nascimento, "Sparsity-aware affine projection adaptive algorithms for system identification," in Proceedings of the Sensor Signal Processing for Defence (SSPD '11), pp. 1–5, London, UK, September 2011.
[30] Y. Li and M. Hamamura, "Smooth approximation l0-norm constrained affine projection algorithm and its applications in sparse channel estimation," The Scientific World Journal, vol. 2014, Article ID 937252, 15 pages, 2014.
[31] M. V. S. Lima, W. A. Martins, and P. S. R. Diniz, "Affine projection algorithms for sparse system identification," in Proceedings of the 38th IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP '13), pp. 5666–5670, Vancouver, Canada, May 2013.
[32] Y. Li, C. Zhang, and S. Wang, "Low-complexity non-uniform penalized affine projection algorithm for sparse system identification," Circuits, Systems, and Signal Processing, vol. 35, no. 5, pp. 1611–1624, 2016.
[33] Y. Li, W. Li, W. Yu, J. Wan, and Z. Li, "Sparse adaptive channel estimation based on lp-norm-penalized affine projection algorithm," International Journal of Antennas and Propagation, vol. 2014, Article ID 434659, 8 pages, 2014.
[34] M. V. S. Lima, T. N. Ferreira, W. A. Martins, and P. S. Diniz, "Sparsity-aware data-selective adaptive filters," IEEE Transactions on Signal Processing, vol. 62, no. 17, pp. 4557–4572, 2014.
[35] Y. Li, Y. Wang, and T. Jiang, "Sparse-aware set-membership NLMS algorithms and their application for sparse channel estimation and echo cancelation," AEU—International Journal of Electronics and Communications, vol. 70, no. 7, pp. 895–902, 2016.