Improved Adaptive Sparse Channel Estimation Using Re-Weighted L1-norm Normalized Least Mean Fourth Algorithm

Chen Ye†1, Guan Gui1, Li Xu1, and Nobuhiro Shimoi2

1. Department of Electronics and Information Systems, Akita Prefectural University, Yurihonjo, Japan
2. Department of Machine Intelligence and Systems Engineering, Akita Prefectural University, Yurihonjo, Japan
(Tel: +81-184-27-2241; E-mail: [email protected])

Abstract: In frequency-selective fading broadband wireless communication systems, two adaptive sparse channel estimation (ASCE) methods using the zero-attracting normalized least mean fourth (ZA-NLMF) algorithm and the reweighted ZA-NLMF (RZA-NLMF) algorithm have been proposed to mitigate noise and to exploit channel sparsity. Motivated by compressive sensing, in this paper an improved ASCE method is proposed using the reweighted L1-norm NLMF (RL1-NLMF) algorithm, where RL1 can exploit more sparsity information than ZA and RZA. Specifically, we construct the cost function of the RL1-NLMF algorithm and then derive its update equation. An intuitive illustration is also given to demonstrate that RL1 is more efficient than the two conventional sparsity constraints. Finally, simulation results show that the proposed method achieves better estimation performance than the two conventional ones.

Keywords: NLMF, adaptive sparse channel estimation, ZA-NLMF, RL1-NLMF.

1. INTRODUCTION

Broadband signal transmission is becoming one of the mainstream techniques in next-generation communication systems [1]–[3]. Due to frequency-selective channel fading, accurate channel state information (CSI) is necessary for coherent detection [4]. One effective approach is adaptive channel estimation (ACE); a typical framework is shown in Fig. 1. It is well known that ACE using the least mean fourth (LMF) algorithm outperforms the least mean square (LMS) algorithm in balancing convergence speed and steady-state performance [5]. However, the standard LMF algorithm is unstable, because its stability depends on three factors: input signal power, noise power, and weight initialization [5]. To ensure the stability of the LMF algorithm, the normalized LMF (NLMF) algorithm was proposed in [6][7]. But the standard NLMF algorithm does not take advantage of channel sparsity.

Recently, many channel measurement experiments have verified that broadband channels usually exhibit a sparse structure, as shown in Fig. 2. In other words, a sparse channel consists of extremely few dominant channel coefficients, while most coefficients are close to zero [8]–[10]. To estimate sparse channels, two ASCE methods were proposed by incorporating a sparse constraint into the standard NLMF algorithm: zero-attracting NLMF (ZA-NLMF) and reweighted ZA-NLMF (RZA-NLMF) [11]. According to compressive sensing (CS) theory [12], exploiting more sparsity information can further improve channel estimation performance. Motivated by this theory, in this paper we propose an improved ASCE method using the re-weighted L1-norm NLMF (RL1-NLMF) algorithm. Firstly, the cost function of RL1-NLMF is constructed and its update equation is derived. Secondly, to evaluate the sparse constraint strength of RL1, an intuitive illustration compares it with ZA and RZA. By virtue of Monte Carlo (MC) simulations, mean square deviation (MSD) performance curves verify the effectiveness of the proposed method with respect to the channel sparsity, the step-size, and the signal-to-noise ratio (SNR).

† Chen Ye is the presenter of this paper.

Fig.1 ASCE for broadband communication systems. (Block diagram: the input signal vector drives both the unknown FIR channel and the estimated FIR channel; their outputs, combined with additive noise, are compared to drive the adaptive algorithm.)

The rest of this paper is organized as follows. The standard NLMF algorithm is briefly introduced in Section 2. In Section 3, the two conventional ZA-NLMF and RZA-NLMF channel estimation algorithms are reviewed and an improved RL1-NLMF channel estimation algorithm is proposed. Numerical simulation results are presented in Section 4. Finally, we conclude this paper in Section 5.

Notation: Throughout the paper, capital bold letters and small bold letters denote matrices and row/column vectors, respectively; the superscript (·)^T denotes the transpose; E{·} denotes the expectation operator; ||h||_p stands for the ℓp-norm operator, defined as ||h||_p = (Σ_i |h_i|^p)^(1/p), where p ∈ {1, 2}.

Fig.2 A typical example of a sparse multipath channel (channel length 16, three dominant channel taps; magnitude versus channel tap index).

2. SYSTEM MODEL AND STANDARD NLMF ALGORITHM

Consider a baseband frequency-selective fading wireless communication system whose sparse channel has a finite impulse response (FIR) h = [h_1, h_2, ..., h_N]^T supported by only K nonzero taps. Assume that an input training signal x(n) is used to probe the unknown sparse channel. At the receiver, the observed signal d(n) is obtained as

d(n) = h^T x(n) + z(n),  (1)

where x(n) = [x(n), x(n−1), ..., x(n−N+1)]^T denotes the training signal vector and z(n) is additive white Gaussian noise (AWGN), assumed independent of x(n). The objective of ASCE is to adaptively estimate the channel h(n) using the training signal x(n) and the observed signal d(n).

According to [5], the cost function of the standard LMF algorithm is constructed as

G_LMF(n) = (1/4) e⁴(n),  (2)

where e(n) = d(n) − h^T(n)x(n) is the n-th adaptive updating error. Based on Eq. (2), the LMF algorithm can be derived as

h(n+1) = h(n) − μ ∂G_LMF(n)/∂h(n) = h(n) + μ e³(n)x(n),  (3)

where μ denotes the step-size of gradient descent. Since the LMF algorithm is unstable, it is hard to apply in channel estimation [2]. To improve its stability, the normalized LMF (NLMF) algorithm was proposed in [7]. Its update equation is given by

h(n+1) = h(n) + μ e³(n)x(n) / ( ||x(n)||₂² ( ||x(n)||₂² + e²(n) ) ) = h(n) + μ_N e(n)x(n) / ||x(n)||₂²,  (4)

where

μ_N ≜ μ e²(n) / ( ||x(n)||₂² + e²(n) )  (5)

denotes the variable step-size, which depends on the initial step-size μ, the update error e(n), and the input signal x(n). However, the standard NLMF algorithm cannot take advantage of the channel sparsity, and thus no additional performance gain is obtained. To exploit the channel sparsity, it is necessary to develop sparse NLMF algorithms. In the following, we review two sparse NLMF channel estimation algorithms (ZA-NLMF and RZA-NLMF) and propose an improved RL1-NLMF channel estimation algorithm.
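To make Eqs. (4) and (5) concrete, the following sketch implements the NLMF update in Python/NumPy. The 16-tap channel, binary training signal, noise level, step-size, and iteration count are illustrative assumptions for this sketch, not values taken from the paper.

```python
import numpy as np

def nlmf_update(h, x, d, mu):
    """One NLMF update step, Eqs. (4)-(5).

    h  : current channel estimate h(n), length-N vector
    x  : training signal vector x(n), length-N
    d  : observed sample d(n)
    mu : initial step-size.
    """
    e = d - h @ x                            # update error e(n)
    x_energy = x @ x                         # ||x(n)||_2^2
    mu_n = mu * e**2 / (x_energy + e**2)     # variable step-size, Eq. (5)
    return h + mu_n * e * x / x_energy       # normalized update, Eq. (4)

# Toy run on an assumed 16-tap sparse channel with 3 dominant taps.
rng = np.random.default_rng(0)
N = 16
h_true = np.zeros(N)
h_true[[2, 7, 11]] = [0.8, -0.5, 0.3]
h_est = np.zeros(N)
for n in range(5000):
    x = rng.choice([-1.0, 1.0], size=N)      # pseudo-random binary training
    d = h_true @ x + 0.01 * rng.standard_normal()
    h_est = nlmf_update(h_est, x, d, mu=1.5)
print(np.sum((h_est - h_true) ** 2))         # final squared deviation
```

Note how the fourth-power cost enters only through the variable step-size μ_N, which shrinks as e(n) decreases; this is what stabilizes the raw LMF recursion of Eq. (3).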

3. SPARSE NLMF CHANNEL ESTIMATION ALGORITHMS

3.1. ZA-NLMF algorithm

Recall the adaptive channel estimation method using the standard NLMF algorithm in Eq. (4). This standard linear method does not take advantage of channel sparsity, because its original cost function in (2) contains no sparse constraint or penalty function. Hence, an L1-norm sparse constraint [13] is added to obtain the new cost function

G_ZA(n) = (1/4) e⁴(n) + λ_ZA ||h(n)||₁,  (6)

where λ_ZA denotes a regularization parameter which balances the error term and the sparse constraint of h. The update equation of the ZA-NLMF algorithm [14]–[16] is given as

h(n+1) = h(n) + μ_N e(n)x(n) / ||x(n)||₂² − ρ_ZA sgn(h(n)),  (7)

where ρ_ZA = μ λ_ZA and sgn(·) denotes the sign function, defined element-wise as

sgn(h_i(n)) = 1 if h_i(n) > 0; 0 if h_i(n) = 0; −1 if h_i(n) < 0,  (8)

for i = 1, 2, ..., N. It is well known that ZA-NLMF can be applied to sparse channel estimation, but its sparsity penalty is inefficient [17].

Fig.3 Geometrical illustration of the non-sparse constraint and the sparse constraint intersecting the solution plane.

3.2. RZA-NLMF algorithm

Motivated by the reweighted L1-minimization sparse recovery algorithm [12] in CS [18], the RZA-NLMF channel estimation algorithm was proposed in [11]. The cost function of RZA-NLMF is given as

G_RZA(n) = (1/4) e⁴(n) + λ_RZA Σ_{i=1}^{N} log(1 + ε|h_i(n)|),  (9)

where λ_RZA > 0 is a regularization parameter which trades off the estimation error and channel sparsity. The update equation is derived as

h(n+1) = h(n) + μ_N e(n)x(n) / ||x(n)||₂² − ρ_RZA sgn(h(n)) / (1 + ε|h(n)|),  (10)

where ρ_RZA = μ λ_RZA ε is a parameter decided by the step-size μ, the regularization parameter λ_RZA, and the reweighted factor ε. In the second term of (10), coefficients h_i(n), i ∈ {1, 2, ..., N}, whose magnitudes are smaller than 1/ε are attracted to zero with high probability.

3.3. Proposed RL1-NLMF algorithm

From the perspective of CS, RL1 can exploit more sparse structure information than ZA and RZA [11]. Motivated by this fact, the RL1-NLMF algorithm is proposed to further improve sparse channel estimation performance. Similarly, the cost function of the RL1-NLMF channel estimation algorithm is devised as

G_RL1(n) = (1/4) e⁴(n) + λ_RL1 ||f^T(n)h(n)||₁,  (11)

where λ_RL1 is the regularization parameter and f(n) = [f_1(n), f_2(n), ..., f_N(n)]^T is a reweighted vector whose entries are defined as

f_i(n) = 1 / (δ + |h_i(n−1)|),  i ∈ {1, 2, ..., N},  (12)

where δ is some positive number, so that f_i(n) > 0. The update equation can be derived as

h(n+1) = h(n) − μ ∂G_RL1(n)/∂h(n)
       = h(n) + μ_N e(n)x(n) / ||x(n)||₂² − μ λ_RL1 sgn(f^T(n)h(n)) f(n)
       = h(n) + μ_N e(n)x(n) / ||x(n)||₂² − ρ_RL1 sgn(h(n)) / (δ + |h(n−1)|),  (13)

where ρ_RL1 = μ λ_RL1. Since every f_i(n) > 0, sgn(f^T(n)h(n)) reduces element-wise to sgn(h(n)), which simplifies (13). To fairly evaluate the sparse penalty strength of ZA, RZA and RL1, the three penalty terms are listed as

Λ_ZA = sgn(h(n)),  (14)

Λ_RZA = sgn(h(n)) / (1 + ε|h(n)|),  (15)

Λ_RL1 = sgn(h(n)) / (δ + |h(n−1)|),  (16)

where the channel coefficients of h(n) are assumed to lie in the range [−1, 1]. Considering the sparse functions in Eqs. (14)~(16), their sparse penalty strength curves are depicted in Fig. 4. One can find that ZA applies a uniform sparse penalty to all channel coefficients in [−1, 1]; hence it exploits channel sparsity inefficiently. Unlike ZA, both RZA and RL1 adapt the sparse penalty to the magnitudes of the channel coefficients, i.e., a stronger sparse penalty on near-zero coefficients exploits more sparse structure information. Additionally, RL1 applies a stronger sparse penalty than RZA, as shown in Fig. 4. Hence, the RL1-NLMF channel estimation algorithm can achieve better estimation performance than ZA-NLMF and RZA-NLMF.
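As a numerical companion to Eqs. (14)~(16), the snippet below evaluates the three penalty strengths at a few coefficient magnitudes, using ε = 20 and δ = 0.05 as in Table 1; the sampled magnitudes themselves are arbitrary illustration points.

```python
import numpy as np

def penalty_za(h):
    # Eq. (14): uniform penalty, independent of |h|
    return np.sign(h)

def penalty_rza(h, eps=20.0):
    # Eq. (15): penalty decays as |h| grows
    return np.sign(h) / (1.0 + eps * np.abs(h))

def penalty_rl1(h, delta=0.05):
    # Eq. (16): strongest attraction for near-zero coefficients
    return np.sign(h) / (delta + np.abs(h))

# Penalty strength on a small (inactive) and a large (dominant) coefficient.
for h in (0.01, 1.0):
    print(f"|h|={h}: ZA={penalty_za(h):.3f}, "
          f"RZA={penalty_rza(h):.3f}, RL1={penalty_rl1(h):.3f}")
```

Near zero the RL1 strength approaches 1/δ = 20, dwarfing both ZA (constant 1) and RZA, which matches the ordering of the curves in Fig. 4.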

Fig.4 Comparison of the three sparse penalty functions: L1-NLMF, RZA-NLMF (ε = 20) and RL1-NLMF (δ = 0.05); sparse penalty strength versus value of channel coefficients.

4. NUMERICAL SIMULATIONS

In this section, the proposed RL1-NLMF channel estimation algorithm is evaluated by means of Monte Carlo (MC) simulations. To obtain average estimation performance, 100 independent runs are adopted. The channel length is set as N = 16 and the number of dominant taps is K ∈ {1, 4, 6}. All dominant channel taps obey a random Gaussian distribution N(0, σ_h²), their positions are randomly distributed within h, and the channel is normalized so that E{||h||₂²} = 1. The received signal-to-noise ratio (SNR) is defined as 10 log₁₀(E₀/σ_n²), where E₀ = 1 denotes the unit transmission power and σ_n² denotes the additive noise variance. All simulation parameters are listed in Table 1. Estimation performance is evaluated through the average MSD, defined as

MSD{h(n)} = E{ ||h − h(n)||₂² },  (17)

where h denotes the actual channel and h(n) denotes the adaptive channel estimator at the n-th iteration. In the sequel, three simulation examples are presented to confirm the effectiveness of the proposed algorithm.

Example 1: MSD comparisons against channel sparsity. To evaluate the estimation performance of the proposed algorithm, three conventional algorithms, NLMF, ZA-NLMF and RZA-NLMF, are adopted as performance benchmarks, as shown in Figs. 5~7. In the three figures, one can easily find that the proposed RL1-NLMF channel estimation algorithm achieves better steady-state MSD performance than the three traditional algorithms at different channel sparsity levels (K). ZA-NLMF cannot exploit the channel sparsity effectively, and thus its MSD performance is very close to that of the standard NLMF algorithm. Since the RZA-NLMF algorithm exploits the sparsity information more effectively than ZA-NLMF, its MSD performance is better than ZA-NLMF. The estimation performance can be further improved by devising a more efficient sparse constraint such as RL1: as the figures show, RL1-NLMF achieves better MSD performance than the three conventional algorithms. The simulation results also coincide with the sparse constraint comparison in Fig. 4. Under different channel sparsity levels, an effective sparse constraint can thus be utilized to improve sparse channel estimation performance.

Table 1 Simulation parameters.

Parameters | Values
Channel distribution of nonzero coefficients | Random Gaussian N(0, 1)
Training sequence | Pseudo-random binary sequence
Channel length | N = 16
No. of nonzero coefficients | K = {1, 4, 6}
Step-size | μ = {1.5, 2.0, 2.5}
Signal-to-noise ratio | SNR = {8dB, 10dB}
Regularization parameters | λ_L1 = 5×10⁻⁵, λ_RZA = 5×10⁻⁵, λ_RL1 = 5×10⁻⁸
Re-weighted factor of RZA-NLMF | ε = 20
Threshold of RL1-NLMF | δ = 0.05

Example 2: MSD performance comparisons versus initial step-size (μ). The step-size is a significant parameter that decides the stability of an adaptive filtering algorithm; in other words, it directly affects the stability of the proposed algorithm. Two initial step-sizes (μ = 2.5 and μ = 1.5) are adopted, and the corresponding performance curves are depicted in Figs. 8 and 9, respectively. According to the two figures, the proposed algorithm remains stable during the gradient descent process. In addition, the initial step-size (μ) does not obviously affect the convergence speed, because the variable step-size (μ_N) depends on three factors: the initial step-size (μ), the update error (e(n)), and the input training signal vector (x(n)). It is nevertheless still important to set the initial step-size suitably. The above analysis of the two figures verifies the stability of the proposed algorithm under different initial step-sizes.

Fig.5 MSD performance comparisons (K=1, SNR=10dB, μ=2.0).

Fig.6 MSD performance comparisons (K=4, SNR=10dB, μ=2.0).

Fig.7 MSD performance comparisons (K=6, SNR=10dB, μ=2.0).

Fig.8 MSD performance comparisons (μ=2.5; K=1, SNR=10dB).

Fig.9 MSD performance comparisons (μ=1.5; K=1, SNR=10dB).

Fig.10 MSD performance comparisons (SNR=8dB; K=1, μ=2.0).

(Each figure plots MSD versus iterations for the NLMF, L1-NLMF, RZA-NLMF and RL1-NLMF algorithms.)

Example 3: MSD performance comparisons at SNR = 8dB. To further confirm the performance of the proposed algorithm, MSD curves for SNR = 8dB are depicted in Fig. 10. In this figure, one can find that the proposed algorithm again achieves better MSD performance than the three conventional channel estimation algorithms. In addition, the convergence of the NLMF-type channel estimation algorithms is faster than in the SNR = 10dB case. One can deduce that convergence is accelerated at lower SNR because the variable step-size is enlarged by the instantaneous estimation error.

5. CONCLUSION

In this paper, we have proposed an improved ASCE method using the RL1-NLMF algorithm. Numerical simulation results have confirmed the effectiveness of the proposed algorithm, which achieves better MSD performance than the ZA-NLMF and RZA-NLMF algorithms. Since this study assumes a Gaussian noise model, the method may be unsuitable for non-Gaussian impulsive noise environments. In future work, we plan to develop a robust sparse NLMF algorithm to mitigate impulsive noise in various wireless communication systems.

ACKNOWLEDGEMENT This work was supported in part by the Japan Society for the Promotion of Science (JSPS) research grants (No. 26889050, No. 15K06072), and the National Natural Science Foundation of China grants (No. 61401069, No. 61261048, No. 61201273).

REFERENCES
[1] L. Dai, Z. Wang, and Z. Yang, “Next-generation digital television terrestrial broadcasting systems: Key technologies and research trends,” IEEE Commun. Mag., vol. 50, no. 6, pp. 150–158, 2012.
[2] D. Raychaudhuri and N. B. Mandayam, “Frontiers of wireless and mobile communications,” Proceedings of the IEEE, vol. 100, no. 4, pp. 824-840, 2012.
[3] F. Adachi and E. Kudoh, “New direction of broadband wireless technology,” Wirel. Commun. Mob. Comput., vol. 7, no. 8, pp. 969-983, 2007.
[4] D. Tse, Fundamentals of Wireless Communication. Cambridge, U.K., 2005.
[5] E. Walach and B. Widrow, “The least mean fourth (LMF) adaptive algorithm and its family,” IEEE Trans. Inf. Theory, vol. 30, no. 2, pp. 275-283, 1984.
[6] G. Gui and F. Adachi, “Adaptive sparse system identification using normalized least-mean fourth algorithm,” Int. J. Commun. Syst., vol. 28, no. 1, pp. 38-48, 2015.

[7] E. Eweda, “Global stabilization of the least mean fourth algorithm,” IEEE Trans. Signal Process., vol. 60, no. 3, pp. 1473-1477, 2012.
[8] L. Dai, Z. Wang, and Z. Yang, “Compressive sensing based time domain synchronous OFDM transmission for vehicular communications,” IEEE J. Sel. Areas Commun., vol. 31, no. 9, pp. 460-469, 2013.
[9] L. Dai, Z. Wang, and Z. Yang, “Spectrally efficient time-frequency training OFDM for mobile large-scale MIMO systems,” IEEE J. Sel. Areas Commun., vol. 31, no. 2, pp. 251-263, 2013.
[10] Z. Gao, L. Dai, Z. Lu, C. Yuen, and Z. Wang, “Super-resolution sparse MIMO-OFDM channel estimation based on spatial and temporal correlations,” IEEE Commun. Lett., vol. 18, no. 7, pp. 1266-1269, 2014.
[11] G. Gui, L. Xu, and F. Adachi, “RZA-NLMF algorithm-based adaptive sparse sensing for realizing compressive sensing,” EURASIP J. Adv. Signal Process., vol. 2014, 2014.
[12] E. J. Candes, M. B. Wakin, and S. P. Boyd, “Enhancing sparsity by reweighted l1 minimization,” J. Fourier Anal. Appl., vol. 14, no. 5-6, pp. 877-905, 2008.
[13] M. L. Aliyu, M. A. Alkassim, and M. S. Salman, “A p-norm variable step-size LMS algorithm for sparse system identification,” Signal, Image Video Process., doi: 10.1007/s11760-013-0610-7, 2014.
[14] G. Gui and F. Adachi, “Sparse least mean fourth filter with zero-attracting,” ICICS, Tainan, Taiwan, 10-13 Dec. 2013, pp. 1-5.
[15] C. Turan and M. S. Salman, “Zero-attracting function controlled VSSLMS algorithm with analysis,” Circuits, Syst. Signal Process., 2015.
[16] M. N. S. Jahromi, M. S. Salman, A. Hocanin, and O. Kukrer, “Convergence analysis of the zero-attracting variable step-size LMS algorithm for sparse system identification,” Signal, Image Video Process., pp. 1-4, 2013.
[17] D. L. Donoho and Y. Tsaig, “Fast solution of L1-norm minimization problems when the solution may be sparse,” IEEE Trans. Inf. Theory, vol. 54, no. 11, pp. 4789-4812, 2008.
[18] G. Gui, W. Peng, and F. Adachi, “Improved adaptive sparse channel estimation based on the least mean square algorithm,” WCNC, Shanghai, China, 7-10 April 2013, pp. 3105-3109.