High Resolution Direction Finding Using Krylov Subspace

SETIT 2007

4th International Conference: Sciences of Electronic, Technologies of Information and Telecommunications March 25-29, 2007 – TUNISIA

High Resolution Direction Finding Using Krylov Subspace

Hichem SEMIRA*, Hocine BELKACEMI** and Sylvie MARCOS**

* Département d'électronique, Université d'Annaba, 23000, Algérie
[email protected]

** Laboratoire des Signaux et Systèmes (LSS), UMR-CNRS 8506, Supélec, 3 rue Joliot-Curie, 91192 Gif-sur-Yvette, France
[email protected], [email protected]

Abstract: This paper proposes two new algorithms for the direction of arrival (DOA) estimation of P radiating sources. Unlike classical subspace-based methods, they do not resort to the eigen-decomposition of the covariance matrix of the received data. Instead, the proposed algorithms build the signal subspace from the Krylov subspace of order P associated with the covariance matrix of the received data and a search steering vector, through either the multi-stage Wiener filter (MSWF) or the conjugate gradient (CG) method. The proposed algorithms exhibit a higher super-resolution capability than the classical MUSIC and ESPRIT algorithms. A comparison with another, theoretically equivalent, Krylov subspace-based algorithm, namely the auxiliary vector (AV) basis, is also presented. It reveals that the proposed CG-based method outperforms its counterparts in terms of resolution of closely spaced sources with a small number of snapshots and a low signal-to-noise ratio (SNR).

Key words: DOA, high resolution algorithm, Krylov subspace, MUSIC, source localization.

INTRODUCTION

Array processing deals with the problem of extracting information from signals received simultaneously by an array of sensors. In many fields such as radar, underwater acoustics and geophysics, the information of interest is the direction of arrival (DOA) of waves transmitted from radiating sources and impinging on the sensor array. Over the years, many approaches to the problem of source DOA estimation have been proposed [KV 96]. The subspace-based methods, which resort to the decomposition of the observation space into a noise subspace and a source subspace, have proved to have high resolution (HR) capabilities and to yield accurate estimates. These methods almost always use the eigen-decomposition of the covariance matrix of the received signals. Among the most famous of these HR methods are MUSIC [Sch 86], ESPRIT [RK 89], MIN-NORM [KT 83] and WSF [VOK 91]. Despite their HR capabilities, the performance of these methods degrades substantially in the case of closely spaced sources with a small number of snapshots and at low SNR.

In [GPM 05], a DOA estimation procedure that uses a modified version of the orthogonal auxiliary-vector filtering (AVF) algorithm [PB 99] was proposed. The procedure starts with a linear transformation of the array response search vector by the input covariance matrix. A new DOA estimation scheme is then derived, based on the collapse of the rank of the extended signal subspace from P+1 to P (where P is the number of sources) when the search vector falls in the signal subspace; this scheme was shown to outperform the MUSIC and ESPRIT algorithms in terms of resolution. It was recently shown that the AVF, the MSWF and the CG span the same Krylov subspace [CMS 02] [WHGZ 02] [BA 02]. This subspace is defined by taking the powers of the covariance matrix of the observations multiplied by the source steering vector [HX 01]. In this paper, we use the matched filters of the MSWF [GRS 98], obtained by performing P+1 forward recursions, and the residuals of the CG algorithm [GV 96] after P iterations, to generate the Krylov subspace basis. We then form a localisation function following the same approach as described in [GPM 05].

The paper is organized as follows. After a formulation of the DOA estimation problem in section 1, the Krylov subspaces are defined in section 2, where the three ways of obtaining a basis of the P-order Krylov subspace associated with the pair of a symmetric positive definite matrix and a non-zero vector, namely the MSWF, the AVF and the CG, are recalled. The new algorithms are presented in section 3. After simulations in section 4, a few concluding remarks are drawn in section 5.

1. Problem Formulation

We consider a uniformly spaced linear array having M omnidirectional sensors receiving P (P < M) narrowband signals impinging from the distinct directions θ_1, ..., θ_P. The received data vector at snapshot k can be written as

x(k) = A(θ) s(k) + n(k)

where A(θ) = [a(θ_1), ..., a(θ_P)] is the M x P matrix of the source steering vectors, s(k) is the P x 1 vector of the source signals with powers E{|s_j|^2}, j = 1, ..., P, and n(k) is an additive white noise of power σ^2, uncorrelated with the sources. The covariance matrix of the observations is R = E{x(k) x^H(k)}. In practice R is unknown and its maximum likelihood estimate R̂ based on a finite number K of data samples can be obtained as

R̂ = (1/K) Σ_{k=1}^{K} x(k) x^H(k)    (7)

2. Krylov Subspaces

The Krylov subspace of order D associated with the pair (R, b) of a symmetric positive definite matrix R and a non-zero vector b is defined as

K_D(R, b) = span{b, R b, R^2 b, ..., R^{D-1} b}

Three ways of obtaining a basis of this subspace, namely the MSWF, the AVF and the CG, are recalled below.

2.1. Multi-Stage Wiener Filter (MSWF)

The forward recursion of the rank-D MSWF [GRS 98] generates a set of orthonormal matched filters h_1, ..., h_D. It is initialized with h_1 = b/||b||, δ_1 = ||b||, x_0(k) = x(k), and for i = 1 to D - 1 it computes

d_i(k) = h_i^H x_{i-1}(k)
x_i(k) = x_{i-1}(k) - h_i d_i(k)
h_{i+1} = E{x_i(k) d_i^*(k)} / ||E{x_i(k) d_i^*(k)}||

Table 1. Forward recursion of the rank-D MSWF.

The resulting matrix G_mw^D = [h_1, ..., h_D] is an orthonormal basis of the Krylov subspace K_D(R, b) [CMS 02].

2.2. Auxiliary Vector Filtering (AVF)

In the AVF algorithm [PB 99], the auxiliary vector g_av,i maximizes the magnitude of the cross-correlation between W_{i-1}^H x and g_av,i^H x, i.e., maximizes |g_av,i^H R W_{i-1}|, and μ_i is the scalar that minimizes the mean square error (MSE).

Table 2. AVF algorithm.

A recursive conditional optimization of the auxiliary vectors was also presented in [PB 99], and the optimization procedure results in the following generation scheme (see Table 3): starting from g_av,0 = b/||b||, each new auxiliary vector is obtained as

t_i = (I - Σ_{j=i-2}^{i-1} g_av,j g_av,j^H) R g_av,i-1,    g_av,i = t_i / ||t_i||

Notice that g_av,0, g_av,1, ..., g_av,D-1 are orthonormal and form the matrix G_av^D = [g_av,0, g_av,1, ..., g_av,D-1].

Table 3. Auxiliary vector (AV) generation.

2.3. Conjugate Gradient (CG) Algorithm

The method of conjugate gradients (CG) is an iterative inversion technique for the solution of symmetric positive definite linear systems. Considering the Wiener-Hopf equation R w = b, there are several ways to derive the CG method. We consider the approach from [GV 96], which minimizes the following cost function

Φ(w) = w^H R w - 2 Re{b^H w}    (12)

Table 4 depicts a basic version of the CG algorithm.
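As a concrete numerical companion to the CG recursion of Table 4, the following sketch (a minimal NumPy illustration with an arbitrary random symmetric positive definite matrix; it is not part of the original paper) checks the two properties of the CG residuals exploited by the proposed algorithms: mutual orthogonality and equality of their span with the Krylov subspace K_D(R, b).

```python
import numpy as np

rng = np.random.default_rng(0)
M, D = 8, 4  # problem size and Krylov order (arbitrary, for illustration)

# A symmetric positive definite matrix R and a non-zero vector b
A = rng.standard_normal((M, M))
R = A @ A.T + M * np.eye(M)
b = rng.standard_normal(M)

# Explicit Krylov basis: K_D(R, b) = span{b, R b, ..., R^(D-1) b}
K = np.column_stack([np.linalg.matrix_power(R, i) @ b for i in range(D)])
Q, _ = np.linalg.qr(K)  # orthonormal basis of K_D(R, b)

# Conjugate gradient on R w = b, keeping the residuals g_cg,i
w = np.zeros(M)
g = b.copy(); p = b.copy(); rho = g @ g
residuals = [g.copy()]
for i in range(1, D):
    v = R @ p
    alpha = rho / (p @ v)
    w = w + alpha * p          # CG iterate (not needed for the basis itself)
    g = g - alpha * v          # new residual g_cg,i
    rho_new = g @ g
    p = g + (rho_new / rho) * p
    rho = rho_new
    residuals.append(g.copy())
G = np.column_stack(residuals)

# Property 1: the residuals are mutually orthogonal
gram = G.T @ G
assert np.allclose(gram, np.diag(np.diag(gram)), atol=1e-8)

# Property 2: span{G} equals K_D(R, b) (projecting G onto Q changes nothing)
assert np.allclose(Q @ (Q.T @ G), G, atol=1e-8)
print("CG residuals form an orthogonal basis of K_D(R, b)")
```

The same check passes for any symmetric positive definite R, which is what makes the CG residuals usable as a cheap, eigen-decomposition-free basis generator.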

w_0 = 0, p_1 = g_cg,0 = b, ρ_0 = g_cg,0^H g_cg,0
for i = 1 to D - 1 do
    v_i = R p_i
    α_i = ρ_{i-1} / (p_i^H v_i)
    w_i = w_{i-1} + α_i p_i
    g_cg,i = g_cg,i-1 - α_i v_i
    ρ_i = g_cg,i^H g_cg,i
    β_i = ρ_i / ρ_{i-1}
    p_{i+1} = g_cg,i + β_i p_i
end

Table 4. Basic conjugate gradient algorithm.

After D - 1 iterations of the conjugate gradients, the set of gradients (residuals) G_cg^D = [g_cg,0, g_cg,1, ..., g_cg,D-1] has the following properties [GV 96]:

- the gradient vectors are mutually orthogonal, i.e., g_cg,i^H g_cg,j = 0 for all i ≠ j;
- span{G_cg^D} ≡ K_D(R, b).

The last property establishes the equivalence between the CG and the MSWF. Therefore, the MSWF and the CG methods initialized with w_0 = 0 minimize the same cost function in the same subspace [WHGZ 02]. Notice that an orthonormal basis is found with the CG algorithm by normalizing each gradient, i.e., g_cg,i(θ) ← g_cg,i(θ) / ||g_cg,i(θ)||, i = 1, ..., D - 1.

3. Proposed DOA Estimation Algorithm

In this section, we consider the signal model presented above and generate an extended signal subspace basis of rank P + 1 that is not based on eigenvector analysis. Let us define the initial vector b(θ) as follows:

b(θ) = R a(θ) / ||R a(θ)||    (13)

where a(θ) is the search steering vector for θ ∈ [-90°, 90°]. When the P sources are uncorrelated and θ = θ_i for some i ∈ {1, ..., P}, we have

R a(θ_i) = (M E{|s_i|^2} + σ^2) a(θ_i) + Σ_{j=1, j≠i}^{P} E{|s_j|^2} (a^H(θ_j) a(θ_i)) a(θ_j)    (14)

From (14) we note that b(θ_i) is a linear combination of the P signal steering vectors and thus lies in the signal subspace of dimension P. However, when θ ≠ θ_i for i ∈ {1, ..., P},

R a(θ) = Σ_{j=1}^{P} E{|s_j|^2} (a^H(θ_j) a(θ)) a(θ_j) + σ^2 a(θ)    (15)

and b(θ) is a linear combination of the P + 1 steering vectors [a(θ), a(θ_1), a(θ_2), ..., a(θ_P)] and thus lies in the extended signal subspace of dimension P + 1, which includes the true signal subspace of dimension P plus the search vector a(θ).

Having defined the initial vector as described above and after performing P + 1 iterations (D = P + 1), we form a set of basis vectors by leaving the last vector of each algorithm unnormalized (i.e. g_mw,P+1, g_av,P and g_cg,P are unnormalized). It can then be shown [HWFZ 04] that if the initial vector b is contained in the signal subspace, all the basis sets G_mw^P, G_av^P and G_cg^P are contained in the column space of A(θ). It follows that the orthonormal matrices G_mw^P, G_av^P and G_cg^P span the true signal subspace for θ = θ_i, i = 1, 2, ..., P, i.e.,

span{G_mw^P} ≡ span{G_av^P} ≡ span{G_cg^P} ≡ span{A(θ)}

and the last unnormalized vectors satisfy g_mw,P+1 = g_av,P = g_cg,P = 0. However, when θ ≠ θ_i for i ∈ {1, ..., P}, we have formed an orthogonal basis for the extended signal subspace and

span{G_mw^{P+1}} ≡ span{G_av^{P+1}} ≡ span{G_cg^{P+1}}

Following the same approach as in [GPM 05], let θ^(n) = nΔ for n ∈ {1, 2, ..., 180°/Δ°}, where Δ is the search angle step, and define the matrix G(θ^(n)) calculated at step n by performing P + 1 (D = P + 1) iterations of any of the algorithms described above:

G(θ^(n)) = [g_1(θ^(n)), g_2(θ^(n)), ..., g_{P+1}(θ^(n))]    (16)

Notice that the last vector g_{P+1}(θ^(n)) is unnormalized and that the first and last vectors of the AVF and CG methods are denoted by g_0(θ^(n)) and g_P(θ^(n)), respectively. The spectrum is then defined by [GPM 05]:

P_K(θ^(n)) = 1 / ||g_{P+1}^H(θ^(n)) G(θ^(n-1))||^2    (17)

with n = 1, 2, ..., 180°/Δ°. It is easy to show that a peak is obtained in the spectrum when θ^(n) = θ_i, i = 1, 2, ..., P, because the last vector in the basis then satisfies g_{P+1}(θ^(n)) = 0. When, on the other hand, θ^(n) ≠ θ_i, i = 1, 2, ..., P, g_{P+1}(θ^(n)) is contained in the extended signal subspace, hence ||g_{P+1}^H(θ^(n)) G(θ^(n-1))|| ≠ 0, since a part of g_{P+1}(θ^(n)) lies in the subspace spanned by G(θ^(n-1)).
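To see the rank-collapse mechanism at work, the following sketch (a hypothetical NumPy experiment; the array geometry, DOAs, powers and the simplified peak function 1/||g_P(θ)||^2 are illustrative choices, not the authors' exact setup, which uses the cross-term spectrum (17)) applies P CG iterations to b(θ) built from the exact covariance of two sources on a half-wavelength uniform linear array. Since b(θ_i) lies in a P-dimensional invariant subspace of R, the last residual vanishes exactly at the true DOAs:

```python
import numpy as np

M, P = 10, 2                         # sensors, sources
theta_true = [-5.0, 5.0]             # hypothetical DOAs (degrees)
ps, sigma2 = 10.0, 1.0               # per-source power (10 dB SNR), noise power

def steer(theta_deg):
    # Steering vector of a half-wavelength uniform linear array
    return np.exp(1j * np.pi * np.sin(np.deg2rad(theta_deg)) * np.arange(M))

A = np.column_stack([steer(t) for t in theta_true])
R = ps * (A @ A.conj().T) + sigma2 * np.eye(M)   # exact covariance, uncorrelated sources

def last_cg_residual(R, b, iters):
    # Run `iters` conjugate gradient iterations on R w = b; return the last residual
    g = b.astype(complex).copy(); p = g.copy()
    rho = np.real(g.conj() @ g)
    for _ in range(iters):
        v = R @ p
        alpha = rho / np.real(p.conj() @ v)
        g = g - alpha * v
        rho_new = np.real(g.conj() @ g)
        p = g + (rho_new / rho) * p
        rho = rho_new
    return g

grid = np.arange(-30.0, 30.5, 0.5)
spectrum = np.empty(grid.size)
for n, th in enumerate(grid):
    Ra = R @ steer(th)
    b = Ra / np.linalg.norm(Ra)                  # initial vector b(theta), eq. (13)
    g_last = last_cg_residual(R, b, P)           # residual after P iterations
    spectrum[n] = 1.0 / max(np.linalg.norm(g_last) ** 2, 1e-30)

peaks = sorted(grid[np.argsort(spectrum)[-P:]].tolist())
print(peaks)   # -> [-5.0, 5.0]: the two sharpest peaks sit at the true DOAs
```

With the exact covariance the contrast at the true DOAs is infinite up to rounding; with the sample estimate R̂ of (7) the last residual no longer vanishes exactly, which is precisely why the finite-sample behaviour is studied in the simulations.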

In real situations R is unknown, so we use the sample average estimate R̂ as defined in (7). From (17), it is clear that when θ^(n) = θ_i, i = 1, 2, ..., P, we have ĝ_{P+1}^H(θ^(n)) Ĝ(θ^(n-1)) ≈ 0 and P̂_K(θ^(n)) → ∞.

4. Simulation Results

In this section, computer simulations were conducted with a uniform linear array composed of 10 isotropic sensors whose spacing equals half a wavelength. Two equal-power uncorrelated plane waves arrive at the array. Internal noises of equal power exist at each sensor element; they are statistically independent of the incident signals and of each other. Angles of arrival are measured from the broadside direction of the array.

First, we fix the signal angles of arrival at -1° and 1° and the SNRs at 10 dB. In Figure 1 we examine the proposed spectra for an observation data record of K = 50 snapshots, compared with those of the AVF and the standard MUSIC. The MSWF and CG spectra resolve the two sources better than the AVF spectrum; the MUSIC algorithm fails.

Figure 1. CG, MSWF, AVF and MUSIC spectra (θ_1 = -1°, θ_2 = 1°, SNR_1 = SNR_2 = 10 dB, K = 50).

To analyse the performance of the algorithms in terms of resolution probability, we use the following random inequality [Zha 95]:

P_K(θ_m) - (1/2)(P_K(θ_1) + P_K(θ_2)) < 0    (18)

where θ_1 and θ_2 are the angles of arrival of the two signals and θ_m denotes their mean; P_K(θ) is the pseudospectrum defined in (17) at the angle of arrival θ.

In Figures 2 and 3, we show the probability of resolution of the algorithms as a function of the SNR and of the number of snapshots, respectively. For the purpose of comparison, we added the ESPRIT algorithm [RK 89]. In both figures, we consider two uncorrelated complex Gaussian sources separated by 3°. We clearly note the complete failure of MUSIC as well as ESPRIT to resolve the two signals, in contrast with the Krylov subspace-based algorithms. The two figures also show that the CG-based algorithm outperforms its counterparts in terms of resolution probability.

Figure 2. Probability of resolution versus SNR (separation 3°, K = 50).

Figure 3. Probability of resolution versus number of snapshots (separation 3°, SNR = 0 dB).

5. Conclusion

In this paper, we propose the application of the MSWF and the CG algorithm to the DOA estimation problem. The proposed methods do not resort to the eigen-decomposition of the input covariance matrix; instead, they use new bases of the signal subspace built from the Krylov subspace. Numerical results indicate that the proposed methods outperform the AVF method and the classical MUSIC and ESPRIT algorithms in terms of resolution for small data records and low SNR.
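The resolution criterion (18) can be turned into a small Monte-Carlo experiment. The sketch below is hypothetical (NumPy; it reuses a simplified last-CG-residual peak function on the sample covariance R̂ of (7) rather than the exact spectrum (17), and the trial count is arbitrary), so the resulting number only indicates the kind of trend reported in Figures 2 and 3:

```python
import numpy as np

rng = np.random.default_rng(1)
M, P, K = 10, 2, 50                    # sensors, sources, snapshots
theta = np.array([-1.5, 1.5])          # 3 degree separation, as in Figs. 2-3
ps = 10 ** (10.0 / 10)                 # source power for 10 dB SNR (unit noise)

def steer(t):
    return np.exp(1j * np.pi * np.sin(np.deg2rad(t)) * np.arange(M))

def spectrum_at(Rhat, t):
    # Simplified Krylov peak function: 1/||g_P||^2 with b = R a / ||R a||
    b = Rhat @ steer(t); b = b / np.linalg.norm(b)
    g = b.copy(); p = b.copy(); rho = np.real(g.conj() @ g)
    for _ in range(P):
        v = Rhat @ p
        alpha = rho / np.real(p.conj() @ v)
        g = g - alpha * v
        rho_new = np.real(g.conj() @ g)
        p = g + (rho_new / rho) * p
        rho = rho_new
    return 1.0 / max(np.linalg.norm(g) ** 2, 1e-30)

A = np.column_stack([steer(t) for t in theta])
resolved, trials = 0, 200
for _ in range(trials):
    s = np.sqrt(ps / 2) * (rng.standard_normal((P, K)) + 1j * rng.standard_normal((P, K)))
    n = np.sqrt(0.5) * (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K)))
    x = A @ s + n
    Rhat = x @ x.conj().T / K          # sample covariance, eq. (7)
    pk1, pk2 = spectrum_at(Rhat, theta[0]), spectrum_at(Rhat, theta[1])
    pkm = spectrum_at(Rhat, theta.mean())
    # Inequality (18): resolved when the mid-point dips below the peak average
    resolved += pkm - 0.5 * (pk1 + pk2) < 0
print(f"estimated resolution probability: {resolved / trials:.2f}")
```

Sweeping the SNR or K in this loop reproduces, qualitatively, the resolution-probability curves of Figures 2 and 3 for the CG-based estimator.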

REFERENCES

[B 85] W. L. Brogan, Modern Control Theory, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 1985.

[BA 02] S. Burykh and K. Abed-Meraim, "Reduced-rank adaptive filtering using Krylov subspace," EURASIP Journal on Applied Signal Processing, vol. 2002, no. 12, pp. 1387-1400, Dec. 2002.

[CMS 02] W. Chen, U. Mitra and P. Schniter, "On the equivalence of three reduced rank linear estimators with applications to DS-CDMA," IEEE Trans. on Information Theory, vol. 48, no. 9, pp. 2609-2614, Sep. 2002.

[GPM 05] R. Grover, D. A. Pados and M. J. Medley, "Super-resolution direction finding with an auxiliary-vector basis," in Proc. of SPIE, vol. 5819, Defense and Security Symposium, Orlando, FL, March 2005, pp. 357-365.

[GRS 98] J. S. Goldstein, I. S. Reed and L. L. Scharf, "A multistage representation of the Wiener filter based on orthogonal projections," IEEE Trans. on Information Theory, vol. 44, no. 7, pp. 2943-2959, Nov. 1998.

[GV 96] G. H. Golub and C. F. Van Loan, Matrix Computations, 3rd ed. Baltimore, MD: Johns Hopkins Univ. Press, 1996.

[HWFZ 04] L. Huang, S. Wu, Feng and L. Zhang, "Low complexity method for signal subspace fitting," IEE Electronics Letters, vol. 40, July 2004.

[HX 01] M. L. Honig and W. Xiao, "Performance of reduced-rank linear interference suppression," IEEE Trans. on Information Theory, vol. 47, no. 5, pp. 1928-1946, July 2001.

[KT 83] R. Kumaresan and D. W. Tufts, "Estimating the angles of arrival of multiple plane waves," IEEE Trans. on Aerospace and Electronic Systems, vol. 19, pp. 134-139, 1983.

[KV 96] H. Krim and M. Viberg, "Two decades of array signal processing research," IEEE Signal Processing Magazine, pp. 67-94, July 1996.

[PB 99] D. A. Pados and S. N. Batalama, "Joint space-time auxiliary-vector filtering for DS/CDMA systems with antenna arrays," IEEE Trans. on Communications, vol. 47, no. 9, pp. 1406-1414, Sep. 1999.

[RK 89] R. Roy and T. Kailath, "ESPRIT: estimation of signal parameters via rotational invariance techniques," IEEE Trans. on Acoust., Speech and Signal Proc., vol. 37, no. 7, pp. 984-995, July 1989.

[Sch 86] R. O. Schmidt, "Multiple emitter location and signal parameter estimation," IEEE Trans. on Antennas and Propagation, vol. 34, pp. 276-280, March 1986.

[VOK 91] M. Viberg, B. Ottersten and T. Kailath, "Detection and estimation in sensor arrays using weighted subspace fitting," IEEE Trans. Signal Processing, vol. 39, pp. 2436-2449, 1991.

[WHGZ 02] M. E. Weippert, J. D. Hiemstra, J. S. Goldstein and M. D. Zoltowski, "Insights from the relationship between the multistage Wiener filter and the method of conjugate gradients," in Proc. IEEE SAM 2002 Workshop, Arlington, VA, Aug. 2002, pp. 388-392.

[Zha 95] Q. T. Zhang, "Probability of resolution of the MUSIC algorithm," IEEE Trans. Signal Processing, vol. 43, no. 4, pp. 978-987, April 1995.
