IEEE TRANSACTIONS ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL. 36, NO. 4, APRIL 1988
An Improved Spatial Smoothing Technique for Bearing Estimation in a Multipath Environment

RONALD T. WILLIAMS, SURENDRA PRASAD, A. K. MAHALANABIS, AND LEON H. SIBUL, MEMBER, IEEE
Abstract-In this paper, we consider a preprocessing technique that allows the application of signal subspace methods to the problem of direction-of-arrival estimation in the presence of multipath propagation. It is well known that signal subspace algorithms perform poorly when coherent or highly correlated signals are present. Recently, the so-called spatial smoothing technique has been devised in order to preprocess the array covariance matrix so that signal subspace algorithms can be applied irrespective of the signal correlation. Unfortunately, it has been found that application of this technique reduces the effective aperture of the array. In this paper, we explore the modified spatial smoothing technique of Evans et al. which is capable of increasing the effective aperture of the array over that of conventional spatial smoothing methods. We have shown that under certain conditions, the modified algorithm may fail to yield the desired increase in array aperture, and we have provided some simulation results concerning the sensitivity of the modified spatial smoothing algorithm to these conditions.
Manuscript received August 15, 1986; revised September 28, 1987. This work was supported in part by the Naval Sea Systems Command and Applied Research Laboratory Exploratory and Foundational Research Program and by the Office of Naval Research under Contract N00014-86-8-0542. R. T. Williams and A. K. Mahalanabis are with the Department of Electrical Engineering, The Pennsylvania State University, University Park, PA 16802. S. Prasad was on leave at the Department of Electrical Engineering, The Pennsylvania State University, University Park, PA 16802. He is with the Department of Electrical Engineering, Indian Institute of Technology, New Delhi 110016, India. L. H. Sibul is with the Applied Research Laboratory, The Pennsylvania State University, University Park, PA 16802. IEEE Log Number 8719006.

I. INTRODUCTION

THE signal subspace method has proven to be an effective means of obtaining bearing estimates of distant narrow-band signal sources from noisy array measurements. The performance of algorithms based on this method is severely degraded, however, when coherent or highly correlated signals are present. Several authors have addressed this problem with varying degrees of success. In his original treatise on the signal subspace approach [1], Schmidt proposed to search the signal subspace for linear combinations of steering vectors. However, the amount of computation involved in this procedure makes it impractical to implement in most real-life situations. Wang and Kaveh [2] have proposed a solution of the direction-of-arrival estimation problem for coherent wide-band sources using what might be described as a frequency averaging technique. A drawback of this technique is that it requires some prior knowledge (at least good estimates) of the directions of arrival before it can be applied. When an accurate noise-free estimate of the array covariance matrix can be obtained, the matrix decomposition approach suggested by Di [3] proves to be an effective means of detecting the signals and estimating their bearings. The most promising solution to the coherent signal problem has been the so-called spatial smoothing method proposed by Evans et al. [4] and developed more fully by Shan et al. [5]. This scheme uses spatial averaging techniques to "decorrelate" the signals. A disadvantage of the spatial smoothing algorithm and many of the other approaches discussed above is that they significantly reduce the effective aperture of the array. The method of modified spatial smoothing (MSS) has been proposed by Evans et al. [4] in order to achieve a larger effective aperture without having to increase the computational burden significantly. However, as in the case of spatial smoothing, a formal proof of the increased capability of the MSS algorithm has also not been provided in [4]. Our aim in this paper is first to provide a simple proof of this result for the case originally discussed in [4]. Our second aim is to bring out a set of conditions under which the MSS algorithm will fail to achieve the expected increase in the array aperture. A set of simulation results that examine the sensitivity of the MSS algorithm to a key parameter in the set of conditions is also presented.

An outline of the paper follows. A brief review of the signal subspace method is undertaken in Section II as a prelude to the analysis of the MSS method. This method is then considered in Section III, where a formal proof of the possible reduction in the number of array elements needed to resolve a given set of distant narrow-band signal sources is provided.
This is accompanied by the derivation of a set of conditions under which such a reduction may not be possible. Finally, in Section IV, we present some simulation results to illustrate the sensitivity of the MSS algorithm to a parameter involved in the conditions derived in the preceding section.

0096-3518/88/0400-0425$01.00 © 1988 IEEE

II. REVIEW OF THE SIGNAL SUBSPACE METHOD

We consider a uniformly spaced linear array having N omnidirectional sensors receiving stationary random signals emanating from correlated and possibly coherent point sources. The received signals are known to be embedded in spatially white Gaussian noise with unknown variance σ², with the signals and the noise being mutually statistically independent. We will assume the signals to be narrow-band¹ with center frequency ω₀. The vector of the array outputs can be represented as

r(t) = Σ_{i=1}^{d} s_i(t) a(θ_i) + n(t)    (1a)

where d represents the number of point sources, a(θ_i) is the steering or direction vector defined by

a(θ_i) = [1, e^(jω₀τ_i), e^(j2ω₀τ_i), …, e^(j(N-1)ω₀τ_i)]^T,   τ_i = (δ/v) sin θ_i,

and

n(t) = [n₁(t), n₂(t), …, n_N(t)]^T

represents the vector of noise terms. In the above set of equations, δ represents the sensor spacing, v the speed of propagation, and θ_i is the direction of arrival of the ith signal with respect to the perpendicular of the array. Equation (1a) can be written in the alternative form

r(t) = A s(t) + n(t)    (1b)

where

A = [a(θ₁), a(θ₂), …, a(θ_d)]

and

s(t) = [s₁(t), s₂(t), …, s_d(t)]^T.
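The array model of (1a) and (1b) can be sketched numerically as follows. This is a minimal NumPy illustration, not taken from the paper: it assumes half-wavelength sensor spacing, so that ω₀τ_i = π sin θ_i, and all numeric values are chosen only for illustration.

```python
import numpy as np

# Hypothetical parameters: N sensors at half-wavelength spacing, so the
# per-sensor phase step is w0 * tau_i = pi * sin(theta_i).
N = 10
thetas = np.deg2rad([-20.0, 25.0])       # assumed directions of arrival
d = len(thetas)

def steering_vector(theta, N):
    """a(theta) = [1, e^{j w0 tau}, ..., e^{j (N-1) w0 tau}]^T."""
    return np.exp(1j * np.pi * np.sin(theta) * np.arange(N))

A = np.column_stack([steering_vector(t, N) for t in thetas])   # N x d matrix

# One snapshot of (1a): r(t) = A s(t) + n(t).
rng = np.random.default_rng(0)
s = rng.standard_normal(d) + 1j * rng.standard_normal(d)   # source amplitudes
n = 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
r = A @ s + n
print(r.shape)
```

In practice many such snapshots would be averaged to estimate the covariance matrix introduced next.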
We can now form the covariance matrix of the received signal

R = E{r(t) r(t)†} = A E{s(t) s(t)†} A† + E{n(t) n(t)†}

where E{·} denotes the expectation operation and † denotes conjugate transpose. Using (1b), it is then simple to get the result

R = A S A† + σ²I    (2)

where S is the signal covariance matrix. The signal subspace method uses a set of eigenvectors of the matrix R to estimate the steering vectors. The following lemma summarizes several important aspects of the signal subspace approach.

Lemma 1: Let us denote the eigenvalues of the array covariance matrix R by λ_i and the corresponding eigenvectors by u_i. We shall assume that λ₁ ≥ λ₂ ≥ ⋯ ≥ λ_N. If the columns of A are linearly independent and the matrix S is nonsingular, then the smallest eigenvalue of R is equal to σ² and has multiplicity N - d. Furthermore, the eigenvectors corresponding to these eigenvalues span the noise subspace and are orthogonal to the columns of A. Also, the set of vectors (u₁, …, u_d) spans the signal subspace.

Proof: See [1].

In the subsequent development, we shall consider the problem of estimating the signal bearings using the results summarized in Lemma 1 when the signal covariance matrix is singular. In this case, the lemma cannot be applied directly. In the following section, however, we will discuss a procedure proposed by Evans et al. [4] which generates a modified signal covariance matrix which is not singular and which subsequently allows us to apply the signal subspace algorithm presented above.

¹As Wax et al. [7] have shown, signal subspace techniques can be generalized to encompass the wide-band case.

III. THE MODIFIED SPATIAL SMOOTHING METHOD

Let us assume that d fully correlated signals are received by the N element array described in the previous section. We begin our analysis by constructing L subarrays of length m ≥ d + 1 in such a way that each one shares with an adjacent subarray all but one of its sensors. For example, if the ith subarray consists of the sensors (i, i+1, …, i+m-1), then it will have two adjacent subarrays which consist of the elements (i-1, i, …, i+m-2) and (i+1, i+2, …, i+m), respectively. It is easy to check that the covariance matrix of the signal across the ith subarray is given by

R_m^(i) = A_m D^(i-1) S [D^(i-1)]† A_m† + σ²I    (3a)

where A_m is the set of steering vectors for a subarray of length m and

D ≜ diag[e^(jω₀τ₁), e^(jω₀τ₂), …, e^(jω₀τ_d)].    (3b)

We now form the matrix

R̄_m^(i) ≜ J [R_m^(i)]* J    (4)

where * denotes the complex conjugate and J is the exchange matrix. Equation (4) is equivalent to

R̄_m^(i) = J A_m* [D^(i-1)]* S* [D^(i-1)]*† A_m*† J + σ²I.    (5)

Now observe that J A_m* = A_m (D^(m-1))*, so that (5) can be rewritten as

R̄_m^(i) = A_m [D^(m+i-2)]* S* [D^(m+i-2)]*† A_m† + σ²I.

Following Evans et al. [4], we now perform the spatial averaging and introduce the smoothed covariance matrix R̃ by the relation

R̃ = (1/2L) Σ_{i=1}^{L} [ R_m^(i) + R̄_m^(i) ].    (6)
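To make (6) concrete, the smoothing step can be sketched in NumPy as below. This is a minimal sketch rather than the authors' implementation; the bearings, powers, and noise level are made-up values, and half-wavelength spacing (ω₀τ_i = π sin θ_i) is assumed.

```python
import numpy as np

def modified_spatial_smoothing(R, m):
    """Smoothed covariance of (6) (a sketch): average the m x m forward
    subarray covariances R_m^(i) with their conjugated, exchanged
    counterparts J [R_m^(i)]* J over the L = N - m + 1 subarrays."""
    N = R.shape[0]
    L = N - m + 1
    J = np.eye(m)[::-1]                  # exchange matrix
    acc = np.zeros((m, m), dtype=complex)
    for i in range(L):
        Rmi = R[i:i + m, i:i + m]        # covariance across ith subarray
        acc += Rmi + J @ Rmi.conj() @ J
    return acc / (2 * L)

# Two fully coherent sources: the signal part of R has rank one, but the
# smoothed matrix recovers a rank-two signal subspace (made-up parameters).
N, m, sigma2 = 6, 5, 0.01
thetas = np.deg2rad([-20.0, 25.0])
A = np.exp(1j * np.pi * np.outer(np.arange(N), np.sin(thetas)))
S = np.ones((2, 2))                      # coherent (rank-one) signal covariance
R = A @ S @ A.conj().T + sigma2 * np.eye(N)
R_s = modified_spatial_smoothing(R, m)
print(np.round(np.linalg.eigvalsh(R_s), 4))
```

In this example the three smallest eigenvalues of the smoothed matrix sit at the noise floor σ² while two signal eigenvalues rise above it, which is the rank restoration that the analysis below formalizes.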
This approach diverges from that of Shan et al. [5] in that it includes the second term shown in the summation of (6). As we shall see, the addition of this term may, in general, lead to an increase in the array aperture. Note first that the matrix D satisfies the condition D* = D^(-1), so that the smoothed covariance matrix R̃ can be rewritten as

R̃ = A_m [ (1/2L) Σ_{i=1}^{L} ( D^(i-1) S [D^(i-1)]† + D^(2-i-m) S* [D^(2-i-m)]† ) ] A_m† + σ²I.    (7)

Defining the smoothed signal covariance matrix by

S̃ ≜ (1/2L) Σ_{i=1}^{L} ( D^(i-1) S [D^(i-1)]† + D^(2-i-m) S* [D^(2-i-m)]† )    (8)

the spatially smoothed array covariance matrix R̃ can be finally written in the form

R̃ = A_m S̃ A_m† + σ²I.    (9)

If the smoothed signal covariance matrix S̃ is nonsingular, we can employ a signal subspace algorithm such as MUSIC [1] in order to obtain estimates of the directions of arrival, the signal covariance matrix, and the noise variance from the matrix R̃. We shall now investigate the conditions under which the nonsingularity of S̃ will be ensured. Note that (8) can be written in the form

S̃ = (1/2L) [D^(2-L-m) S*, …, D^(-m) S*, D^(1-m) S*, S, DS, …, D^(L-1) S]
    × [D^(2-L-m), …, D^(-m), D^(1-m), I, D, …, D^(L-1)]†.    (10)

We can now factor S̃ as

S̃ = (1/2L) G G†    (11)

where it is easy to check that

G = [D^(2-L-m) C*, …, D^(-m) C*, D^(1-m) C*, C, DC, …, D^(L-1) C]    (12)

with the matrix C being obtained from the Cholesky factorization S = CC†. It is well known (see, for example, [9]) that the rank of the smoothed matrix S̃ will be the same as the rank of G. By expanding the products on the right-hand side of (12) and permuting the resulting columns, we introduce a new matrix G':

G' = [ c*_{11}a₁^T  c_{11}b₁^T  c*_{12}a₁^T  c_{12}b₁^T  ⋯  c*_{1d}a₁^T  c_{1d}b₁^T
       c*_{21}a₂^T  c_{21}b₂^T  c*_{22}a₂^T  c_{22}b₂^T  ⋯  c*_{2d}a₂^T  c_{2d}b₂^T
       ⋮
       c*_{d1}a_d^T  c_{d1}b_d^T  c*_{d2}a_d^T  c_{d2}b_d^T  ⋯  c*_{dd}a_d^T  c_{dd}b_d^T ]    (13a)

where, for i = 1, …, d, we have defined the vectors a_i and b_i by the relations

a_i = [e^(j(2-L-m)ω₀τ_i), …, e^(j(-m)ω₀τ_i), e^(j(1-m)ω₀τ_i)]^T

and

b_i = [1, e^(jω₀τ_i), …, e^(j(L-1)ω₀τ_i)]^T.

Now observe that any matrix whose columns are a permutation of those of G has the same rank as G. Thus, G' has the same rank as G, and consequently the same rank as S̃.

Lemma 2: Provided condition (A.3) of the Appendix holds, the matrix G' has rank d.

Proof: To prove this statement, we begin by defining the segments

g_{ki} ≜ [c*_{ik} a_i^T  c_{ik} b_i^T].

Equation (13a) can then be written as

G' = [ g_{11} g_{21} ⋯ g_{d1}
       g_{12} g_{22} ⋯ g_{d2}
       ⋮
       g_{1d} g_{2d} ⋯ g_{dd} ].    (14)

We now claim that if for all k the set of nonzero vectors in {g_{ki}; i = 1, …, d} are linearly independent (i.e., condition (A.3) in the Appendix holds), and if there exists at least one vector g_{ki} in row i such that g_{ki} ≠ 0, then the ith row of (14) is linearly independent of the other (d - 1) rows. To show the validity of this statement, we use the method of contradiction. If a row of (14) is linearly dependent on the remaining rows, then a segment g_{ki} of that row can be expressed as a linear combination of the corresponding segments of the other (d - 1) rows. That is, g_{ki} may be expressed as a linear combination of {g_{kj}; 1 ≤ j ≤ d, i ≠ j}. However, as shown in the Appendix, the set of nonzero vectors in {g_{ki}; 1 ≤ i ≤ d} will be linearly independent given (A.3) holds; hence, this statement must be false provided that for any given k, not all
the g_{ki} are null vectors. This provision follows easily from the fact that no signal has zero energy. As noted in [5], if all signals have nonzero energy, the matrix C will not have a row with all zero entries; thus, the matrix G' cannot have an all-zero row, which implies that there will exist at least one vector g_{ki} in each row of (14) which is nonzero. Thus, by contradiction, the ith row must be linearly independent of the remaining rows, and hence G' has full row rank. This argument is equivalent to noting that if for each row there exists a g_{ki} which cannot be formed from a linear combination of the vectors {g_{kj}; 1 ≤ j ≤ d, j ≠ i}, then any single row cannot be formed from a linear combination of the other (d - 1) rows.

The above result leads us to the following theorem.

Theorem 1: The number of subarrays needed to obtain a nonsingular spatially smoothed signal covariance matrix S̃, as defined in (8), is d/2 provided (A.3) of the Appendix is satisfied.

Proof: The proof follows directly from Lemma 2 and the fact that S̃ and G' have equivalent rank.

Remarks:

1) As noted earlier, the use of the spatial smoothing algorithm proposed by Shan et al. [5] reduces the effective aperture of the array by half. That is, the number of sensors N must be greater than or equal to twice the number of sources present. We have shown that the number of subarrays L needed for the modified spatial smoothing method is, under a mild condition, d/2. Hence, the number of sensors needed for this approach is

N = L + m - 1 ≥ d/2 + d = 3d/2.    (15)

This shows the modified algorithm needs only 3d/2 sensors, which implies that the array aperture may be significantly larger than that for the previously proposed spatial smoothing algorithm.

2) In order to further increase array aperture, we can generalize the theorem to include the case where there exist uncorrelated groups of coherent signals. Using the procedures outlined in [5] and [8], it can easily be shown that if the signal covariance matrix S contains independent groups of coherent signals, then the number of subarrays needed to obtain a nonsingular matrix S̃, as defined in (8), will, in general, be equal to half the number of signals contained in the largest coherent group. To prove this result, note that the signal covariance matrix for this case is block diagonal. Thus, the received covariance matrix can be decomposed into a sum of covariance matrices whose signal components contain only a single correlated block. In other words, we may write

R = R₁ + R₂ + R₃ + ⋯ + R_p

where p represents the number of blocks in S. If we now apply the theorem to each block individually, it can be argued that the largest number of subarrays needed is equivalent to half the largest number of signals contained within any single block.

3) It is interesting to note that an independently derived theoretical justification for the conditions derived in our paper may be found in the recent work by Bresler and Macovski [6], which was published at a time when the present paper was under review. While the emphasis in our work has been to find the minimum number of array elements needed to resolve a given set of sources, Bresler and Macovski have discussed the dual question of finding the maximum number of sources resolvable by a given linear array. While the results of [6] are of great theoretical significance, in the subsequent development, we shall study the limitations of these results which arise in practical situations. In [6], Bresler and Macovski have shown that if the directions of arrival are modeled as continuous random variables, R̃ will be nonsingular with probability one. A question then arises as to how closely a set of conditions under which the method will fail can be met in practice without a significant degradation in the algorithm's ability to estimate the signal bearings. Using (A.5), we can develop conditions under which the method will fail. This will allow us to test the performance of the algorithm when these conditions are closely satisfied. A solution to (A.5) will exist when the signal covariance matrix has rank one and

c*_{li} c_{li}^(-1) e^(j(2-L-m)ω₀τ_i) = c*_{lk} c_{lk}^(-1) e^(j(2-L-m)ω₀τ_k)    (16)

for any set of l > d/2 coefficients of the Cholesky decomposition of S. Writing c_{li} as

c_{li} = |c_{li}| e^(jα_i)

(16) becomes

e^(-j2α_i) e^(j(2-L-m)ω₀τ_i) = e^(-j2α_k) e^(j(2-L-m)ω₀τ_k).    (17)
It is interesting to observe that whether or not a nontrivial solution to (A.5) exists will depend on the signal bearings and the elements of the Cholesky factorization of the signal covariance matrix. These elements will, in turn, be functions of the signal phase and correlation. Furthermore, there exists a countably infinite set of coefficients for which the above relationship will be satisfied. However, if, as in [6], the delays (τ_i) are modeled as continuous random variables, the probability that the relationship will hold will be zero. It should be pointed out that although the probability of satisfying (16) exactly may be small, we must also consider the effects on the system performance when (16) is nearly satisfied. This could cause S̃ to be ill conditioned, which would severely degrade our ability to estimate the directions of arrival of the signals. Thus, to some degree, this method will be less robust than the conventional spatial smoothing scheme. Our simulation results, however, tend to suggest that this method performs reasonably well even when (16) is approximately satisfied. These results will be discussed in greater detail in the next section.
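As a quick numeric companion to (17), one can check how far a given configuration is from the failure condition by comparing the two sides of the relation directly. The sketch below is ours, not the authors'; all parameter values are made up for illustration, and half-wavelength spacing (ω₀τ_i = π sin θ_i) is assumed.

```python
import numpy as np

# Made-up configuration: two signals, one subarray of length three.
L, m = 1, 3
alphas = np.array([0.0, 0.1])            # phases of the Cholesky coefficients
thetas = np.deg2rad([-20.0, 25.0])
w0tau = np.pi * np.sin(thetas)           # w0 * tau_i per (1b)

# Left- and right-hand sides of (17) for the two signals.
vals = np.exp(-2j * alphas) * np.exp(1j * (2 - L - m) * w0tau)
mismatch = np.angle(vals[0] / vals[1])   # zero when (17) holds exactly
print(abs(mismatch))                     # phase mismatch in radians
```

A mismatch near zero flags a configuration close to the failure condition, where the smoothed matrix can become ill conditioned.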
IV. SIMULATION RESULTS

Application of the modified spatial smoothing method has been carried out using some simulation studies, and the results will now be compared to those of other methods.

Example 1: The first example involves the case of four coherent signals impinging on a six-element linear array at angles of -60°, -5°, 20°, and 45° with signal-to-noise ratios of 5, 6, 4, and 2 dB. 500 snapshots, obtained from a 64-point DFT, were used to estimate the covariance matrix. The signals to be estimated were sinusoids with slowly varying random phase, uniformly distributed between (-π, π). The phase was assumed constant over each DFT data segment. The array elements were spaced one-half wavelength apart. The results for the standard MUSIC algorithm and the conventional spatial smoothing approach using two subarrays are shown in Fig. 1(a) and (b). As would be expected, the MUSIC algorithm fails to detect the signals. The conventional spatial smoothing scheme shows some improvement over the MUSIC algorithm since it does detect the signal at -5°. Fig. 1(c) shows the results for the modified method using two subarrays. It detects four signals and gives reasonable estimates of their bearings.

Fig. 1. An illustration of the ability of (a) MUSIC, (b) conventional spatial smoothing, and (c) the proposed method to detect and estimate the directions of arrival of four coherent signals using a six-element array.

Example 2: We have also used the MSS method for the case of two groups of coherent signals using the same signal model as in Fig. 1. A uniformly spaced, ten-element linear array was employed. In this example, various delays were added to the signals in order to obtain a complex-valued signal covariance matrix. These delays were chosen such that (17) would be approximately satisfied for each group of signals. Parameters were selected so that the exponents in (17) agreed to within 3.5 percent error. Two pairs of coherent signals were created. The first pair had angles of arrival of -20° and -5° and SNR's of -3 and 0 dB. The second pair arrived at angles of 3° and 25° with SNR's of 2 and -5 dB. No subarrays were necessary for this case. Again, 500 snapshots derived from 64-point DFT's were used to estimate the covariance matrix. Ten runs were made using independent data sets. As Fig. 2 shows, the method performed well under these conditions. This simulation demonstrates both the veracity of the results discussed in Remark 2) of Section III and the degree of insensitivity of the spatial smoothing method to the conditions set forth in (17). In other simulations conducted using the same parameters as above, it was found that the signals could be detected and their directions of arrival estimated for differences in the exponents of (17) as small as 2 percent. Simulations have also shown that as the signal-to-noise ratio increases, the sensitivity of the method to the conditions described in (17) decreases even further.

Example 3: Finally, Fig. 3 depicts the results of simulation work performed using two signals with bearings of 20° and 30° and SNR's of 20 dB. A ten-element array was used. One hundred snapshots were combined to estimate the array covariance matrix. The value of α₁ in (17) was chosen to be zero, while the value of α₂ was selected to satisfy this condition with equality. The value of α₂ was then perturbed by a quantity with magnitude varying between 0 and 7 percent of α₂'s original value.
Fig. 2. Estimates of the directions of arrival for two pairs of coherent signals with delays chosen so as to closely satisfy (17). A ten-element array was used.

Fig. 3. A plot of the number of successful trials obtained using the modified spatial smoothing approach versus the percent deviation of α₂ from the value required to satisfy (17) exactly.
For each new value of α₂, 50 trials were performed. A trial was determined to be a success if peaks were detected in the array response within 5° of the actual signal bearings. As can be seen from the diagram, the algorithm began to have difficulty detecting the signals when α₂ was perturbed by less than 2 percent of the value of α₂ which satisfies (17) exactly.
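The success criterion used in these trials can be sketched as a small helper function. This is our own illustrative reconstruction of the stated protocol, not the authors' code; the example estimates are made up.

```python
import numpy as np

def trial_success(est_deg, true_deg, tol=5.0):
    """Success criterion of Example 3 (a sketch): every true bearing must
    have an estimated peak within tol degrees of it."""
    est = np.asarray(est_deg, dtype=float)
    return all(np.min(np.abs(est - t)) <= tol for t in true_deg)

print(trial_success([21.0, 29.0], [20.0, 30.0]))   # both peaks close enough
print(trial_success([21.0], [20.0, 30.0]))         # misses the 30 degree source
```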
Furthermore, it is important to point out that the set of conditions derived in Remark 3) is not unique. There exist other conditions, not described by the results presented in (17), for which (A.5) will be satisfied. It would be very useful to obtain a set of equations which completely describe the conditions under which the MSS algorithm will fail.
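The MUSIC estimator used throughout these examples can be sketched as follows. This is a minimal version with made-up parameters for uncorrelated sources; coherent sources would first require the smoothing of Section III, as Fig. 1(a) illustrates.

```python
import numpy as np

def music_spectrum(R, d, grid_deg):
    """MUSIC pseudospectrum (a sketch): 1 / ||E_n^H a(theta)||^2, where E_n
    spans the noise subspace of R. Half-wavelength spacing is assumed."""
    N = R.shape[0]
    _, U = np.linalg.eigh(R)             # eigenvalues in ascending order
    En = U[:, :N - d]                    # noise-subspace eigenvectors
    k = np.arange(N)
    p = []
    for deg in grid_deg:
        a = np.exp(1j * np.pi * np.sin(np.deg2rad(deg)) * k)
        p.append(1.0 / np.linalg.norm(En.conj().T @ a) ** 2)
    return np.array(p)

# Uncorrelated unit-power sources (S = I): the spectrum peaks at the
# true bearings (all values below are made up for illustration).
N, sigma2 = 8, 0.01
thetas = np.deg2rad([-20.0, 25.0])
A = np.exp(1j * np.pi * np.outer(np.arange(N), np.sin(thetas)))
R = A @ A.conj().T + sigma2 * np.eye(N)
grid = np.arange(-90.0, 90.5, 0.5)
p = music_spectrum(R, 2, grid)
print(grid[np.argmax(p)])                # lands on one of the true bearings
```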
V. CONCLUSIONS

The increase in array aperture is obtained, to some extent, at the expense of robustness. As we have discussed, when (16) is approximately true, the spatially smoothed signal covariance matrix may become ill conditioned. Although the simulation results we have presented tend to suggest that (16) must be closely satisfied for the matrix in question to become ill conditioned, more research is needed to determine the sensitivity of this matrix to the restrictions implied in (A.5).
APPENDIX

If we assume that d is an even integer and L = d/2, then this matrix can be expressed as

G̃ = [ V₁D₁^(2-L-m)F₁*   V₁F₁
      V₂D₂^(2-L-m)F₂*   V₂F₂ ].    (A.2)

The columns of V₁ consist of the vectors {b_i; 1 ≤ i ≤ d/2} and the columns of V₂ consist of the vectors {b_i; d/2 + 1 ≤ i ≤ d}. D₁ and D₂ are diagonal matrices
defined in a manner similar to that followed for D:

D₁ = diag[e^(jω₀τ₁), …, e^(jω₀τ_{d/2})]

and

D₂ = diag[e^(jω₀τ_{d/2+1}), …, e^(jω₀τ_d)].

Since we have assumed that each τ_i is unique, it is possible to assert that the nonzero elements of D₁ and D₂ will also be unique. The matrices F₁ and F₂ have the form diag[c_{l1}, …, c_{l(d/2)}] and diag[c_{l(d/2)+1}, …, c_{ld}], respectively. In order to prove that G̃ is nonsingular, we must demonstrate that its determinant is nonzero. It can be shown [9] that the determinant of G̃ factors into the determinants of V₁, V₂, F₁, and F₂, together with the determinant of the difference matrix appearing in (A.3). Since V₁ and V₂ are Vandermonde matrices, they, along with their inverses, are nonsingular and have nonzero determinants. Furthermore, F₁ and F₂ are full-rank diagonal matrices, and hence they too have nonzero determinants. Consequently, G̃ will be nonsingular if the following condition holds:

|D₁^(2-L-m)F₁*F₁^(-1) - V₁^(-1)V₂D₂^(2-L-m)F₂*F₂^(-1)V₂^(-1)V₁| ≠ 0.    (A.3)

For the above condition to be violated, one of the eigenvalues of the difference matrix must be zero, and thus there should exist a vector u such that

(D₁^(2-L-m)F₁*F₁^(-1) - V₁^(-1)V₂D₂^(2-L-m)F₂*F₂^(-1)V₂^(-1)V₁)u = 0.    (A.4)

Alternatively, we must have

D₁^(2-L-m)F₁*F₁^(-1)u = V₁^(-1)V₂D₂^(2-L-m)F₂*F₂^(-1)V₂^(-1)V₁u.    (A.5)

These results can be generalized to include odd numbers of signals and to situations where the number of subarrays is greater than half the number of signals. Note that if L > d/2, (A.1) will have a submatrix equivalent to G̃. Since the rank of a matrix cannot be smaller than the rank of any of its submatrices, the rank of this matrix will therefore be d. This proof can be extended to the case of an odd number of signals by augmenting the matrix in (A.1) with a column vector having a form similar to that of the other columns in the matrix, with τ_i for this column chosen to be unique. We can then apply the notion that the members of any subset of a set of linearly independent vectors are also linearly independent. Thus, in summary, provided condition (A.3) is met, the matrix G̃ will have full rank and the vectors in question will be linearly independent.

REFERENCES

[1] R. O. Schmidt, "A signal subspace approach to multiple source location and spectral estimation," Ph.D. dissertation, Stanford Univ., Stanford, CA, 1981.
[2] H. Wang and M. Kaveh, "Coherent signal-subspace processing for detection and estimation of angles of arrival of multiple wide-band sources," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-33, pp. 823-831, Aug. 1985.
[3] A. Di, "Multiple source location - a matrix decomposition approach," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-33, pp. 1086-1091, Oct. 1985.
[4] J. E. Evans, J. R. Johnson, and D. F. Sun, "Application of advanced signal processing techniques to angle of arrival estimation in ATC navigation and surveillance systems," M.I.T. Lincoln Lab., Lexington, MA, Tech. Rep. 582, June 1982.
[5] T. J. Shan, M. Wax, and T. Kailath, "On spatial smoothing for direction-of-arrival estimation of coherent signals," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-33, pp. 806-811, Aug. 1985.
[6] Y. Bresler and A. Macovski, "On the number of signals resolvable by a uniform linear array," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-34, pp. 1361-1375, Dec. 1986.
[7] M. Wax, T. J. Shan, and T. Kailath, "Spatio-temporal spectral analysis by eigenstructure methods," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-32, pp. 817-827, Aug. 1984.
[8] T. J. Shan and T. Kailath, "Adaptive beamforming for coherent signals and interference," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-33, pp. 527-536, June 1985.
[9] F. A. Graybill, Matrices with Applications in Statistics. Belmont, CA: Wadsworth International, 1983.

Ronald T. Williams was born in Wilkes-Barre, PA, on November 29, 1961. He received the Bachelor's degree in electrical engineering from the University of Delaware, Newark, in 1983 and the Master of Science degree in electrical engineering from The Pennsylvania State University, University Park, where he currently is a Ph.D. candidate. Mr. Williams is a member of Tau Beta Pi and Eta Kappa Nu.

Surendra Prasad received the B.Tech. degree in electronics and electrical communication engineering from the Indian Institute of Technology, Kharagpur, in 1969, and the M.Tech. and Ph.D. degrees in electrical engineering from the Indian Institute of Technology, New Delhi, in 1971 and 1974, respectively. He has been teaching at the Indian Institute of Technology, New Delhi, since 1971, where he is presently a Professor of Electrical Engineering. He was a Visiting Research Fellow at the Loughborough University of Technology, Loughborough, England, from August 1976 to August 1977, where he was involved in developing algorithms for adaptive processing for high-frequency arrays. From August 1985 until December 1986 he was a Visiting Professor at The Pennsylvania State University, University Park. His teaching and research interests are radar/sonar signal processing, communications, and computer-aided design of digital systems. Currently, he is engaged in research in the areas of sonar and seismic signal processing, underwater communications, and array signal processing.
A. K. Mahalanabis received the M.Sc. (Tech.) and D.Phil. (Sc.) degrees in radiophysics and electronics from Calcutta University, Calcutta, India, in 1957 and 1962, respectively. He held teaching positions in the Institute of Radiophysics and Electronics, Calcutta University (1961-1966), Roorkee University (1966-1967), Indian Institute of Technology, Delhi (1967-1982), and Lehigh University (1982-1984). In 1984 he joined The Pennsylvania State University, University Park, as a Professor of Electrical Engineering. He also had visiting appointments with McMaster University (1976-1977) and the University of California, Santa Barbara (1981-1982). His main research interests are in statistical estimation theory and its application to control and signal processing problems. He has authored or coauthored more than 100 research papers, an undergraduate text on systems engineering, and a graduate text on computer-aided power systems analysis and control. Dr. Mahalanabis is a member of the Editorial Board of Optimal Control Applications and Methods and an Associate Editor of Automatica and the Journal of the Institution of Electronics and Telecommunication Engineers.
Leon H. Sibul (S'52-A'53-M'60) was born in Võru, Estonia, on August 30, 1932. He received the B.E.E. degree from George Washington University, Washington, DC, in 1960, the M.E.E. degree from New York University, New York, NY, in 1963, and the Ph.D. degree from The Pennsylvania State University, University Park, all in electrical engineering. From 1960 to 1964 he was a member of the Technical Staff at Bell Telephone Laboratories, working primarily on the electronic switching system. Since 1964 he has been with the Applied Research Laboratory, The Pennsylvania State University, engaged in various aspects of research in underwater systems. His primary research interests are in the areas of adaptive signal processing, array processing, stochastic system theory, and broad-band signal ambiguity function theory. He directs a group that does research in these areas. He has developed and has taught graduate courses in adaptive signal processing. He is currently a Senior Scientist at the Applied Research Laboratory and a Professor of Acoustics. Dr. Sibul is a member of Sigma Tau, Sigma Xi, and the Society for Industrial and Applied Mathematics. He is the Associate Editor for Sonar and Undersea Systems of the IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS. He is the Editor of the IEEE Press book Adaptive Signal Processing.