IEEE TRANSACTIONS ON ACOUSTICS, SPEECH, AND SIGNAL PROCESSING, VOL. 36, NO. IO, OCTOBER 1988

1553

Maximum Likelihood Localization of Multiple Sources by Alternating Projection

Abstract-We present a novel and efficient algorithm for computing the exact maximum likelihood estimator of the locations of multiple sources in passive sensor arrays. The estimator is equally well applicable to the case of coherent signals appearing, for example, in multipath propagation problems, and to the case of a single snapshot. Simulation results that demonstrate the performance of the algorithm are included.

I. INTRODUCTION

The localization of radiating sources by passive sensor arrays is one of the central problems in radar, sonar, radio astronomy, and seismology. The simplest, yet not degenerate, problem in this context is the estimation of the directions-of-arrival of narrow-band sources with the same known center frequency, which are located in the far field of an array composed of sensors with arbitrary locations and arbitrary directional characteristics. This problem has received considerable attention in the last 30 years, and a variety of techniques for its solution have been proposed. The Maximum Likelihood (ML) technique was one of the first to be investigated [14], [9]. Nonetheless, because of the high computational load of the multivariate nonlinear maximization problem involved, it did not become popular. Instead, suboptimal techniques with reduced computational load have dominated the field. The better known ones are the Minimum Variance method of Capon [3], the MUSIC method of Schmidt [13] and Bienvenu and Kopp [1], and the related Minimum Norm method of Reddi [11] and Kumaresan and Tufts [8]. For a review of these and related techniques, the reader is referred to [6], [10], and [5].

The performance of these techniques is inferior to that of the ML technique. The difference in performance is especially conspicuous in the threshold region, namely, when the signal-to-noise ratio is small or, alternatively, when the number of samples ("snapshots") is small. Moreover, these techniques cannot handle the case of coherent signals. This case appears, for example, in specular multipath propagation problems and, therefore, it is of great practical importance. The preprocessing spatial


Manuscript received October 7, 1986; revised February 25, 1988. The authors are with RAFAEL, P.O. Box 2250, Haifa 31021, Israel. IEEE Log Number 8822828.

smoothing techniques proposed to cope with this problem [4], [12] remedy the situation only partially. Yet another difference between the ML and these suboptimal techniques is their performance in the case that the number of snapshots is smaller than the number of sensors, as happens, for example, in the case of a single snapshot. Again, while the ML technique handles this case without any difficulty, the suboptimal techniques fail completely.

In this paper we present a novel and computationally attractive method for computing the ML estimator. It is based on an iterative technique referred to as "Alternating Projection" (AP), which transforms the multivariate nonlinear maximization problem into a sequence of much simpler one-dimensional maximization problems.

The paper is organized as follows. In Section II we formulate the problem. In Section III and Section IV, respectively, we derive the ML estimator and present the AP algorithm. Computer simulations that demonstrate the performance of the ML estimator, as computed with the AP algorithm, in comparison with the Cramer-Rao lower bound and the MUSIC algorithm are presented in Section V. Our concluding remarks are given in Section VI.

II. PROBLEM FORMULATION

Consider an array composed of p sensors with arbitrary locations and arbitrary directional characteristics, and assume that q narrow-band sources, centered around a known frequency, say ω_0, impinge on the array from locations θ_1, …, θ_q. Since narrow-bandedness in the sensor array context means that the propagation delays of the signals across the array are much smaller than the reciprocal of the bandwidth of the signals, it follows that the complex envelopes of the signals received by the array can be expressed as

  x(t) = Σ_{k=1}^q a(θ_k) s_k(t) + n(t)    (1.a)

where x(t) and n(t) are the p × 1 vectors

  x(t) = [x_1(t), …, x_p(t)]^T,  n(t) = [n_1(t), …, n_p(t)]^T    (1.b)

and a(θ_k) is the "steering vector" of the array toward direction θ_k,

  a(θ_k) = [a_1(θ_k) e^{−jω_0 τ_1(θ_k)}, …, a_p(θ_k) e^{−jω_0 τ_p(θ_k)}]^T.    (1.c)

Here T denotes the transpose, and

  x_i(t) = the signal received by the ith sensor,
  s_k(t) = the signal emitted by the kth source as received at the reference point,
  a_i(θ_k) = the amplitude response of the ith sensor to a wavefront impinging from location θ_k,
  τ_i(θ_k) = the propagation delay between the reference point and the ith sensor for a wavefront impinging from location θ_k,
  n_i(t) = the noise at the ith sensor.

0096-3518/88/1000-1553$01.00 © 1988 IEEE
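As a concrete illustration of the model (1), the following NumPy sketch generates snapshots for a uniform linear array with half-wavelength spacing and isotropic sensors (the array used later in the simulations of Section V). The function names and the unit-power complex Gaussian signal model are our own illustrative choices, not part of the paper.

```python
import numpy as np

def steering_vector(theta_deg, p):
    # a(theta) for a half-wavelength uniform linear array with isotropic
    # sensors: a_i(theta) = 1 and omega_0 * tau_i(theta) = pi * i * sin(theta).
    theta = np.deg2rad(theta_deg)
    return np.exp(-1j * np.pi * np.arange(p) * np.sin(theta))

def snapshots(thetas_deg, p, M, snr_db, rng):
    # X = A(Theta) S + N, with unit-power complex Gaussian signals
    # and spatially white noise of variance sigma^2.
    A = np.column_stack([steering_vector(t, p) for t in thetas_deg])
    q = len(thetas_deg)
    S = (rng.standard_normal((q, M)) + 1j * rng.standard_normal((q, M))) / np.sqrt(2)
    sigma = 10.0 ** (-snr_db / 20.0)
    N = sigma * (rng.standard_normal((p, M)) + 1j * rng.standard_normal((p, M))) / np.sqrt(2)
    return A @ S + N

rng = np.random.default_rng(0)
X = snapshots([0.0, 20.0], p=3, M=10, snr_db=20.0, rng=rng)
print(X.shape)  # (3, 10)
```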

The vector of the received signals x(t) can be expressed more compactly as

  x(t) = A(Θ) s(t) + n(t)    (2.a)

where A(Θ) is the p × q matrix of the steering vectors

  A(Θ) = [a(θ_1), …, a(θ_q)]    (2.b)

and s(t) is the q × 1 vector of the signals

  s(t) = [s_1(t), …, s_q(t)]^T.    (2.c)

Let t_1, …, t_M denote the time instants at which the snapshots are taken. The sampled data can then be expressed as

  X = A(Θ) S + N    (3.a)

where X and N are the p × M matrices

  X = [x(t_1), …, x(t_M)],  N = [n(t_1), …, n(t_M)]    (3.b)

and S is the q × M matrix

  S = [s(t_1), …, s(t_M)].    (3.c)

The localization problem is to estimate the locations θ_1, …, θ_q of the sources from the M samples ("snapshots") of the array x(t_1), …, x(t_M). To solve this problem, we make the following assumptions regarding the array, the signals, and the noise.

A1: The number of signals is known and is smaller than the number of sensors, namely, q < p.
A2: Every set of p steering vectors is linearly independent.
A3: The noise {n(t)} is a stationary and ergodic complex-valued Gaussian process of zero mean and covariance matrix σ²I, where σ² is an unknown scalar and I is the identity matrix.
A4: The noise samples {n(t_i)} are statistically independent.

The assumption that the number of signals is known was made to simplify the exposition. The case of an unknown number of signals is dealt with in [18] (see also [17] for a different method which is not applicable, however, to the case of fully correlated signals and to the case of a single snapshot). Assumptions A1 and A2 are needed to guarantee the uniqueness of the solution [19]. Assumptions A3 and A4 are the conventional assumptions for the noise in sensor arrays, made to facilitate the application of the Maximum Likelihood (ML) technique. As we shall see, in case these assumptions do not hold, the estimator to be derived is still meaningful: it coincides with the Least-Squares (LS) estimator.

III. THE MAXIMUM LIKELIHOOD ESTIMATOR

In this section we derive the Maximum Likelihood (ML) estimator of the source locations. The derivation follows that in [16] (see also [2]). Unlike the common approach in the sensor array literature, we do not regard the signals as sample functions of random processes. Instead, we regard them as unknown deterministic sequences. Although this is done mainly because it allows, as we shall show, certain computational simplifications, it also has some interesting advantages when the signal waveforms are of interest.

Under Assumptions A3 and A4, it follows from (1) that the joint density function of the sampled data is given by

  f = Π_{i=1}^M (1 / (π^p det[σ²I])) exp{ −(1/σ²) |x(t_i) − A(Θ) s(t_i)|² }    (4)

where det[·] denotes the determinant. Thus, the log likelihood, ignoring constant terms, is given by

  L = −Mp log σ² − (1/σ²) Σ_{i=1}^M |x(t_i) − A(Θ) s(t_i)|².    (5)

To compute the Maximum Likelihood (ML) estimator we have to maximize the log likelihood with respect to the unknown parameters. Fixing Θ and S, and then maximizing with respect to σ², we get

  σ̂² = (1/Mp) Σ_{i=1}^M |x(t_i) − A(Θ) s(t_i)|².    (6)

Substituting this result back into the log-likelihood function, ignoring constant terms, we get that the ML estimator is obtained by solving the following maximization problem:

  max_{Θ,S} { −log Σ_{i=1}^M |x(t_i) − A(Θ) s(t_i)|² }.    (7)

Since the logarithm is a monotonic function, the above maximization problem is equivalent to the following minimization problem:

  min_{Θ,S} Σ_{i=1}^M |x(t_i) − A(Θ) s(t_i)|²    (8)
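For a fixed Θ, the criterion above is linear in S, so the inner minimization is an ordinary linear least-squares problem solved column by column. The following NumPy sketch, with arbitrary illustrative data (the matrices A and X here are random stand-ins, not array data), checks numerically that the least-squares solution cannot be improved by perturbing S.

```python
import numpy as np

rng = np.random.default_rng(1)
p, q, M = 3, 2, 10

# An arbitrary (illustrative) steering matrix A(Theta) and data matrix X.
A = rng.standard_normal((p, q)) + 1j * rng.standard_normal((p, q))
X = rng.standard_normal((p, M)) + 1j * rng.standard_normal((p, M))

# For fixed Theta, the criterion is linear in S, so the inner minimization
# is an ordinary least-squares problem: S_hat = (A^H A)^{-1} A^H X.
S_hat = np.linalg.pinv(A) @ X
residual = np.linalg.norm(X - A @ S_hat) ** 2

# Perturbing the least-squares solution can only increase the criterion.
S_pert = S_hat + 0.1 * (rng.standard_normal((q, M)) + 1j * rng.standard_normal((q, M)))
assert residual <= np.linalg.norm(X - A @ S_pert) ** 2
print("least-squares residual is minimal")
```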


which is the Least Squares (LS) criterion for the estimation problem at hand. To carry out this minimization, we fix Θ and minimize with respect to S. This yields the well-known solution

  ŝ(t_i) = (A^H(Θ) A(Θ))^{−1} A^H(Θ) x(t_i)    (9)

where H denotes the Hermitian conjugate. Substituting (9) into (8), we obtain the following minimization problem:

  min_Θ Σ_{i=1}^M |x(t_i) − A(Θ)(A^H(Θ) A(Θ))^{−1} A^H(Θ) x(t_i)|².    (10)

This can be rewritten as

  min_Θ Σ_{i=1}^M |x(t_i) − P_A(Θ) x(t_i)|²    (11.a)

where P_A(Θ) is the projection operator onto the space spanned by the columns of the matrix A(Θ),

  P_A(Θ) = A(Θ)(A^H(Θ) A(Θ))^{−1} A^H(Θ).    (11.b)

Thus, the maximum likelihood estimate of Θ is obtained by maximizing the log-likelihood function

  L(Θ) = Σ_{i=1}^M |P_A(Θ) x(t_i)|².    (12)

This estimator has an appealing geometric interpretation. Notice from (1) that in the absence of noise the vector x(t) stays in the q-dimensional space spanned by the columns of A(Θ), referred to as the "signal subspace," while the presence of noise may cause x(t) to wander away from this subspace. From (12) it follows that the Maximum Likelihood estimator is obtained by searching over the array manifold for those q steering vectors that form the q-dimensional signal subspace which is "closest" to the vectors {x(t)}, where closeness is measured by the modulus of the projection of the vectors onto this subspace.

A different form of (12), which is found to be more suitable for our purposes, is obtained by rewriting it as

  L(Θ) = tr [P_A(Θ) R]    (13.a)

where tr[·] is the trace of the bracketed matrix, and R is the sample covariance matrix

  R = (1/M) Σ_{i=1}^M x(t_i) x^H(t_i).    (13.b)

An interesting interpretation of (13), which sheds some light on the relation between the Maximum Likelihood estimator and the MUSIC estimator of Schmidt [13] and Bienvenu and Kopp [1], and the Minimum Norm estimator of Reddi [11] and Kumaresan and Tufts [8], is obtained by casting (13) in terms of the eigenstructure of R. Let λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_p and u_1, …, u_p denote the eigenvalues and eigenvectors, respectively, of R. From the spectral representation theorem of matrix theory, we can express R as

  R = Σ_{i=1}^p λ_i u_i u_i^H.    (14)

With this representation, using the properties of the projection and trace operators, we can rewrite (13) as

  L(Θ) = Σ_{i=1}^p λ_i |P_A(Θ) u_i|².    (15)

This expression shows that the Maximum Likelihood estimator, unlike the MUSIC and the Minimum Norm estimators, involves all the eigenvalues and eigenvectors of R; the larger the eigenvalue, the more important it is that the projection of the corresponding eigenvector onto the signal subspace be maximized.

The maximization of the log-likelihood (13) is a nonlinear, multidimensional maximization problem, and as such is computationally expensive. Moreover, in many cases the array manifold {a(θ)} is given as a table, referred to as the "calibration table," thus rendering all the conventional gradient-type maximization techniques inapplicable. In the next section we present a rather simple and efficient multidimensional maximization technique "tailored" to the problem at hand.

IV. THE ALTERNATING PROJECTION ALGORITHM

In this section we present an efficient algorithm for computing the ML estimator. The algorithm is based on a maximization technique, referred to as Alternating Maximization, which we apply in conjunction with a computationally efficient projection-matrix decomposition scheme.

A. The Alternating Maximization Technique

The Alternating Maximization (AM) technique is a conceptually simple technique for multidimensional maximization. The technique is iterative; at every iteration a maximization is performed with respect to a single parameter while all the other parameters are held fixed. That is, the value of θ_i at the (k+1)th iteration is obtained by solving the following one-dimensional maximization problem:

  θ̂_i^(k+1) = arg max_{θ_i} tr [P_{[A(Θ̂_i^(k)), a(θ_i)]} R]    (16.a)

where Θ̂_i^(k) denotes the (q − 1) × 1 vector of the precomputed parameters,

  Θ̂_i^(k) = [θ̂_1^(k+1), …, θ̂_{i−1}^(k+1), θ̂_{i+1}^(k), …, θ̂_q^(k)]^T    (16.b)

and arg max_{θ_i} f(θ_i) denotes the value of θ_i that attains the maximum of f(θ_i).

Intuitively, the algorithm climbs the peak of L(Θ) along lines parallel to the axes, as shown schematically in Fig. 1. The rate of climb depends, of course, on the structure of L(Θ) in the proximity of the peak. Since a maximization is performed at every iteration, the value of the maximized function, L(Θ), cannot decrease. As a result, the algorithm is bound to converge to a local maximum. Depending on the initial condition, the local maximum may or may not be the global one. As the initialization step is so critical to the global convergence, it is a key element in the algorithm. The following rather simple initialization procedure gave excellent results in the extensive set of simulations we have run for different scenarios.

Fig. 1. Successive iterations of the AM algorithm in the maximization of a 2-dimensional function.

We start by solving the problem for a single source. In this case we get

  θ̂_1^(0) = arg max_{θ_1} tr [P_{a(θ_1)} R].    (17)

Next, we solve for the second source, assuming the first source is at θ̂_1^(0):

  θ̂_2^(0) = arg max_{θ_2} tr [P_{[a(θ̂_1^(0)), a(θ_2)]} R].    (18)

Continuing in this fashion, at the ith step we solve for θ̂_i^(0), with all the other sources held at their precomputed values. The procedure is continued until all the initial values θ̂_1^(0), …, θ̂_q^(0) are computed.

B. Projection-Matrix Decomposition

The AM algorithm reduces the number of evaluations of L(Θ) with respect to an exhaustive search. Nevertheless, the computational load at every iteration is still substantial, since matrix inversions and multiplications are involved. To simplify the computation at each iteration, we introduce a basic property of projection matrices, known as the projection-matrix update formula. Let B and C be two arbitrary matrices with the same number of rows, and let P_{[B,C]} denote the projection matrix onto the column space of the augmented matrix [B, C]. It is well known that

  P_{[B,C]} = P_B + P_{C_B}    (19.a)

where C_B denotes the residual of the columns of C when projected on B,

  C_B = (I − P_B) C.    (19.b)

Applying the projection-matrix update formula (19) to our problem, we have

  tr [P_{[A(Θ̂_i^(k)), a(θ_i)]} R] = tr [P_{A(Θ̂_i^(k))} R] + b^H(θ_i, Θ̂_i^(k)) R b(θ_i, Θ̂_i^(k))    (20)

where b(θ_i, Θ̂_i^(k)) is the unit vector

  b(θ_i, Θ̂_i^(k)) = (I − P_{A(Θ̂_i^(k))}) a(θ_i) / ‖(I − P_{A(Θ̂_i^(k))}) a(θ_i)‖    (21)

and ‖·‖ denotes the norm. Since the first term on the right-hand side of (20) does not depend on θ_i, the maximization (16.a) reduces to

  θ̂_i^(k+1) = arg max_{θ_i} b^H(θ_i, Θ̂_i^(k)) R b(θ_i, Θ̂_i^(k)).    (22)

C. The Algorithm

Using the notation

  Θ̂^(i) = [θ̂_1^(0), …, θ̂_{i−1}^(0)]^T    (23)

the AP algorithm we have described above can be summarized as follows:

INITIALIZATION:
For i = 1 to q do
  θ̂_i^(0) = arg max_{θ_i} b^H(θ_i, Θ̂^(i)) R b(θ_i, Θ̂^(i));
End;

MAIN LOOP:
k ← 0 and i ← 0;
Repeat until |θ̂_i^(k+1) − θ̂_i^(k)| < ε for all i (i = 1, …, q):
  k ← k + 1;  i ← (i + 1) mod q;
  θ̂_i^(k+1) = arg max_{θ_i} b^H(θ_i, Θ̂_i^(k)) R b(θ_i, Θ̂_i^(k));
End;
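The alternating maximization described above can be sketched in a few lines of NumPy. The sketch below is illustrative only: it assumes the half-wavelength uniform linear array used in the simulations of Section V, replaces each one-dimensional maximization with a coarse grid search, and uses a fixed number of sweeps in place of a convergence test; all function names are our own.

```python
import numpy as np

def steering(theta_deg, p):
    # Steering vector of a half-wavelength uniform linear array
    # with isotropic sensors (the array of Section V).
    return np.exp(-1j * np.pi * np.arange(p) * np.sin(np.deg2rad(theta_deg)))

def proj(A):
    # Projection matrix P_A = A (A^H A)^{-1} A^H, via the pseudoinverse.
    return A @ np.linalg.pinv(A)

def b_vec(theta, fixed, p):
    # Normalized residual of a(theta) after projecting out the steering
    # vectors of the sources held fixed (the unit vector b of the update).
    a = steering(theta, p)
    if fixed:
        A = np.column_stack([steering(t, p) for t in fixed])
        a = a - proj(A) @ a
    n = np.linalg.norm(a)
    return a / n if n > 1e-8 else np.zeros_like(a)

def score(theta, fixed, R):
    # The reduced one-dimensional criterion b^H R b.
    b = b_vec(theta, fixed, R.shape[0])
    return float(np.real(b.conj() @ R @ b))

def ap_localize(R, q, grid, sweeps=10):
    est = []
    # Initialization: add one source at a time, as in (17)-(18).
    for _ in range(q):
        est.append(max(grid, key=lambda t: score(t, est, R)))
    # Main loop: re-maximize each source with the others held fixed.
    for _ in range(sweeps):
        for i in range(q):
            others = est[:i] + est[i + 1:]
            est[i] = max(grid, key=lambda t: score(t, others, R))
    return sorted(est)

# Noise-free check: two sources at 0 and 20 degrees, three sensors.
rng = np.random.default_rng(0)
p, M = 3, 50
A = np.column_stack([steering(0.0, p), steering(20.0, p)])
S = rng.standard_normal((2, M)) + 1j * rng.standard_normal((2, M))
X = A @ S
R = X @ X.conj().T / M
grid = [float(t) for t in np.arange(-90.0, 90.5, 0.5)]
print(ap_localize(R, 2, grid))
```

In this noise-free two-source example the criterion is maximized exactly when the two candidate steering vectors span the signal subspace, so the grid search recovers the true directions.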

V. SIMULATION RESULTS

In order to demonstrate the performance of the ML estimator computed by the AP algorithm, we compare it to the Cramer-Rao lower bound and to the suboptimal MUSIC algorithm in several simulated experiments. In the first four experiments, the array was linear and uniform with three isotropic sensors spaced half a wavelength apart. The array beamwidth, defined as the reciprocal of its length in wavelengths, was therefore 57°. The sources were two equal power narrow-band emitters, and the noise was additive and uncorrelated from sensor to sensor and with the signals. In every experiment we performed 100 Monte-Carlo runs and computed the root-mean-square (rms) error for each direction-of-arrival.

In the first experiment we simulated two uncorrelated emitters impinging from 0° and 20°. The number of snapshots taken was 10. Fig. 2 shows the resulting rms error (in degrees) of the first source as a function of the SNR, defined as SNR = 10 log (s²/σ²), where s² and σ² are the average powers of the signals and the noise, respectively. The improved performance of the ML estimator at low and moderate SNR is evident.

Fig. 2. Two equal power uncorrelated emitters, located at 0° and 20°, impinging on a linear uniform array with three sensors spaced half a wavelength apart. The array beamwidth is 57°. The number of snapshots is 10.

Fig. 3. Same scenario as in Fig. 2. The SNR is 20 dB.

In the second experiment, the scenario was the same as in the first one, except that this time we fixed the SNR at 20 dB. Fig. 3 shows the resulting rms error of the first


Fig. 4. Two equal power uncorrelated emitters, located at 0° and 5°, impinging on the same array as in Fig. 2. The SNR is 20 dB.

Fig. 5. Same scenario as in Fig. 3, but the two sources are fully correlated.

source as a function of the number of snapshots. Again, note the improved performance of the ML estimator.

In the third experiment, the two emitters were located at 0° and 5° and the SNR was again 20 dB. Fig. 4 shows the substantially improved performance of the ML estimator. The somewhat larger deviation from the Cramer-Rao bound in this case is explained by the fact that the bound is known to be not tight in the threshold region.

In the fourth experiment, we simulated the coherent signals case. The scenario was as in the first experiment, except that this time the two sources were fully correlated. Fig. 5 shows the results as a function of the SNR. When compared to Fig. 3, about 10 percent degradation is observed. In contrast, the MUSIC algorithm failed completely in this case.

In the fifth experiment, we simulated a completely different scenario. The array consisted of seven isotropic sensors spaced half a wavelength apart (array beamwidth of 19°), and the number of emitters was four. The emitters were of equal power and uncorrelated. Their directions-of-arrival were −8°, −1°, 5°, and 15°. The SNR was 20 dB. Fig. 6 shows the resulting rms error of the emitter arriving from 5° as a function of the number of snapshots.

Fig. 6. Four equal power uncorrelated emitters, located at −8°, −1°, 5°, and 15°, impinging on a uniform linear array with seven sensors spaced half a wavelength apart. The array beamwidth is 19°. The SNR is 20 dB.

In the extensive set of simulations we have run, the number of iterations to convergence almost never exceeded 7, with the average being between 4 and 5. For example, typical iteration sequences in the first experiment are 8°, 11°, 5°, 16°, 1°, 20° for 15 dB SNR, and 9°, 13°, 3°, 18°, 0°, 20° for 20 dB SNR. To give another example, typical sequences in the third experiment are 3°, 2°, 4°, 0°, 6° for 100 snapshots, and 3°, 1°, 5°, 0° for 1000 snapshots.

VI. CONCLUDING REMARKS

We have presented a novel and efficient algorithm, referred to as AP, for computing the ML estimator of the locations of multiple sources in passive sensor arrays. The algorithm is equally applicable to the case of coherent signals and to the case of a single snapshot. The algorithm is iterative; the maximum of the likelihood function is computed by successive approximations. The convergence of the algorithm to the global maximum was demonstrated for a variety of scenarios. Evidently, the key to this global convergence is the initialization scheme.

The complexity involved in each iteration is modest. Moreover, the numerically and computationally troublesome eigendecomposition of the sample-covariance matrix is avoided. In spite of the excellent global convergence of the algorithm in the extensive set of simulations, no guarantee of global convergence can be given in general. This limitation is common to all the "deterministic hill climbing" techniques. The recently introduced "randomized hill climbing" technique referred to as "Simulated Annealing" [7] is more promising in this respect, although even for this scheme global convergence is not guaranteed [20].

ACKNOWLEDGMENT

The authors are grateful to B. Porat for his help in the computations of the Cramer-Rao lower bounds, and to the anonymous reviewers for their valuable comments which helped to improve the exposition.

REFERENCES

[1] G. Bienvenu and L. Kopp, "Adaptivity to background noise spatial coherence for high resolution passive methods," in Proc. ICASSP 80, 1980, pp. 307-310.
[2] J. Bohme, "Estimating the source parameters by maximum likelihood and nonlinear regression," in Proc. ICASSP 84, 1984, pp. 7.3.1-7.3.4.
[3] J. Capon, "High-resolution frequency-wavenumber spectrum analysis," Proc. IEEE, vol. 57, pp. 1408-1418, 1969.
[4] J. E. Evans, J. R. Johnson, and D. F. Sun, "Application of advanced signal processing techniques to angle of arrival estimation in ATC navigation and surveillance systems," Rep. 582, Lincoln Lab., M.I.T., Cambridge, MA, 1982.


[5] S. Haykin, "Radar array processing for angle of arrival estimation," in Array Signal Processing, S. Haykin, Ed. Englewood Cliffs, NJ: Prentice-Hall, 1984, pp. 194-292.
[6] D. H. Johnson, "The application of spectral estimation methods to bearing estimation problems," Proc. IEEE, vol. 70, pp. 1018-1028, 1982.
[7] S. Kirkpatrick, C. D. Gelatt, and M. Vecchi, "Optimization by simulated annealing," Science, vol. 220, pp. 671-680, 1983.
[8] R. Kumaresan and D. W. Tufts, "Estimating the angles of arrival of multiple plane waves," IEEE Trans. Aerosp. Electron. Syst., vol. AES-19, pp. 134-139, 1983.
[9] W. S. Ligget, "Passive sonar: Fitting models to multiple time series," in NATO ASI on Signal Processing, J. W. R. Griffiths et al., Eds. New York: Academic, 1973, pp. 327-345.
[10] N. L. Owsley, "Sonar array processing," in Array Signal Processing, S. Haykin, Ed. Englewood Cliffs, NJ: Prentice-Hall, 1984, pp. 115-193.
[11] S. S. Reddi, "Multiple source location: A digital approach," IEEE Trans. Aerosp. Electron. Syst., vol. AES-15, pp. 95-105, 1979.
[12] T.-J. Shan, M. Wax, and T. Kailath, "On spatial smoothing for direction-of-arrival estimation of coherent sources," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-33, pp. 806-811, 1985.
[13] R. O. Schmidt, "Multiple emitter location and signal parameter estimation," IEEE Trans. Antennas Propagat., vol. AP-34, pp. 276-280, 1986.
[14] F. C. Schweppe, "Sensor array data processing for multiple signal sources," IEEE Trans. Inform. Theory, vol. IT-14, pp. 294-305, 1968.
[15] G. Strang, Linear Algebra and Its Applications. New York: Academic, 1980.
[16] M. Wax, "Detection and estimation of superimposed signals," Ph.D. dissertation, Stanford Univ., Stanford, CA, 1985.
[17] M. Wax and T. Kailath, "Detection of signals by information theoretic criteria," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-32, pp. 387-392, 1985.

[18] M. Wax and I. Ziskind, "Detection of coherent and noncoherent signals by the MDL principle," submitted to IEEE Trans. Acoust., Speech, Signal Processing.
[19] M. Wax and I. Ziskind, "On unique localization of multiple sources in passive sensor arrays," submitted to IEEE Trans. Acoust., Speech, Signal Processing.
[20] I. Ziskind and M. Wax, "Maximum likelihood localization of diversely polarized sources by simulated annealing," submitted to IEEE Trans. Antennas Propagat.

Ilan Ziskind was born in Haifa, Israel, on January 7, 1937. He received the B.Sc. degree from the Technion, Israel Institute of Technology, Haifa, Israel, in 1962, the M.Sc. degree from the University of Michigan, Ann Arbor, in 1963, and the Ph.D. degree from Cornell University, Ithaca, NY, in 1974. From 1964 to 1965 he was with the Burroughs Corporation. In 1965 he joined RAFAEL. From 1974 to 1976 he was the Head of the Systems Analysis and Simulation Section. From 1977 to 1980 he was the Head of the Signal Processing Group. In 1980 he was a Visiting Scientist at Bell Northern Research Laboratories, Montreal, P.Q., Canada. Since 1981 he has been involved in research and development of adaptive systems, radar subsystems, and emitter localization techniques.

Mati Wax (S’81-M’85) for a photograph and biography, see p. 588 of the April 1988 issue of this TRANSACTIONS.