IEICE TRANS. FUNDAMENTALS, VOL.E83–A, NO.6 JUNE 2000
PAPER
A Multiple-Target Tracking Filter Using Data Association Based on a MAP Approach

Hong JEONG† and Jeong-Ho PARK†, Nonmembers
SUMMARY  Tracking many targets simultaneously with a search radar has been one of the major research areas in radar signal processing. The primary difficulty in this problem arises from the noise characteristics of the incoming data; hence it is crucial to obtain an accurate association between targets and noisy measurements in multi-target tracking. We introduce a new scheme for optimal data association, based on a MAP approach, and thereby derive an efficient energy function. Unlike previous approaches, the new constraints between targets and measurements can manage the cases of target missing and false alarm. Most current algorithms need heuristic adjustment of their parameters; instead, this paper suggests a mechanism that determines the parameters in an automated manner. Experimental results, compared against PDA and NNF, show that the proposed method reduces position errors on crossing trajectories by 32.8% on average relative to NNF.
key words: multiple-target tracking, data association, Kalman filter
1. Introduction
Multiple-target tracking (MTT) plays an important role in radar, especially in surveillance radar systems that must estimate positions and velocities of moving targets from noisy measurements. In this field, it is well known that the most probable errors are false alarms and missing targets. Other important errors are those due to a single measurement from multiple targets and multiple measurements from a single target. Unfortunately, there is no unified method that can deal with all these errors together.

Among the many MTT schemes, the three most important methods are the Joint Probabilistic Data Association (JPDA) [2], the Expectation Maximization (EM) approach [1], [6], [14], and the neural net approach [10], [15]. The JPDA is based upon a probabilistic model of targets and measurements and is an expansion of PDA [4]. Both methods are well described in [2], [3]. The underlying mechanism of this algorithm is the introduction of an association mechanism, called the association matrix, between the targets and the measurements. Within a tracking gate, the optimal estimate of the target state is obtained as the conditional mean of the state given the measurements. This quantity is actually a sum of all the state estimates from individual measurements, weighted with the corresponding association probabilities. This association is combined with the Kalman filter in a closed loop that forms two parts: data association and prediction. For each time frame, a Kalman filter predicts the target centers and forms a gate for each target center. Based on the measurements in the gates that surround the predicted target centers, the data association unit tries to associate measurements and targets in some optimal manner. Consequently, the updated measurements are used in the Kalman filters.

In the EM approach, Avitzour's method [1] calculates the target states by maximum likelihood (ML) estimation according to the EM algorithm [5]. The data, observed over a time interval, are used and processed in block form. Molnar [14] derived the whole system, consisting of prediction and association units, in a unified manner with the EM method. Defining the association matrix as missing data, the algorithm can estimate the association matrix for the current measurements. Also, the algorithm can determine the association in a time-recursive manner. Like the Kalman filter technique, this scheme computes the measurement and time updates in parallel.

Another approach to optimal association is the connectionist scheme, called neural networks, where the problem becomes energy minimization. In this approach, the constraints on association are naturally represented by the connection strengths between neurons, and the optimal association is automatically obtained when the network converges to an equilibrium state. The energy function is often realized by Hopfield networks [7], [8] and, starting from some suitable initial state, the equilibrium state is reached with the gradient direction method.

Our research is confined to the association matrix and the related energy function as a cost function for optimal constraints. By adopting constraints on target missing and false alarm errors, we derive a new energy function that is more general and natural.

Manuscript received July 24, 1999. Manuscript revised January 11, 2000.
† The authors are with the Department of E.E., POSTECH, Pohang, Kyungbuk, 790-784, Korea.
The Lagrange multiplier method is used to find an optimal association and the parameters. We derive a new scheme that computes the data association by a MAP estimation. Our algorithm does not need any information, such as the probability of detection and the clutter density, that is essential for PDA. Unlike the neural networks, it does not need any parameters like the balancing coefficients, yet maintains better performance.

Section 2 explains the general concept of the tracking filter. Section 3 defines the problem with the MAP. The optimal solution is derived in Sect. 4. Finally, the scheme is tested in Sect. 5.

[Fig. 1: The multiple-target tracking system.]

2. The Overall Structure of MTT
The overall scheme of our target tracking system is depicted in Fig. 1. It consists of three parts: acquisition, association, and prediction. The purpose of the acquisition part is to detect the targets when the system starts from the beginning. This part should take care of the cases when a target appears in or disappears from the field of view, or when intermittent measurements are received from a target. Once detected, the targets should be continuously tracked by the joint cooperation of the association and prediction parts. The prediction part uses a Kalman filter to provide the association part with the predicted positions of the targets along with their gate shapes. In a given time frame, the association part counts the measurements that lie in the gates and encodes this information into a validation matrix. Utilizing this information together with additional constraints, the association part decides which measurements correspond to which targets and represents the relationships by an association matrix. This information is supplied to the Kalman filter in the measurement update stage for the current time frame. This routine repeats for each time frame. The crucial part of this algorithm is the association part, which is the main concern of this paper. The concept is illustrated in Fig. 2, where the circles and the crosses denote, respectively, the gates and the measurements.

[Fig. 2: Snapshot views of tracking a target at times k − 1 and k.]

Let us denote the state and measurement of target t by x_t(k) and z_t(k), respectively. Assuming that the number of targets N is known, we define the state as the positions and velocities of the targets at time k: x(k) = (x_1(k), x_2(k), ..., x_N(k)). For a specific target t, these become x_t(k) = (x_t(k), ẋ_t(k), y_t(k), ẏ_t(k))^T and z_t(k) = (x_t(k), y_t(k))^T, where the vectors (x_t(k), y_t(k))^T and (ẋ_t(k), ẏ_t(k))^T represent, respectively, the target position and velocity on the xy plane. The state equation of the linearly moving target t can be represented by

x_t(k) = F_t(k−1) x_t(k−1) + G_t(k−1) w(k−1),   (1)

and the measurement from the target by

z_t(k) = H_t(k) x_t(k) + v(k),   (2)
where F_t(·), G_t(·), and H_t(·) are, respectively, the state transition, process noise coupling, and measurement matrices. Without loss of generality, we assume all of these to be constant: F_t(·) = F, G_t(·) = G, and H_t(·) = H. The noise processes w(·) and v(·) are mutually independent Gaussian with zero mean and covariances Q and R, respectively. The Kalman filter calculates the predicted target state

x_t(k|k−1) = F x_t(k−1|k−1),   (3)

the state prediction covariance

P_t(k|k−1) = F P_t(k−1|k−1) F^T + G Q G^T,   (4)

and the measurement prediction covariance

S_t(k) = H P_t(k|k−1) H^T + R,   (5)
for each target, and sends x_t(k|k−1) and S_t(k) to the association unit. Note that the measurement update part of the Kalman filter is

x_t(k|k) = x_t(k|k−1) + W_t(k){z_t(k) − H x_t(k|k−1)},   (6)

and

P_t(k|k) = [I − W_t(k)H] P_t(k|k−1),   (7)

where the Kalman gain W_t(k) is

W_t(k) = P_t(k|k−1) H^T S_t^{−1}(k).   (8)

[Fig. 3: Measurements and targets.]
[Fig. 4: The parameter and measurement spaces.]
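As a concrete illustration of the prediction and update recursions (3)-(8), here is a minimal NumPy sketch. This is illustrative only, not the authors' implementation (the paper's system was written in Java); the constant-velocity matrices and noise levels reuse the values given later in Sect. 5.

```python
import numpy as np

def kalman_predict(x_upd, P_upd, F, G, Q):
    """Time update, Eqs. (3)-(4)."""
    x_pred = F @ x_upd                              # (3)
    P_pred = F @ P_upd @ F.T + G @ Q @ G.T          # (4)
    return x_pred, P_pred

def kalman_update(x_pred, P_pred, z, H, R):
    """Measurement update, Eqs. (5)-(8)."""
    S = H @ P_pred @ H.T + R                        # (5): innovation covariance
    W = P_pred @ H.T @ np.linalg.inv(S)             # (8): Kalman gain
    x_upd = x_pred + W @ (z - H @ x_pred)           # (6)
    P_upd = (np.eye(len(x_pred)) - W @ H) @ P_pred  # (7)
    return x_upd, P_upd, S

# Constant-velocity model with T = 1 s (cf. Eqs. (32)-(35) in Sect. 5)
T = 1.0
F = np.array([[1, T, 0, 0], [0, 1, 0, 0], [0, 0, 1, T], [0, 0, 0, 1]])
G = np.array([[T**2 / 2, 0], [T, 0], [0, T**2 / 2], [0, T]])
H = np.array([[1.0, 0, 0, 0], [0, 0, 1, 0]])     # measure (x, y) position only
Q = 1.2106e-5 * np.eye(2)
R = 0.0225 * np.eye(2)

x = np.array([-4.0, 0.2, 1.0, -0.05])            # initial state from Sect. 5
P = np.eye(4)
x_pred, P_pred = kalman_predict(x, P, F, G, Q)
x_new, P_new, S = kalman_update(x_pred, P_pred, np.array([-3.8, 0.95]), H, R)
```

Note that the update (7) can only shrink the predicted covariance, which is why the gate sizes derived from S_t(k) stay bounded over time.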
So far, the mechanism of the prediction part has been described. Now we can describe which measurements are contained in which gates by a matrix, called the validation matrix. Consider Fig. 3. The two gates are denoted by the vectors of their centers, x_1(k|k−1) and x_2(k|k−1), which have been determined in the previous time frame. There are four measurements z_1(k), z_2(k), z_3(k), and z_4(k). The gates contain, respectively, {z_1(k), z_2(k)} and {z_3(k), z_4(k)}, with z_2(k) being contained in both gates. For M(k) measurements and N targets, the validation matrix at time k is

ω_k = {ω_jt(k) | j ∈ [1, M(k)], t ∈ [1, N]},   (9)

where ω_jt(k) = 1 if the measurement z_j(k) is in the gate of target t, and ω_jt(k) = 0 otherwise. For example, the validation matrix for Fig. 3 is

[1 0
 1 1
 0 1
 0 1].

Next, consider the association part dealing with M(k) measurements and N targets. The relationships between the targets and measurements are conveniently denoted by the association matrix Ω = {ω_jt | j ∈ [1, M(k)], t ∈ [1, N]}. Here, ω_jt ∈ [0, 1] denotes the probability of the association between measurement j and target t. Also assume that the measurements are y_j(k) for j ∈ [1, M(k)] and the gate centers are g_t(k) for t ∈ [1, N]. Then, the state x_t(k|k−1) predicted by the filter is utilized as an input to the association part to produce the center position of gate t:

g_t(k) = H_t(k) x_t(k|k−1), t ∈ [1, N].   (10)
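The gate membership of Fig. 3 can be encoded directly as the binary matrix of Eq. (9). A tiny illustrative sketch (not from the paper):

```python
import numpy as np

# Gate membership from Fig. 3: gate 1 contains {z1, z2}; gate 2 contains
# {z2, z3, z4}, i.e. z2 lies in both gates.
in_gate = {1: {1, 2}, 2: {2, 3, 4}}  # target t -> indices j of gated measurements
M, N = 4, 2

omega = np.zeros((M, N), dtype=int)
for t, js in in_gate.items():
    for j in js:
        omega[j - 1, t - 1] = 1      # Eq. (9): omega_jt = 1 iff z_j in gate t
```

Rows index measurements and columns index targets, matching the j, t convention used in the rest of the paper.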
Next, the measurement prediction covariance matrix S_t(k) is used to produce the validation matrix Ω⁰ in the following way. The distance between a measurement j and the center of gate t is given by the Mahalanobis distance:

r²_jt(k) = [y_j(k) − g_t(k)] S_t^{−1}(k) [y_j(k) − g_t(k)]^T.   (11)

If r_jt(k) is smaller than the radius g of the gate, that is, if r²_jt(k) ≤ γ = g², then the measurement is considered to belong to the gate. This relationship can be conveniently described by the validation matrix ω⁰ = {ω⁰_jt | ω⁰_jt ∈ [0, 1], j ∈ [1, M(k)], t ∈ [1, N]}. Since the noise is Gaussian, the distance from the gate center is directly related to the probability. Therefore, it is natural to define the elements of the matrix as

ω⁰_jt ≜ exp(−r²_jt/2) / |2πS_t(k)|^{1/2},  if r²_jt ≤ γ,
ω⁰_jt ≜ 0,  otherwise.   (12)

Then, the association part generates the measured position of target t by

z_t(k) = y_j(k), t ∈ [1, N],   (13)

where j = arg max_j ω_jt. This is the quantity supplied to the prediction filter for the measurement update and for further processing.

3. MAP Estimates for Data Association
This section describes the internal structure of the association unit in detail. One can assume that the probability space consists of the parameter Ω and the observation Ω⁰, as illustrated in Fig. 4. The observation Ω⁰ is the modified validation matrix given in (12). The parameter Θ is fixed for the time being and will later be released for further generalization. The goal is to find the association matrix ω, given the modified validation matrix ω⁰. By convention, the upper and lower cases of the variables denote, respectively, the random variables and their realizations. A MAP estimate ω* is given by

ω* = arg max_ω log p(ω|ω⁰).   (14)

In this description, the posterior probability can be derived by the Bayes rule:

p(ω|ω⁰) = p(ω⁰|ω) p(ω) / p(ω⁰).   (15)

We assume that the conditional p(ω⁰|ω) and the prior p(ω) are both Gibbsian [13]:

p(ω⁰|ω) ≜ (1/Z₁) exp{−E₁(ω⁰|ω)},
p(ω) ≜ (1/Z₂) exp{−E₂(ω)}.   (16)

Z₁ and Z₂ are partition functions given by
Z₁ = Σ_{ω⁰} exp{−E₁(ω⁰|ω)},
Z₂ = Σ_ω exp{−E₂(ω)}.   (17)
From the above equations, we obtain

ω* = arg min_ω {E₁(ω⁰|ω) + E₂(ω)}.   (18)

For convenience, the partition functions Z₁ and Z₂ are assumed to be constant. From now on, we must specify the energy functions E₁(ω⁰|ω) and E₂(ω) in (18). Let us first look into the relationship between Ω and Ω⁰. In a parameter-observation model, one can assume that Ω − Ω⁰ is a Gaussian noise process and therefore

E₁(ω⁰|ω) ≜ (β/2) Σ_{t=1}^{N} Σ_{j=1}^{M(k)} (ω_jt − ω⁰_jt)²,   (19)

where β is a positive constant.

In order to model the prior of Ω, we require all the constraints involved in this quantity. First, a target must be associated with at most one measurement; if no measurement is assigned to a target, a missed detection has occurred. This condition can be represented mathematically by requiring that the sum of each column be less than or equal to 1. Conversely, a false alarm is treated the other way around: the sum of each row must be less than or equal to 1, and if the sum is zero, the measurement is a false alarm. Combining these facts, we obtain

Σ_{j=1}^{M(k)} ω_jt ≤ 1, for t ∈ [1, N],
Σ_{t=1}^{N} ω_jt ≤ 1, for j ∈ [1, M(k)].   (20)

These constraints differ from those in previous work [9], which deals with equality constraints. The final constraint is a restriction of the range of ω_jt:

0 ≤ ω_jt ≤ 1.   (21)

The constraints (20) and (21) are integrated into the energy function for the prior of Ω:

E₂(ω) = Σ_{t=1}^{N} U(Σ_{j=1}^{M(k)} ω_jt − 1) + Σ_{j=1}^{M(k)} U(Σ_{t=1}^{N} ω_jt − 1) + Σ_{j=1}^{M(k)} Σ_{t=1}^{N} U(−ω_jt),   (22)

where the barrier function U(x) is defined by

U(x) ≜ 0, if x ≤ 0; ∞, otherwise.   (23)

4. Finding the Optimal Solution

The energy functions (19) and (22) can be integrated, by means of the three Lagrange multipliers λ, ε, and µ [12], into the Lagrangian L(ω):

L(ω) = (β/2) Σ_{t=1}^{N} Σ_{j=1}^{M(k)} (ω_jt − ω⁰_jt)² + Σ_{t=1}^{N} λ_t (Σ_{j=1}^{M(k)} ω_jt − 1) + Σ_{j=1}^{M(k)} ε_j (Σ_{t=1}^{N} ω_jt − 1) − Σ_{j=1}^{M(k)} Σ_{t=1}^{N} µ_jt ω_jt.   (24)

Here, β > 0, λ_t ≥ 0, ε_j ≥ 0, and µ_jt ≥ 0.

The next step is to determine appropriate Lagrange multipliers λ, ε, µ and the minimizer ω* from (24). The Lagrangian can be represented by the vector equation

L(ω) = E₁(ω⁰|ω) + <G(ω), l>,   (25)

where G(ω) is a constraint vector including (20) and (21), and l is a Lagrange multiplier vector with the same dimension, n + m + n × m, as G(ω). The symbol <·, ·> denotes an inner product. According to the generalized Kuhn-Tucker theorem [12], there is an l* ∈ l, l* ≥ θ, such that the Lagrangian is stationary at ω* and <G(ω*), l*> = 0. At the stationary point the gradient becomes zero, that is,

∂L(ω*)/∂ω_jt = 0, for given l* ∈ l,   (26)

where ω*_jt is the (j, t) element of the stationary point ω*. The solution of this equation turns out to be

ω*_jt = (µ*_jt − λ*_t − ε*_j + βω⁰_jt) / β.   (27)

The vector equation <G(ω*), l*> = 0 is equivalent to

Σ_{t=1}^{N} λ*_t (Σ_{j=1}^{M(k)} ω*_jt − 1) + Σ_{j=1}^{M(k)} ε*_j (Σ_{t=1}^{N} ω*_jt − 1) + Σ_{j=1}^{M(k)} Σ_{t=1}^{N} (−µ*_jt ω*_jt) = 0.   (28)

We can observe that all terms in this equation are nonpositive. Hence, in order that the summation be zero, each term must be zero. Therefore, this equation can be separated into the three equations

λ*_t (Σ_{j=1}^{M(k)} ω*_jt − 1) = 0, for t ∈ [1, N],
ε*_j (Σ_{t=1}^{N} ω*_jt − 1) = 0, for j ∈ [1, M(k)],
µ*_jt ω*_jt = 0, for t ∈ [1, N], j ∈ [1, M(k)].   (29)

From the requirements of the generalized Kuhn-Tucker theorem, we get

λ*_t ≥ 0, ε*_j ≥ 0, µ*_jt ≥ 0.   (30)
Substituting (27) into (29) and combining these with (30), we get the simultaneous equations

λ*_t = max{0, [−β + Σ_{j=1}^{M(k)} (µ*_jt − ε*_j + βω⁰_jt)] / M(k)},
ε*_j = max{0, [−β + Σ_{t=1}^{N} (µ*_jt − λ*_t + βω⁰_jt)] / N},   (31)
µ*_jt = max(0, λ*_t + ε*_j − βω⁰_jt).

To solve these simultaneous equations, we use an iterative scheme. In particular, we use the Gauss-Seidel method to increase the stability of convergence. The detailed procedure is given in Algorithm 1.

Algorithm 1 (Iterative Algorithm): Given ω⁰ and β, compute λ, µ, and ε for each 1 ≤ t ≤ N, 1 ≤ j ≤ M(k), and l ≥ 0.

1. Set the initial values of all Lagrange multipliers to zero.
2. Calculate the Lagrange multipliers sequentially:
   µ_jt^(l+1) = max(0, λ_t^(l) + ε_j^(l) − βω⁰_jt),
   λ_t^(l+1) = max{0, [−β + Σ_{j=1}^{M(k)} (µ_jt^(l+1) − ε_j^(l) + βω⁰_jt)] / M(k)},
   ε_j^(l+1) = max{0, [−β + Σ_{t=1}^{N} (µ_jt^(l+1) − λ_t^(l+1) + βω⁰_jt)] / N}.
3. If the norm of the change of the Lagrange multipliers is larger than a threshold, let l = l + 1 and return to step 2. Otherwise, go to step 4.
4. Calculate the minimum point from the Lagrange multipliers:
   ω_jt = (µ_jt − λ_t − ε_j + βω⁰_jt) / β.
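Algorithm 1, together with the gating step of Eqs. (11)-(12) that produces ω⁰, can be sketched in NumPy as follows. This is an illustrative sketch, not the authors' Java implementation; the gate centers, measurements, and covariance S_t = 0.25·I at the bottom are hypothetical values chosen so that ω⁰ stays within [0, 1].

```python
import numpy as np

def modified_validation_matrix(ys, gs, Ss, gamma=9.2):
    """Modified validation matrix omega0 of Eqs. (11)-(12).

    ys: (M, 2) measurements y_j(k); gs: (N, 2) gate centers g_t(k);
    Ss: (N, 2, 2) measurement prediction covariances S_t(k).
    """
    M, N = len(ys), len(gs)
    omega0 = np.zeros((M, N))
    for t in range(N):
        S_inv = np.linalg.inv(Ss[t])
        norm = np.sqrt(np.linalg.det(2.0 * np.pi * Ss[t]))  # |2*pi*S_t|^(1/2)
        for j in range(M):
            d = ys[j] - gs[t]
            r2 = d @ S_inv @ d              # squared Mahalanobis distance (11)
            if r2 <= gamma:
                omega0[j, t] = np.exp(-r2 / 2.0) / norm     # Eq. (12)
    return omega0

def map_association(omega0, beta=1.0, tol=1e-6, max_iter=1000):
    """Gauss-Seidel iteration of Eq. (31) (Algorithm 1); rows j, columns t."""
    M, N = omega0.shape
    lam = np.zeros(N)      # lambda_t, one per target (column constraints)
    eps = np.zeros(M)      # epsilon_j, one per measurement (row constraints)
    mu = np.zeros((M, N))
    for _ in range(max_iter):
        lam_old, eps_old, mu_old = lam.copy(), eps.copy(), mu.copy()
        # mu uses the previous lambda, epsilon; lambda uses the new mu;
        # epsilon uses the new mu and new lambda (Gauss-Seidel ordering).
        mu = np.maximum(0.0, lam[None, :] + eps[:, None] - beta * omega0)
        lam = np.maximum(0.0, (-beta + (mu - eps[:, None] + beta * omega0)
                               .sum(axis=0)) / M)
        eps = np.maximum(0.0, (-beta + (mu - lam[None, :] + beta * omega0)
                               .sum(axis=1)) / N)
        change = max(np.abs(lam - lam_old).max(),
                     np.abs(eps - eps_old).max(),
                     np.abs(mu - mu_old).max())
        if change < tol:
            break
    # Step 4: association matrix from Eq. (27)
    return (mu - lam[None, :] - eps[:, None] + beta * omega0) / beta

# Hypothetical scene: two gates, four measurements; gate 1 receives two
# strong candidates (y1, y4), and y3 is clutter lying outside every gate.
Ss = np.stack([0.25 * np.eye(2)] * 2)
gs = np.array([[0.0, 0.0], [2.0, 0.0]])
ys = np.array([[0.1, 0.0], [1.9, 0.2], [9.0, 9.0], [0.2, 0.0]])
omega = map_association(modified_validation_matrix(ys, gs, Ss))
```

In this example the first column of ω⁰ sums to more than 1, so λ_1 becomes active and the converged column sum is driven back to exactly 1, while the clutter row is driven to zero, illustrating how (20) handles false alarms without any detection-probability or clutter-density parameter.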
One can assign any value to β considering the numerical range of a computer. Here we set β = 1 without loss of generality. If M̄ is the average of M(k), the computational complexity of a single loop is O(M̄N), and the complete algorithm requires O(k̄M̄N), where k̄ is the average number of iterations. Therefore, even if the numbers of tracks and measurements increase, the computational requirement does not grow exponentially.

5. Experimental Results

Algorithm 1 has been tested on the dynamical model described in (1) and (2). The Nearest Neighbor Filter (NNF) [2], [11] and the Probabilistic Data Association (PDA) [2], [4] techniques were also applied to the same data for comparison with the proposed method. For simplicity, the state vector x_t(k) is represented in 2D Cartesian coordinates. The state transition and process noise coupling matrices in (1) are, respectively,

F = [1 T 0 0
     0 1 0 0
     0 0 1 T
     0 0 0 1]   (32)

and

G = [T²/2   0
     T      0
     0    T²/2
     0      T],   (33)

where the sampling interval T is 1 s without loss of generality. As a noise model, we assumed that there is no process noise. However, as a filter parameter, we use the covariance matrix of the process noise,

Q = [1.2106 × 10⁻⁵       0.0
     0.0        1.2106 × 10⁻⁵],   (34)

for all k, in units of km²/s². Similarly, the measurement noise covariance matrix is defined as

R = [0.0225    0.0
     0.0    0.0225]   (35)
for all k, in units of km². We assume that the probability of validation P_g is 0.99; therefore, the threshold used for the validation gate is γ = g² = 9.2 [2]. As a termination criterion, the threshold on the norm of the change of the Lagrange multipliers is 10⁻⁶. As a typical scenario, we used two cases: a single linearly moving target and a pair of crossing targets. The initial positions are (−4.0 km, 1.0 km) and (−4.0 km, −1.0 km), and the initial velocities are (0.20 km/s, −0.05 km/s) and (0.20 km/s, 0.05 km/s), respectively. Incidentally, the tracking system is implemented in Java and executed on both Solaris 2.6 and a PC.

5.1 Single Target Tracking

To begin with, we simulated single-target tracking using the first target given above. As a criterion, we defined the position error as the distance between the estimated and the true target position in the xy plane. The Monte Carlo simulation has been tried for N = 50 runs. Figure 5 depicts the RMS position error, which is the RMS value of the distances between the actual and the estimated tracks. The detection probabilities P_d are 0.9 and 0.8 for the two cases in the figure. For each P_d, four clutter densities, 0.01, 0.1, 0.2, and 0.3 km⁻², have been tried.

Determining initial values is very important to obtain proper statistics in the Monte Carlo method. The initial value of the state estimation covariance matrix is

P(0|0) = [σ²      σ²/T     0       0
          σ²/T    2σ²/T²   0       0
          0       0        σ²      σ²/T
          0       0        σ²/T    2σ²/T²],   (36)

where σ² = 0.0225 is the variance of the measurement noise. The initial state is a random sample of the Gaussian distribution

x(0|0) ∼ N{x(0), P(0|0)},   (37)

where x(0) is the initial position and velocity.

[Fig. 5: The RMS position errors of single target tracking: (a) P_d = 0.8 and (b) P_d = 0.9.]

According to Fig. 5, the RMS position errors of the proposed method were better than those of the nearest neighbor filter, but mostly worse than those of PDA. When the clutter density is 0.01, the proposed method is the worst, due to the variation of the random measurements.

5.2 Crossing Target Tracking

For a more complex scenario, we used a pair of targets crossing each other at (0, 0). Figure 6 shows one of the many samples, with detection probability P_d = 0.8 and clutter density C = 0.3 km⁻². Due to space limitations, only the trajectories of the proposed method are shown here. The figure contains three sets of trajectories: actual target trajectories, measurements of target positions, and the traces calculated by the proposed method. Extensive experimentation showed that the proposed method generally separates two crossing targets successfully in most clutter cases.

[Fig. 6: Tracking two targets with P_d = 0.8 and C = 0.3.]

To obtain a reliable result, we repeated the same Monte Carlo simulation with different seeds. The RMS position error is depicted in Fig. 7. For the case P_d = 0.9 and C = 0.01 km⁻², the PDA method fails to track the targets only once, but the RMS error is large. In general, a large error dominates the degradation of tracking performance.

[Fig. 7: The RMS position errors of the crossing targets: (a) P_d = 0.8 and (b) P_d = 0.9.]

Summarized in Table 1 is the amount of improvement in position errors. In this case, PDA is generally the best, but the proposed method performs comparably.

Table 1: The improvement of RMS position errors for the crossing targets.

  P_d    C (/km²)    RMS Position Error Improvement (%)
                     PDA      Proposed
  0.8    0.01        37.3     21.0
  0.8    0.1         75.4     51.6
  0.8    0.2         48.3     20.2
  0.8    0.3         45.9     15.9
  0.9    0.01        11.6     61.2
  0.9    0.1         90.8     64.5
  0.9    0.2         72.3      9.9
  0.9    0.3         68.9     17.9
  Total average      56.3     32.8

Figure 8 shows the average number of iterations k̄ over the 50 runs with P_d = 0.9. One can observe that k̄ lies between 18 and 27. As the clutter density increases, the number of iterations increases only slightly. The number of false alarms in the tracking gate increases with the clutter density C; the false data slow down the convergence, but the iteration counts remain reasonable for an actual implementation.

One of the advantages of our method is that, like NNF, it does not need to know the detection probability P_d and the clutter density C. On the contrary, these prior-knowledge parameters are mandatory for PDA and are the major cause of its better performance. Generally, the proposed method performs better than NNF, reducing the RMS position error by 32.8% on average.
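The RMS position-error criterion used throughout this section can be computed as below. This is an illustrative sketch; the array shapes are an assumption about how the Monte Carlo tracks might be stored, and the constant-offset example is hypothetical.

```python
import numpy as np

def rms_position_error(true_pos, est_pos):
    """RMS of the distances between true and estimated xy positions.

    Both arrays have shape (runs, steps, 2): Monte Carlo runs of tracks,
    each a sequence of (x, y) positions in km.
    """
    d2 = np.sum((true_pos - est_pos) ** 2, axis=-1)  # squared distances
    return float(np.sqrt(d2.mean()))

# Hypothetical check: a constant (0.3, 0.4) km offset gives a 0.5 km error.
true_pos = np.zeros((50, 100, 2))
est_pos = true_pos + np.array([0.3, 0.4])
err = rms_position_error(true_pos, est_pos)  # ≈ 0.5
```

Averaging the squared distances before taking the square root is what makes a single large excursion dominate the score, which is why the one PDA tracking failure noted above inflates its RMS error so strongly.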
[Fig. 8: The average number of iterations k̄ for P_d = 0.9.]

6. Conclusion
We have derived a new scheme that computes the data association by a MAP estimation method and uses a new energy function. Unlike PDA, this algorithm does not need any additional information such as the probability of detection and the clutter density. Also, our algorithm does not need any parameters like the balance control coefficients used in the neural network approach. As a result, in the proposed scheme, there is no need for trial and error to select values for the required coefficients. These properties are important for adapting to unknown environments and for good performance and stability.

Acknowledgments

This work has been supported by grants from MARC and ADD during 1999-2000.
References
[1] D. Avitzour, "A maximum likelihood approach to data association," IEEE Trans. Aerosp. Electron. Syst., vol.28, no.2, pp.560-565, April 1992.
[2] Y. Bar-Shalom and T.E. Fortmann, Tracking and Data Association, Academic Press, 1988.
[3] Y. Bar-Shalom and X.-R. Li, Estimation and Tracking: Principles, Techniques, and Software, Artech House, 1993.
[4] Y. Bar-Shalom and E. Tse, "Tracking in a cluttered environment with probabilistic data association," Automatica, vol.11, pp.451-460, Sept. 1975.
[5] A.P. Dempster, N.M. Laird, and D.B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," J. Roy. Statist. Soc. Ser. B, vol.39, no.1, pp.1-38, 1977.
[6] H. Gauvrit, J.P. Le Cadre, and C. Jauffret, "A formulation of multitarget tracking as an incomplete data problem," IEEE Trans. Aerosp. Electron. Syst., vol.33, no.4, pp.1242-1257, Oct. 1997.
[7] J.J. Hopfield, "Neural networks and physical systems with emergent collective computational abilities," Proc. National Academy of Sciences, pp.2554-2558, 1982.
[8] J.J. Hopfield and D.W. Tank, "Neural computation of decisions in optimization problems," Biological Cybernetics, vol.52, pp.141-152, 1985.
[9] Y.W. Lee, "An optimal adaptive data association scheme for multiple target tracking in radar," PhD Thesis, POSTECH, 1998.
[10] H. Leung, "Neural network data association with application to multiple-target tracking," Opt. Eng., vol.35, no.3, pp.693-700, March 1996.
[11] X.R. Li and Y. Bar-Shalom, "Tracking in clutter with nearest neighbor filters: Analysis and performance," IEEE Trans. Aerosp. Electron. Syst., vol.32, no.3, pp.995-1010, July 1996.
[12] D.G. Luenberger, Optimization by Vector Space Methods, John Wiley & Sons, 1969.
[13] V.A. Malyshev and R.A. Minlos, Gibbs Random Fields, Kluwer Academic Publishers, 1991.
[14] K.J. Molnar and J.W. Modestino, "Application of the EM algorithm for the multitarget/multisensor tracking problem," IEEE Trans. Signal Processing, vol.46, no.1, pp.115-129, Jan. 1998.
[15] D. Sengupta and R.A. Iltis, "Neural solution to the multitarget tracking data association problem," IEEE Trans. Aerosp. Electron. Syst., vol.25, pp.96-108, Jan. 1989.
Hong Jeong received the BS degree from the EE Dept. at Seoul National University in 1977. In 1979, he received the MS degree from the EE Dept. at KAIST. During 1984-1988, he received the SM, EE, and PhD degrees from the EECS Dept. at MIT. During 1979-1982, he taught at Kyungpook National University. Since 1988, he has worked at POSTECH, where he is now an associate professor. During 1996-1997, he worked at Bell Labs, Murray Hill. His major research area is digital signal processing for image, speech, and radar signals.

Jeong-Ho Park received the BS degree from the EE Dept. at Yonsei University in 1988, and the MS degree from the EE Dept. at POSTECH in 1990. During 1988-1996, he worked at LG Precision Co., developing advanced radar systems, where he is now a senior engineer. Since 1996, he has been working towards the PhD degree at POSTECH. His current research interests include radar signal processing.