Multiple Target Tracking With Asynchronous Bearings-Only Measurements

Thomas Hanselmann

Mark Morelande

Dept. of Electrical and Electronic Eng. The University of Melbourne Parkville, VIC 3010 Email: [email protected]

Dept. of Electrical and Electronic Eng. The University of Melbourne Parkville, VIC 3010 Email: [email protected]

Abstract—An algorithm for detection and tracking of multiple targets using bearing measurements from several sensors is developed. The algorithm is an implementation of a multiple hypothesis tracker with pruning of unlikely hypotheses. Tracking conditional on each hypothesis can be performed using any suitable filtering approximation; in this paper a range-parameterized unscented Kalman filter is used. Each hypothesis describes a track collection with a varying number of targets. Final track estimates are obtained by weighted clustering according to hypothesis probabilities and clustered track states. Simulation experiments include arbitrary setups of multiple targets and multiple moving receiver platforms (sensors). The main result is the asynchronous modeling of measurement arrivals, which allows effective and efficient processing in a Bayesian MHT framework.

Index Terms—Asynchronous Bearings-Only Tracker, Multiple Hypothesis Tracker, MHT, RP-UKF.

I. INTRODUCTION

Bearings-only tracking has attracted considerable interest over a long time period. The goal is to obtain a complete state estimate from bearing measurements alone. This is often referred to as target motion analysis (TMA) or target state estimation. There are many scenarios, from simple linear target models to maneuvering targets and stationary or moving sensor platform(s). All these scenarios share a strong dependence on the geometric setup and highly non-linear observation equations in Cartesian coordinates. Bias and covariance corrections based on coordinate transformations from polar to Cartesian coordinates have been used successfully [1]. Modified polar coordinates may also be used for better tracking, but the state and observation equations become more complicated than in Cartesian coordinates [2]. More closely related to the work presented here is [3], which also applies a multi-hypothesis framework based on Cartesian coordinates and the Kalman filter to the problem of bearings-only target motion analysis. While a similar range parametrization is used, only a single sensor is involved and the emphasis is on moving the sensor platform to achieve a reliable range estimate as quickly as possible, whereas this paper examines the problem of detecting and tracking multiple targets from asynchronous bearing measurements received at a set of spatially distributed sensor platforms. Asynchronous measurements arise because a target can be observed only when the rotating target beam is aligned with the sensor. Furthermore, [3] assumes a Gaussian measurement probability density function (pdf) and uses the "information equivalence" of measurements and innovations, as described in [4], whereas in this paper a non-linear observation equation and filter approximations are used to approximate the true non-linear measurement pdf.

The scenario considered in this paper consists of a set of cooperative sensor platforms that pass measurements asynchronously to a centralized fusion center. The sensors are passive: they do not emit a radar beam but only observe the hostile beams of enemy targets. Therefore, the sensors can obtain only bearing measurements and no range information. The sensor platforms have accurate position information that is also available to the fusion center.

The foundation of the tracking and detection problem is recursive state estimation. This can be done optimally, in the mean square error sense, by computing the posterior expectation of the random variable constructed by concatenating the states of the individual targets. Computation of the posterior expectation requires the posterior distribution, which cannot be found exactly for this problem. As an approximation, a range-parameterized unscented Kalman filter (RP-UKF) has been used, which is an extension of the RP-EKF [5] with the EKF replaced by the more accurate UKF [6]. A similar parametrization based on Gaussian mixtures has been done in [7]. In [8], variants of the KF, in particular the square-root UKF for bearings-only tracking in Cartesian coordinates with a linear state and non-linear measurement equation, were investigated and found to be more robust with only little computational overhead. Range parametrization involves hypothesizing a number of different ranges for the initial measurement and then propagating state estimates conditional on each initial range. The posterior distribution approximation is a Gaussian mixture, although it soon becomes obvious which of the initial ranges are incorrect and can be discarded. This eventually results in the posterior distribution being approximated by a single Gaussian.

Detection of target tracks is performed by enumerating hypotheses regarding the origin of the observed measurements. This is the same approach as the optimal multiple hypothesis tracker (MHT) [9]. The procedure requires the posterior distribution of the target states conditional on each hypothesis, obtained by the RP-UKF, and computation, or approximation, of the posterior probability of each hypothesis. It is assumed here that only measurements from targets are recorded, although the procedure can be extended to the case where clutter is also observed. Since the number of hypotheses increases exponentially as measurements are acquired, pruning of unlikely hypotheses is necessary. A major advantage of the current modeling is that the asynchronous update of measurements leads to an efficient implementation: the number of combinations needed to assign measurements to track collections is small compared to scan-based data association, where many measurements of targets are made at a single time instance.

The paper is organized as follows. Section II describes the scenario. Section III explains the range parametrization of the UKF. The detection/tracking algorithm with the multiple-hypothesis framework that assigns measurements to hypotheses is summarized in section IV. Section V explains two methods of state extraction from the set of hypotheses, which can then be used to form output statistics for performance evaluation. Experiments and results are given in section VI, followed by the conclusions in section VII.

II. SITUATION MODELING



Modeling has to account for the irregular measurement generation times t_1, t_2, \ldots at which a rotating target beam hits any of the sensors. Let the number of targets be r and let x_{i,k} = [x_{i,k}, \dot{x}_{i,k}, y_{i,k}, \dot{y}_{i,k}]^T be the i-th target state at time t_k, where (x_{i,k}, y_{i,k}) is the target position in Cartesian coordinates and the dot notation indicates differentiation with respect to time. The individual target states evolve independently according to

x_{i,k} \mid x_{i,k-1} \sim \mathcal{N}(F_k x_{i,k-1}, Q_k),    (1)

where

F_k = I_2 \otimes \begin{bmatrix} 1 & T_k \\ 0 & 1 \end{bmatrix},    (2)

Q_k = I_2 \otimes \begin{bmatrix} T_k^3/3 & T_k^2/2 \\ T_k^2/2 & T_k \end{bmatrix},    (3)

with T_k = t_k - t_{k-1}, I_2 the 2 x 2 identity matrix and \otimes denoting the Kronecker product. Let x_k = [x_{1,k}^T, \ldots, x_{r,k}^T]^T denote the collection of target states. Assume the presence of m sensing platforms. At time t_k the sensor s_k \in \{1, \ldots, m\} produces bearing measurements of d_k targets. These are collected into the vector \phi_k = [\phi_{k,1}, \ldots, \phi_{k,d_k}]^T. The association function \theta : \{1, \ldots, d_k\} \to \{1, \ldots, r\} associates measurements with targets, with \theta(j) = i indicating that the j-th measurement is due to the i-th target. Then, for j = 1, \ldots, d_k,

\phi_{k,j} \mid x_k \sim \mathcal{N}(h_{s_k}(x_{\theta(j),k}, t_k), \sigma^2),    (4)

where, for a = 1, \ldots, m,

h_a(x_{i,k}, t) = \arctan\left( \frac{y_{i,k} - \zeta_a(t)}{x_{i,k} - \xi_a(t)} \right),    (5)

with (\xi_a(t), \zeta_a(t)) the position of sensor a at time t.

The aim is to detect and track the r targets; the algorithm to achieve this is described in section IV and is developed in two steps. First, the simpler problem of tracking a single target with no prior information is considered. The approach developed for this problem then forms the basis of a procedure for detection and tracking of an unknown number of targets.
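To make the state and measurement models (1)-(5) concrete, the following Python sketch builds F_k and Q_k for an arbitrary inter-measurement interval T_k and evaluates the bearing of a target from a sensor. The function names, numerical values and the process-noise intensity q are illustrative assumptions, not taken from the paper, and atan2 is used in place of arctan to resolve the quadrant.

```python
import numpy as np

def transition_matrices(T_k, q=1.0):
    """F_k and Q_k of (2)-(3) for the state [x, xdot, y, ydot]; q is an assumed
    process-noise intensity (the paper states Q_k without an explicit scaling)."""
    F_blk = np.array([[1.0, T_k],
                      [0.0, 1.0]])
    Q_blk = q * np.array([[T_k**3 / 3.0, T_k**2 / 2.0],
                          [T_k**2 / 2.0, T_k]])
    return np.kron(np.eye(2), F_blk), np.kron(np.eye(2), Q_blk)

def bearing(x, sensor_pos):
    """Bearing h_a of (5): angle from the sensor at (xi, zeta) to the target position."""
    xi, zeta = sensor_pos
    return np.arctan2(x[2] - zeta, x[0] - xi)

# Example: propagate one target over an asynchronous interval and generate a noisy bearing.
rng = np.random.default_rng(0)
x = np.array([50.0, 1.0, 80.0, -0.5])                 # [x, xdot, y, ydot]
F, Q = transition_matrices(T_k=0.7)                   # T_k = t_k - t_{k-1} is irregular
x = F @ x + rng.multivariate_normal(np.zeros(4), Q)
phi = bearing(x, (0.0, 0.0)) + rng.normal(0.0, np.deg2rad(1.0))
```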

III. RANGE-PARAMETERIZED UNSCENTED KALMAN FILTER (RP-UKF)

First, d potential ranges r_1, \ldots, r_d are constructed along the bearing line (collinear ranges), each associated with a one-sigma ellipsoid for the bearing measurement taken from the origin, as illustrated in Figure 1.

Fig. 1. Generation of hypothesized measurements.

Let \rho_k = [x_k, y_k]^T denote the position elements of the target state at time t_k and \dot{\rho}_k = [\dot{x}_k, \dot{y}_k]^T the velocity elements. Given the first measurement at time t_1, the posterior density is

p(x_1 \mid \phi_1) = \frac{1}{d} \sum_{j=1}^{d} g(\dot{\rho}_1) \, \mathcal{N}\!\left(\rho_1; r_j \begin{bmatrix} \cos\phi_1 \\ \sin\phi_1 \end{bmatrix}, R_j\right),    (6)

where g is a prior density for the target velocity, conveniently selected to be a Gaussian density, so that (6) is a Gaussian mixture. It will now be shown how this Gaussian mixture approximation to the posterior density is propagated as measurements are acquired. Let H_j be the hypothesis that the j-th hypothesized range r_j and covariance matrix R_j belong to the target with state x_1. Then

p(x_1 \mid H_j, \phi_1) = g(\dot{\rho}_1) \, \mathcal{N}\!\left(\rho_1; r_j \begin{bmatrix} \cos\phi_1 \\ \sin\phi_1 \end{bmatrix}, R_j\right),    (7)

p(x_1 \mid \phi_1) = \sum_{j=1}^{d} w_1^j \, p(x_1 \mid H_j, \phi_1)    (8)

= \sum_{j=1}^{d} w_1^j \, g(\dot{\rho}_1) \, \mathcal{N}(\rho_1; z_{1,j}, R_{1,j}),    (9)

with z_{1,j} = r_j [\cos\phi_1, \sin\phi_1]^T and R_{1,j} interpreted as the hypothesized state and covariance for the j-th range at time t_1. Each of these densities will be propagated using UKF recursions as measurements are acquired. Since the UKF approximates the posterior density by a Gaussian, the posterior density approximation at time t_{k-1} will be of the form
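A minimal sketch of the range-parameterized initialization (6)-(9) is given below. The placement of the range grid, the construction of R_j as a bearing-aligned ellipsoid, the sensor offset, and the zero-mean Gaussian velocity prior g are assumptions chosen for illustration.

```python
import numpy as np

def rp_init(phi1, sensor_pos, ranges, range_sigmas, sigma_phi, vel_sigma=5.0):
    """Build the equally weighted Gaussian mixture of (6)-(9), one component per
    hypothesized range r_j, for the state [x, xdot, y, ydot]."""
    xi, zeta = sensor_pos
    d = len(ranges)
    weights = np.full(d, 1.0 / d)
    means, covs = [], []
    c, s = np.cos(phi1), np.sin(phi1)
    rot = np.array([[c, -s], [s, c]])                  # rotate range/cross-range into x/y
    for r_j, sr_j in zip(ranges, range_sigmas):
        z_1j = np.array([xi + r_j * c, zeta + r_j * s])          # hypothesized position
        R_j = rot @ np.diag([sr_j**2, (r_j * sigma_phi)**2]) @ rot.T
        mean = np.array([z_1j[0], 0.0, z_1j[1], 0.0])            # zero-mean velocity prior g
        cov = np.zeros((4, 4))
        cov[np.ix_([0, 2], [0, 2])] = R_j
        cov[1, 1] = cov[3, 3] = vel_sigma**2
        means.append(mean)
        covs.append(cov)
    return weights, np.array(means), np.array(covs)

# Example: d = 4 range hypotheses for a first bearing of 30 degrees from a sensor at the origin.
w1, m1, P1 = rp_init(np.deg2rad(30.0), (0.0, 0.0),
                     ranges=[20.0, 60.0, 100.0, 140.0],
                     range_sigmas=[10.0, 20.0, 30.0, 40.0],
                     sigma_phi=np.deg2rad(1.0))
```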

p(x_{k-1} \mid \phi_{1:k-1}) = \sum_{i=1}^{d} w_{k-1}^i \, \mathcal{N}(x_{k-1}; \hat{x}_{k-1|k-1}^i, P_{k-1|k-1}^i),    (10)

and the prior density is

p(x_k \mid \phi_{1:k-1}) = \sum_{i=1}^{d} w_{k-1}^i \, \mathcal{N}(x_k; \hat{x}_{k|k-1}^i, P_{k|k-1}^i),    (11)

where

\hat{x}_{k|k-1}^i = F_k \hat{x}_{k-1|k-1}^i,    (12)

P_{k|k-1}^i = F_k P_{k-1|k-1}^i F_k^T + Q_k.    (13)

To find the posterior density it is necessary to use the unscented transformation to approximate the moments in the KF correction. This involves constructing a set of sigma points X_k^{i,1}, \ldots, X_k^{i,s} with weights w^1, \ldots, w^s. The i-th collection of sigma points satisfies

\sum_{j=1}^{s} w^j X_k^{i,j} = \hat{x}_{k|k-1}^i,    (14)

\sum_{j=1}^{s} w^j \left(X_k^{i,j} - \hat{x}_{k|k-1}^i\right) \left(X_k^{i,j} - \hat{x}_{k|k-1}^i\right)^T = P_{k|k-1}^i.    (15)
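The following sketch generates a set of s = 2n + 1 = 9 sigma points and weights satisfying (14)-(15) for the four-dimensional state, using the standard unscented transform of [6]; the tuning choice n + kappa = 3 is a common convention and an assumption here, not something prescribed by the paper.

```python
import numpy as np

def sigma_points(mean, cov, kappa=-1.0):
    """Sigma points and weights satisfying (14)-(15); kappa = -1 gives n + kappa = 3
    for the n = 4 state, a common unscented-transform tuning."""
    n = mean.size
    L = np.linalg.cholesky((n + kappa) * cov)          # a matrix square root of (n+kappa)*P
    pts = [mean.copy()]
    weights = [kappa / (n + kappa)]
    for j in range(n):
        pts.append(mean + L[:, j])
        pts.append(mean - L[:, j])
        weights += [0.5 / (n + kappa), 0.5 / (n + kappa)]
    return np.array(pts), np.array(weights)
```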

The sigma points are transformed as \mathcal{F}_k^{i,j} = h_{s_k}(X_k^{i,j}, t_k) for i = 1, \ldots, d, j = 1, \ldots, s = 9. These nine sigma points are then used to compute the moment approximations:

E(\phi_k \mid H_i, \phi_{1:k-1}) \approx \hat{\phi}_k^i = \sum_{j=1}^{s} w^j \mathcal{F}_k^{i,j},    (16)

\operatorname{cov}(\phi_k, \phi_k \mid H_i, \phi_{1:k-1}) \approx S_k^i = \sigma^2 + \sum_{j=1}^{s} w^j \left(\mathcal{F}_k^{i,j} - \hat{\phi}_k^i\right)^2,    (17)

\operatorname{cov}(x_k, \phi_k \mid H_i, \phi_{1:k-1}) \approx \Psi_k^i = \sum_{j=1}^{s} w^j \left(X_k^{i,j} - \hat{x}_{k|k-1}^i\right) \left(\mathcal{F}_k^{i,j} - \hat{\phi}_k^i\right).    (18)

The posterior density is then approximated as

p(x_k \mid \phi_{1:k}) = \sum_{i=1}^{d} w_k^i \, \mathcal{N}(x_k; \hat{x}_{k|k}^i, P_{k|k}^i),    (19)

where

w_k^i = C \, w_{k-1}^i \, \mathcal{N}(\phi_k; \hat{\phi}_k^i, S_k^i),    (20)

\hat{x}_{k|k}^i = \hat{x}_{k|k-1}^i + \Psi_k^i \left(S_k^i\right)^{-1} \left(\phi_k - \hat{\phi}_k^i\right),    (21)

P_{k|k}^i = P_{k|k-1}^i - \Psi_k^i \left(S_k^i\right)^{-1} \left(\Psi_k^i\right)^T,    (22)

with C such that the weights sum to one. The state at time t_k can be estimated using the weighted sum of the state estimates conditional on each range hypothesis:

\hat{x}_{k|k} = \sum_{i=1}^{d} w_k^i \, \hat{x}_{k|k}^i.    (23)
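A sketch of the correction step (16)-(22) for a single range hypothesis follows. It reuses the sigma_points and bearing helpers sketched earlier; the angle-wrapping of the innovation is an implementation detail not discussed in the paper.

```python
import numpy as np

def angle_wrap(a):
    """Wrap angle differences to (-pi, pi]."""
    return (a + np.pi) % (2.0 * np.pi) - np.pi

def ukf_bearing_update(mean_pred, cov_pred, phi, sensor_pos, sigma_phi):
    """One-hypothesis UKF correction for a bearing measurement, following (16)-(22).
    Returns the updated mean/covariance and the likelihood N(phi; phi_hat, S) used in (20)."""
    pts, w = sigma_points(mean_pred, cov_pred)           # sketched after (15)
    z = np.array([bearing(p, sensor_pos) for p in pts])  # transformed sigma points
    z_hat = np.sum(w * z)                                # (16)
    dz = angle_wrap(z - z_hat)
    S = sigma_phi**2 + np.sum(w * dz**2)                 # (17)
    Psi = (pts - mean_pred).T @ (w * dz)                 # (18)
    innov = angle_wrap(phi - z_hat)
    mean_post = mean_pred + Psi * innov / S              # (21)
    cov_post = cov_pred - np.outer(Psi, Psi) / S         # (22)
    like = np.exp(-0.5 * innov**2 / S) / np.sqrt(2.0 * np.pi * S)
    return mean_post, cov_post, like

# Across the d range hypotheses, (20) and (23) become:
#   w_new[i] proportional to w_old[i] * like[i]   (normalized so the weights sum to one)
#   x_hat = sum_i w_new[i] * mean_post[i]
```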

A nice property of the UKF is that it approximates the update by a single Gaussian whose parameters are estimated via the sigma points. This yields d components for the posterior mixture, made up from the d components of the likelihood mixture times the single predicted UKF component. In [7], a Gaussian mixture filter with a Kalman update is used. Calculating the posterior mixture with a Gaussian mixture filter involves propagating all, say n, prior components of the Gaussian mixture and then multiplying with the d components of the likelihood mixture, resulting in nd components of the posterior mixture, which would then need pruning; otherwise, an exponential growth of the components with time would occur. The RP-UKF keeps the number of Gaussians constant for all further updates of the posterior density conditional on the d range position hypotheses. In practice, the weighting assigned to many of the original hypotheses quickly becomes negligible. Hypotheses with a negligible weighting can be discarded to save computational expense.

IV. DETECTION AND TRACKING OF MULTIPLE TARGETS

It is proposed to start new tracks on each measurement. Let d_1 be the number of measurements at time t_1. The prior distribution for each tentative track can be found using the measurement model and prior information. The prior information used here includes range limiting and a prior for the target velocity. The i-th target is initiated from the i-th measurement, i = 1, \ldots, d_1, using the procedure described previously in section III. Given a second collection of d_2 measurements taken at time t_2, there will be several ways of linking the tentative tracks with the measurements. The total number of possibilities is

Q_2 = \sum_{i=0}^{\min(d_1, d_2)} \binom{d_1}{i} \binom{d_2}{i} \, i!.    (24)

These hypotheses take into account all measurements at time t_2 being due to new targets (i = 0 in the summation), all measurements but one being due to new targets (i = 1), and so on. Let \vartheta_q, q = 1, \ldots, Q_2, denote the q-th measurement origin hypothesis (MOH). Let a_q denote the number of measurements assigned to existing tracks under \vartheta_q. A MOH is defined by a combination of two functions: one selects measurements to be associated with existing targets and one associates existing targets with the selected measurements. Let \psi_q : \{1, \ldots, a_q\} \to \{1, \ldots, d_2\} denote the measurement selection function and \theta_q : \{1, \ldots, a_q\} \to \{1, \ldots, d_1\} denote the association function for the q-th MOH; for short, \vartheta_q = \psi_q \cup \theta_q. The goal is to calculate the posterior density of the target states conditional on each MOH and the posterior probability of each MOH.
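The count (24), with the sum starting at i = 0 as the surrounding text indicates, can be checked numerically with the short sketch below.

```python
from math import comb, factorial

def num_moh(d1, d2):
    """Number of measurement-origin hypotheses in (24): choose which i measurements
    are assigned to existing tracks, which i tracks receive them, and in which order."""
    return sum(comb(d2, i) * comb(d1, i) * factorial(i)
               for i in range(min(d1, d2) + 1))

# Example: two tentative tracks and two new measurements give
# num_moh(2, 2) == 1 + 4 + 2 == 7 hypotheses.
```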

The posterior density of the target states conditional on each MOH needs to be approximated since it cannot be computed exactly. This can be done by the RP-UKF algorithm described previously. Using Bayes' rule, the posterior probability of the q-th MOH can be written as, for q = 1, \ldots, Q_2,

P(\vartheta_q \mid \phi_1, \phi_2) \propto p(\phi_2 \mid \vartheta_q, \phi_1) \, P(\vartheta_q \mid \phi_1).    (25)

Let b_q = d_2 - a_q denote the number of new tracks proposed under \vartheta_q. It is assumed that b_q \sim f and, given that b tracks are new, the association of tracks with the measurements is uniform. The prior can be found as

P(\vartheta_q \mid \phi_1) = \frac{f(d_2 - a_q)}{\binom{d_1}{a_q} \binom{d_2}{a_q} \, a_q!},    (26)

where f is such that \sum_{b=\max(0, d_2 - d_1)}^{d_2} f(b) = 1. New tracks are assumed to be uniformly distributed in the surveillance region. To find the other component in the posterior probability the following expansion is used:

p(\phi_2 \mid \vartheta_q, \phi_1) = \int p(\phi_2 \mid \vartheta_q, x_2) \, p(x_2 \mid \vartheta_q, \phi_1) \, dx_2    (27)

= V^{-b_q} \int \prod_{j=1}^{a_q} \mathcal{N}\!\left(\phi_{2,\psi_q(j)}; h_{s_2}(x_{\theta_q(j),2}), \sigma^2\right) \int p(x_2 \mid x_1) \, p(x_1 \mid \phi_1) \, dx_1 \, dx_2,    (28)
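The prior (26) can be evaluated as in the sketch below; the uniform choice of f over the admissible numbers of new tracks is an illustrative assumption (the paper only requires f to be a pmf normalized over that range).

```python
from math import comb, factorial

def moh_prior(a_q, d1, d2, f):
    """Prior probability (26) of a MOH that assigns a_q of the d2 measurements to
    the d1 existing tracks; f is the pmf of the number b_q = d2 - a_q of new tracks."""
    b_q = d2 - a_q
    return f(b_q) / (comb(d1, a_q) * comb(d2, a_q) * factorial(a_q))

def uniform_new_track_pmf(d1, d2):
    """An assumed choice of f: uniform over b in {max(0, d2 - d1), ..., d2}."""
    lo = max(0, d2 - d1)
    n = d2 - lo + 1
    return lambda b: 1.0 / n if lo <= b <= d2 else 0.0
```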

where V is the total surveillance volume. The integral (27) cannot be found in closed form. The method used to approximate this integral will depend on the filtering algorithm used to approximate the state posterior density conditional on the hypothesis. An approximation will be given later, but first consider a general recursion from time t_{k-1} to time t_k. Assume Q_{k-1} track collections at time t_{k-1}. A track collection is determined by the sequence of MOHs taken from times t_1, \ldots, t_{k-1}. Let \vartheta_{l,q} denote the MOH taken at time t_l for the q-th track collection and \vartheta_{1:k-1,q} denote the MOH sequence for the q-th track collection. Each track collection potentially has a different number of tracks. Let r_{k-1,q} denote the number of tracks in the q-th track collection at time t_{k-1}. The number of track collections at time t_k is then

Q_k = \sum_{q=1}^{Q_{k-1}} \sum_{i=0}^{\min(r_{k-1,q}, d_k)} \binom{d_k}{i} \binom{r_{k-1,q}}{i} \, i!.    (29)

Define the function \mu : \{1, \ldots, Q_k\} \to \{1, \ldots, Q_{k-1}\} which links the MOH sequences up to time t_{k-1} with the sequences up to t_k. The function \mu is such that \vartheta_{1:k,q} = \vartheta_{1:k-1,\mu(q)} \cup \vartheta_{k,q}, i.e., \mu(q) is the parent hypothesis at time t_{k-1} of the q-th hypothesis at time t_k. Given the measurements \phi_k, the update algorithm is shown in Table I.

1) Set l = 0 (counter for new hypotheses).
2) For q = 1, \ldots, Q_{k-1}: (loop over existing hypotheses)
   a) For a = 0, \ldots, \min(r_{k-1,q}, d_k): (loop over the number of existing tracks which have been measured)
      i) Let e_{q,a} = \binom{d_k}{a} \binom{r_{k-1,q}}{a} a!.
      ii) Combine the \binom{d_k}{a} measurement selection functions with the a! \binom{r_{k-1,q}}{a} association functions to give the e_{q,a} branching hypotheses \vartheta_{1:k,l+1}, \ldots, \vartheta_{1:k,l+e_{q,a}}.
      iii) For \nu = 1, \ldots, e_{q,a}: (loop over branching hypotheses)
         A) Increment the counter l; note that \mu(l) = q.
         B) Perform the filtering update conditional on \vartheta_{1:k,l}.
         C) Compute the posterior probability of \vartheta_{1:k,l}:
            P(\vartheta_{1:k,l} \mid \phi_{1:k}) = \frac{\hat{p}(\phi_k \mid \vartheta_{1:k,l}, \phi_{1:k-1}) \, f(d_k - a) \, P(\vartheta_{1:k-1,q} \mid \phi_{1:k-1})}{C \, \binom{r_{k-1,q}}{a} \binom{d_k}{a} \, a!},
            where C is a normalizing constant.
3) Prune unlikely track collections.

TABLE I. A RECURSION OF THE DETECTION/TRACKING ALGORITHM.
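Step 2a-ii of Table I enumerates, for every a, the measurement selection functions \psi and the association functions \theta. A compact way to generate these branching hypotheses is sketched below; the data structures are illustrative, not prescribed by the paper.

```python
from itertools import combinations, permutations

def branch_hypotheses(num_tracks, num_meas):
    """Enumerate the branching MOHs of Table I, step 2a: for each a, every choice of a
    measurements (psi) paired with every ordered choice of a tracks (theta).
    Unselected measurements are treated as initiating new tracks."""
    for a in range(min(num_tracks, num_meas) + 1):
        for psi in combinations(range(num_meas), a):            # measurement selection
            for theta in permutations(range(num_tracks), a):    # track association
                assoc = dict(zip(psi, theta))                   # measurement j -> track i
                new_tracks = [j for j in range(num_meas) if j not in assoc]
                yield assoc, new_tracks

# len(list(branch_hypotheses(2, 2))) == 7, matching (24) and (29) for one parent collection.
```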

The RP-UKF represents the posterior density by a Gaussian mixture, and a brief description of how this filter can be used to approximate the joint target state posterior density is now given. The posterior density of the joint multi-target state at time t_{k-1} conditional on the q-th MOH sequence is written as

p(x_{k-1} \mid \vartheta_{1:k-1,q}, \phi_{1:k-1}) = \prod_{i=1}^{r_{k-1,q}} \left[ \sum_{c=1}^{d} w_{i,k-1}^{q,c} \, \mathcal{N}(x_{i,k-1}; \hat{x}_{i,k-1|k-1}^{q,c}, P_{i,k-1|k-1}^{q,c}) \right].    (30)

A similar approximation is used for the posterior density of x_k conditional on \vartheta_{1:k,q}. The prior density is

p(x_k \mid \vartheta_{1:k,q}, \phi_{1:k-1}) = V^{-b_{k,q}} \prod_{i=1}^{r_{k-1,q}} \left[ \sum_{c=1}^{d} w_{i,k-1}^{q,c} \, \mathcal{N}(x_{i,k}; \hat{x}_{i,k|k-1}^{q,c}, P_{i,k|k-1}^{q,c}) \right],    (31)

where

\hat{x}_{i,k|k-1}^{q,c} = F_k \hat{x}_{i,k-1|k-1}^{q,c},    (32)

P_{i,k|k-1}^{q,c} = F_k P_{i,k-1|k-1}^{q,c} F_k^T + Q_k.    (33)

Recall the measurement selection function \psi_{k,q} : \{1, \ldots, a_{k,q}\} \to \{1, \ldots, d_k\} and the target selection/permutation function \theta_{k,q} : \{1, \ldots, a_{k,q}\} \to \{1, \ldots, r_{k-1,q}\} for the q-th MOH at time t_k. These functions can be combined to give the measurement-target association function \zeta_{k,q} : \{1, \ldots, r_{k-1,q}\} \to \{0, \ldots, d_k\} such that \zeta_{k,q}(i) = j > 0 means that the i-th target has generated the j-th measurement and \zeta_{k,q}(i) = 0 means that the i-th target has generated no measurement. Also, the function \lambda_{k,q} : \{1, \ldots, b_{k,q}\} \to \{1, \ldots, d_k\} selects measurements used to initiate new targets, i.e., \lambda_{k,q}(i) = j means that the j-th measurement initiates the i-th new target under the q-th MOH. Then the posterior density of the joint multi-target state under \vartheta_{1:k,q} is

p(x_k \mid \vartheta_{1:k,q}, \phi_{1:k}) = \prod_{i=1}^{r_{k,q}} \left[ \sum_{c=1}^{d} w_{i,k}^{q,c} \, \mathcal{N}(x_{i,k}; \hat{x}_{i,k|k}^{q,c}, P_{i,k|k}^{q,c}) \right].    (34)

For i = 1, \ldots, r_{k-1,q}, if \zeta_{k,q}(i) = j > 0, then the posterior density is computed as shown in (20) to (22). If \zeta_{k,q}(i) = 0, the posterior density is the same as the prior density. For i = r_{k-1,q}+1, \ldots, r_{k,q}, the posterior density of the i-th target is found by the initialization (6) using the measurement \lambda_{k,q}(i - r_{k-1,q}). The posterior probability of the event \vartheta_{1:k,q} requires computation of the integral

p(\phi_k \mid \vartheta_{1:k,q}, \phi_{1:k-1}) = \int p(\phi_k \mid x_k, \vartheta_{1:k,q}) \, p(x_k \mid \vartheta_{1:k,q}, \phi_{1:k-1}) \, dx_k    (35)

= \int \prod_{l=1}^{a_{k,q}} \mathcal{N}\!\left(\phi_{k,\psi_{k,q}(l)}; h_{s_k}(x_{\theta_{k,q}(l),k}), \sigma^2\right) p(x_k \mid \vartheta_{1:k,q}, \phi_{1:k-1}) \, dx_k    (36)

\approx V^{-b_{k,q}} \prod_{l=1}^{a_{k,q}} \sum_{i=1}^{d} w_{\theta_{k,q}(l),k-1}^{\mu(q),i} \, \mathcal{N}\!\left(\phi_{k,\psi_{k,q}(l)}; \hat{\phi}_{\theta_{k,q}(l),k|k-1}^{\mu(q),i}, S_{ab}\right),    (37)

with

S_{ab} := S_{\theta_{k,q}(l),k}^{\mu(q),\, i+(\psi_{k,q}(l)-1)d}.    (38)
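The likelihood approximation (37) reduces, for each associated measurement, to evaluating the predicted bearing mixture of the corresponding track. A sketch under assumed array inputs (per-measurement predicted means, variances and weights taken from (16)-(17) and (20)) is:

```python
import numpy as np

def moh_likelihood(assoc_bearings, pred_means, pred_vars, pred_weights, V, num_new):
    """Approximate p(phi_k | hypothesis, phi_{1:k-1}) as in (37): each measurement that
    is associated with an existing track contributes its predicted bearing mixture,
    and each measurement that starts a new track contributes a factor 1/V."""
    like = V ** (-num_new)
    for phi, means, variances, weights in zip(assoc_bearings, pred_means,
                                              pred_vars, pred_weights):
        comp = np.exp(-0.5 * (phi - means) ** 2 / variances) / np.sqrt(2.0 * np.pi * variances)
        like *= float(np.dot(weights, comp))
    return like
```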

A. Modeling of disappearing targets

To model disappearing targets, the same principle of matching the number of targets r_{k,q} from a MOH \vartheta_{k,q} with new measurements is used, but instead of using all targets of the MOH, all r_k state combinations with r_k - 1 targets are considered for matching with the d_k new measurements. Let \vartheta_{k,q}^i, i = 1, \ldots, r_k, denote the newly created hypothesis with the i-th target removed from \vartheta_{k,q}; the same detection/tracking algorithm is then used to calculate the posterior probabilities. To combine the new hypotheses generated from \vartheta_{k,q} and \vartheta_{k,q}^i, i = 1, \ldots, r_k, weightings of (1-\beta)^{r_k} and \beta(1-\beta)^{r_k-1} are applied, where \beta is the probability that a target disappears. These weights correspond to the prior probabilities that all targets survive and that exactly one target disappears, respectively. In principle, the disappearance of more than one target should also be modeled, but for small \beta the corresponding weights contain higher powers of \beta, making the prior very small, and therefore this approach is justified. In the simulations \beta = 0.01 was selected.
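The prior weighting used to combine the surviving-target and single-disappearance hypotheses can be written as a small helper; the per-target disappearance probability beta = 0.01 follows the simulations, everything else is an illustrative sketch.

```python
def disappearance_weights(num_targets, beta=0.01):
    """Prior weights of Section IV-A: all targets survive vs. one particular target
    of the num_targets disappears (beta is the per-target disappearance probability)."""
    w_all_survive = (1.0 - beta) ** num_targets
    w_one_disappears = beta * (1.0 - beta) ** (num_targets - 1)
    return w_all_survive, w_one_disappears
```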

V. EXTRACTION OF TARGET STATE ESTIMATES AND STATISTICS

As the developed algorithm calculates posterior probabilities for hypotheses describing track collections with varying numbers of targets, there are many ways to define the actual tracks. A simple way is to take the number of targets and the state estimates of the most likely hypothesis and declare those as the number of tracks and the track estimates, respectively. However, this neglects all remaining hypotheses, which may carry considerable probabilistic weight. A more complicated method would be to backtrack hypotheses to previous times and work out the most likely track paths. This would require a considerable amount of computational resources and may not be feasible over several time steps with tens or hundreds of hypotheses and their associated state estimates. Therefore, it was decided to use a clustering heuristic at a given time instance t_k, as follows.

A. Clustering of target state estimates

1) Extract all state estimates from all hypotheses: X = \{x \mid x \in \cup_{q=1}^{Q_k} H_q\}, where H_q contains all the state estimates \hat{x}_{i,k|k}, i \in \{1, \ldots, a_{k,q}\}, associated with the MOH sequence \vartheta_{1:k,q} of the q-th track collection.

2) Cluster state estimates x_i \in X and x_j \in X, i \neq j, if and only if V_i := (x_i - x_j)^T P_i^{-1} (x_i - x_j) \le \gamma and V_j := (x_i - x_j)^T P_j^{-1} (x_i - x_j) \le \gamma, where P_i and P_j are the respective covariance matrices of the state estimates and \gamma is a threshold on the 'uncertainty volumes' V_i and V_j around their respective state estimates. Assign each point x_i \in X a cluster label l \in L, such that clustered points share the same label, i.e. (x_i, x_j) is clustered \Leftrightarrow L(x_i) = L(x_j).

3) Associated with the clustering process is a state averaging (a code sketch of steps 2 and 3 is given at the end of this subsection). Each cluster l \in L has an associated cluster state x_c(l) and a weight P_c(l), which are calculated as follows:

X_l^q = \{x \in H_q \mid L(x) = l\},    (39)

\bar{x}(l,q) = \begin{cases} \frac{1}{|X_l^q|} \sum_{x \in X_l^q} x, & |X_l^q| > 0 \\ \text{undefined}, & \text{otherwise} \end{cases}    (40)

P_c(l) = \sum_{q : |X_l^q| > 0} P(H_q) \, \delta(|X_l^q| > 0),    (41)

x_c(l) = \frac{1}{P_c(l)} \sum_{q : |X_l^q| > 0} P(H_q) \, \bar{x}(l,q),    (42)

where X_l^q is the set of states belonging to cluster l that are from hypothesis H_q, \bar{x}(l,q) denotes the mean of the states in this set, P_c(l) is the probability weight associated with the cluster state x_c(l), and \delta is the Kronecker delta function.

4) The final track states are defined as the set \mathcal{X}, and its cardinality |\mathcal{X}| is declared as the number of tracks:

\mathcal{X} = \{x_c(l) \mid P_c(l) \ge P_{\min}, \; l \in L\},    (43)

where P_{\min} is a threshold such that a cluster state is only accepted as a track state if the sum of the probabilities of the contributing hypotheses exceeds this threshold. This allows different, low-probability hypotheses with different track collections to contribute to track estimates. A relatively high P_{\min} (0.7 in the simulations) ensures that only significant track state estimates are considered, at least in the case where there are multiple competing hypotheses.

This extraction of track state information from track collection hypotheses via clustering is quickly done. It is basically a weighted voting mechanism where each hypothesis has a vote weighted by its posterior probability. Of course, if only one hypothesis exists, no thresholding based on the 'cluster probability' P_c(\cdot) is possible, since it would be the same and equal to one for all potential clusters formed from the track collection state estimates within this single hypothesis. But if there are many hypotheses generated with similar probabilities, this is a way to extract state estimates more reliably and to discount state estimates that are not supported by many hypotheses and therefore have a low cluster probability.
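A sketch of the clustering heuristic of steps 2-4 follows; the union-find labeling and the list-based inputs are implementation choices, not prescribed by the paper.

```python
import numpy as np

def cluster_estimates(states, covs, hyp_index, hyp_probs, gamma, p_min):
    """Clustering heuristic of Section V-A: symmetric Mahalanobis gating (step 2),
    transitive label propagation, and hypothesis-probability-weighted cluster states
    (39)-(43). states/covs are lists over all hypotheses, hyp_index[i] gives the
    hypothesis q of estimate i, and hyp_probs[q] = P(H_q)."""
    n = len(states)
    parent = list(range(n))

    def find(i):                                    # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        for j in range(i + 1, n):
            d = states[i] - states[j]
            v_i = d @ np.linalg.solve(covs[i], d)
            v_j = d @ np.linalg.solve(covs[j], d)
            if v_i <= gamma and v_j <= gamma:       # both 'uncertainty volumes' pass the gate
                parent[find(i)] = find(j)           # merge the two clusters

    tracks = []
    for label in {find(i) for i in range(n)}:
        members = [i for i in range(n) if find(i) == label]
        per_hyp = {}                                # X_l^q of (39), grouped by hypothesis
        for i in members:
            per_hyp.setdefault(hyp_index[i], []).append(states[i])
        P_c = sum(hyp_probs[q] for q in per_hyp)                  # (41)
        x_c = sum(hyp_probs[q] * np.mean(xs, axis=0)              # (40), (42)
                  for q, xs in per_hyp.items()) / P_c
        if P_c >= p_min:                                          # acceptance test (43)
            tracks.append(x_c)
    return tracks
```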

B. Track Statistics

To derive output statistics, estimated track states have to be matched with true target states. This is done by the algorithm given in Table II. Two variants are used in the experiments: either the simple approach of choosing the track state estimates of the best track collection hypothesis \vartheta_{k,q^*}, or the track estimates obtained by the previously described clustering heuristic. The percentage of track estimates declared in track relative to the total number then gives an indication of the performance of the bearings-only tracking algorithm. For all experiments the threshold \Gamma was chosen to be 300 m. A more advanced criterion for declaring track state estimates in track would be to use a bound based on the Cramer-Rao lower bound (CRLB).

1) Get the track state estimates \hat{x}_{k|k} at time t_k, either the clustered ones or the track states from the best hypothesis.
2) Match the track estimates to the true target states x_k:
   a) \chi_k^* = \arg\min_{\chi} \| x_k - \chi \hat{x}_{k|k} \|
   b) If \| x_k - \chi_k^* \hat{x}_{k|k} \| < \Gamma, all targets are deemed to be in track.

TABLE II. PROCEDURE TO DERIVE THE OUTPUT STATISTICS.
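Table II leaves the matching \chi between estimates and truth implicit; the sketch below uses a greedy nearest-neighbour assignment on position as one possible reading (an assumption), with Gamma = 300 m as in the experiments.

```python
import numpy as np

def in_track_statistics(estimates, truths, gamma=300.0):
    """Output statistics in the spirit of Table II: greedily match each true target to
    its nearest unused track estimate in position; count matches closer than gamma as
    true tracks and the remaining estimates as false tracks."""
    est_pos = [np.array([e[0], e[2]]) for e in estimates]   # positions of [x, xdot, y, ydot]
    used, true_tracks = set(), 0
    for t in truths:
        t_pos = np.array([t[0], t[2]])
        dists = [np.linalg.norm(t_pos - p) if i not in used else np.inf
                 for i, p in enumerate(est_pos)]
        if dists and min(dists) < gamma:
            true_tracks += 1
            used.add(int(np.argmin(dists)))
    false_tracks = len(estimates) - true_tracks
    return true_tracks, false_tracks
```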

VI. EXPERIMENTS AND RESULTS

As the results are highly situation dependent, several experiments with typical situations of multiple sensor platforms and targets have been considered, but due to space limitations only one will be presented. Within each constellation, different sensor noise levels in the bearing angles are tested (\sigma \in \{1°, 3°\}). For simplicity the target dynamics are linear, but the sensor platforms can move arbitrarily. Figure 2 shows the scenario with three targets (circles) and two sensors (diamonds). The black lines are the trajectories, with targets and sensors shown at their end points. The red lines are the radar beams of the targets, which have a sweep rate of 90°/s, i.e., one revolution every 4 seconds.

A. Three Targets, Two Sensors

Figures 3 and 4 show track statistics (solid lines from the best hypothesis and dotted lines from clustered states) with measurement noise \sigma = 1° and \sigma = 3°, respectively. The top plot shows the number of true tracks, which are target state estimates that are declared in track; the middle plot shows the number of false tracks, which are estimates that are not in track; and the lower plot shows the total number of targets. Depending on the threshold \Gamma, which yields a somewhat arbitrary decision between true and false tracks, the numbers of true and false tracks may change, but the sum of true and false tracks equals the number of track estimates, which gives a good indication of when new targets are recognized.

Fig. 2. Situation for the 3 targets, 2 sensors case.

Fig. 3. Output statistics over 100 Monte Carlo runs for the 3 targets, 2 sensor case, \sigma = 1°. Solid lines for the best hypothesis and dotted lines for clustered state estimates.

Figures 7 and 8 show the number of hypotheses over time, for \sigma = 1° and \sigma = 3°. Figures 5 and 6 show the RMS error for tracks declared in track over time, for state estimates extracted from the best hypothesis and from clustered hypotheses. Especially with more noise the clustering heuristic performs better, in particular when several competing hypotheses with the same number of targets are generated. This is evident after about 40 seconds, when both the best and second best (and probably other) hypotheses contain three targets. Naturally, the clustered state estimates are worse when targets are initialized, because not as many hypotheses support them. Figures 9 and 10 show the number of targets of the best and second best hypotheses in the top half and their respective probabilities in the lower half. The hypothesis explosion in Figure 8 is due to an ambiguous geometry in conjunction with high noise.

Fig. 4. Output statistics over 100 Monte Carlo runs for the 3 targets, 2 sensor case, \sigma = 3°. Solid lines for the best hypothesis and dotted lines for clustered state estimates. Slight differences appear after 40 seconds and can be seen clearly when enlarging the figure.

Fig. 5. RMS errors of the three targets with \sigma = 1°, solid lines for the best hypothesis and dotted lines for clustered state estimates.

Fig. 6. RMS errors of the three targets with \sigma = 3°, solid lines for the best hypothesis and dotted lines for clustered state estimates.

Fig. 7. Number of hypotheses over 100 Monte Carlo runs for the 3 targets, 2 sensor case, \sigma = 1°.

This leads to a collapse of the probability of the best hypothesis, see Figure 10, and the clustered state estimates are better, see Figure 6. Other scenarios were tested successfully with up to \sigma = 8° measurement noise but are omitted due to space limitations.

VII. CONCLUSIONS

The problem of detecting and tracking multiple targets using asynchronous bearing measurements from multiple sensors was considered. An approach based on enumerating all possible measurement origin hypotheses was developed. This approach requires filters to approximate the posterior density of the multi-target state conditional on each hypothesis. A range-parameterized unscented Kalman filter (RP-UKF) has been used as an effective and efficient approximation. Track statistics based on the number of state estimates that were within a threshold of the true target tracks were used to evaluate tracking performance. Two versions of track state estimates were compared: the state estimates from the best hypothesis and clustered state estimates to which all hypotheses were allowed to contribute via a weighted voting heuristic. The performance of both the state estimates from the best hypothesis alone and the clustered estimates is good. However, in high noise the clustering heuristic starts to perform better.

The clustered track state estimates may be interpreted as the best state estimates that can be retrieved from the current set of track collection hypotheses, and therefore the addition of a new hypothesis with at least the same probability as the best hypothesis at each time t_k may be accepted. This is not a rigorous mathematical justification, as the set of clustered hypotheses is just an information compression of the original set of hypotheses, and adding a set of essentially duplicate information would not be justified. Nevertheless, this combined approach seems to outperform the purely clustered case. Although it may seem that the extra clustering is unnecessary, the situation may change when there is clutter.

To account for disappearing targets, a set of modified hypotheses was generated, each of which had one target state estimate removed. They were then combined with the original set of hypotheses, weighted according to the a priori probability given an assumed target disappearance probability. As the number of clustered targets may be less than the hypothesized ones (r_{k,q}, q = 1, \ldots, Q_k) of any hypothesis, the number of targets may also be reduced by the clustering. The same applies to the inclusion of clutter, which may then be picked up by some hypotheses containing extra targets. These extra targets will be pruned out quickly as they have no further support from new measurements.

The modeling so far has assumed a probability of detection equal to one. With asynchronous updates, a missed detection does not matter much, as all the estimates are simply propagated according to the dynamics and therefore the algorithm performs as well as it can, given that the measurement was not available. Future work will concentrate on modeling target identification and clutter.

Fig. 8. Number of hypotheses over 100 Monte Carlo runs for the 3 targets, 2 sensor case, \sigma = 3°.

Fig. 9. Number of targets of the two best hypotheses (top) and their associated probabilities (bottom) over 100 Monte Carlo runs for the 3 targets, 2 sensor case, \sigma = 1°.

Fig. 10. Number of targets of the two best hypotheses (top) and their associated probabilities (bottom) over 100 Monte Carlo runs for the 3 targets, 2 sensor case, \sigma = 3°.

REFERENCES

[1] Y. Bar-Shalom and X.-R. Li, Multitarget-Multisensor Tracking: Principles and Techniques, 3rd ed. Storrs, CT: YBS Publishing, 1995.
[2] V. Aidala and S. Hammel, "Utilization of modified polar coordinates for bearings-only tracking," IEEE Transactions on Automatic Control, vol. 28, no. 3, pp. 283–294, March 1983.
[3] T. R. Kronhamn, "Fundamental properties and performance of conventional bearings-only target motion analysis," IEE Proceedings - Radar, Sonar and Navigation, vol. 145, no. 4, pp. 247–252, August 1998.
[4] Y. Bar-Shalom and T. E. Fortmann, Tracking and Data Association. Academic Press, 1988.
[5] N. Peach, "Bearings-only tracking using a set of range-parameterised extended Kalman filters," IEE Proceedings - Control Theory and Applications, vol. 142, no. 1, pp. 73–80, 1995.
[6] S. Julier, J. Uhlmann, and H. F. Durrant-Whyte, "A new method for the nonlinear transformation of means and covariances in filters and estimators," IEEE Transactions on Automatic Control, vol. 45, no. 3, pp. 477–482, 2000.
[7] D. Mušicki and R. Evans, "Measurement Gaussian sum mixture target tracking," in Proceedings of the International Conference on Information Fusion, July 2006.
[8] S. Sadu, M. Srinivasan, and T. K. Ghoshal, "Bearing only tracking using square root sigma point Kalman filter," in Proceedings of the IEEE India Annual Conference (INDICON), 2004, pp. 66–69.
[9] D. B. Reid, "An algorithm for tracking multiple targets," IEEE Transactions on Automatic Control, vol. 24, no. 6, pp. 843–854, 1979.
