A Comparison of the PMHT and PDAF Tracking Algorithms Based on their Model CRLBs

Yanhua Ruan (a), Peter Willett (a), and Roy Streit (b)

(a) Univ. of Connecticut, Storrs, CT 06269
(b) NUWC, Newport, RI 02841

ABSTRACT

The PMHT is a very nice tracking algorithm for a number of implementational reasons. However, it relies on a modification of the usual data association assumption: specifically, the event that a target generates more than one measurement in a given scan is made feasible. In this paper we examine the ramifications of this from the point of view of theoretical estimation accuracy — the Cramér-Rao lower bound (CRLB). We find that the CRLB behavior for the PMHT is much like that for the PDAF: there is a scalar "information reduction factor" (IRF) quantifying the loss of accuracy from measurement-origin uncertainty. This IRF is explored in a number of ways, and in particular it is found that the IRF for the PMHT is significantly degraded relative to that for the standard measurement model (that of the PDAF) when clutter is heavy. Other topics include the effect of "homothetic" measurements, data fusion, and non-Gaussian measurement noise.

Keywords: CRLB, Target Tracking, Data Association, PDAF, PMHT.

1. INTRODUCTION

Assuming a linear motion model and measurements corrupted by Gaussian noise, estimation should be based on the Kalman smoother (if the entire trajectory is desired) or the Kalman filter (if only the target's present location is wanted). Such tracking is straightforward, but in most surveillance applications the problem is complicated by spurious (false-alarm) and/or incomplete (missed-detection) measurements, with the resulting data association problem being of primary interest. Data association refers to a procedure by which measurements are "assigned", based on their fit with the current trajectory, as correct or false; or, if there are multiple objects of interest, to a particular model.

Both the multi-hypothesis tracker (MHT) and the probabilistic data association filter (PDAF) [1,2] track imperfectly-detected targets in clutter via a hard-association model. That is, these algorithms enumerate the possible associations between measurements and target(s), and evaluate which is best. Since there are a great many such associations a full enumeration is computationally infeasible; hence in each case the search is suboptimal.

The PMHT [3] modifies the measurement model. The PDAF and MHT assume, quite rightly, that a target can generate at most one measurement per scan; the PMHT takes the measurement/target association process as independent across measurements. By doing so, it is able to render a fully optimal (under the modified assumption) tracker. The associations become soft — in fact, governed by their posterior probabilities — and the integer-programming problem of target tracking is rendered continuous and amenable to an iterative "hill-climbing" method via the EM algorithm.

The tracking methods devolving from the measurement models are very interesting, but of primary concern in this paper is the measurement model itself. To reiterate, the true measurement model (used by the PDAF and MHT — for convenience we call it the "PDAF model" here) is that a given target can generate at most one measurement per scan; for the PMHT the model is that the association process is independent from measurement to measurement (this will be discussed later), and hence the event that several — or even all — measurements come from the same target is valid. This is illustrated in figure 1.

Further author information: (Send correspondence to Peter Willett) Yanhua Ruan: [email protected]; Peter Willett: [email protected]; Roy Streit: [email protected]

Figure 1. Illustration of the difference between the PDAF and PMHT measurement models. There are two measurements and one target. The feasible PDAF association events are at left, and those for the PMHT are at right. In each frame, the two 2's at left denote the two measurements, and the arrows their associations.

To date, the PMHT has not found widespread acceptance, and this is arguably because its performance in the single-target/single-sensor case offers little (or no) improvement over the computationally less-demanding PDAF. Why is this so? While its measurement model may seem at first to be a significant PMHT blemish, it is usually true that the multi-measurement-to-same-target events are assigned low posterior probability, and have little effect. In [4,5] the fact that the PMHT can be modified to use the PDAF measurement model while retaining its pleasing "Kalman smoother" structure (discussed shortly) is exploited. The results from this modification are unclear, largely due to the roiling effect on perceived tracking performance of occasional convergence to a local (rather than global) likelihood maximum. Thus, in this paper we approach the question from a theoretical point of view: we examine the relative Cramér-Rao lower bounds (CRLBs) on estimation for the two cases. In a series of recent papers [6-8] it was shown that under the PDAF measurement model the data association uncertainty had its effect on the relevant CRLB solely as a scalar "information reduction factor" (IRF). With this in mind, we here seek answers to the following:

• Is there a similar IRF scalar for the PMHT measurement model? (It turns out that there is.)
• What is the relative behavior of the PMHT and PDAF information reduction factors? Can the PMHT performance be explained from this?
• It has been observed that the PDAF IRF result is true over a wide range of (possibly non-Gaussian) statistical models. Is this also true for the PMHT?
• The convergence of the PMHT is often enhanced by the use of a "homothetic" statistical model. How does this affect the IRF?
• What happens in the data fusion case, in which several sensors observe the same target?

In the following section we discuss, as background, the operation of the PMHT, the CRLB and its meaning, and the IRF for the PDAF measurement model. We then derive the CRLB for the PMHT. We follow with examples of the relative IRFs, and attempt to answer the above questions.

2. BACKGROUND

In the following we discuss the PMHT, the CRLB, and the CRLB for the PDAF measurement model. Results are presented in condensed form since they are available elsewhere in the literature; their appearance here is an attempt to make this paper self-contained.

2.1. The PMHT and its Model

For each $t$ we define $\{k_r(t), z_r(t)\}_{r=1}^{m_t}$ such that

$$z_r(t) = y_{k_r(t)}(t) \qquad (1)$$

meaning that the $r$th measurement (out of $m_t$) at time $t$ comes from model $k_r(t)$ — and, of course, $k_r(t)$ is unknown. The statistical assumption for the association process is

$$\Pr(k_r(t) = s) = \pi_s \qquad (2)$$

and that all are independent random variables. There is a natural extension to the multi-target situation, but here we take $s \in \{0, 1\}$, with the former referring to a (uniformly-distributed) false alarm, and the latter meaning that the observation is target-generated. Writing the likelihood function, manipulating it, and applying the EM formalism [3] eventually yields the optimal iteration:

1. Determine initial values for the trajectory variables $x^1(t)$ for all times $t = 1, 2, \ldots, T$. Set the EM iteration index $n = 1$.

2. Calculate the posterior association probabilities $w_r^n(t)$ for the target, for all times $t = 1, 2, \ldots, T$ and measurements $r = 1, 2, \ldots, m_t$, according to

$$w_r^n(t) = \frac{\pi_1 \mathcal{N}\{z_r(t); \hat{y}^n(t), R(t)\}}{\pi_1 \mathcal{N}\{z_r(t); \hat{y}^n(t), R(t)\} + \frac{\pi_0}{V}} \qquad (3)$$

which has the interpretation that it is the posterior probability (conditioned on the measurements and present track estimates) that the $r$th measurement at time $t$ arises from the target.

3. Calculate the synthetic measurements $\tilde{z}(t)$ and their associated (synthetic) measurement covariances $\tilde{R}(t)$ for times $t = 1, 2, \ldots, T$, according to

$$\tilde{z}(t) \equiv \frac{\sum_{r=1}^{m_t} w_r^n(t)\, z_r(t)}{\sum_{r=1}^{m_t} w_r^n(t)} \qquad \tilde{R}(t) \equiv \frac{R(t)}{\sum_{r=1}^{m_t} w_r^n(t)} \qquad (4)$$

4. Use a Kalman smoothing algorithm to obtain the estimated trajectory $x(t)$ using the synthetic measurements and covariances $\{\tilde{z}(t)\}$ and $\{\tilde{R}(t)\}$.

5. Increment $n = n + 1$, and return to step 2 for the next iteration, unless a stopping criterion is reached.

It is hoped that the above derivation motivates our interest in the PMHT: the modified measurement model yields a remarkably simple implementation. It should be noted that there is no need to confine interest to a single target, since an extended derivation shows that there is very little increase in complexity (there are as many Kalman smoothers as targets). Further, the PMHT measurement model can be applied in situations where the model is not linear/Gaussian (e.g., [9]). In such situations the resulting algorithm does not (or may not) reduce to a Kalman smoother implementation, but it is still true that the measurement model yields a smooth likelihood surface, with continuous rather than integer optimization methods applicable.
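To make steps 2 and 3 concrete, the following is a minimal sketch of one PMHT E-step for a single scan: the posterior weights of (3), followed by the synthetic measurement and covariance of (4). The function name, array layout, and use of scipy are our own choices and are not part of the algorithm description above.

```python
import numpy as np
from scipy.stats import multivariate_normal


def pmht_estep_scan(z_scan, y_hat, R, pi1, pi0, V):
    """One PMHT E-step for a single scan t.

    z_scan : (m_t, n_z) array of measurements z_r(t)
    y_hat  : (n_z,) predicted measurement from the current track estimate
    R      : (n_z, n_z) measurement noise covariance
    pi1    : prior probability that a measurement is target-generated
    pi0    : prior probability that a measurement is a false alarm
    V      : observation volume (false alarms assumed uniform over V)
    """
    likes = np.atleast_1d(multivariate_normal.pdf(z_scan, mean=y_hat, cov=R))
    w = pi1 * likes / (pi1 * likes + pi0 / V)      # eq. (3), posterior weights
    w_sum = np.sum(w)
    z_tilde = w @ z_scan / w_sum                   # eq. (4), synthetic measurement
    R_tilde = R / w_sum                            # eq. (4), synthetic covariance
    return w, z_tilde, R_tilde
```

A full PMHT pass would apply this to every scan t = 1, ..., T, run a Kalman smoother on the resulting synthetic quantities, and iterate until the trajectory estimate stops changing.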

2.2. The CRLB

There is a classical result known as the Cramér-Rao lower bound (CRLB) (e.g., [10,11]) for the mean-square error of an unbiased estimator. Let us assume access to an observation Z which has probability density function (pdf) p(Z; x), meaning that the pdf depends on a parameter vector x which is to be estimated. Then under fairly broad regularity conditions the CRLB has it that

$$E\left[[\hat{x}(Z) - x][\hat{x}(Z) - x]^T\right] \ge J^{-1} \qquad (5)$$

in which

$$J \equiv E\left[\left[\nabla_x \log\left(p(Z; x)\right)\right]\left[\nabla_x \log\left(p(Z; x)\right)\right]^T\right] \qquad (6)$$

is Fisher's information matrix (FIM) and $\hat{x}(Z)$ is any unbiased estimator. Again under broad regularity conditions, if a maximum-likelihood estimator (MLE) for x exists, then it achieves the CRLB asymptotically. In this paper we are of course very interested in CRLBs.

It is worth noting that the CRLB is for estimation of a parameter. Thus, an informing example of its applicability might be the estimation of the initial position and velocity of a constant-velocity target. Versions of the CRLB exist for MAP estimation [10] — that is, for tracking — but we fear that our results may be obscured by their additional complexity.

It is also worth noting that the CRLB is a function of the statistical model rather than of the algorithm used. Thus, one may reasonably object to conclusions about algorithms (PMHT versus PDAF) based upon a CRLB analysis, since the data comes either from one model or the other. We recognize the justice in this, but counter that the CRLB can be thought of as a description of the likelihood surface over which optimization proceeds, and hence the implications of the analysis are relevant.
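As a concrete illustration of (5) and (6), and a one-dimensional version of the parameter estimation considered in section 4, consider a constant-velocity target with unknown initial position and velocity $x = [p_0, v_0]^T$, observed at times $t = 1, \ldots, T$ through position measurements with Gaussian noise of variance $\sigma^2$ and no clutter. This worked example is ours, added only to fix ideas:

$$\mu_t(x) = p_0 + v_0\, t, \qquad J_0 = \frac{1}{\sigma^2}\sum_{t=1}^{T} \begin{bmatrix} 1 & t \\ t & t^2 \end{bmatrix}$$

so that the CRLB on the initial-position error variance is the (1,1) element of $J_0^{-1}$. The factor multiplying $J_0$ once measurement-origin uncertainty is introduced is precisely the information reduction factor discussed next.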

2.3. The CRLB for the PDAF Measurement Model

We define the aggregate observation

$$Z = \{Z(1), Z(2), \ldots, Z(T)\} \qquad (7)$$

in which the $t$th observation is

$$Z(t) = \{z_i(t)\}_{i=1}^{m_t} \qquad (8)$$

meaning, in the target-tracking situation, that there are $m_t$ individual observations which comprise it. Although $m_t$ is of course known to the estimator, to compute the MSE it is necessary to average over the possible values of $m_t$, and hence we assume the existence of a probability mass function $q(m_t)$ controlling $m_t$, and of a related measure $\varepsilon(m_t)$ denoting the probability that a particular measurement is target-originated, given that this true measurement is not missed. We have the following:

1. Conditioned on x, {Z(1), ..., Z(T)} are independent;
2. All elements of $z_i(t)$, $(i = 1, \ldots, m_t;\ t = 1, \ldots, T)$ are independent and identically distributed (iid);
3. The target-generated observations depend on the unknown parameter x through $\mu_t(x)$ (as a mean shift);
4. False alarms are Poisson in number and uniformly distributed over the observation volume (or gated volume).

In the target-tracking scenario Z(t) is comprised of all observations collected at time t, and these observations can be all false alarms (the detection from the target has been missed), or can contain exactly one true detection and $(m_t - 1)$ false alarms. We define

$$q(m_t) = (1 - P_d)\,\frac{(\lambda V)^{m_t} e^{-\lambda V}}{m_t!} + P_d\,\frac{(\lambda V)^{(m_t - 1)} e^{-\lambda V}}{(m_t - 1)!} \qquad (9)$$

$$\varepsilon(m_t) = \frac{P_d}{q(m_t)}\,\frac{(\lambda V)^{(m_t - 1)} e^{-\lambda V}}{(m_t - 1)!} \qquad (10)$$

as, respectively, the a-priori probability that there are $m_t \in \{0, 1, 2, \ldots\}$ observations at time t, and the probability that one measurement is target-generated given that there are $m_t$ measurements at time t. In the above, $P_d$ is the probability of detection, $\lambda$ is the average number of false alarms per unit observation volume, and V is the actual observation volume.
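As a small computational aside, (9) and (10) are straightforward to evaluate; the sketch below, whose function name and interface are our own, is one way to do so.

```python
from math import exp, factorial


def q_eps(m, Pd, lamV):
    """q(m) of eq. (9) and eps(m) of eq. (10): the prior probability of m
    measurements in a scan, and the probability that one of them is
    target-generated, for detection probability Pd and Poisson clutter
    with mean lamV over the observation volume."""
    pois = lambda k: (lamV ** k) * exp(-lamV) / factorial(k) if k >= 0 else 0.0
    q = (1 - Pd) * pois(m) + Pd * pois(m - 1)
    eps = Pd * pois(m - 1) / q if q > 0 else 0.0
    return q, eps
```

For example, q_eps(1, 0.9, 0.1) returns the probability of receiving exactly one measurement in a scan together with the probability that it is target-generated, for Pd = 0.9 and λV = 0.1.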

In [6,7], a surprising result was obtained. Assuming that the pdf $p_1(\cdot)$ is Gaussian, it becomes possible to write

$$J = \kappa_{pdaf}\, J_0 \qquad (11)$$

where $\kappa_{pdaf}$ is often denoted $q_2$ for historical reasons, and in which $J_0$ is the FIM for the case of no measurement uncertainty. Note the information-reduction factor (IRF) (less than unity) in the proportionality, to account for the estimation algorithm's need to weigh which of its observations (if any) are relevant and which are spurious. At a further degree of generality it was shown in [8] that the result (that the FIM with association uncertainty is a scalar multiple of that with no uncertainty) holds true for a wide class of estimation problems. The conditions for this are largely technical, with the exception that the measurement pdf $p_1(\cdot)$ must be either independent in each coordinate or linearly transformable to independent, and must be symmetric. Assuming independence, we have that scalar (here $\kappa_{pdaf}$) given by

$$\kappa_{pdaf} = \frac{\displaystyle\sum_{m=1}^{\infty} q(m)\,\varepsilon(m) \int \left(\frac{\partial p_1(z_{11})/\partial z_{11}}{p_1(z_{11})}\right)^2 \Pr(\chi_1 \mid z_{11})\; p_1(z_{11})\, dz_{11}}{\displaystyle\int \left(\frac{\partial p_1(z_{11})/\partial z_{11}}{p_1(z_{11})}\right)^2 p_1(z_{11})\, dz_{11}} \qquad (12)$$

where

$$\Pr(\chi_1 \mid z_{11}) \equiv \int_{-V/2}^{V/2}\!\!\cdots\!\int_{-V/2}^{V/2} \int_{z_{12}}\!\!\cdots\!\int_{z_{1n_z}} \frac{\frac{\varepsilon(m)}{m} V\, p_1(z_1)}{\left(1-\varepsilon(m)\right) + \frac{\varepsilon(m)}{m} V \sum_{j=1}^{m} p_1(z_j)}\; \frac{1}{V^{m-1}}\, p_1(z_{12},\ldots,z_{1n_z})\; dz_2 \cdots dz_m\, dz_{12} \cdots dz_{1n_z} \qquad (13)$$

and z1,j is coordinate j of the first-indexed observation. (It is not in fact relevant which coordinate or which observation index is used.) It is apparent that (12) is not simple to compute; generally one must resort to Monte Carlo integration.
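To illustrate what such a Monte Carlo computation might look like, the sketch below approximates κpdaf for a scalar (one-dimensional) Gaussian p1, in which case (13) involves no extra measurement coordinates; the function, its defaults, and the sampling scheme are our own construction and not the code used to produce the figures in section 4.

```python
import numpy as np
from math import exp, factorial


def kappa_pdaf_mc(Pd, lamV, V=10.0, sigma=1.0, m_max=10, n_mc=20000, rng=None):
    """Monte Carlo approximation of the PDAF-model IRF of eq. (12) for a
    scalar Gaussian p1 = N(0, sigma^2).  For each measurement count m, z1 is
    drawn from p1, the other m-1 measurements are drawn uniformly over
    [-V/2, V/2], and the posterior that z1 is target-generated weights the
    squared score (p1'(z1)/p1(z1))^2."""
    rng = np.random.default_rng() if rng is None else rng
    p1 = lambda z: np.exp(-0.5 * (z / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    pois = lambda k: (lamV ** k) * exp(-lamV) / factorial(k) if k >= 0 else 0.0
    F0 = 1.0 / sigma ** 2                        # clutter-free Fisher information
    num = 0.0
    for m in range(1, m_max + 1):
        q = (1 - Pd) * pois(m) + Pd * pois(m - 1)              # eq. (9)
        if q == 0.0:
            continue
        eps = Pd * pois(m - 1) / q                              # eq. (10)
        z1 = rng.normal(0.0, sigma, size=n_mc)                  # target-generated
        clutter = rng.uniform(-V / 2, V / 2, size=(n_mc, m - 1))
        sum_p1 = p1(z1) + p1(clutter).sum(axis=1)               # sum_j p1(z_j)
        post = (eps * V / m) * p1(z1) / ((1 - eps) + (eps * V / m) * sum_p1)
        score2 = (z1 / sigma ** 2) ** 2                         # (p1'(z1)/p1(z1))^2
        num += q * eps * np.mean(post * score2)
    return num / F0
```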

3. THE CRLB FOR THE PMHT MEASUREMENT MODEL

3.1. The Information Reduction Factor

Under the PMHT measurement model and the same assumptions as above, we have that

$$p(Z; x) = \prod_{t=1}^{T} \sum_{m_t=0}^{\infty} q(m_t) \prod_{i=1}^{m_t} \left[ \left(1 - \frac{\varepsilon(m_t)}{m_t}\right) p_0(z_i(t)) + \frac{\varepsilon(m_t)}{m_t}\, p_1(z_i(t) - \mu_t(x)) \right] \qquad (14)$$

and, given there are $m_t$ observations which comprise Z(t), the event that observation i is target-originated (or a false alarm) is independent of the event that observation j (j ≠ i) is target-originated (or a false alarm). That is, the target can generate more than one observation at a given time. To derive the FIM, we first examine

$$p\left(\{z_i(t)\}_{i=1}^{m_t}; x\right) = \prod_{i=1}^{m_t} \left[ \left(1 - \frac{\varepsilon(m_t)}{m_t}\right) p_0(z_i(t)) + \frac{\varepsilon(m_t)}{m_t}\, p_1(z_i(t) - \mu_t(x)) \right] \qquad (15)$$

the probability of the $m_t$ observations at time t, parametrized by x. Taking the gradient with respect to x of the logarithm, we get

$$E\left[ \left(\nabla_x \log\left[p(\{z_i(t)\}_{i=1}^{m_t}; x)\right]\right) \left(\nabla_x \log\left[p(\{z_i(t)\}_{i=1}^{m_t}; x)\right]\right)^T \right] = M_t^T F_t(m_t) M_t \qquad (16)$$

in which

$$M_t = \begin{bmatrix} \frac{\partial \mu_t(x)_1}{\partial x_1} & \frac{\partial \mu_t(x)_1}{\partial x_2} & \cdots & \frac{\partial \mu_t(x)_1}{\partial x_{n_x}} \\ \frac{\partial \mu_t(x)_2}{\partial x_1} & \frac{\partial \mu_t(x)_2}{\partial x_2} & \cdots & \frac{\partial \mu_t(x)_2}{\partial x_{n_x}} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial \mu_t(x)_{n_z}}{\partial x_1} & \frac{\partial \mu_t(x)_{n_z}}{\partial x_2} & \cdots & \frac{\partial \mu_t(x)_{n_z}}{\partial x_{n_x}} \end{bmatrix} \qquad (17)$$

is the Jacobian matrix of $\mu_t(x)$ (we assume that $z_i(t)$ and $x$ have respective dimensions $n_z$ and $n_x$), and in which we define

$$F_t(m_t) \equiv E\left[ \left( \sum_{i=1}^{m_t} \frac{\frac{\varepsilon(m_t)}{m_t}\, \nabla_z\!\left(p_1(z_i(t)-\mu_t(x))\right)}{\left(1-\frac{\varepsilon(m_t)}{m_t}\right) p_0(z_i(t)) + \frac{\varepsilon(m_t)}{m_t}\, p_1(z_i(t)-\mu_t(x))} \right) \left( \sum_{i=1}^{m_t} \frac{\frac{\varepsilon(m_t)}{m_t}\, \nabla_z\!\left(p_1(z_i(t)-\mu_t(x))\right)}{\left(1-\frac{\varepsilon(m_t)}{m_t}\right) p_0(z_i(t)) + \frac{\varepsilon(m_t)}{m_t}\, p_1(z_i(t)-\mu_t(x))} \right)^{\!T}\, \right] \qquad (18)$$

We can thus write

$$J = \sum_{t=1}^{T} M_t^T \left[ \sum_{m_t=1}^{\infty} q(m_t)\, F_t(m_t) \right] M_t \equiv \sum_{t=1}^{T} M_t^T F_t M_t \qquad (19)$$

where $F_t$ has been defined as the term in brackets — note that the $m_t = 0$ term, corresponding to the lack of a target-generated measurement, has no contribution. We re-write (18) as

$$F_t(m_t) = E\left[ \sum_{i=1}^{m_t} \Pr\left(\chi_i \mid Z(t); \mu_t(x)\right)^2\, \frac{\left(\nabla_z\left[p_1(z_i(t) - \mu_t(x))\right]\right)\left(\nabla_z\left[p_1(z_i(t) - \mu_t(x))\right]\right)^T}{p_1(z_i(t) - \mu_t(x))^2} \right] \qquad (20)$$

in which

$$\Pr\left(\chi_i \mid Z(t); \mu_t(x)\right) \equiv \frac{\frac{\varepsilon(m_t)}{m_t}\, p_1(z_i(t) - \mu_t(x))}{\left(1 - \frac{\varepsilon(m_t)}{m_t}\right) p_0(z_i(t)) + \frac{\varepsilon(m_t)}{m_t}\, p_1(z_i(t) - \mu_t(x))} \qquad (21)$$

is recognizable as the posterior probability of the event $\chi_i$ that the target-generated observation at time t is $z_i(t)$, conditioned on the available data $z_i(t)$ and parametrized by $\mu_t(x)$. We immediately observe that if $p_0(z_i(t))$ is uniform, the substitution $\tilde{z}_i(t) = z_i(t) - \mu_t(x)$ removes the effect of $\mu_t(x)$ in the expectation (integration); we then have that $F_t \propto F_0$, in which

$$F_0 \equiv E\left[ \left(\nabla_z\left[\log(p_1(z))\right]\right)\left(\nabla_z\left[\log(p_1(z))\right]\right)^T \right] \qquad (22)$$

corresponds to the measurement-uncertainty-free case. Letting $p_0(\cdot) = 1/V$ we write (20) (for i = 1) as

$$F_t(m) = m \int \Pr(\chi_1 \mid z_1)^2\, \frac{\left(\nabla_z\left[p_1(z_1)\right]\right)\left(\nabla_z\left[p_1(z_1)\right]\right)^T}{p_1(z_1)^2}\, \prod_{j=1}^{m}\left[ \left(1 - \frac{\varepsilon(m)}{m}\right)\frac{1}{V} + \frac{\varepsilon(m)}{m}\, p_1(z_j) \right] dZ \qquad (23)$$

in which

$$\Pr(\chi_1 \mid z_1) \equiv \frac{\frac{\varepsilon(m)}{m}\, p_1(z_1)}{\left(1 - \frac{\varepsilon(m)}{m}\right)\frac{1}{V} + \frac{\varepsilon(m)}{m}\, p_1(z_1)} \qquad (24)$$

The disappearance of $\mu_t(x)$ is made obvious, and the time dependence has been (notationally) ignored. By symmetry we need only concentrate on the event that the true measurement is labelled i = 1. Further simplification is possible by re-writing (23) as

$$F_t(m) = \frac{\varepsilon^2(m)}{m} \int \Pr(\chi_1 \mid z_1)\, \frac{\left(\nabla_z\left[p_1(z_1)\right]\right)\left(\nabla_z\left[p_1(z_1)\right]\right)^T}{p_1(z_1)^2}\; p_1(z_1)\, dz_1 \qquad (25)$$

in which

$$\Pr(\chi_1 \mid z_1) \equiv \int_{-V/2}^{V/2}\!\!\cdots\!\int_{-V/2}^{V/2} \frac{p_1(z_1)}{\left(1-\frac{\varepsilon(m)}{m}\right)\frac{1}{V} + \frac{\varepsilon(m)}{m}\, p_1(z_1)} \prod_{j=2}^{m}\left[\left(1-\frac{\varepsilon(m)}{m}\right)\frac{1}{V} + \frac{\varepsilon(m)}{m}\, p_1(z_j)\right] dz_2 \cdots dz_m = \frac{p_1(z_1)}{\left(1-\frac{\varepsilon(m)}{m}\right)\frac{1}{V} + \frac{\varepsilon(m)}{m}\, p_1(z_1)} \qquad (26)$$

Comparison of (25) to

$$F_0 = \int \frac{\left(\nabla_z\left[p_1(z)\right]\right)\left(\nabla_z\left[p_1(z)\right]\right)^T}{p_1(z)^2}\; p_1(z)\, dz \qquad (27)$$

makes the information reduction from the measurements of uncertain origin obvious. $F_t(m)$ of (25) is proportional to $F_0$ provided $p_1(\cdot)$ is even symmetric with respect to all of its arguments. To show this point, note that this assumption means that $\Pr(\chi_1 \mid z_1)$ is even symmetric, and hence all off-diagonal terms in $F_t(m)$ are zero. Further, since all elements of $z_1$ are distributed identically, we have that $F_t(m)$ is a multiple of the identity matrix, as is $F_0$. We thus have

$$J = \sum_{t=1}^{T} M_t^T \left[ \sum_{m_t=1}^{\infty} q(m_t)\, F_t(m_t) \right] M_t = \sum_{t=1}^{T} M_t^T F_t M_t = \sum_{t=1}^{T} M_t^T\, \kappa_{pmht} F_0\, M_t = \kappa_{pmht} J_0 \qquad (28)$$

We have already seen that the IRF can be written as a scalar multiplied by the identity matrix. Now we only need to compute this scalar:

$$\kappa_{pmht} = \frac{\displaystyle\sum_{m=1}^{\infty} q(m)\,\frac{\varepsilon^2(m)}{m} \int \left(\frac{\partial p_1(z_{11})/\partial z_{11}}{p_1(z_{11})}\right)^2 \Pr(\chi_1 \mid z_{11})\; p_1(z_{11})\, dz_{11}}{\displaystyle\int \left(\frac{\partial p_1(z_{11})/\partial z_{11}}{p_1(z_{11})}\right)^2 p_1(z_{11})\, dz_{11}} \qquad (29)$$

where

$$\Pr(\chi_1 \mid z_{11}) \equiv \int_{z_{12}}\!\!\cdots\!\int_{z_{1n_z}} \frac{p_1(z_1)\, p_1(z_{12},\ldots,z_{1n_z})}{\left(1-\frac{\varepsilon(m)}{m}\right)\frac{1}{V} + \frac{\varepsilon(m)}{m}\, p_1(z_1)}\; dz_{12} \cdots dz_{1n_z} \qquad (30)$$

It is clear that this IRF is much simpler to compute than is its PDAF-model counterpart in (12).
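For instance, for a scalar (one-dimensional) Gaussian p1 the computation of (29) reduces to a sum of one-dimensional integrals, since (30) then has no additional coordinates to integrate out. The following numerical sketch, with function name and defaults of our own choosing, makes this concrete:

```python
import numpy as np
from math import exp, factorial


def kappa_pmht(Pd, lamV, V=10.0, sigma=1.0, m_max=10, n_grid=4001):
    """Numerical evaluation of the PMHT-model IRF of eq. (29) for a scalar
    Gaussian p1 = N(0, sigma^2), using a simple grid over [-V/2, V/2]."""
    z = np.linspace(-V / 2, V / 2, n_grid)
    dz = z[1] - z[0]
    p1 = np.exp(-0.5 * (z / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    score2 = (z / sigma ** 2) ** 2                       # (p1'(z)/p1(z))^2
    F0 = np.sum(score2 * p1) * dz                        # eq. (27), roughly 1/sigma^2
    pois = lambda k: (lamV ** k) * exp(-lamV) / factorial(k) if k >= 0 else 0.0
    kappa = 0.0
    for m in range(1, m_max + 1):
        q = (1 - Pd) * pois(m) + Pd * pois(m - 1)        # eq. (9)
        if q == 0.0:
            continue
        eps = Pd * pois(m - 1) / q                        # eq. (10)
        w = p1 / ((1 - eps / m) / V + (eps / m) * p1)     # eq. (30) with n_z = 1
        kappa += q * (eps ** 2 / m) * np.sum(score2 * w * p1) * dz
    return kappa / F0
```

Sweeping λV or Pd through routines of this kind traces out IRF curves like those shown in section 4.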

3.2. Comments

The CRLB for the PMHT has been studied before, particularly by Jacyna and Pawlukiewicz [12] and by LeCadre and Gauvrit [13]. This treatment differs in a number of ways. First, we do not concern ourselves with the multiple-target situation — as those authors note, multiple targets introduce bias, and although a CRLB exists that is applicable to biased estimators, we feel that its use obscures our goal of comparison. Second, we have derived and focused on the information reduction factor, which provides a scalar means of comparison to the PDAF results. Third, we fix the PMHT prior probabilities ($\pi_0$ and $\pi_1$). This last is important. We use

$$1 - \pi_0 = \pi_1 = \frac{\varepsilon(m_t)}{m_t} \qquad (31)$$

which is the probability that a given measurement is target-generated, conditioned on the appearance of $m_t$ measurements at time t, and parametrized by the probability of detection $P_d$ and the average number of false alarms $\lambda V$. A version of the PMHT in which these are estimated is also possible, and the CRLBs for the prior probabilities [12] are interesting since they yield information about tracking performance. Our experience has been that the estimated-π PMHT is difficult to use (it can be unstable), and we do not favor it. Finally, note that the CRLB for the PMHT measurement model has the same behavior as that for the PDAF model [8]; that is, there is a scalar information reduction factor over a wide range of measurement noise statistics, not for Gaussian noise alone.

3.3. Homothetic PMHT Measurement Models

It has been observed (e.g., [5]) that convergence of the PMHT can be enhanced by the use of a "homothetic" measurement model. This is a modification of the basic PMHT model such that measurements at scan t can come from any Gaussian density having mean $Hx(t)$ and variance $\{\kappa_p^2 R\}_{p=1}^{P}$. Typical values used are P = 2, with $\kappa_1 = 1$ and $\kappa_2 = 3$. The "homothetic" nomenclature derives from "having the same mean"; that is, the model has been altered from having one target to P targets, and EM MAP estimation proceeds under the constraint that each of these models has the same track $\{x(t)\}_{t=1}^{T}$. Some algebra yields that with

$$w_{p,r}(t) = \frac{\pi_p\, \mathcal{N}\left\{z_r(t); \hat{y}(t), \kappa_p^2 R(t)\right\}}{\pi_p\, \mathcal{N}\left\{z_r(t); \hat{y}(t), \kappa_p^2 R(t)\right\} + \frac{\pi_0}{V}} \qquad (32)$$

we have

$$\tilde{z}(t) \equiv \frac{\sum_{r=1}^{m_t}\sum_{p=1}^{P} w_{p,r}(t)\, z_r(t)/\kappa_p^2}{\sum_{r=1}^{m_t}\sum_{p=1}^{P} w_{p,r}(t)/\kappa_p^2} \qquad \tilde{R}(t) \equiv \frac{R(t)}{\sum_{r=1}^{m_t}\sum_{p=1}^{P} w_{p,r}(t)/\kappa_p^2} \qquad (33)$$

are the new synthetic measurements and associated covariances, to replace (4) — the PMHT iteration is otherwise intact. We can derive the IRF for the homothetic model:

$$\frac{\varepsilon^2(m)}{mP^2} \int \frac{\left[\displaystyle\sum_{p_1=1}^{P}\sum_{p_2=1}^{P} \kappa_{p_1}^{-2}\kappa_{p_2}^{-2}\, \mathcal{N}\left\{z_1(t); 0, \kappa_{p_1}^2 R(t)\right\}\, \mathcal{N}\left\{z_1(t); 0, \kappa_{p_2}^2 R(t)\right\}\right] R^{-1}(t)\, z_1 z_1^T\, \left(R^{-1}(t)\right)^T}{\displaystyle\frac{\varepsilon(m)}{mP}\sum_{p=1}^{P} \mathcal{N}\left\{z_1(t); 0, \kappa_p^2 R(t)\right\} + \left(1-\varepsilon(m)\right)\frac{1}{V}}\; dz_1 \qquad (34)$$

to be inserted into the numerator of (29).
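The homothetic E-step of (32) and (33) can be sketched in the same style as the single-model version given earlier; the function below uses our own naming and array conventions, and writes the denominator of (32) exactly as in the text (a single-component likelihood term plus the clutter term).

```python
import numpy as np
from scipy.stats import multivariate_normal


def homothetic_estep_scan(z_scan, y_hat, R, kappas, priors, pi0, V):
    """Homothetic PMHT E-step for one scan: per-component weights (eq. 32)
    and the synthetic measurement and covariance (eq. 33).

    z_scan : (m_t, n_z) measurements;  kappas : variance multipliers, e.g. [1.0, 3.0]
    priors : per-component prior probabilities pi_p
    """
    P, m = len(kappas), z_scan.shape[0]
    w = np.zeros((P, m))
    for p in range(P):
        like = np.atleast_1d(
            multivariate_normal.pdf(z_scan, mean=y_hat, cov=kappas[p] ** 2 * R))
        w[p] = priors[p] * like / (priors[p] * like + pi0 / V)       # eq. (32)
    inv_k2 = 1.0 / np.asarray(kappas)[:, None] ** 2                  # 1/kappa_p^2
    denom = np.sum(w * inv_k2)                                       # sum_r sum_p w_{p,r}/kappa_p^2
    z_tilde = np.sum((w * inv_k2)[:, :, None] * z_scan[None, :, :], axis=(0, 1)) / denom
    R_tilde = R / denom                                              # eq. (33)
    return w, z_tilde, R_tilde
```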

Figure 2. Two approaches to data fusion via the PMHT. Above, the sequential approach in which data from each sensor forms its own “scan”; this is the only appropriate case for the PDAF measurement model. Below, an alternative approach for the PMHT, in which each sensor’s data is added to form a single scan. 2’s represent measurements.

3.4. The Multi-Sensor Case

Since observations from a variety of sensors are usually assumed independent conditioned on a target's location, it is a straightforward matter to extend the CRLB analysis just presented to the multi-sensor case. Under both the PDAF and PMHT measurement models a second sensor functions only as another observation, hence the IRF, if it applies, is unaffected by fusion.

The PMHT appears particularly well-suited for multi-sensor operation, since the assumed independence of associations across measurements makes it of little importance whence each measurement arose. Thus, in [14] a scheme was propounded whereby measurements from all sensors were simply added together — overlaid — on a single scan, for direct PMHT processing. This is presented as the "joint" scheme in figure 2. The "sequential" approach in figure 2 illustrates the more natural scheme in which each scan of data is treated separately. The joint scheme is perhaps more appealing due to its simplicity. The issue here: is there any (theoretical) loss from its use? In fact, the only modification needed is to redefine $q(m_t)$ and $\varepsilon(m_t)$. Here we present the expressions for the two-sensor case:

$$q(m_t) = (1 - P_d)^2\, \frac{(2\lambda V)^{m_t} e^{-2\lambda V}}{m_t!} + 2 P_d (1 - P_d)\, \frac{(2\lambda V)^{(m_t - 1)} e^{-2\lambda V}}{(m_t - 1)!} + P_d^2\, \frac{(2\lambda V)^{(m_t - 2)} e^{-2\lambda V}}{(m_t - 2)!} \qquad (35)$$

$$\varepsilon(m_t) = \frac{1}{q(m_t)}\left[ 2 P_d^2\, \frac{(2\lambda V)^{(m_t - 2)} e^{-2\lambda V}}{(m_t - 2)!} + 2 P_d (1 - P_d)\, \frac{(2\lambda V)^{(m_t - 1)} e^{-2\lambda V}}{(m_t - 1)!} \right] \qquad (36)$$

The IRF expression (29) can be used directly.
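Continuing the earlier sketch of (9) and (10), the two-sensor quantities of (35) and (36) can be coded the same way; again the function name is ours.

```python
from math import exp, factorial


def q_eps_two_sensor(m, Pd, lamV):
    """Two-sensor 'joint' fusion versions of q(m) and eps(m), eqs. (35)-(36):
    the clutter rate doubles, and zero, one, or two of the m measurements
    may be target-generated."""
    pois = lambda k: ((2 * lamV) ** k) * exp(-2 * lamV) / factorial(k) if k >= 0 else 0.0
    q = ((1 - Pd) ** 2 * pois(m)
         + 2 * Pd * (1 - Pd) * pois(m - 1)
         + Pd ** 2 * pois(m - 2))
    eps = (2 * Pd ** 2 * pois(m - 2) + 2 * Pd * (1 - Pd) * pois(m - 1)) / q if q > 0 else 0.0
    return q, eps
```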

4. RESULTS

We explore the results derived above as applied to estimation of the initial position and velocity of a two-dimensional constant-velocity target, based on position-only measurements. Some plots have a slightly "choppy" appearance, and this is due to the Monte Carlo integration necessary for their calculation.

In figures 3 and 4 are shown the relative IRFs for the PMHT and PDAF in the Gaussian measurement noise situation. What is most apparent is that there is a significant difference in higher-clutter situations, and that this difference increases with the average number of false alarms per scan. These figures also show the IRF accruing from the "joint" fusion idea, with two identical and independent sensors. From these it is clear that there is indeed some (theoretical) loss from joint versus sequential fusion, although the difference is not great. The degradation is almost certainly due to the change in the prior probabilities: for example, when there is but one measurement at a given sensor it is probably target-generated, yet this information can be lost when it is "lumped" together with measurements from another sensor.

Figures 5 and 6 show the effect of non-Gaussian measurement noise. In the former a generalized Gaussian noise model [15] is used, parametrized by its exponent k. The case k = 1 corresponds to a Laplace (double-sided exponential) distribution, and is relatively heavy-tailed; k = 2 is Gaussian, and k > 2 means that the distribution is light-tailed. In the latter figure the noise model is a mixture of two Gaussians with variances separated by a factor of 10. It is interesting that the IRF increases (less information loss) for both the PDAF and PMHT models as the noise becomes more heavy-tailed. It should be recalled that the plots are of the reduction in estimation efficiency due to measurement uncertainty (and not absolute estimation quality), and hence the effect is assumed to arise from the good light-tailed estimation quality which is sacrificed by clutter and missed measurements.

In figures 7 and 8 the IRFs for two-model homothetic PMHTs are plotted as a function of the variance multiplier $\kappa_2$; the latter figure shows the CRLB for the homothetic model as compared to the "basic" PMHT which does not use homothetics. Of note is that there appears to be an IRF "sweet spot" (at least in terms of the CRLB) at about $\kappa_2 \approx 3$, which corresponds to our empirically-refined practice [5]. It is also interesting that the use of a homothetic model appears to present some advantages over the basic PMHT.

5. SUMMARY

The PMHT is a very nice tracking algorithm for a number of implementational reasons. Its main "evil" is that it relies on an adulterated measurement model, specifically that measurement-to-target associations are independent, with feasible events that more than one measurement per scan may have arisen from a given model. There can be little doubt that this model is inappropriate in most situations; but since the PMHT is fully optimal given that model, and since other algorithms (like the PDAF) proceed from the correct model but make simplifying assumptions along the way to a working tracker, it is of interest to see how deleterious the model is. In this paper we have investigated the model's effect via a CRLB analysis. We have found:

• The CRLB for the PMHT measurement model behaves as that for the PDAF model: the Fisher information matrix under measurement uncertainty is a scalar multiple of that without uncertainty, with that scalar multiple referred to as the "information reduction factor" (IRF).

• The IRF for the PMHT is consistently lower than that for the PDAF. This indicates that estimates made under the PMHT model will be of lower quality than those under the PDAF model. Further, it appears that the difference increases markedly with clutter density. This may explain the mediocre PMHT performance for tracking a single target in clutter — the PMHT has many good points, but it may be that the goal of beating the PDAF in this situation is unrealistic.

• Again as with the PDAF, the PMHT's CRLB IRF behavior holds true over a reasonable range of non-Gaussian measurement noise distributions.

• It appears that the use of a "homothetic" measurement noise model in the PMHT is a good idea, particularly for moderate values (9-25) of the variance ratio.

• There is some (slight) decrease in performance when the PMHT is applied to data fusion in a "joint" versus "sequential" mode. The latter refers to each sensor providing its own scan of data for estimation, while in the former the data from all sensors are overlain.

We realize that conclusions drawn from a CRLB analysis are open to interpretation, and even debate; and we are certainly aware that the issue of convergence (to a local versus global likelihood maximum) is an important determinant of performance which is not represented in a CRLB. We hope, however, that our results are useful, and feel in particular that the second and fourth bullets above should be.

ACKNOWLEDGMENTS

This research has been supported by the Office of Naval Research through NUWC, Division Newport, under contract N66604-98-M-3735.

REFERENCES

1. Y. Bar-Shalom and X.-R. Li, Estimation and Tracking: Principles, Techniques and Software, Artech House, 1993.
2. Y. Bar-Shalom and X.-R. Li, Multitarget-Multisensor Tracking: Principles and Techniques, YBS Publishing, 1995.
3. R. Streit and T. Luginbuhl, "Probabilistic Multi-Hypothesis Tracking", NUWC-NPT Technical Report 10,428, February 1995.
4. A. Logothetis, V. Krishnamurthy, and J. Holst, "On Maneuvering Target Tracking via the PMHT", Proceedings of the Conference on Decision and Control, December 1997.
5. Y. Ruan, P. Willett, and R. Streit, "Making the PMHT the Tracker of Choice", to appear in the Proceedings of the 1999 Aerospace Conference, Snowmass, CO, March 1999.
6. T. Kirubarajan and Y. Bar-Shalom, "Low-Observable Target Motion Analysis Using Amplitude Information", IEEE Transactions on Aerospace and Electronic Systems, pp. 1367-1384, October 1996.
7. C. Jauffret and Y. Bar-Shalom, "Track Formation with Bearing and Frequency Measurements", IEEE Transactions on Aerospace and Electronic Systems, pp. 999-1010, November 1990.
8. P. Willett and Y. Bar-Shalom, "On the CRLB When Measurements are of Uncertain Origin", to appear in the Proceedings of the 1998 Conference on Decision and Control, Tampa, FL, December 1998.
9. E. Giannopoulos, R. Streit, and P. Swaszek, "Probabilistic Multi-Hypothesis Tracking in a Multi-Sensor, Multi-Target Environment", First Australian Data Fusion Symposium, November 1996.
10. H. Van Trees, Detection, Estimation, and Modulation Theory, Volume 1, Wiley, 1968.
11. L. Ljung, System Identification: Theory for the User, Prentice-Hall, 1987.
12. G. Jacyna and S. Pawlukiewicz, "Minimum Origin Uncertainty State Estimation (MOUSE) Algorithm Performance Bounds", MITRE Paper, March 1998.
13. J.-P. LeCadre and H. Gauvrit, "Approximation of the Cramér-Rao Bound for Multiple Target Motion Analysis", Paris Workshop commun GdR ISIS (GT 1) and NUWC, November 1998.
14. C. Rago, P. Willett, and R. Streit, "Direct Data Fusion Using the PMHT: Application Issues", Proceedings of the 1995 American Control Conference, Seattle, WA, June 1995.
15. S. Kassam, Signal Detection in Non-Gaussian Noise, Springer-Verlag, 1987.

Figure 3. The relative information reduction factors (IRFs) for the PDAF and PMHT measurement models, in the standard situation of Gaussian measurement noise, plotted as a function of clutter density λV. The 2-sensor plot relates to the "joint" method for data fusion, and should be compared to the dash-dot line ("sequential" fusion).

Figure 4. The relative information reduction factors (IRFs) for the PDAF and PMHT measurement models, in the standard situation of Gaussian measurement noise, plotted as a function of probability of detection Pd. The 2-sensor plot relates to the "joint" method for data fusion, and should be compared to the dash-dot line ("sequential" fusion).

Figure 5. The relative information reduction factors (IRFs) for the PDAF and PMHT measurement models, in generalized Gaussian measurement noise (λV = 0.1, Pd = 0.9), plotted as a function of the distribution's exponent k.

Figure 6. The relative information reduction factors (IRFs) for the PDAF and PMHT measurement models, in Gaussian mixture (variance ratio 10) measurement noise (λV = 0.1, Pd = 0.9), plotted as a function of mixing proportion α.

Figure 7. The information reduction factor (IRF) for the PMHT measurement model. The situation is of a homothetic measurement model, with κ2 representing the variance multiplier for the second model.

Figure 8. The information reduction factor (IRF) for the PMHT measurement model. This is essentially the same as figure 7, but the IRF is compared to that for the case that no homothetic model is used.