Multisensor Multitarget Intensity Filter

Roy L. Streit
Metron, Inc., Reston, VA, U.S.A.
[email protected], [email protected]

Abstract – A multisensor multitarget intensity filter is derived for N sensors. The multitarget process is assumed to be a Poisson point process, as are the sensor measurement sets. The sensor data are pooled, but sensor labels are retained. The likelihood function of the pooled data is obtained via the Poisson point process models. The Bayes information updated point process is not Poisson, but it is shown that all its single-target marginal probability densities are identical. The Bayes posterior density is approximated by the product of its marginal densities. The marginal single-target density is scaled to obtain the intensity of the Poisson point process approximation. The fused multisensor multitarget intensity filter is the average of the sensor intensity filters, provided the sensor coverages are identical. The filter for non-identical sensor coverages is also described.

Keywords: Multisensor tracking, multisensor fusion, multitarget tracking, intensity filter, Poisson point process, probability hypothesis density, data association.¹

1 Introduction

The multisensor multitarget intensity filter for two sensors is derived first. The extension to N ≥ 2 sensors is straightforward. The target process is assumed to be a Poisson point process (PPP) that is subjected to a Bernoulli death filter followed by a multitarget Markov motion transformation based on a given single-target motion model. These assumptions are the same as in [1]. Sensors are assumed to produce measurement sets that are realizations of PPP's. These measurement processes are assumed to be conditionally independent given the multitarget state. However, it is not assumed that the sensors produce the same number of measurements, that the sensor measurement spaces are the same, that the sensors have the same probability of target detection, or that targets detected by one sensor are also detected by the other. These issues bedevil multisensor filters. The multisensor intensity filter derived here differs significantly from that given for the multisensor probability hypothesis density (PHD) filter [2, Theorem 7, p. 1169]. In [2] the multisensor PHD filter update takes the form of a product, while in the present paper the intensity filter is given by an average. These two forms are clearly not equivalent, but the difference may be due to different problem formulations. The approach taken here builds on the Bayesian machinery presented in [1]. Alternatives to this approach for the single sensor case are presented in the interesting series of papers [3,4,5]. See [6] for additional background concerning PHD filters. For those interested in more details concerning PPP's, see [7].

¹ Research sponsored in part by the Office of Naval Research (ONR) under contract No. N00014-06-C-0097.

2 Basics

The notation of [1] is followed when possible. Let $f_{k-1|k-1}(x_{k-1})$ be the inductively known, sensor-independent, multitarget intensity defined on the single-target state space $S = S_{\mathrm{Target}} \cup S_\phi$. The space $S_\phi$ is called the clutter space, and it is disjoint from the target state space $S_{\mathrm{Target}}$. "Clutter-targets" have states in $S_\phi$ and model measurement clutter. For further details, see [1]. Normalized intensity is a probability density function (pdf). The predicted multitarget intensity,

$$ f_{k|k-1}(x_k) = b_k(x_k) + \int_S \Psi_{\Xi_k|\Xi_{k-1}}(x_k \mid x_{k-1}) \left( 1 - d_{k-1}(x_{k-1}) \right) f_{k-1|k-1}(x_{k-1})\, dx_{k-1}, \qquad (1) $$

is also sensor independent. Let $Z^{(i)}$, $i = 1, 2$, denote the measurement spaces of the two sensors. The sensor measurement spaces are not identical; in general, they may have different dimensions and units. Let $Z_k$ and $W_k$ denote the point measurement random variables for the first and second sensors, respectively, at time $t_k$. For all $x_k \in S$, the sensor likelihood functions are

$$ p^{(1)}_{Z_k|X_k}(z \mid x_k), \ \text{for all } z \in Z^{(1)}, \qquad p^{(2)}_{W_k|X_k}(w \mid x_k), \ \text{for all } w \in Z^{(2)}. \qquad (2) $$

Let $P_k^{D(i)}(x_k)$, $i = 1, 2$, be the probabilities of detection of the sensors. The predicted sensor measurement intensities are, for $z \in Z^{(1)}$,

$$ \lambda^{(1)}_{k|k-1}(z) = \int_S p^{(1)}_{Z_k|X_k}(z \mid x_k)\, P_k^{D(1)}(x_k)\, f_{k|k-1}(x_k)\, dx_k, \qquad (3) $$

and, for $w \in Z^{(2)}$,

$$ \lambda^{(2)}_{k|k-1}(w) = \int_S p^{(2)}_{W_k|X_k}(w \mid x_k)\, P_k^{D(2)}(x_k)\, f_{k|k-1}(x_k)\, dx_k. \qquad (4) $$

Define the sensor state space coverages by

$$ C^{(s)} = \left\{ x_k : P_k^{D(s)}(x_k) > 0 \right\} \subset S, \qquad s = 1, 2. \qquad (5) $$

Until the general case is discussed in Section 7.3, it is assumed that the sensor coverages are identical. Note that identical sensor coverages do not imply that the sensors have identical detection probabilities on the common coverage set.
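The prediction (1) and the predicted measurement intensity (3) can be sketched numerically on a discretized state space. The following Python sketch is illustrative only: the one-dimensional grid, the Gaussian motion kernel, and all numeric values are assumptions made for the example, not quantities specified in the paper.

```python
import numpy as np

# 1-D grid discretization of the state space S (illustrative values)
x = np.linspace(-10.0, 10.0, 201)
dx = x[1] - x[0]

def predict_intensity(f_prev, birth, d_prev, psi):
    """Eq. (1): f_{k|k-1}(x) = b_k(x) + int Psi(x|x') (1 - d_{k-1}(x')) f_{k-1|k-1}(x') dx'.
    psi[i, j] approximates Psi(x_i | x_j) on the grid."""
    return birth + psi @ ((1.0 - d_prev) * f_prev) * dx

def predicted_meas_intensity(lik, pd, f_pred):
    """Eq. (3): lambda_{k|k-1}(z) = int p(z|x) P^D(x) f_{k|k-1}(x) dx,
    with lik[i] approximating p(z | x_i) for a single measurement z."""
    return np.sum(lik * pd * f_pred) * dx

# toy Gaussian random-walk motion kernel (sigma = 1)
psi = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2) / np.sqrt(2.0 * np.pi)
f_prev = np.exp(-0.5 * (x - 2.0) ** 2) / np.sqrt(2.0 * np.pi)  # ~one expected target near x = 2
birth = np.full_like(x, 0.001)     # birth intensity b_k
death = np.full_like(x, 0.1)       # death probability d_{k-1}

f_pred = predict_intensity(f_prev, birth, death, psi)
expected_targets = f_pred.sum() * dx   # integral of the predicted intensity
# roughly 0.9 surviving target plus a small amount of birth mass
```

Integrating the predicted intensity over the grid recovers the expected target count, which is the basic bookkeeping property of an intensity filter.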

3 Pooled sensor data

The given sensor data sets

$$ \upsilon_k(1) = \left( m_k, z_k^1, \ldots, z_k^{m_k} \right) \in \mathcal{X}(Z^{(1)}), \qquad \upsilon_k(2) = \left( n_k, w_k^1, \ldots, w_k^{n_k} \right) \in \mathcal{X}(Z^{(2)}) \qquad (6) $$

are assumed to be conditionally independent realizations of the Poisson measurement processes $\Upsilon_k^{(1)} \mid \Xi_k$ and $\Upsilon_k^{(2)} \mid \Xi_k$, respectively, where $\Xi_k$ is the multitarget state PPP. The pooled data are denoted by the ordered pair

$$ \upsilon_k = \left( \upsilon_k(1), \upsilon_k(2) \right). \qquad (7) $$

The sensor labels are retained in the pooled data, but data order within each sensor group is uninformative. Let $\Upsilon_k$ denote the PPP variable for $\upsilon_k$.

3.1 Target-to-sensor assignment

The multitarget PPP $\Xi_k$ is written

$$ \Xi_k \equiv \left( R_k, X_k^1, \ldots, X_k^{R_k} \right), $$

where $R_k$ is the discrete Poisson random variable that characterizes the number of points. Conditioned on $R_k$, the random variables $\{ X_k^j : j = 1, \ldots, R_k \}$ correspond to independent and identically distributed (i.i.d.) point targets in the state space $S = S_{\mathrm{Target}} \cup S_\phi$. Let $X_k$ denote this common random variable. The pdf of $X_k$ is the normalized intensity of $\Xi_k$. The likelihood function of the pooled data $\upsilon_k$ is conditioned on the multitarget realization $\xi_k = (r_k, x_k^1, \ldots, x_k^{r_k})$, $x_k^j \in S = S_{\mathrm{Target}} \cup S_\phi$ for all $j$. The multitarget intensity is assumed to be continuous on $S$. These points are therefore distinct with probability one, and they correspond either to detected targets or to clutter-targets. The usual assignment procedure requires the enumeration, for each realization, of the myriad ways in which the points in $\xi_k$ can be assigned to the two sensors. This enumeration includes, for example, assigning a target to both sensors, to one sensor and not the other, and to neither sensor. The sum of this enumeration is taken over all possible realizations $\xi_k$.

This procedure seems far too computationally complex to be practical. The alternative approach taken here relies heavily on the PPP multitarget model and leads to a computationally practical multisensor multitarget intensity filter. The resulting intensity filter is intuitively compatible with the additive theory of Bayesian evidence first propounded by Winter and Stein [8] and later cited by Mahler [9]. Variance reduction issues are discussed in Section 7.5. The $r_k$ points in the realization $\xi_k$ are i.i.d., and they are distinct with probability one. The continuous multitarget intensity implies that a physically real target is represented by, loosely speaking, a continuous "bump" on the intensity surface in the neighborhood of where the target is located. Therefore, i.i.d. samples drawn from the multitarget intensity can be distinct and yet still correspond to one and the same real target. This important fact has immediate consequences for the target-to-sensor assignment problem. The myriad possible assignments of real targets to the sensors are modeled via the assignment of the i.i.d. sample targets in the realization $\xi_k$ to different sensors. Every i.i.d. sample target in the realization is assigned to exactly one sensor. Both sensors can observe the same real target, since that target can be represented by more than one sample target in $\xi_k$. Nothing in the multitarget PPP precludes realizations in which all the i.i.d. samples are drawn from one target. However, such events contribute very little to the filter, because during the information update step they are found to have extremely low likelihoods. In other words, multitarget realizations that are inconsistent with the data are essentially discounted during the information update.
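The discussion above can be made concrete by simulating a PPP realization: a Poisson-distributed number of i.i.d. sample points drawn from the normalized intensity. The sketch below is a hypothetical illustration; the intensity, grid resolution, and sampler are assumptions for the example. A single "bump" of mass 2 typically yields several distinct sample points that all represent the same physical target.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ppp(intensity, lo, hi, rng, n_grid=2000):
    """One realization of a 1-D PPP: a Poisson count with mean equal to
    the integrated intensity, then i.i.d. points drawn from the
    normalized intensity (sampled here from a fine grid)."""
    xs = np.linspace(lo, hi, n_grid)
    w = intensity(xs)
    mass = w.sum() * (xs[1] - xs[0])   # integrated intensity = expected count
    n = rng.poisson(mass)
    return rng.choice(xs, size=n, p=w / w.sum())

# a single "bump" of mass 2 near x = 0: a realization often contains
# several distinct sample points that represent the same physical target
bump = lambda xs: 2.0 * np.exp(-0.5 * (xs / 0.5) ** 2) / (0.5 * np.sqrt(2.0 * np.pi))
pts = sample_ppp(bump, -5.0, 5.0, rng)
```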

3.2 Pooled data likelihood function

The complicated nature of the many possible assignments of targets to sensors is treated entirely via the sample targets in the realization $\xi_k$. Define the list of all partitions of $\{1, \ldots, m_k + n_k\}$ into parts of sizes $m_k$ and $n_k$:

$$ \Omega = \left\{ \left( \mathrm{Part}^{(1)}(\rho), \mathrm{Part}^{(2)}(\rho) \right) : \rho = 1, \ldots, \binom{m_k + n_k}{m_k} \equiv \frac{(m_k + n_k)!}{m_k!\, n_k!} \right\}, \qquad (8) $$

where, for $i = 1, 2$, $\mathrm{Part}^{(i)}(\rho) \subset \{1, 2, \ldots, m_k + n_k\}$, $\mathrm{Part}^{(1)}(\rho) \cup \mathrm{Part}^{(2)}(\rho) = \{1, 2, \ldots, m_k + n_k\}$, and $\mathrm{Part}^{(1)}(\rho) \cap \mathrm{Part}^{(2)}(\rho) = \emptyset$.

Each partition $\rho \in \Omega$ corresponds to a hypothesis $H_\rho$ about the assignment of targets to sensors. Thus, for the list of targets $(x_k^1, \ldots, x_k^{m_k + n_k})$,

$$ H_\rho: \text{For } s = 1, \ldots, m_k + n_k, \text{ target } x_k^s \text{ is assigned to sensor } i \in \{1, 2\} \text{ if } s \in \mathrm{Part}^{(i)}(\rho). \qquad (9) $$

The hypothesis set is mutually exclusive and exhaustive, and each hypothesis in $\Omega$ is assumed equally likely to be the correct assignment hypothesis. Let $\alpha \in \mathrm{Sym}(m_k)$ and $\beta \in \mathrm{Sym}(n_k)$ denote permutations on $m_k$ and $n_k$ distinct ordered objects, respectively. Denote the $j$-th elements of the ordered but permuted lists by

$$ \alpha_\rho(j) \equiv \alpha\!\left( \mathrm{Part}^{(1)}(\rho) \right)(j), \quad j = 1, \ldots, m_k, \qquad \beta_\rho(j) \equiv \beta\!\left( \mathrm{Part}^{(2)}(\rho) \right)(j), \quad j = 1, \ldots, n_k. \qquad (10) $$

Under all hypotheses $H_\rho$, the likelihood function of $\upsilon_k$ is given by

$$ L_{\Upsilon_k|\Xi_k}(\upsilon_k \mid \xi_k, H_\rho) = \left[ \frac{1}{m_k!} \sum_{\alpha \in \mathrm{Sym}(m_k)} \prod_{j=1}^{m_k} p^{(1)}_{Z_k|X_k}\!\left( z_k^j \mid x_k^{\alpha_\rho(j)} \right) \right] \left[ \frac{1}{n_k!} \sum_{\beta \in \mathrm{Sym}(n_k)} \prod_{j=1}^{n_k} p^{(2)}_{W_k|X_k}\!\left( w_k^j \mid x_k^{\beta_\rho(j)} \right) \right] \qquad (11) $$

for $r_k = m_k + n_k$, and $L_{\Upsilon_k|\Xi_k}(\upsilon_k \mid \xi_k, H_\rho) = 0$ for $r_k \neq m_k + n_k$.

Permutations within the two parts of the partition specified by hypothesis $H_\rho$ are assumed equally likely to be the correct assignment, so the sums in (11) are divided by $m_k!$ and $n_k!$, respectively. Implicit in this formulation of the likelihood function is the fact that the sensor coverages are identical. If the coverages are known a priori to be different, then targets lying outside the coverage of sensor 1 but inside the coverage of sensor 2 could not be the origin of a sensor 1 measurement, and the likelihood function would need to be changed to reflect this fact. The multisensor intensity filter for non-identical sensor coverages is discussed in Section 7.3.

4 Multitarget information update

The Bayesian information update for the target process is

$$ p_{\Xi_k|\Upsilon_1,\ldots,\Upsilon_k}(\xi_k) = \binom{m_k + n_k}{m_k}^{-1} \sum_{\rho=1}^{\binom{m_k+n_k}{m_k}} L_{\Upsilon_k|\Xi_k}(\upsilon_k \mid \xi_k, H_\rho)\, \frac{p_{\Xi_k|\Upsilon_1,\ldots,\Upsilon_{k-1}}(\xi_k \mid H_\rho)}{p_{\Upsilon_k|\Upsilon_1,\ldots,\Upsilon_{k-1}}(\upsilon_k)}. \qquad (12) $$

From (11), the update is nonzero only if the number of points in the realization $\xi_k = (r_k, x_k^1, \ldots, x_k^{r_k})$ is $r_k = m_k + n_k$. Simplifying (12) requires using the sensor labels. Let

$$ f^{(i)}_{k|k-1}(x_k) \equiv P_k^{D(i)}(x_k)\, f_{k|k-1}(x_k), \qquad i = 1, 2, \qquad (13) $$

be the predicted detected-target intensity of the sensors. The likelihood function $p_{\Xi_k|\Upsilon_1,\ldots,\Upsilon_{k-1}}(\xi_k \mid H_\rho)$ is conditioned on the assignment hypothesis $H_\rho$, so

$$ p_{\Xi_k|\Upsilon_1,\ldots,\Upsilon_{k-1}}(\xi_k \mid H_\rho) = \left[ \frac{e^{-\int_S f^{(1)}_{k|k-1}(x)\,dx}}{m_k!} \prod_{j=1}^{m_k} f^{(1)}_{k|k-1}\!\left( x_k^{\alpha_\rho(j)} \right) \right] \left[ \frac{e^{-\int_S f^{(2)}_{k|k-1}(x)\,dx}}{n_k!} \prod_{j=1}^{n_k} f^{(2)}_{k|k-1}\!\left( x_k^{\beta_\rho(j)} \right) \right]. \qquad (14) $$

The likelihood function $p_{\Upsilon_k|\Upsilon_1,\ldots,\Upsilon_{k-1}}(\upsilon_k)$ does not depend on the hypothesis $H_\rho$, so

$$ p_{\Upsilon_k|\Upsilon_1,\ldots,\Upsilon_{k-1}}(\upsilon_k) = \left[ \frac{e^{-\int_{Z^{(1)}} \lambda^{(1)}_{k|k-1}(z)\,dz}}{m_k!} \prod_{j=1}^{m_k} \lambda^{(1)}_{k|k-1}(z_k^j) \right] \left[ \frac{e^{-\int_{Z^{(2)}} \lambda^{(2)}_{k|k-1}(w)\,dw}}{n_k!} \prod_{j=1}^{n_k} \lambda^{(2)}_{k|k-1}(w_k^j) \right]. \qquad (15) $$

From (3)-(4) and (13), it follows that

$$ \int_S f^{(1)}_{k|k-1}(x)\,dx = \int_{Z^{(1)}} \lambda^{(1)}_{k|k-1}(z)\,dz \qquad (16) $$

and

$$ \int_S f^{(2)}_{k|k-1}(x)\,dx = \int_{Z^{(2)}} \lambda^{(2)}_{k|k-1}(w)\,dw. \qquad (17) $$

Taking the ratio of (14) and (15) gives

$$ \frac{p_{\Xi_k|\Upsilon_1,\ldots,\Upsilon_{k-1}}(\xi_k \mid H_\rho)}{p_{\Upsilon_k|\Upsilon_1,\ldots,\Upsilon_{k-1}}(\upsilon_k)} = \left[ \prod_{j=1}^{m_k} \frac{f^{(1)}_{k|k-1}\!\left( x_k^{\alpha_\rho(j)} \right)}{\lambda^{(1)}_{k|k-1}(z_k^j)} \right] \left[ \prod_{j=1}^{n_k} \frac{f^{(2)}_{k|k-1}\!\left( x_k^{\beta_\rho(j)} \right)}{\lambda^{(2)}_{k|k-1}(w_k^j)} \right]. \qquad (18) $$

Substituting (11) and (18) into (12) gives the information updated multitarget pdf

$$ p_{\Xi_k|\Upsilon_1,\ldots,\Upsilon_k}(\xi_k) = \binom{m_k+n_k}{m_k}^{-1} \sum_{\rho=1}^{\binom{m_k+n_k}{m_k}} \left[ \frac{1}{m_k!} \sum_{\alpha \in \mathrm{Sym}(m_k)} \prod_{j=1}^{m_k} \frac{p^{(1)}_{Z_k|X_k}\!\left( z_k^j \mid x_k^{\alpha_\rho(j)} \right) f^{(1)}_{k|k-1}\!\left( x_k^{\alpha_\rho(j)} \right)}{\lambda^{(1)}_{k|k-1}(z_k^j)} \right] \left[ \frac{1}{n_k!} \sum_{\beta \in \mathrm{Sym}(n_k)} \prod_{j=1}^{n_k} \frac{p^{(2)}_{W_k|X_k}\!\left( w_k^j \mid x_k^{\beta_\rho(j)} \right) f^{(2)}_{k|k-1}\!\left( x_k^{\beta_\rho(j)} \right)}{\lambda^{(2)}_{k|k-1}(w_k^j)} \right]. \qquad (19) $$
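For very small measurement sets, the hypothesis average in (11)-(12) can be evaluated by brute force, which also makes the combinatorial cost of the enumeration explicit. The sketch below is illustrative only: the Gaussian likelihood, the numeric values, and the function name are assumptions made for the example.

```python
import math
from itertools import combinations, permutations

def pooled_likelihood(z, w, xs, lik1, lik2):
    """Brute-force evaluation of the hypothesis-averaged pooled-data
    likelihood of Eqs. (11)-(12): average over all partitions of the
    sample targets between the two sensors and, within each sensor,
    over all permutations.  Exponential cost; illustration only."""
    m, n = len(z), len(w)
    assert len(xs) == m + n              # likelihood is zero otherwise
    idx = range(m + n)
    total = 0.0
    for part1 in combinations(idx, m):                 # Part^(1)(rho)
        part2 = [i for i in idx if i not in part1]     # Part^(2)(rho)
        s1 = sum(math.prod(lik1(z[j], xs[a[j]]) for j in range(m))
                 for a in permutations(part1)) / math.factorial(m)
        s2 = sum(math.prod(lik2(w[j], xs[b[j]]) for j in range(n))
                 for b in permutations(part2)) / math.factorial(n)
        total += s1 * s2
    return total / math.comb(m + n, m)   # equally likely hypotheses

# toy Gaussian likelihood shared by both sensors
gauss = lambda z, x: math.exp(-0.5 * (z - x) ** 2) / math.sqrt(2.0 * math.pi)
L = pooled_likelihood([0.1], [0.2, 1.9], [0.0, 0.0, 2.0], gauss, gauss)
```

Because the average runs over all partitions and all within-sensor permutations, the result is symmetric in the sample targets, as the derivation requires.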

5 Identical single-target marginal pdf's

The information update is a formidable expression that may defy further simplification; however, it has identical single-target marginal pdf's. Consider the single-target state $x_k^s$ for any fixed $s \in \{1, \ldots, m_k + n_k\}$. The sum over partitions in (19) splits into two terms:

$$ p_{\Xi_k|\Upsilon_1,\ldots,\Upsilon_k}(\xi_k) = \binom{m_k+n_k}{m_k}^{-1} \left[ \sum_{\substack{\rho \in \Omega \text{ such that} \\ x_k^s \in \mathrm{Part}^{(1)}(\rho)}} + \sum_{\substack{\rho \in \Omega \text{ such that} \\ x_k^s \in \mathrm{Part}^{(2)}(\rho)}} \right]. \qquad (20) $$

The marginal pdf of $x_k^s$ (the integral of $p_{\Xi_k|\Upsilon_1,\ldots,\Upsilon_k}(\xi_k)$ over all single-target variables except $x_k^s$) is different for each of the two sums in (20). Consider the first sum. Using Fubini's theorem, pass the required integral all the way to the product of the $\alpha$ and $\beta$ sums. The integral of the $\beta$ sum is one. The integrals of all the factors in the $\alpha$ summand are also one, except for the one factor that is not integrated over. For each $\alpha$, this factor corresponds to $\alpha_\rho(j) = s$. Hence, the marginal pdf is

$$ \frac{1}{m_k!} \sum_{\alpha \in \mathrm{Sym}(m_k)} \frac{p^{(1)}_{Z_k|X_k}\!\left( z_k^{\alpha_\rho^{-1}(s)} \mid x_k^s \right) f^{(1)}_{k|k-1}(x_k^s)}{\lambda^{(1)}_{k|k-1}\!\left( z_k^{\alpha_\rho^{-1}(s)} \right)} = \frac{1}{m_k!} \sum_{j=1}^{m_k} \sum_{\substack{\alpha \in \mathrm{Sym}(m_k) \\ \text{such that } \alpha_\rho^{-1}(s) = j}} \frac{p^{(1)}_{Z_k|X_k}(z_k^j \mid x_k^s)\, f^{(1)}_{k|k-1}(x_k^s)}{\lambda^{(1)}_{k|k-1}(z_k^j)} = \frac{1}{m_k} \sum_{j=1}^{m_k} \frac{p^{(1)}_{Z_k|X_k}(z_k^j \mid x_k^s)\, f^{(1)}_{k|k-1}(x_k^s)}{\lambda^{(1)}_{k|k-1}(z_k^j)}. \qquad (21) $$

Similar observations apply to the second sum in (20), except that the only factor that does not integrate to one corresponds to $\beta_\rho(j) = s$. Marginalizing the conditioned variable $\left( R_k, X_k^1, \ldots, X_k^{R_k} \right) \mid \Upsilon_1, \ldots, \Upsilon_k$ over $X_k^1, \ldots, X_k^{s-1}, X_k^{s+1}, \ldots, X_k^{R_k}$ gives the marginal pdf

$$ p_{R_k, X_k^s|\Upsilon_1,\ldots,\Upsilon_k}\!\left( r_k, x_k^s \right) = \int_S \cdots \int_S p_{\Xi_k|\Upsilon_1,\ldots,\Upsilon_k}(\xi_k) \prod_{\substack{i=1 \\ i \neq s}}^{r_k} dx_k^i. $$

This pdf is zero if $r_k \neq m_k + n_k$. If $r_k = m_k + n_k$, then from (21) it follows that

$$ p_{R_k, X_k^s|\Upsilon_1,\ldots,\Upsilon_k}\!\left( m_k + n_k, x_k^s \right) = \binom{m_k+n_k}{m_k}^{-1} \left[ \sum_{\substack{\rho \in \Omega \text{ such that} \\ x_k^s \in \mathrm{Part}^{(1)}(\rho)}} \frac{1}{m_k} \sum_{j=1}^{m_k} \frac{p^{(1)}_{Z_k|X_k}(z_k^j \mid x_k^s)\, f^{(1)}_{k|k-1}(x_k^s)}{\lambda^{(1)}_{k|k-1}(z_k^j)} + \sum_{\substack{\rho \in \Omega \text{ such that} \\ x_k^s \in \mathrm{Part}^{(2)}(\rho)}} \frac{1}{n_k} \sum_{j=1}^{n_k} \frac{p^{(2)}_{W_k|X_k}(w_k^j \mid x_k^s)\, f^{(2)}_{k|k-1}(x_k^s)}{\lambda^{(2)}_{k|k-1}(w_k^j)} \right]. \qquad (22) $$

The two summands in (22) are independent of their summation indices. The numbers of terms in the first and second sums are

$$ \binom{m_k + n_k - 1}{n_k} \quad \text{and} \quad \binom{m_k + n_k - 1}{m_k}, $$

respectively. Straightforward algebra gives

$$ p_{R_k, X_k^s|\Upsilon_1,\ldots,\Upsilon_k}\!\left( m_k + n_k, x_k^s \right) = \frac{1}{m_k + n_k} \left[ \sum_{j=1}^{m_k} \frac{p^{(1)}_{Z_k|X_k}(z_k^j \mid x_k^s)\, f^{(1)}_{k|k-1}(x_k^s)}{\lambda^{(1)}_{k|k-1}(z_k^j)} + \sum_{j=1}^{n_k} \frac{p^{(2)}_{W_k|X_k}(w_k^j \mid x_k^s)\, f^{(2)}_{k|k-1}(x_k^s)}{\lambda^{(2)}_{k|k-1}(w_k^j)} \right]. \qquad (23) $$

The marginal pdf's (23) are therefore identical for all $s$. Marginalizing over $X_k^s$ in (23) gives

$$ p_{R_k|\Upsilon_1,\ldots,\Upsilon_k}(m_k + n_k) = \begin{cases} 1, & \text{for } r_k = m_k + n_k, \\ 0, & \text{for } r_k \neq m_k + n_k. \end{cases} $$

Hence, the "single target" pdf is numerically identical to (23), as is seen as follows:

$$ p_{X_k|R_k,\Upsilon_1,\ldots,\Upsilon_k}(x_k) \equiv p_{X_k^s|R_k,\Upsilon_1,\ldots,\Upsilon_k}(x_k^s) = \frac{p_{R_k, X_k^s|\Upsilon_1,\ldots,\Upsilon_k}(x_k^s)}{p_{R_k|\Upsilon_1,\ldots,\Upsilon_k}(m_k + n_k)} = \frac{1}{m_k + n_k} \left[ \sum_{j=1}^{m_k} \frac{p^{(1)}_{Z_k|X_k}(z_k^j \mid x_k)\, P_k^{D(1)}(x_k)}{\lambda^{(1)}_{k|k-1}(z_k^j)} + \sum_{j=1}^{n_k} \frac{p^{(2)}_{W_k|X_k}(w_k^j \mid x_k)\, P_k^{D(2)}(x_k)}{\lambda^{(2)}_{k|k-1}(w_k^j)} \right] f_{k|k-1}(x_k) \quad \forall s. \qquad (24) $$

Similarly, the joint conditional pdf

$$ p_{X_k^1,\ldots,X_k^{m_k+n_k}|R_k,\Upsilon_1,\ldots,\Upsilon_k}\!\left( x_k^1, \ldots, x_k^{m_k+n_k} \right) \equiv \frac{p_{R_k, X_k^1,\ldots,X_k^{m_k+n_k}|\Upsilon_1,\ldots,\Upsilon_k}\!\left( m_k + n_k, x_k^1, \ldots, x_k^{m_k+n_k} \right)}{p_{R_k|\Upsilon_1,\ldots,\Upsilon_k}(m_k + n_k)} \qquad (25) $$

is numerically identical to (19).
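The "straightforward algebra" that collapses (22) to (23) rests on a combinatorial identity: the partition counts combine with the hypothesis average to give a uniform weight $1/(m_k + n_k)$ on every measurement term. The exact-arithmetic check below verifies this; the function name is hypothetical.

```python
from math import comb
from fractions import Fraction

def eq22_weights(m, n):
    """Exact weights multiplying the sensor-1 and sensor-2 measurement
    terms after the partition counts of Eq. (22) are applied:
    w1 = C(m+n-1, n) / C(m+n, m) * (1/m),  w2 = C(m+n-1, m) / C(m+n, m) * (1/n)."""
    C = comb(m + n, m)
    w1 = Fraction(comb(m + n - 1, n), C) * Fraction(1, m)
    w2 = Fraction(comb(m + n - 1, m), C) * Fraction(1, n)
    return w1, w2

# every pair collapses to the uniform weight 1/(m+n) of Eq. (23)
for m, n in [(1, 1), (2, 3), (5, 7)]:
    w1, w2 = eq22_weights(m, n)
    assert w1 == w2 == Fraction(1, m + n)
```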

6 Poisson approximation

The posterior multitarget pdf (19) is approximated as the product of its single-target marginal pdf's:

$$ p_{X_k^1,\ldots,X_k^{m_k+n_k}|R_k,\Upsilon_1,\ldots,\Upsilon_k}\!\left( x_k^1, \ldots, x_k^{m_k+n_k} \right) \approx \prod_{s=1}^{m_k+n_k} p_{X_k|R_k,\Upsilon_1,\ldots,\Upsilon_k}(x_k^s). \qquad (26) $$

Both sides of (26) are pdf's of the multitarget state $(x_k^1, \ldots, x_k^{m_k+n_k})$. For any constant $c > 0$ the intensity

$$ f_{k|k}(x_k) = c\, p_{X_k|\Upsilon_1,\ldots,\Upsilon_k}(x_k) \qquad (27) $$

defines a PPP that satisfies (26). The maximum likelihood estimate for $c$ is found by maximizing the likelihood function

$$ L(c \mid \xi_k) = \left[ \frac{\exp\!\left( -\int c\, p_{X_k|\cdot}(x)\,dx \right)}{m_k!} \prod_{s=1}^{m_k} c\, p_{X_k|\cdot}(x_k^s) \right] \left[ \frac{\exp\!\left( -\int c\, p_{X_k|\cdot}(x)\,dx \right)}{n_k!} \prod_{s=m_k+1}^{m_k+n_k} c\, p_{X_k|\cdot}(x_k^s) \right] \propto \exp(-2c)\, c^{m_k+n_k} $$

for all $\xi_k = (m_k + n_k, x_k^1, \ldots, x_k^{m_k+n_k})$. The maximum likelihood estimate is $\hat{c}_{ML} = (m_k + n_k)/2$. Substituting this estimate into (27) and using (24) gives the detected-target intensity

$$ f^{\mathrm{Detected}}_{k|k}(x_k) = \frac{1}{2} \left[ \sum_{j=1}^{m_k} \frac{p^{(1)}_{Z_k|X_k}(z_k^j \mid x_k)\, P_k^{D(1)}(x_k)}{\lambda^{(1)}_{k|k-1}(z_k^j)} + \sum_{j=1}^{n_k} \frac{p^{(2)}_{W_k|X_k}(w_k^j \mid x_k)\, P_k^{D(2)}(x_k)}{\lambda^{(2)}_{k|k-1}(w_k^j)} \right] f_{k|k-1}(x_k). \qquad (28) $$

The intensity (28) is the average of the single-sensor detected-target intensities. The intensity of undetected targets averaged over the two sensors is

$$ f^{\mathrm{Undetected}}_{k|k}(x_k) = \frac{1}{2} \left\{ \left( 1 - P_k^{D(1)}(x_k) \right) + \left( 1 - P_k^{D(2)}(x_k) \right) \right\} f_{k|k-1}(x_k). \qquad (29) $$

Adding the intensities of detected and undetected targets gives

$$ f_{k|k}(x_k) = f^{\mathrm{Undetected}}_{k|k}(x_k) + f^{\mathrm{Detected}}_{k|k}(x_k). \qquad (30) $$

The intensity filter for the two sensor case is obtained by substituting (28) and (29) into (30).

7 General multisensor case

7.1 Sensors with identical coverages

The general multisensor problem is easily derived from the two sensor case. Let $N$ be the number of sensors, and denote the data for sensor $s$ by $z_k(s) \equiv \{ z_k^j(s) : j = 1, \ldots, m_k(s) \}$, $s = 1, \ldots, N$. Let $P_k^{D(s)}(x_k)$ be the target detection probability for sensor $s$. If the sensor coverages are all identical, then the total target intensity for the multisensor problem takes the form

$$ f^{\mathrm{Fused}}_{k|k}(x_k) = \left[ \frac{1}{N} \sum_{s=1}^{N} L_s(z_k(s) \mid x_k) \right] f_{k|k-1}(x_k), \qquad (31) $$

where

$$ L_s(z_k(s) \mid x_k) = 1 - P_k^{D(s)}(x_k) + \sum_{j=1}^{m_k(s)} \frac{p^{(s)}_{Z_k(s)|X_k}(z_k^j(s) \mid x_k)\, P_k^{D(s)}(x_k)}{\lambda^{(s)}_{k|k-1}(z_k^j(s))}. \qquad (32) $$

The "quality" of the sensors is modeled by the target detection probabilities $P_k^{D(s)}(x_k)$, so there is no further need to weight the terms in the sum (31). Integrating the intensity $f^{\mathrm{Fused}}_{k|k}(x_k)$ over any region $\mathcal{R}$ in the state space gives the expected number of targets in $\mathcal{R}$:

$$ E\!\left[ N_{k|k}(\mathcal{R}) \right] = \int_{\mathcal{R}} f^{\mathrm{Fused}}_{k|k}(x_k)\, dx_k = \frac{1}{N} \sum_{s=1}^{N} \int_{\mathcal{R}} L_s(z_k(s) \mid x_k)\, f_{k|k-1}(x_k)\, dx_k = \frac{1}{N} \sum_{s=1}^{N} E\!\left[ N^{(s)}_{k|k}(\mathcal{R}) \right], \qquad (33) $$

where $E[ N^{(s)}_{k|k}(\mathcal{R}) ]$ is the estimated expected number of targets using sensor $s$ alone. Intuitively, if the individual sensor target count estimates are statistically consistent at time $t_k$, the multisensor target count estimate is also consistent. The variance of the estimate $E[ N^{(s)}_{k|k}(\mathcal{R}) ]$ is equal to the expected number of targets (see [10] and the references therein).
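The fused update (31)-(32) and the target count (33) can be sketched on a discretized state space. The one-dimensional grid, the Gaussian likelihoods, the detection probabilities, and the data below are all illustrative assumptions, not values from the paper.

```python
import numpy as np

x = np.linspace(-10.0, 10.0, 401)    # state grid (illustrative)
dx = x[1] - x[0]
f_pred = np.exp(-0.5 * x ** 2) / np.sqrt(2.0 * np.pi)   # predicted intensity

def L_s(meas, pd):
    """Eq. (32): L_s(x) = 1 - P^D(x) + sum_j p(z_j|x) P^D(x) / lambda(z_j),
    with Gaussian likelihoods and a constant detection probability."""
    out = (1.0 - pd) * np.ones_like(x)
    for z in meas:
        g = np.exp(-0.5 * (z - x) ** 2) / np.sqrt(2.0 * np.pi)
        lam = np.sum(g * pd * f_pred) * dx      # Eq. (3)-type predicted measurement intensity
        out += g * pd / lam
    return out

sensors = [([0.2], 0.9), ([-0.1, 0.3], 0.8), ([], 0.5)]   # (data, P^D) per sensor
f_fused = np.mean([L_s(m, pd) for m, pd in sensors], axis=0) * f_pred   # Eq. (31)
expected_count = f_fused.sum() * dx             # Eq. (33) over all of S
# each sensor contributes (1 - P^D) + (number of measurements) to the
# average, so the fused count here is (1.1 + 2.2 + 0.5) / 3
```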

7.2 Details of the multisensor case for identical sensor coverages

Let $M_k = \sum_{\nu=1}^{N} m_k(\nu)$. The two-sensor partition (8) now has $N$ components:

$$ \Omega = \left\{ \left( \mathrm{Part}^{(1)}(\rho), \ldots, \mathrm{Part}^{(N)}(\rho) \right) : \rho = 1, \ldots, \binom{M_k}{m_k(1) \cdots m_k(N)} \equiv \frac{M_k!}{m_k(1)! \cdots m_k(N)!} \right\}, $$

where the parts satisfy the obvious conditions that generalize those in (8). For $\nu = 1, \ldots, N$, define the permutations $\alpha^\nu \in \mathrm{Sym}(m_k(\nu))$. Instead of (10), let

$$ \alpha_\rho^\nu(j) \equiv \alpha^\nu\!\left( \mathrm{Part}^{(\nu)}(\rho) \right)(j), \qquad j = 1, \ldots, m_k(\nu). $$

The likelihood function (11) now becomes

$$ L_{\Upsilon_k|\Xi_k}(\upsilon_k \mid \xi_k, H_\rho) = \prod_{\nu=1}^{N} \left[ \frac{1}{m_k(\nu)!} \sum_{\alpha^\nu \in \mathrm{Sym}(m_k(\nu))} \prod_{j=1}^{m_k(\nu)} p^{(\nu)}_{Z_k|X_k}\!\left( z_k^j(\nu) \mid x_k^{\alpha_\rho^\nu(j)} \right) \right]. $$

The information update (12) generalizes to

$$ p_{\Xi_k|\Upsilon_1,\ldots,\Upsilon_k}(\xi_k) = \binom{M_k}{m_k(1) \cdots m_k(N)}^{-1} \sum_{\rho=1}^{\binom{M_k}{m_k(1) \cdots m_k(N)}} L_{\Upsilon_k|\Xi_k}(\upsilon_k \mid \xi_k, H_\rho)\, \frac{p_{\Xi_k|\Upsilon_1,\ldots,\Upsilon_{k-1}}(\xi_k \mid H_\rho)}{p_{\Upsilon_k|\Upsilon_1,\ldots,\Upsilon_{k-1}}(\upsilon_k)}. $$

The expressions (19) and (20) extend in the obvious way. Integrating over all single-target state variables except $x_k^s$, and following essentially the same argument used to derive (22), gives

$$ p_{R_k, X_k^s|\Upsilon_1,\ldots,\Upsilon_k}(x_k^s) = \binom{M_k}{m_k(1) \cdots m_k(N)}^{-1} \sum_{\nu=1}^{N} \sum_{\substack{\rho \in \Omega \text{ such that} \\ x_k^s \in \mathrm{Part}^{(\nu)}(\rho)}} \left\{ \frac{1}{m_k(\nu)} \sum_{j=1}^{m_k(\nu)} \frac{p^{(\nu)}_{Z_k|X_k}(z_k^j(\nu) \mid x_k^s)\, f^{(\nu)}_{k|k-1}(x_k^s)}{\lambda^{(\nu)}_{k|k-1}(z_k^j(\nu))} \right\}. $$

The summand in braces is independent of the summation index $\rho$. The number of terms in this inner summation is

$$ \frac{(M_k - 1)!}{m_k(1)! \cdots m_k(\nu-1)!\, \left( m_k(\nu) - 1 \right)!\, m_k(\nu+1)! \cdots m_k(N)!}. $$

Further simplification is straightforward and yields the single-target marginal pdf:

$$ p_{X_k|R_k,\Upsilon_1,\ldots,\Upsilon_k}(x_k) = \frac{1}{M_k} \sum_{\nu=1}^{N} \left[ \sum_{j=1}^{m_k(\nu)} \frac{p^{(\nu)}_{Z_k|X_k}(z_k^j(\nu) \mid x_k)\, P_k^{D(\nu)}(x_k)}{\lambda^{(\nu)}_{k|k-1}(z_k^j(\nu))} \right] f_{k|k-1}(x_k). $$

The information updated multisensor multitarget pdf is approximated by the product of its marginal pdf's. The approximation is extended to a PPP by scaling the single-target pdf. The maximum likelihood scale factor is found by maximizing the likelihood function

$$ L(c \mid \xi_k) \propto \exp(-Nc)\, c^{M_k}, \qquad \forall\, \xi_k = \left( M_k, x_k^1, \ldots, x_k^{M_k} \right). $$

The result is $\hat{c}_{ML} = M_k / N$. The single-target marginal pdf scaled by $\hat{c}_{ML}$ yields the multitarget state intensity. Adding the intensity of the undetected targets averaged over the $N$ sensors and rearranging the result gives the multisensor intensity filter (31).

7.3 Non-identical sensor coverages

The restriction to identical sensor coverages arises from the likelihood function (11), which implicitly assumes that any target can be associated with any sensor. This assumption is incorrect if the sensors are known a priori to have non-overlapping coverages. However, non-identical coverages can be accommodated by partitioning the target state space into disjoint sets, each of which is covered by the same sensors. Explicitly, let the state space partition

$$ S = \Sigma_1 \cup \cdots \cup \Sigma_h, \qquad \Sigma_i \cap \Sigma_j = \emptyset \ \text{for} \ i \neq j, \qquad (34) $$

be such that each set $\Sigma_i$ in the partition is covered by the same sensors. Each sensor belongs to as many different sets $\Sigma_i$ in the partition as is needed; that is, a given sensor's coverage can be distributed over many different sets in the partition. Even with an efficient partitioning strategy, however, the minimum number of sets $h$ in the partition is potentially very large. In any event, however the partition is determined in practice, a different intensity filter is defined for each set $\Sigma_i$ by the intensity filter (31), but with the average taken over only those sensors that cover $\Sigma_i$. These filters are restricted to the sets $\Sigma_i$. From (34) these sets are non-overlapping and their union is all of $S$. Therefore, the overall multisensor intensity filter is obtained by joining these separate filters in the obvious "quilt-like" fashion. The resulting intensity filter on $S$ may be discontinuous at the boundaries of the sets in the partition (34).
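A minimal sketch of the "quilt" construction, assuming hypothetical one-dimensional coverage sets: in each region of the partition, the average in (31) is taken only over the sensors that cover that region. The coverage sets and update factors below are illustrative assumptions.

```python
import numpy as np

x = np.linspace(0.0, 10.0, 101)          # state grid (illustrative)
cover = {
    0: x < 6.0,                          # C^(0): sensor 0 covers [0, 6)
    1: x > 4.0,                          # C^(1): sensor 1 covers (4, 10]
}
# toy per-sensor update factors L_s(x); L_s = 1 outside C^(s)
L = {0: np.where(cover[0], 2.0, 1.0),
     1: np.where(cover[1], 3.0, 1.0)}

f_pred = np.ones_like(x)                 # flat predicted intensity
num = sum(np.where(cover[s], L[s], 0.0) for s in cover)   # covering sensors only
cnt = sum(cover[s].astype(float) for s in cover)
f_fused = np.where(cnt > 0, num / np.maximum(cnt, 1.0), 1.0) * f_pred
# sensor-0-only region -> factor 2; overlap -> (2 + 3) / 2; sensor-1-only -> 3
```

The piecewise averaging reproduces the possible discontinuities at region boundaries that the text notes.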

7.4 Bookkeeping-free intensity filter

To avoid the bookkeeping associated with maintaining the partition (34), it is possible to use the intensity filter (31) without change. The cost of doing so, however, is the delayed detection of new targets. To see this, consider the special case in which none of the sensor coverages overlap. Since $C^{(r)} \cap C^{(s)} = \emptyset$ for all $r \neq s$, the target detection probabilities are such that $P_k^{D(r)}(x_k)\, P_k^{D(s)}(x_k) = 0$ for all $r \neq s$. The average (31) restricted to $C^{(s)}$ reduces to

$$ f^{\mathrm{Fused}}_{k|k}(x_k) = \frac{1}{N} \left( L_s(z_k(s) \mid x_k) + N - 1 \right) f_{k|k-1}(x_k) \cong f_{k|k-1}(x_k) + \frac{1}{N} L_s(z_k(s) \mid x_k)\, f_{k|k-1}(x_k), \qquad x_k \in C^{(s)}, $$

or, equivalently,

$$ f^{\mathrm{Fused}}_{k|k}(x_k) \cong f_{k|k-1}(x_k) + \begin{cases} \dfrac{1}{N}\, L_s(z_k(s) \mid x_k)\, f_{k|k-1}(x_k), & x_k \in C^{(s)}, \\[4pt] 0, & x_k \notin C^{(1)} \cup \cdots \cup C^{(N)}. \end{cases} \qquad (35) $$

In this special case, the multitarget intensity is (approximately) the predicted multitarget intensity in that part of the state space for which no sensor provides coverage, and it is the sum of the predicted intensity and the single-sensor information updated intensity scaled by $1/N$ in those parts of the state space covered by one sensor. Thus, the factor of $1/N$ delays, but does not prevent, the detection of new targets by the multisensor multitarget intensity filter. Generally, the more heavily overlapped the sensor coverages are, the faster the multitarget intensity filter (31) will model the true target intensity, and conversely. In other words, the multisensor fusion rule (31) used without maintaining the partition (34) may be best suited to applications in which there is significant overlap in the sensor coverages.
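A scalar illustration of the delayed-detection effect, with purely illustrative numbers: when coverages are disjoint, the plain average (31) scales the single-sensor information update at a new target's location by roughly $1/N$.

```python
# N sensors with disjoint coverages; a new target appears inside C^(s).
# All numbers below are illustrative assumptions.
N = 4            # number of sensors
L_s = 6.0        # single-sensor update factor L_s at the target's location
f_pred = 0.05    # predicted intensity there (target not yet established)

single_sensor = L_s * f_pred                  # update if sensor s ran alone
fused = ((L_s + (N - 1)) / N) * f_pred        # average (31) restricted to C^(s)
# fused growth is roughly 1/N of the single-sensor growth: detection is
# delayed, but repeated updates still accumulate evidence over time
```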

7.5 Multisensor intensity filter variance

The variance of the estimate of the target count is an important quantity, and in nearly all applications it is very important to reduce it if at all possible. Serially averaging the estimated target count over time reduces the variance, but it also introduces time lag, a potentially unacceptable cost in some applications. This would seem to be the only variance reduction procedure available in the single sensor case. In the multiple sensor case an alternative variance reduction procedure is available: averaging over the sensors. For sensors with identical coverages, the multitarget intensity filter (31) averages the information updates of the individual sensors. As is clear from (33), the variance in the fused target count estimate is reduced (compared to a single sensor) by a factor equal to the number of sensors, N. This variance reduction is obtained without introducing latency as in temporal averaging. The information update of the multitarget intensity filter (31) does not, however, significantly reduce the point estimation variance, because the information updates of the individual sensors are not multiplied. The sensors are conditionally independent, but their information updates are not multiplied because the multiple sensor data are not assumed to be correctly associated to targets across sensors. Consequently, the multisensor intensity filter (31) does not have the same point estimation gain as in the case of multiple sensors with a priori known target-to-sensor data assignments.

8 Concluding remarks

A multisensor intensity filter is derived for the general case of N sensors. This intensity filter is symmetric in the sensors, that is, the filter is independent of sensor order. In the case of sensors with identical state space coverages, the multisensor intensity filter is shown to be the average of the individual sensor intensity filters. The important case of non-identical sensor coverages is also presented. In this case the multisensor intensity filter at a given point x in state space is the average of the intensity filters of the sensors that cover x. A simple bookkeeping-free intensity filter update is discussed. It is shown that the cost of eliminating the bookkeeping needed to compute the sensor coverage partition (34) is a delay in the detection and termination of targets.

References

[1] R. L. Streit and L. D. Stone, "Bayes Derivation of Multitarget Intensity Filters," ISIF International Conference on Information Fusion, Cologne, 2008, submitted.

[2] R. P. S. Mahler, "Multitarget Bayes Filtering via First-Order Multitarget Moments," IEEE Transactions on Aerospace and Electronic Systems, vol. AES-39, pp. 1152-1178, 2003.

[3] O. Erdinc, P. Willett, and Y. Bar-Shalom, "Probability Hypothesis Density Filter for Multitarget Multisensor Tracking," Proc. of the ISIF International Conference on Information Fusion, Philadelphia, PA, July 2005.

[4] O. Erdinc, P. Willett, and Y. Bar-Shalom, "A Physical-Space Approach for the Probability Hypothesis Density and Cardinalized Probability Hypothesis Density Filters," Proc. of the SPIE Conference on Signal and Data Processing of Small Targets (#6236-43), Orlando, FL, April 2006.

[5] O. Erdinc, P. Willett, and Y. Bar-Shalom, "A Physical-Space Approach for the Probability Hypothesis Density and Cardinalized Probability Hypothesis Density Filters," IEEE Transactions on Aerospace and Electronic Systems, to be published.

[6] I. R. Goodman, R. P. S. Mahler, and H. T. Nguyen, Mathematics of Data Fusion, Kluwer, Dordrecht, 1997.


[7] J. F. C. Kingman, Poisson Processes, Clarendon Press, Oxford, 1993.

[8] C. L. Winter and M. C. Stein, "An Additive Theory of Bayesian Evidence," Report LA-UR-93-3336, Los Alamos National Laboratory, 1993. (Accession Number ADA364591)

[9] R. P. S. Mahler, "Multitarget Bayes Filtering via First-Order Multitarget Moments," IEEE Transactions on Aerospace and Electronic Systems, vol. AES-39, pp. 1152-1178, 2003.

[10] B.-T. Vo, B.-N. Vo, and A. Cantoni, "Analytic Implementations of the Cardinalized Probability Hypothesis Density Filter," IEEE Transactions on Signal Processing, vol. SP-55, pp. 3553-3567, 2007.

