Proceedings of the 5th International Conf. Information Fusion Annapolis, MD, USA, July 2002, pp. 1527–1534

Optimal Linear Unbiased Filtering with Polar Measurements for Target Tracking

Zhanlue Zhao, X. Rong Li, Vesselin P. Jilkov
Department of Electrical Engineering, University of New Orleans, New Orleans, LA 70148
Phone: 504-280-7416, Email: [email protected]

Yunmin Zhu
Department of Mathematics, Sichuan University, Chengdu, Sichuan 610064, P. R. China

Abstract – In tracking applications, target dynamics is usually modeled in the Cartesian coordinates, while target measurements are directly available in the original sensor coordinates. Measurement conversion is widely used so that the Kalman filter in the Cartesian coordinates can be applied. A number of improved measurement-conversion techniques have been proposed recently. However, they have fundamental limitations, resulting in performance degradation, as pointed out in Part III [3] of a recent survey. This paper proposes a recursive filter that is theoretically optimal in the sense of minimizing the mean-square error among all linear unbiased filters in the Cartesian coordinates. The proposed filter is free of the fundamental limitations of the measurement-conversion approach. Results of an approximate implementation are compared with those obtained by two state-of-the-art conversion techniques. Simulation results are provided.

Keywords: Target tracking, measurement conversion, Kalman filter, optimal linear filtering, filter credibility

1 Introduction

In tracking applications, target dynamics is usually modeled in Cartesian coordinates, while the measurements are directly available in the original sensor coordinates, most often spherical. Tracking in Cartesian coordinates using spherical measurements is mostly handled by the measurement-conversion method [3]. The basic idea is to transform the nonlinear measurements into a pseudolinear form in the Cartesian coordinates, estimate the bias and covariance of the converted measurement error, and then apply the Kalman filter framework to do tracking. A number of improved techniques have been proposed [2, 7, 8, 5, 6, 1, 9, 10], which differ in how the bias and covariance of the measurement noise are obtained. Measurement models used in target tracking, including measurement-conversion methods, are surveyed in [3]. The pros and cons of various measurement-conversion techniques are revealed therein with succinct explanations. In particular, it was pointed out that these conversion methods have fundamental flaws, as elaborated in Section 2 of this paper.

The idea of developing an optimal linear filter for tracking with linear dynamics and nonlinear measurements was briefly discussed in [3]. In this paper, such an optimal linear filter is developed based on this idea. The development stems from the recognition that the Kalman filter is nothing but a recursive BLUE (Best Linear Unbiased Estimator) for a linear system; furthermore, although the Kalman filter cannot optimally handle nonlinear measurements directly, the BLUE filter is still optimal for nonlinear measurements. This paper presents such a recursive BLUE filter for tracking with linear dynamics and polar (nonlinear) measurements. For simplicity, this paper deals only with the two-dimensional case. Extensions to other cases will be reported in a forthcoming paper.

The rest of the paper is organized as follows. In Section 2, we summarize the measurement-conversion method and explain its fundamental flaws. In Section 3, we derive formulas of the recursive BLUE filter for nonlinear observations. In Section 4, an error analysis is given for the implementation of the optimal recursive BLUE filter. In Section 5, we compare the implementation of this new method with two state-of-the-art measurement-conversion techniques by simulation. Conclusions are given in Section 6.

* Research supported in part by ONR grant N00014-00-1-0677, NSF grant ECS-9734285, and NASA/LEQSF grant (2001-4)-01.

2 Measurement-Conversion Approach

The measured range and bearing are defined with respect to the true range $r$ and bearing $\theta$ as
$$r_m = r + \tilde{r}, \qquad \theta_m = \theta + \tilde{\theta}$$
where the noises $\tilde{r}$ and $\tilde{\theta}$ are assumed to be independent with zero mean and standard deviations $\sigma_r$ and $\sigma_\theta$, respectively. The converted-measurement method converts the polar measurement into the Cartesian coordinates by
$$x_m = r_m\cos\theta_m = (r+\tilde{r})\cos(\theta+\tilde{\theta})$$
$$y_m = r_m\sin\theta_m = (r+\tilde{r})\sin(\theta+\tilde{\theta})$$
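The bias introduced by this nonlinear conversion is easy to see numerically. The sketch below (with hypothetical scenario values) draws noisy polar measurements and checks that the raw converted coordinate $x_m = r_m\cos\theta_m$ is shrunk on average by the factor $E[\cos\tilde\theta] = e^{-\sigma_\theta^2/2}$, which is exactly the kind of bias the conversion techniques must estimate and remove:

```python
import numpy as np

# Hypothetical true target position in polar coordinates
r, theta = 200e3, 0.3            # range 200 km, bearing 0.3 rad
sigma_r, sigma_t = 4.0, 0.1      # noise std devs (sigma_r in m, sigma_t in rad)

rng = np.random.default_rng(0)
n = 2_000_000
rm = r + sigma_r * rng.standard_normal(n)        # measured range r + r~
tm = theta + sigma_t * rng.standard_normal(n)    # measured bearing theta + theta~

xm = rm * np.cos(tm)             # raw converted measurement
bias_factor = xm.mean() / (r * np.cos(theta))
print(bias_factor, np.exp(-sigma_t**2 / 2))      # both ~0.995: the conversion is biased
```

The multiplicative bias depends only on $\sigma_\theta$, not on the target position, which is why a single debiasing factor can be applied to both coordinates.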

and then transform the above nonlinear form into a pseudolinear form with respect to $(x, y)$:
$$x_c = x + \tilde{x}_c, \qquad y_c = y + \tilde{y}_c \tag{1}$$
where $(x_c, y_c)$ is the converted (most often debiased) measurement, $(x, y) = (r\cos\theta, r\sin\theta)$ is the true measurement, and $(\tilde{x}_c, \tilde{y}_c)$ is the converted measurement noise. Observe that (1) simply splits the transformed debiased measurement into the true measurement $(x, y)$ and the measurement noise $(\tilde{x}_c, \tilde{y}_c)$. Note, however, that $(\tilde{x}_c, \tilde{y}_c)$ actually depends on the true measurement.

The converted-measurement method uses the Kalman filter based on (1) to do tracking. The converted-measurement techniques differ from one another in how the bias and the covariance of the measurement noise are calculated; specific comparisons can be found in [6]. Basically, one way is to compute the mean and covariance conditioned on the measurement, as $E[(\tilde{x}_c, \tilde{y}_c) \mid r_m, \theta_m]$ and $\mathrm{cov}[(\tilde{x}_c, \tilde{y}_c) \mid r_m, \theta_m]$. Another way is to compute them first conditioned on the true state and then conditioned on the measurement, as $E[E[(\tilde{x}_c, \tilde{y}_c) \mid r, \theta] \mid r_m, \theta_m]$ and $E[\mathrm{cov}((\tilde{x}_c, \tilde{y}_c) \mid r, \theta) \mid r_m, \theta_m]$. These are referred to as the fixed-measurement and fixed-truth approaches in [6], respectively. Following [3], however, we will call them the measurement-conditioned (MC) approach and the nested-conditioning (NC) approach.

From the above analysis, all converted-measurement techniques have the following fundamental flaws. First, $(\tilde{x}_c, \tilde{y}_c)$ is state dependent; second, its covariance is estimated conditioned on the current measurement or state/measurement; third, the measurement-noise sequence $\{(\tilde{x}_c, \tilde{y}_c)_k\}$ is no longer white. The Kalman filter, however, assumes that the measurement noise is independent of the state, that its covariance is not conditioned on the current measurement, and that it is white. These fundamental flaws were ignored or overlooked in these techniques. As a result, the techniques are by no means optimal. For example, when the measurement noise depends on the state, the measurement-prediction covariance formula of the Kalman filter should be modified to account for the cross-correlation between the state and the measurement noise. None of these techniques makes such a modification.

3 Recursive BLUE Filter

Notice that the Kalman filter is simply a recursive BLUE filter for a linear system. Nevertheless, BLUE does not require the observation model to be linear. A BLUE filter can be outlined as follows. Let $z_k$ and $x_k$ be the measurement and state at time $k$, let $z^{k-1}$ stand for the sequence of all past measurements, and let $E^*[y \mid z]$ stand for the BLUE estimator of $y$ using $z$. Suppose at time $k-1$ the state estimate and covariance are $\hat{x}_{k-1}$ and $P_{k-1}$. The prediction part of a recursive state estimator is
$$\bar{x}_k = E^*[x_k \mid z^{k-1}], \qquad \tilde{x}_k = x_k - \bar{x}_k$$
$$\bar{z}_k = E^*[z_k \mid z^{k-1}], \qquad \tilde{z}_k = z_k - \bar{z}_k$$
$$\bar{P}_k = \mathrm{cov}(\tilde{x}_k), \qquad S_k = \mathrm{cov}(\tilde{z}_k), \qquad K_k = \mathrm{cov}(\tilde{x}_k, \tilde{z}_k)\, S_k^{-1}$$
and the update is
$$\hat{x}_k = \bar{x}_k + \mathrm{cov}(\tilde{x}_k, \tilde{z}_k)\, S_k^{-1} (z_k - \bar{z}_k)$$
$$P_k = \bar{P}_k - \mathrm{cov}(\tilde{x}_k, \tilde{z}_k)\, S_k^{-1}\, \mathrm{cov}(\tilde{z}_k, \tilde{x}_k)$$
This recursive BLUE filter is optimal provided the process and measurement noises are white and uncorrelated, which is the case for the problem under consideration. It requires knowledge of the predicted state $\bar{x}_k$, its error covariance $\bar{P}_k$, the predicted measurement $\bar{z}_k$, its covariance $\mathrm{cov}(\tilde{z}_k)$, and the cross-covariance $\mathrm{cov}(\tilde{x}_k, \tilde{z}_k)$.

In view of the above, a recursive BLUE filter is derived as follows. The key is to express the state in the Cartesian coordinates while keeping the measurement error in the polar coordinates. Because the target model is still linear, $\bar{x}_k$ and $\bar{P}_k$ are the same as in the Kalman filter. We still adopt the nonlinear observation form and calculate $E^*[z_k \mid z^{k-1}]$, $\mathrm{cov}(\tilde{z}_k)$, and $\mathrm{cov}(\tilde{x}_k, \tilde{z}_k)$ analytically with only $\hat{x}_{k-1}$ and $P_{k-1}$ given.

First, transform the measurement in polar coordinates into the Cartesian coordinates:
$$x_m = (r + \tilde{r})\cos(\theta + \tilde{\theta}) = x\cos\tilde{\theta} - y\sin\tilde{\theta} + \frac{x}{\sqrt{x^2+y^2}}\,\tilde{r}\cos\tilde{\theta} - \frac{y}{\sqrt{x^2+y^2}}\,\tilde{r}\sin\tilde{\theta} \tag{2}$$
$$y_m = (r + \tilde{r})\sin(\theta + \tilde{\theta}) = y\cos\tilde{\theta} + x\sin\tilde{\theta} + \frac{y}{\sqrt{x^2+y^2}}\,\tilde{r}\cos\tilde{\theta} + \frac{x}{\sqrt{x^2+y^2}}\,\tilde{r}\sin\tilde{\theta} \tag{3}$$
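The prediction-update recursion of this section reduces to a few matrix operations once the required moments are available. A minimal sketch (the function name is hypothetical), with the moments passed in as arguments:

```python
import numpy as np

def blue_update(x_pred, P_pred, z, z_pred, S, C):
    """One BLUE measurement update.

    x_pred : predicted state x_bar_k
    P_pred : predicted state covariance P_bar_k = cov(x~_k)
    z      : measurement z_k
    z_pred : predicted measurement z_bar_k = E*[z_k | z^(k-1)]
    S      : predicted measurement covariance cov(z~_k)
    C      : cross-covariance cov(x~_k, z~_k)
    """
    K = C @ np.linalg.inv(S)              # gain K_k = cov(x~, z~) S^-1
    x_upd = x_pred + K @ (z - z_pred)     # state update
    P_upd = P_pred - K @ C.T              # covariance update
    return x_upd, P_upd
```

For a linear measurement $z = Hx + v$ with noise covariance $R$, the moments are $\bar{z} = H\bar{x}$, $S = H\bar{P}H' + R$, $C = \bar{P}H'$, and the update reduces to the ordinary Kalman filter, consistent with the observation that the Kalman filter is a recursive BLUE filter.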

Note that since $(r, \theta)$ and $(\tilde{r}, \tilde{\theta})$ are independent, $(x, y)$ and $(\tilde{r}, \tilde{\theta})$ are also independent. The tracking system thus has the nonlinear observation
$$z_k = h(x_k, v_k) \tag{4}$$
where $z_k = [x_m, y_m]'_k$, $x_k = [x, \dot{x}, y, \dot{y}]'_k$, $v_k = [\tilde{r}, \tilde{\theta}]'_k$, and the white noise $v_k$ is independent of the state $x_k$.
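The observation (4) can be written directly as code. A minimal sketch (the function name is hypothetical) for the state $x = [x, \dot x, y, \dot y]'$ and noise $v = [\tilde r, \tilde\theta]'$:

```python
import numpy as np

def h(x, v):
    """Nonlinear observation of eq. (4): polar-noise measurement of a Cartesian state.

    x : state [x, xdot, y, ydot]
    v : measurement noise [r_noise, theta_noise]
    """
    px, py = x[0], x[2]
    r = np.hypot(px, py) + v[0]          # noisy range  r + r~
    th = np.arctan2(py, px) + v[1]       # noisy bearing  theta + theta~
    return np.array([r * np.cos(th), r * np.sin(th)])   # z = [x_m, y_m]
```

With zero noise, h returns the true position, e.g. h([3, 0, 4, 0], [0, 0]) gives [3, 4].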

As pointed out above, $\bar{x}_k$ and $\bar{P}_k$ are the same as in the Kalman filter. For brevity, we drop the time index $k$ and the conditioning on $z^{k-1}$, and denote $\bar{x}_k$ and $\bar{P}_k$ by
$$\bar{x} = [\bar{x}\ \ \bar{\dot{x}}\ \ \bar{y}\ \ \bar{\dot{y}}]'$$
$$\bar{P} = \begin{bmatrix}
\mathrm{cov}(\tilde{x}) & \mathrm{cov}(\tilde{x}, \tilde{\dot{x}}) & \mathrm{cov}(\tilde{x}, \tilde{y}) & \mathrm{cov}(\tilde{x}, \tilde{\dot{y}}) \\
\mathrm{cov}(\tilde{\dot{x}}, \tilde{x}) & \mathrm{cov}(\tilde{\dot{x}}) & \mathrm{cov}(\tilde{\dot{x}}, \tilde{y}) & \mathrm{cov}(\tilde{\dot{x}}, \tilde{\dot{y}}) \\
\mathrm{cov}(\tilde{y}, \tilde{x}) & \mathrm{cov}(\tilde{y}, \tilde{\dot{x}}) & \mathrm{cov}(\tilde{y}) & \mathrm{cov}(\tilde{y}, \tilde{\dot{y}}) \\
\mathrm{cov}(\tilde{\dot{y}}, \tilde{x}) & \mathrm{cov}(\tilde{\dot{y}}, \tilde{\dot{x}}) & \mathrm{cov}(\tilde{\dot{y}}, \tilde{y}) & \mathrm{cov}(\tilde{\dot{y}})
\end{bmatrix}$$

What remains is to calculate $E^*[z_k \mid z^{k-1}]$, $\mathrm{cov}(\tilde{z}_k)$ and $\mathrm{cov}(\tilde{x}_k, \tilde{z}_k)$. Notice that $E^*[(\tilde{r}_k, \tilde{\theta}_k)' \mid z^{k-1}] = E[(\tilde{r}_k, \tilde{\theta}_k)']$ since the noise sequence $[\tilde{r}, \tilde{\theta}]'_k$ is white. Assuming $\tilde{r} \sim N(0, \sigma_r^2)$ and $\tilde{\theta} \sim N(0, \sigma_\theta^2)$ are independent, we have
$$E[\cos\tilde{\theta}] = e^{-\sigma_\theta^2/2} = \lambda_1, \qquad E[\sin\tilde{\theta}] = 0$$
$$E[\cos^2\tilde{\theta}] = \tfrac{1}{2}(1 + e^{-2\sigma_\theta^2}) = \lambda_2, \qquad E[\sin^2\tilde{\theta}] = \tfrac{1}{2}(1 - e^{-2\sigma_\theta^2}) = \lambda_3$$
$$E[\sin\tilde{\theta}\cos\tilde{\theta}] = 0, \qquad E[\tilde{r}] = 0$$
Thus
$$\bar{x}_m = E^*[x_m] = E^*\Big[x\cos\tilde\theta - y\sin\tilde\theta + \tfrac{x}{\sqrt{x^2+y^2}}\tilde r\cos\tilde\theta - \tfrac{y}{\sqrt{x^2+y^2}}\tilde r\sin\tilde\theta\Big] = E^*[x]\,E^*[\cos\tilde\theta] = \lambda_1 \bar{x} \tag{5}$$
$$\bar{y}_m = E^*[y_m] = E^*\Big[y\cos\tilde\theta + x\sin\tilde\theta + \tfrac{y}{\sqrt{x^2+y^2}}\tilde r\cos\tilde\theta + \tfrac{x}{\sqrt{x^2+y^2}}\tilde r\sin\tilde\theta\Big] = E^*[y]\,E^*[\cos\tilde\theta] = \lambda_1 \bar{y} \tag{6}$$

NOTE: here $\bar{x}$ and $\bar{z}$ are the predicted state and predicted measurement BLUE estimators.

Since the BLUE estimator is unbiased and orthogonal to its error, it can easily be shown that
$$\mathrm{cov}(\tilde{x}) = E[(x-\bar{x})(x-\bar{x})'] = E[xx'] - E[\bar{x}\bar{x}']$$
$$\mathrm{cov}(\tilde{x}, \tilde{z}) = E[(x-\bar{x})(z-\bar{z})'] = E[xz'] - E[\bar{x}\bar{z}']$$
$$\mathrm{cov}(\tilde{z}) = E[(z-\bar{z})(z-\bar{z})'] = E[zz'] - E[\bar{z}\bar{z}']$$

The cross-covariance between the measurement $z$ and the state $x$ is calculated as follows. First,
$$E[xz'] = E\begin{bmatrix} x x_m & x y_m \\ \dot{x} x_m & \dot{x} y_m \\ y x_m & y y_m \\ \dot{y} x_m & \dot{y} y_m \end{bmatrix}$$
Substituting (2) and (3) into the above yields
$$E[xz'] = \lambda_1 E\begin{bmatrix} x^2 & xy \\ x\dot{x} & \dot{x}y \\ xy & y^2 \\ x\dot{y} & \dot{y}y \end{bmatrix}$$
Then
$$\mathrm{cov}(\tilde{x}, \tilde{z}) = E[xz'] - E[\bar{x}\bar{z}'] = \lambda_1 E\begin{bmatrix} x^2 & xy \\ x\dot{x} & \dot{x}y \\ xy & y^2 \\ x\dot{y} & \dot{y}y \end{bmatrix} - \lambda_1 E\begin{bmatrix} \bar{x}^2 & \bar{x}\bar{y} \\ \bar{x}\bar{\dot{x}} & \bar{\dot{x}}\bar{y} \\ \bar{x}\bar{y} & \bar{y}^2 \\ \bar{x}\bar{\dot{y}} & \bar{\dot{y}}\bar{y} \end{bmatrix} = \lambda_1 \begin{bmatrix} \mathrm{cov}(\tilde{x}) & \mathrm{cov}(\tilde{x},\tilde{y}) \\ \mathrm{cov}(\tilde{\dot{x}},\tilde{x}) & \mathrm{cov}(\tilde{\dot{x}},\tilde{y}) \\ \mathrm{cov}(\tilde{y},\tilde{x}) & \mathrm{cov}(\tilde{y}) \\ \mathrm{cov}(\tilde{\dot{y}},\tilde{x}) & \mathrm{cov}(\tilde{\dot{y}},\tilde{y}) \end{bmatrix} \tag{7}$$

The measurement prediction covariance is
$$\mathrm{cov}(\tilde{z}) = E[zz'] - E[\bar{z}\bar{z}'] = \begin{bmatrix} \mathrm{cov}(\tilde{x}_m) & \mathrm{cov}(\tilde{x}_m, \tilde{y}_m) \\ \mathrm{cov}(\tilde{y}_m, \tilde{x}_m) & \mathrm{cov}(\tilde{y}_m) \end{bmatrix}$$
Since
$$E[x_m^2] = E\Big[x\cos\tilde\theta - y\sin\tilde\theta + \tfrac{x}{\sqrt{x^2+y^2}}\tilde r\cos\tilde\theta - \tfrac{y}{\sqrt{x^2+y^2}}\tilde r\sin\tilde\theta\Big]^2$$
$$= E\Big[x^2\cos^2\tilde\theta + y^2\sin^2\tilde\theta + \tfrac{x^2}{x^2+y^2}\tilde r^2\cos^2\tilde\theta + \tfrac{y^2}{x^2+y^2}\tilde r^2\sin^2\tilde\theta\Big]$$
$$= \tfrac12(1+e^{-2\sigma_\theta^2})E[x^2] + \tfrac12(1-e^{-2\sigma_\theta^2})E[y^2] + \tfrac12\sigma_r^2 + \tfrac12\sigma_r^2 e^{-2\sigma_\theta^2} E\Big[\tfrac{x^2-y^2}{x^2+y^2}\Big]$$
$$= \lambda_2 E[x^2] + \lambda_3 E[y^2] + \tfrac{1}{2}\sigma_r^2 + \tfrac{1}{2}\sigma_r^2 e^{-2\sigma_\theta^2} E\Big[\tfrac{x^2-y^2}{x^2+y^2}\Big]$$
and similarly
$$E[y_m^2] = \lambda_2 E[y^2] + \lambda_3 E[x^2] + \tfrac{1}{2}\sigma_r^2 + \tfrac{1}{2}\sigma_r^2 e^{-2\sigma_\theta^2} E\Big[\tfrac{y^2-x^2}{x^2+y^2}\Big]$$
$$E[x_m y_m] = e^{-2\sigma_\theta^2} E[xy] + \sigma_r^2 e^{-2\sigma_\theta^2} E\Big[\tfrac{xy}{x^2+y^2}\Big]$$
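The closed-form angle-noise moments used above are easy to verify by Monte Carlo; a quick sketch (the value of $\sigma_\theta$ is hypothetical):

```python
import numpy as np

sigma_t = 0.1                         # hypothetical bearing-noise std dev (rad)
rng = np.random.default_rng(0)
th = sigma_t * rng.standard_normal(5_000_000)

lam1 = np.exp(-sigma_t**2 / 2)               # E[cos theta~] = lambda_1
lam2 = 0.5 * (1 + np.exp(-2 * sigma_t**2))   # E[cos^2 theta~] = lambda_2
lam3 = 0.5 * (1 - np.exp(-2 * sigma_t**2))   # E[sin^2 theta~] = lambda_3

print(np.cos(th).mean() - lam1)          # ~0
print((np.cos(th)**2).mean() - lam2)     # ~0
print((np.sin(th)**2).mean() - lam3)     # ~0
print((np.sin(th) * np.cos(th)).mean())  # ~0
```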



we have
$$\mathrm{cov}(\tilde{x}_m) = E[x_m^2] - E[\bar{x}_m^2] = \lambda_2 \mathrm{cov}(\tilde{x}) + \lambda_3 \mathrm{cov}(\tilde{y}) + \tfrac12\sigma_r^2 + \lambda_3 \bar{y}^2 + (\lambda_2 - \lambda_1^2)\bar{x}^2 + \tfrac12\sigma_r^2 e^{-2\sigma_\theta^2} E\Big[\tfrac{x^2-y^2}{x^2+y^2}\Big] + (\lambda_2-\lambda_1^2)(E[\bar{x}^2]-\bar{x}^2) + \lambda_3(E[\bar{y}^2]-\bar{y}^2) \tag{8}$$
$$\approx \lambda_2 \mathrm{cov}(\tilde{x}) + \lambda_3 \mathrm{cov}(\tilde{y}) + \tfrac12\sigma_r^2 + \lambda_3 \bar{y}^2 + (\lambda_2-\lambda_1^2)\bar{x}^2 + \tfrac12\sigma_r^2 e^{-2\sigma_\theta^2} E\Big[\tfrac{x^2-y^2}{x^2+y^2}\Big] \tag{9}$$
$$\mathrm{cov}(\tilde{y}_m) = E[y_m^2] - E[\bar{y}_m^2] = \lambda_2 \mathrm{cov}(\tilde{y}) + \lambda_3 \mathrm{cov}(\tilde{x}) + \tfrac12\sigma_r^2 + \lambda_3\bar{x}^2 + (\lambda_2-\lambda_1^2)\bar{y}^2 + \tfrac12\sigma_r^2 e^{-2\sigma_\theta^2}E\Big[\tfrac{y^2-x^2}{x^2+y^2}\Big] + (\lambda_2-\lambda_1^2)(E[\bar{y}^2]-\bar{y}^2) + \lambda_3(E[\bar{x}^2]-\bar{x}^2) \tag{10}$$
$$\approx \lambda_2\mathrm{cov}(\tilde{y}) + \lambda_3\mathrm{cov}(\tilde{x}) + \tfrac12\sigma_r^2 + \lambda_3\bar{x}^2 + (\lambda_2-\lambda_1^2)\bar{y}^2 + \tfrac12\sigma_r^2 e^{-2\sigma_\theta^2}E\Big[\tfrac{y^2-x^2}{x^2+y^2}\Big] \tag{11}$$
Let $\lambda_4 = e^{-2\sigma_\theta^2} - \lambda_1^2$. Then
$$\mathrm{cov}(\tilde{x}_m, \tilde{y}_m) = E[x_m y_m] - E[\bar{x}_m\bar{y}_m] = e^{-2\sigma_\theta^2}\mathrm{cov}(\tilde{x},\tilde{y}) + \sigma_r^2 e^{-2\sigma_\theta^2}E\Big[\tfrac{xy}{x^2+y^2}\Big] + \lambda_4\bar{x}\bar{y} + \lambda_4(E[\bar{x}\bar{y}]-\bar{x}\bar{y}) \tag{12}$$
$$\approx e^{-2\sigma_\theta^2}\mathrm{cov}(\tilde{x},\tilde{y}) + \sigma_r^2 e^{-2\sigma_\theta^2}E\Big[\tfrac{xy}{x^2+y^2}\Big] + \lambda_4\bar{x}\bar{y} \tag{13}$$
With (8), (10) and (12), the optimal recursive BLUE filter has the state update
$$\hat{x} = \bar{x} + \mathrm{cov}(\tilde{x},\tilde{z})\,\mathrm{cov}^{-1}(\tilde{z})\,(z - \bar{z}) \tag{14}$$
$$P = \bar{P} - \mathrm{cov}(\tilde{x},\tilde{z})\,\mathrm{cov}^{-1}(\tilde{z})\,\mathrm{cov}(\tilde{z},\tilde{x}) \tag{15}$$

4 Error Analysis

The two terms $(\lambda_2-\lambda_1^2)(E[\bar{x}^2]-\bar{x}^2)$ and $\lambda_3(E[\bar{y}^2]-\bar{y}^2)$ in (8) are dropped in (9). We justify this approximation as follows. As we know, $(\bar{x}^2 - E[\bar{x}^2])$ is usually much smaller than $\bar{x}^2$ in a tracking problem. Besides, the ratios $(\lambda_2-\lambda_1^2)/\lambda_2$ and $\lambda_3/\lambda_2$ behave as shown in Fig. 1. Usually in a tracking problem the bearing-noise standard deviation $\sigma_\theta$ is below 0.3 rad, so the contribution of $(E[\bar{x}^2]-\bar{x}^2)$ and $(E[\bar{y}^2]-\bar{y}^2)$ to $\mathrm{cov}(\tilde{x}_m)$ is greatly attenuated by $(\lambda_2-\lambda_1^2)$ and $\lambda_3$ compared with the term $\lambda_2\mathrm{cov}(\tilde{x})$. For example, when $\sigma_\theta = 0.2$ rad, $(\lambda_2-\lambda_1^2)/\lambda_2 = 0.07\%$ and $\lambda_3/\lambda_2 = 4\%$. Therefore, the dropped terms are negligible compared with the other terms. A similar justification applies to (11) and (13), so the approximation is highly accurate.

[Fig. 1: Ratios $(\lambda_2-\lambda_1^2)/\lambda_2$ and $\lambda_3/\lambda_2$ versus $\sigma_\theta$ (rad)]

It seems that the two terms $E[(y^2-x^2)/(x^2+y^2)]$ and $E[xy/(x^2+y^2)]$ involved in $\mathrm{cov}(\tilde{z})$ have either no closed form or a complex expression. We approximate them by Taylor-series expansion. The Taylor series of a function $f(x,y)$ at a point $(x_0, y_0)$ is
$$f(x,y) \approx f(x_0,y_0) + (x-x_0)\frac{\partial f(x_0,y_0)}{\partial x} + (y-y_0)\frac{\partial f(x_0,y_0)}{\partial y} + \frac{1}{2!}\Big[(x-x_0)^2\frac{\partial^2 f(x_0,y_0)}{\partial x^2} + 2(x-x_0)(y-y_0)\frac{\partial^2 f(x_0,y_0)}{\partial x\,\partial y} + (y-y_0)^2\frac{\partial^2 f(x_0,y_0)}{\partial y^2}\Big] \tag{16}$$
Expanding $(y^2-x^2)/(x^2+y^2)$ by Taylor series at $(\bar{x}, \bar{y})$ and taking expectations yields
$$E\Big[\frac{y^2-x^2}{x^2+y^2}\Big] \approx \frac{\bar{y}^2-\bar{x}^2}{\bar{x}^2+\bar{y}^2} + \frac{1}{2!}\Big\{\frac{4\bar{y}^2(3\bar{x}^2-\bar{y}^2)}{(\bar{x}^2+\bar{y}^2)^3}\mathrm{cov}(\tilde{x}) + \frac{16\bar{x}\bar{y}(\bar{y}^2-\bar{x}^2)}{(\bar{x}^2+\bar{y}^2)^3}\mathrm{cov}(\tilde{x},\tilde{y}) + \frac{4\bar{x}^2(\bar{x}^2-3\bar{y}^2)}{(\bar{x}^2+\bar{y}^2)^3}\mathrm{cov}(\tilde{y})\Big\} \tag{17}$$
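The quality of this second-order approximation can be checked numerically. The sketch below computes the second-order Taylor approximation of $E[f(x,y)]$ per (16), using finite-difference derivatives so that no hand-derived coefficients are needed, and compares it with a Monte Carlo estimate for $f = (y^2-x^2)/(x^2+y^2)$; the means, spreads, and step sizes are hypothetical:

```python
import numpy as np

def taylor2_mean(f, mx, my, cxx, cxy, cyy):
    """Second-order approximation of E[f(x, y)], per eq. (16) with expectations taken:
    E[f] ~ f(m) + 0.5 * (fxx*cov(x) + 2*fxy*cov(x, y) + fyy*cov(y))."""
    hx, hy = 1e-3 * abs(mx), 1e-3 * abs(my)       # finite-difference steps
    fxx = (f(mx + hx, my) - 2 * f(mx, my) + f(mx - hx, my)) / hx**2
    fyy = (f(mx, my + hy) - 2 * f(mx, my) + f(mx, my - hy)) / hy**2
    fxy = (f(mx + hx, my + hy) - f(mx + hx, my - hy)
           - f(mx - hx, my + hy) + f(mx - hx, my - hy)) / (4 * hx * hy)
    return f(mx, my) + 0.5 * (fxx * cxx + 2 * fxy * cxy + fyy * cyy)

f = lambda x, y: (y**2 - x**2) / (x**2 + y**2)
mx, my, sx, sy = 50e3, 200e3, 2e3, 2e3            # hypothetical predicted mean / spread

approx = taylor2_mean(f, mx, my, sx**2, 0.0, sy**2)

rng = np.random.default_rng(1)
xs = mx + sx * rng.standard_normal(1_000_000)
ys = my + sy * rng.standard_normal(1_000_000)
mc = f(xs, ys).mean()
print(abs(approx - mc), abs(f(mx, my) - mc))  # second order is much closer than zeroth
```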



Similarly,
$$E\Big[\frac{xy}{x^2+y^2}\Big] \approx \frac{\bar{x}\bar{y}}{\bar{x}^2+\bar{y}^2} + \frac{1}{2!}\Big\{\frac{2\bar{x}\bar{y}(\bar{x}^2-3\bar{y}^2)}{(\bar{x}^2+\bar{y}^2)^3}\mathrm{cov}(\tilde{x}) + \frac{2(6\bar{x}^2\bar{y}^2-\bar{x}^4-\bar{y}^4)}{(\bar{x}^2+\bar{y}^2)^3}\mathrm{cov}(\tilde{x},\tilde{y}) + \frac{2\bar{x}\bar{y}(\bar{y}^2-3\bar{x}^2)}{(\bar{x}^2+\bar{y}^2)^3}\mathrm{cov}(\tilde{y})\Big\} \tag{18}$$
It is reasonable to expect that this approximation is not crude at all, since the denominator is of higher order than the numerator in each derivative term and $(\bar{x}^2+\bar{y}^2)$ is usually much bigger than $\mathrm{cov}(\tilde{x})$, $\mathrm{cov}(\tilde{y})$ and $\mathrm{cov}(\tilde{x},\tilde{y})$. Once (17) and (18) are calculated, our proposed recursive BLUE filter can be implemented straightforwardly. This implementation is a highly accurate approximation of the optimal BLUE filter. The above analysis is verified in the next section, where the actual estimation errors are almost always perfectly consistent with the filter's computed covariance in all simulation results.

5 Simulation and Comparison

In this section, we compare two state-of-the-art conversion techniques with our proposed method. We choose the measurement-conditioned (MC) approach and one of the nested-conditioning (NC) approaches, called the fixed-measurement approach and the additive fixed-truth approach II in [6], respectively; see the Appendix for their formulas. We denote these two approaches by MC and NC, respectively.

For convenience of comparison and computation, we use scenarios similar to those of [6], but simplified to two dimensions by eliminating elevation. The scenario is as follows. A two-dimensional Cartesian x-y space has a single sensor located at the origin. The target sampling time is one second. The coordinates (x, y) of the target at time zero are determined by random draws from two independent Gaussian distributions with means 50 km and 200 km, respectively, and a common standard deviation of 5 km. The target moves at a constant high velocity (in both direction and speed), whose components ẋ and ẏ, determined by random draws from Gaussian distributions, have means 1000 m/s and 0 m/s, respectively, and a common standard deviation of 0.1 km/s. The sensor's independent measurement errors have standard deviations σ_r = 4 m and σ_θ = 10 millirad. Following [6], all the filters are initialized with an effectively infinite initial state-error covariance matrix and a highly inaccurate initial state estimate. The tracking period begins 100 s after time zero and continues for 100 s. We compare the average normalized estimation error squared (ANEES) and the root-mean-square error (RMSE) of position for the three techniques, first increasing the process noise and then increasing the measurement noise.

The ANEES is defined by
$$\mathrm{ANEES} = \frac{1}{Nn}\sum_{i=1}^{N}(x_i-\hat{x}_i)'P_i^{-1}(x_i-\hat{x}_i) \tag{19}$$
where $(x_i-\hat{x}_i)$ is the $n$-dimensional (here $n = 4$) vector of state-estimation errors in sample run $i$ and $N$ is the total number of runs. If the estimation errors and the estimated covariance are consistent, the expectation of the ANEES is 1 [4]. Because the target moves at a constant high velocity, the velocity RMSEs of the three filters differ little in all simulations and will not be presented. The performance of the three filters in the different cases can be summarized as follows.

Case 1: No process noise; measurement errors σ_r = 4 m and σ_θ = 10 millirad; 100 runs.
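The ANEES of (19) is straightforward to compute from the per-run errors and filter-reported covariances; a minimal sketch (the function name is hypothetical):

```python
import numpy as np

def anees(errors, covs):
    """Average normalized estimation error squared, eq. (19).

    errors : (N, n) array, row i is the state-estimation error of run i
    covs   : (N, n, n) array, the filter-computed covariance of each run
    """
    N, n = errors.shape
    total = sum(e @ np.linalg.solve(P, e) for e, P in zip(errors, covs))
    return total / (N * n)
```

If the filter is credible, each error is distributed with the reported covariance, each normalized term is chi-square with n degrees of freedom divided by n, and the ANEES concentrates around 1 as N grows.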

[Fig. 2: ANEES (Case 1)]
[Fig. 3: Position RMSE (Case 1); (a) initial transient, (b) after transient]

Fig. 2 compares the ANEES of the MC and NC techniques with that of the proposed filter. The proposed filter is much more consistent than the other two; furthermore, its ANEES is very close to the ideal unity, which indicates the estimator is almost perfectly consistent. Fig. 3 compares the position RMSE of the three techniques; for convenience, we separate the transient part and the steady-state part. It is not easy to tell which filter is better in the transient part, but our method is much more accurate than the other two after the transient.

Case 2: Process noise Q = 10²I; measurement errors σ_r = 4 m and σ_θ = 10 millirad; 100 runs.

In this case, the process noise is increased. As is clear from Figs. 4 and 5, the ANEES of our proposed filter is very close to unity and better than that of the other two. Our filter also has a smaller position RMSE than the other two after the transient.

[Fig. 4: ANEES (Case 2)]
[Fig. 5: Position RMSE (Case 2); (a) initial transient, (b) after transient]

Case 3: Measurement errors σ_r = 40 m and σ_θ = 100 millirad; process noise Q = 10²I; 100 runs.

In this case, the measurement noise is increased. The simulation results are plotted in Figs. 6 and 7. Our filter is much more accurate and credible; in particular, its ANEES is still very close to the ideal unity.

[Fig. 6: ANEES (Case 3)]
[Fig. 7: Position RMSE (Case 3); (a) initial transient, (b) after transient]

Case 4: Measurement errors σ_r = 400 m and σ_θ = 400 millirad; process noise Q = 10²I; 100 runs.

In this case, the measurement noise is increased greatly. The simulation results in Figs. 8 and 9 show that the ANEES of our filter is still very credible even though the position RMSE is rather large. For this problem, where the bearing error is dominant, the tracking performance depends mainly on σ_θ (in fact, changing σ_r = 40 m to σ_r = 400 m has virtually no impact).

[Fig. 8: ANEES (Case 4)]
[Fig. 9: Position RMSE (Case 4); (a) initial transient, (b) after transient]

Case 5: The coordinates (x, y) of the target at time zero are determined by random draws from two independent Gaussian distributions with means -50 m and 200 m, respectively, and a common standard deviation of 5 m. The target moves at a constant velocity (in both direction and speed), whose components ẋ and ẏ, determined by random draws from Gaussian distributions, have means 5 m/s and 0 m/s, respectively, and a common standard deviation of 0.1 m/s. The sensor's independent measurement errors have standard deviations σ_r = 4 m and σ_θ = 30 millirad; process noise Q = 0.1²I; 100 runs.

In this case, our filter still performs well in this short-range application, as shown in Figs. 10 and 11. This indicates that the approximation in the implementation is not sensitive to the scenario. Similar observations were made for many other cases not presented here.

[Fig. 10: ANEES (Case 5)]
[Fig. 11: Position RMSE (Case 5); (a) initial transient, (b) after transient]

In the simulations conducted, the standard implementation of the MC and NC conversion techniques sometimes exhibited numerical problems; we had to resort to the numerically more robust Joseph form for the covariance computation. In contrast, we did not experience any numerical problem with the approximate implementation of our proposed filter. The three filters have essentially the same computational complexity.
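The Joseph form mentioned above replaces the short covariance update $P - KHP$ with a form that stays symmetric and positive semidefinite for any gain. A minimal sketch (the function name is hypothetical) for a linear-measurement update:

```python
import numpy as np

def joseph_update(P, K, H, R):
    """Joseph-form covariance update:
        P+ = (I - K H) P (I - K H)' + K R K'
    Algebraically equal to (I - K H) P for the optimal gain, but numerically
    it preserves symmetry and positive semidefiniteness for any K."""
    A = np.eye(P.shape[0]) - K @ H
    return A @ P @ A.T + K @ R @ K.T
```

For the optimal gain $K = PH'(HPH' + R)^{-1}$ the two forms agree exactly; the Joseph form simply avoids the cancellation that can make the short form indefinite in finite precision.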

6 Conclusion

An optimal recursive filter in the sense of BLUE has been presented for linear dynamics with nonlinear (polar) measurements. This filter operates entirely in the Cartesian coordinates but is free of the fundamental flaws of the measurement-conversion method. A simulation-based comparison of an approximate implementation of the proposed filter with two state-of-the-art measurement-conversion techniques has demonstrated the following. In terms of estimation errors and filter credibility, the proposed filter significantly outperforms the two measurement-conversion techniques in all cases tested; in particular, the proposed filter is almost always perfectly credible in that the actual estimation errors are consistent with the filter's self-assessment. By abandoning the Kalman filter framework and using the optimal BLUE filter in the implementation, this good performance is achieved without increasing the computational complexity. Moreover, the optimal filter framework can be extended to the 3D case and to the RUV coordinate system. Extensions to these cases will be reported in a forthcoming paper.

Appendix

The two measurement-conversion techniques provide the following converted measurements in the Cartesian coordinates.

Measurement-conditioned (MC) (referred to as the fixed-measurement approach in [6]):
$$x_m = r_m\cos\theta_m\, A, \qquad y_m = r_m\sin\theta_m\, A$$
$$C_{\tilde{x}} = 0.5\, R_1 T_1 - R_0\cos^2\theta_m\, A^2$$
$$C_{\tilde{y}} = 0.5\, R_1 T_2 - R_0\sin^2\theta_m\, A^2$$
$$C_{\tilde{x}\tilde{y}} = \cos\theta_m\sin\theta_m\,(R_1 A^4 - R_0 A^2)$$

Nested-conditioning (NC) (referred to as the additive fixed-truth approach II in [6]):
$$x_m = r_m\cos\theta_m\, S, \qquad y_m = r_m\sin\theta_m\, S$$
$$C_{\tilde{x}} = R_0\cos^2\theta_m\, S(S - 2A) + 0.5\, R_1 T_1$$
$$C_{\tilde{y}} = R_0\sin^2\theta_m\, S(S - 2A) + 0.5\, R_1 T_2$$
$$C_{\tilde{x}\tilde{y}} = \cos\theta_m\sin\theta_m\,\big(R_0 S(S - 2A) + R_1 A^4\big)$$

where
$$A = e^{-0.5\sigma_\theta^2}, \qquad R_0 = r_m^2, \qquad R_1 = r_m^2 + \sigma_r^2, \qquad R_2 = r_m^2 + 2\sigma_r^2$$
$$T_1 = 1 + e^{-2\sigma_\theta^2}\cos 2\theta_m, \qquad T_2 = 1 - e^{-2\sigma_\theta^2}\cos 2\theta_m, \qquad S = 1 - 2e^{-0.5\sigma_\theta^2} + e^{-2\sigma_\theta^2}$$

References

[1] Y. Bar-Shalom and X. R. Li. Multitarget-Multisensor Tracking: Principles and Techniques. YBS Publishing, Storrs, CT, 1995.

[2] D. Lerro and Y. Bar-Shalom. Tracking with Debiased Consistent Converted Measurements vs. EKF. IEEE Trans. Aerospace and Electronic Systems, AES-29(3):1015–1022, July 1993.

[3] X. R. Li and V. P. Jilkov. A Survey of Maneuvering Target Tracking—Part III: Measurement Models. In Proc. 2001 SPIE Conf. on Signal and Data Processing of Small Targets, vol. 4473, pages 423–446, San Diego, CA, USA, 2001.

[4] X. R. Li, Z. Zhao, and V. P. Jilkov. Estimator's Credibility and Its Measures. In Proc. IFAC 15th World Congress, Barcelona, Spain, July 2002.

[5] M. Miller and O. Drummond. Coordinate Transformation Bias in Target Tracking. In Proc. 1999 SPIE Conf. on Signal and Data Processing of Small Targets, vol. 3809, pages 409–424, 1999.

[6] M. D. Miller and O. E. Drummond. Comparison of Methodologies for Mitigating Coordinate Transformation Bias in Target Tracking. In Proc. 2000 SPIE Conf. on Signal and Data Processing of Small Targets, vol. 4048, pages 414–427, Orlando, FL, USA, April 2000.

[7] L. Mo, X. Song, Y. Zhou, and Z. Sun. An Alternative Unbiased Consistent Converted Measurements for Target Tracking. In Proc. SPIE: Acquisition, Tracking, and Pointing XI, vol. 3086, pages 308–310, 1997.

[8] L. Mo, X. Song, Y. Zhou, Z. Sun, and Y. Bar-Shalom. Unbiased Converted Measurements for Tracking. IEEE Trans. Aerospace and Electronic Systems, AES-34(3):1023–1026, 1998.

[9] P. Suchomski. Explicit Expressions for Debiased Statistics of 3D Converted Measurements. IEEE Trans. Aerospace and Electronic Systems, AES-35(1):368–370, 1999.

[10] M. Toda and R. Patel. Performance Bounds for Continuous-Time Filters in the Presence of Modeling Errors. IEEE Trans. Aerospace and Electronic Systems, AES-14(6):912–920, Nov. 1978.
