An Adaptive Kalman Filter with Dynamic Rescaling of Process Noise

Zhansheng Duan
Electronic & Information Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, P. R. China
[email protected]

Chongzhao Han
Electronic & Information Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, P. R. China
[email protected]

Hongshe Dang
Electronic & Information Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi 710049, P. R. China
[email protected]

Abstract - In this paper, a novel adaptive Kalman filter is proposed to track a maneuvering target with unknown process noise statistics. The process noise statistics are first estimated by Sage's algorithm; a residual-based hypothesis testing method is then used to detect likely divergence of the filter; once divergence is detected, a time-varying scale factor matrix is used to modify the estimated value of the process noise covariance. An adaptive Kalman filtering algorithm with dynamic rescaling of the process noise is thus derived. Using the mean-adaptive acceleration model, the validity of the proposed algorithm is verified by Monte Carlo simulations in two typical target maneuver scenarios.

Keywords: adaptive Kalman filtering, dynamic rescaling of process noise, maneuvering target tracking.

1 Introduction

In maneuvering target tracking, the target dynamic and sensor measurement equations can usually be expressed by the following stochastic difference equations, respectively:

X(k) = Φ(k, k−1) X(k−1) + B(k−1) U(k−1) + W(k−1)   (1)

Z(k) = H(k) X(k) + V(k)   (2)

where the state vector X(k) ∈ R^n, the measurement vector Z(k) ∈ R^m, the deterministic input U(k−1) ∈ R^r, and Φ(k, k−1), B(k−1) and H(k) are known matrices with appropriate dimensions.

This paper supposes that the process noise W(k−1) is an n-dimensional Gaussian white noise sequence with unknown statistics,

E[W(k)] = q(k),  cov[W(k), W(i)] = Q(k) δ_ki;

that the measurement noise V(k) is an m-dimensional zero-mean Gaussian white noise sequence with known covariance,

cov[V(k), V(i)] = R(k) δ_ki;

and that W(k−1), V(k) and X(0) are mutually independent.

Based on Sage's estimator for constant noise statistics [2], an exponentially weighted fading-memory estimator is proposed in [1], which recursively estimates the unknown time-varying mean and covariance matrices simultaneously. This estimator in essence maximizes the a posteriori probability; it is simple, yet very effective. The unknown process noise covariance matrix can be estimated accurately when the filter is in steady state, but in some cases, especially when the system states or parameters change suddenly, the Kalman filter that uses this estimator tends to diverge numerically, as mentioned in [3, 4].

To solve this problem, likely filter divergence is detected in this paper based on the residual; once divergence is detected, a novel time-varying scale factor matrix is proposed to rescale the estimated process noise covariance matrix dynamically.

2 Adaptive Kalman Filtering

2.1 Adaptive Estimation of the Process Noise Statistics

For the unknown process noise statistics in Eq. (1), according to [1], the following formulae can be used to estimate them:

q̂(k+1) = (1 − d_k) q̂(k) + d_k [X̂(k+1/k+1) − Φ(k+1, k) X̂(k/k)]   (3)

Q̂(k+1) = (1 − d_k) Q̂(k) + d_k [K(k+1) v(k+1) v^T(k+1) K^T(k+1) + P(k+1/k+1) − Φ(k+1, k) P(k/k) Φ^T(k+1, k)]   (4)

where d_k = (1 − b)/(1 − b^(k+1)), b is a forgetting factor with 0 < b < 1, and v(k+1) is the residual, defined as v(k+1) ≜ Z(k+1) − H(k+1) X̂(k+1/k).
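To make Eqs. (3)-(4) concrete, a minimal Python sketch of this fading-memory update is given below. The function and argument names are our own, not the paper's, and the filter quantities (state estimates, gain, covariances) are assumed to be supplied by the surrounding Kalman filter recursion.

```python
import numpy as np

def update_noise_statistics(k, b, q_hat, Q_hat, x_upd, x_prev_upd,
                            Phi, K, v, P_upd, P_prev_upd):
    """Fading-memory update of the process noise mean/covariance, Eqs. (3)-(4).

    k            : current time index (the weight uses b**(k+1))
    b            : forgetting factor, 0 < b < 1
    q_hat, Q_hat : previous estimates of q(k) and Q(k)
    x_upd        : updated state estimate X^(k+1/k+1)
    x_prev_upd   : previous updated state estimate X^(k/k)
    Phi          : state transition matrix Phi(k+1, k)
    K, v         : Kalman gain K(k+1) and residual v(k+1)
    P_upd        : updated covariance P(k+1/k+1)
    P_prev_upd   : previous updated covariance P(k/k)
    """
    d_k = (1.0 - b) / (1.0 - b ** (k + 1))

    # Eq. (3): recursive estimate of the process noise mean
    q_new = (1.0 - d_k) * q_hat + d_k * (x_upd - Phi @ x_prev_upd)

    # Eq. (4): recursive estimate of the process noise covariance
    Kv = (K @ v).reshape(-1, 1)
    Q_new = (1.0 - d_k) * Q_hat + d_k * (Kv @ Kv.T + P_upd - Phi @ P_prev_upd @ Phi.T)
    return q_new, Q_new
```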

2.2 Detection of the Likely Divergence

The following hypothesis testing method is constructed to check whether the Kalman filter, which uses Eq. (3) and Eq. (4) to estimate the unknown process noise statistics, has been diverging:

H_0: the Kalman filter is not divergent;
H_1: the Kalman filter has been diverging.

According to the theory of optimal Kalman filtering, under hypothesis H_0 the residual sequence {v(k)} is a zero-mean Gaussian white noise sequence with covariance matrix S(k) = H(k) P(k/k−1) H^T(k) + R(k), so the test statistic used,

t(k) = v^T(k) S^(−1)(k) v(k),

obeys a chi-square distribution with m degrees of freedom, where m is the dimension of the measurement vector. If t(k) ≤ χ²_m(α) (where 1 − α is the level of confidence, which should be set quite high, e.g., 95% or 99%), then hypothesis H_0 is accepted and the Kalman filter is believed not to be divergent; otherwise hypothesis H_1 is accepted and the Kalman filter is believed to be diverging.

2.3 Dynamic Rescaling of the Process Noise Covariance

When the Kalman filter is in steady state, its gain matrix K(k+1) reaches its minimum, which results in the minimum contribution of the residual to the estimated state. Once a sudden change of the system states or parameters occurs, although the residual becomes larger, K(k+1) is still much smaller because it is computed recursively from P(k+1/k); this can result in a large peak value of the estimation error, or even divergence. So P(k+1/k) should be increased in this case, so that K(k+1) is also increased. For this reason, a scale factor λ(k+1), which is larger than or equal to 1, is added in the calculation of P(k+1/k) as follows:

P(k+1/k) = Φ(k+1, k) P(k/k) Φ^T(k+1, k) + λ(k+1) · Q̂(k)   (5)

where λ(k+1) is an unknown time-varying scale factor.

According to the orthogonality principle, residuals at different sampling instants are uncorrelated in optimal Kalman filtering, that is to say,

V_j(k+1) ≜ E[v(k+1+j) v^T(k+1)] ≡ 0,  (j ≠ 0).

Referring to [5, 6], the above equation is equivalent to the following one:

P(k+1/k) H^T(k+1) − K(k+1) V_0(k+1) ≡ 0   (6)

where V_0(k+1) = E[v(k+1) v^T(k+1)]. That is to say, the actual covariance matrix of the residual sequence should equal its theoretical value under the optimality assumption:

V_0(k+1) = H(k+1) P(k+1/k) H^T(k+1) + R(k+1)   (7)

Substituting the P(k+1/k) in Eq. (7) with Eq. (5), after a simple transformation, it can be obtained that

λ(k+1) H(k+1) Q̂(k) H^T(k+1) = V_0(k+1) − R(k+1) − H(k+1) Φ(k+1, k) P(k/k) Φ^T(k+1, k) H^T(k+1).   (8)

Defining

N(k+1) = V_0(k+1) − R(k+1) − H(k+1) Φ(k+1, k) P(k/k) Φ^T(k+1, k) H^T(k+1),   (9)

M(k+1) = H(k+1) Q̂(k) H^T(k+1),   (10)

then from Eq. (8), an approximate solution to λ(k+1) can be derived as follows:

λ(k+1) = λ_0(k+1)  when λ_0(k+1) ≥ 1,
λ(k+1) = 1         when λ_0(k+1) < 1,   (11)

where λ_0(k+1) = tr[N(k+1)] / tr[M(k+1)].
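A small Python sketch of the residual-based divergence test of Section 2.2 and of the scalar rescaling of Eqs. (9)-(11) might look as follows; it assumes SciPy is available for the chi-square quantile, and the helper names are ours rather than the paper's.

```python
import numpy as np
from scipy.stats import chi2

def divergence_detected(v, S, m, alpha=0.05):
    """Chi-square test of Section 2.2: reject H0 (no divergence) if
    t(k) = v^T S^{-1} v exceeds the chi-square quantile with m dof."""
    t = float(v.T @ np.linalg.solve(S, v))
    return t > chi2.ppf(1.0 - alpha, df=m)

def scalar_scale_factor(V0, R, H, Phi, P_upd, Q_hat):
    """Approximate scalar scale factor of Eq. (11), via Eqs. (9)-(10)."""
    N = V0 - R - H @ Phi @ P_upd @ Phi.T @ H.T      # Eq. (9)
    M = H @ Q_hat @ H.T                             # Eq. (10)
    lam0 = np.trace(np.atleast_2d(N)) / np.trace(np.atleast_2d(M))
    return max(lam0, 1.0)                           # Eq. (11): never below 1
```

The predicted covariance is then recomputed as in Eq. (5) with the returned λ(k+1).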

It is not difficult to see that, if different scale factors are applied to the position, velocity and acceleration components of the state vector, the performance of the Kalman filter can be expected to improve further, so Eq. (5) can be modified as follows:

P(k+1/k) = Φ(k+1, k) P(k/k) Φ^T(k+1, k) + LMD(k+1) · Q̂(k)   (12)

where LMD(k+1) = diag[λ_1(k+1), λ_2(k+1), λ_3(k+1)] is a time-varying scale factor matrix, and λ_i(k+1) (i = 1, 2, 3) are time-varying scale factors for the position, velocity and acceleration components of the state vector respectively.

According to a priori knowledge of the target being tracked, if the ratio

λ_1(k+1) : λ_2(k+1) : λ_3(k+1) = α_1 : α_2 : α_3   (α_i ≥ 1, i = 1, 2, 3)

can be roughly determined, let λ_i(k+1) = α_i · c(k+1), i = 1, 2, 3. Substituting the P(k+1/k) in Eq. (7) with Eq. (12), it can be obtained that

H(k+1) LMD(k+1) Q̂(k) H^T(k+1) = V_0(k+1) − R(k+1) − H(k+1) Φ(k+1, k) P(k/k) Φ^T(k+1, k) H^T(k+1).

According to the properties of the matrix trace, the above equation can be rewritten as

tr[H(k+1) LMD(k+1) Q̂(k) H^T(k+1)] = tr[LMD(k+1) Q̂(k) H^T(k+1) H(k+1)]
= tr[V_0(k+1) − R(k+1) − H(k+1) Φ(k+1, k) P(k/k) Φ^T(k+1, k) H^T(k+1)],   (13)

where N(k+1) is as defined in Eq. (9) (so the right-hand side equals tr[N(k+1)]) and

M(k+1) = Q̂(k) · H^T(k+1) H(k+1) = (M_ij(k+1)).   (14)

Then from Eq. (13), it can be obtained that

c(k+1) = tr[N(k+1)] / (Σ_{i=1}^{3} α_i · M_ii(k+1)).   (15)

Thus the scale matrix can be obtained approximately as follows:

λ_i(k+1) = α_i · c(k+1)  when α_i · c(k+1) > 1,
λ_i(k+1) = 1             when α_i · c(k+1) ≤ 1.

In the above equations, the true value of V_0(k+1) is unknown; statistically, it can be calculated approximately by the arithmetical average

V_0(k+1) ≈ (1/(k+1)) Σ_{i=1}^{k+1} v(i) v^T(i).   (16)
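Similarly, the component-wise scale matrix of Eqs. (12)-(15) can be sketched in Python as below, assuming the state is ordered as [position, velocity, acceleration]; the function name and the `alphas` argument (the ratio α_1 : α_2 : α_3) are our own labels.

```python
import numpy as np

def lmd_scale_matrix(V0, R, H, Phi, P_upd, Q_hat, alphas):
    """Component-wise scale factor matrix LMD(k+1), Eqs. (12)-(15).

    alphas : iterable of alpha_i >= 1 for the position, velocity and
             acceleration components (e.g. (10.0, 1.0, 1.0)).
    """
    alphas = np.asarray(alphas, dtype=float)
    N = V0 - R - H @ Phi @ P_upd @ Phi.T @ H.T           # Eq. (9)
    M = Q_hat @ H.T @ H                                  # Eq. (14)
    # Eq. (15): common factor c(k+1) from the trace equation
    c = np.trace(np.atleast_2d(N)) / np.sum(alphas * np.diag(M))
    # Each scale factor is clipped below at 1
    lam = np.maximum(alphas * c, 1.0)
    return np.diag(lam)                                  # LMD(k+1) = diag(lam)
```

The predicted covariance is then P(k+1/k) = Φ(k+1, k) P(k/k) Φ^T(k+1, k) + LMD(k+1) · Q̂(k), as in Eq. (12).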

The weighting coefficient for each term in the sum of Eq. (16) is equally 1/(k+1), but in maneuvering target tracking the effect of the latest residual should be emphasized much more; this can be realized by using a different weighting coefficient for each term. Using the exponentially weighted fading-memory method, the weighting coefficients {β_i} can be chosen as follows:

β_i = β_{i−1} c,  0 < c < 1,  Σ_{i=0}^{k} β_i = 1,

where c is a forgetting factor. Based on the above characteristics of the weighting coefficients {β_i}, it can be deduced that

β_i = β_0 c^i = e_k c^i,  i = 0, 1, …, k,   β_0 = e_k = (1 − c)/(1 − c^(k+1)).

In Eq. (16), substituting the old weighting coefficient 1/(k+1) with the new one β_{k+1−i}, an approximate recursive estimation formula for V_0(k+1) can be obtained as follows:

V_0(k+1) = Σ_{i=1}^{k+1} β_{k+1−i} v(i) v^T(i)
         = ((1 − c)/(1 − c^(k+1))) v(k+1) v^T(k+1) + Σ_{i=1}^{k} ((1 − c)/(1 − c^(k+1))) c^(k+1−i) v(i) v^T(i)
         = e_k v(k+1) v^T(k+1) + (c(1 − c^k)/(1 − c^(k+1))) Σ_{i=1}^{k} ((1 − c)/(1 − c^k)) c^(k−i) v(i) v^T(i)
         = (1 − e_k) V_0(k) + e_k v(k+1) v^T(k+1).   (17)

2.4 Calculation Steps

The following initial conditions of the Kalman filter are supposed to be known: X̂(0/0), P(0/0), q̂(0), Q̂(0). The complete calculation steps of the adaptive Kalman filtering algorithm proposed in this paper within one sampling cycle can be expressed as follows:

Step 1: X̂(k+1/k) = Φ(k+1, k) X̂(k/k) + B(k) U(k) + q̂(k);
Step 2: v(k+1) = Z(k+1) − H(k+1) X̂(k+1/k);
Step 3: P(k+1/k) = Φ(k+1, k) P(k/k) Φ^T(k+1, k) + Q̂(k);
Step 4: conduct the hypothesis testing described in Section 2.2; if t(k+1) > χ²_m(α), go to Step 5, else go to Step 6;
Step 5: P(k+1/k) = Φ(k+1, k) P(k/k) Φ^T(k+1, k) + LMD(k+1) · Q̂(k);
Step 6: K(k+1) = P(k+1/k) H^T(k+1) [H(k+1) P(k+1/k) H^T(k+1) + R(k+1)]^(−1);
Step 7: X̂(k+1/k+1) = X̂(k+1/k) + K(k+1) · v(k+1);
Step 8: P(k+1/k+1) = [I − K(k+1) H(k+1)] P(k+1/k);
Step 9: q̂(k+1) = (1 − d_k) q̂(k) + d_k · [X̂(k+1/k+1) − Φ(k+1, k) X̂(k/k)];
Step 10: Q̂(k+1) = (1 − d_k) Q̂_M(k) + d_k [K(k+1) v(k+1) v^T(k+1) K^T(k+1) + P(k+1/k+1) − Φ(k+1, k) P(k/k) Φ^T(k+1, k)].
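Putting Steps 1-10 together, one cycle of the proposed filter can be sketched in Python as follows. This is a minimal rendering under our own naming; it reuses the hypothetical helpers `divergence_detected`, `lmd_scale_matrix` and `update_noise_statistics` sketched above, and it maintains V_0(k+1) via Eq. (17). Where the paper writes Q̂_M(k) in Step 10 we simply use the current estimate Q̂(k), which is our reading, not something the paper spells out.

```python
import numpy as np

def adaptive_kf_cycle(k, x_upd, P_upd, q_hat, Q_hat, V0, z, U,
                      Phi, B, H, R, b, c, alphas, alpha_test=0.05):
    """One sampling cycle of the proposed adaptive Kalman filter (Section 2.4).

    V0 is the recursive residual-covariance estimate of Eq. (17); Q_hat is
    used where the paper writes Q^_M(k) in Step 10 (our assumption).
    """
    m = H.shape[0]

    # Steps 1-2: state prediction and residual
    x_pred = Phi @ x_upd + B @ U + q_hat
    v = z - H @ x_pred

    # Eq. (17): fading-memory estimate of the residual covariance V0(k+1)
    e_k = (1.0 - c) / (1.0 - c ** (k + 1))
    V0 = (1.0 - e_k) * V0 + e_k * np.outer(v, v)

    # Step 3: covariance prediction with the estimated Q
    P_pred = Phi @ P_upd @ Phi.T + Q_hat

    # Steps 4-5: residual-based divergence test and dynamic rescaling
    S = H @ P_pred @ H.T + R
    if divergence_detected(v, S, m, alpha_test):
        LMD = lmd_scale_matrix(V0, R, H, Phi, P_upd, Q_hat, alphas)
        P_pred = Phi @ P_upd @ Phi.T + LMD @ Q_hat          # Eq. (12)
        S = H @ P_pred @ H.T + R

    # Steps 6-8: gain, state update, covariance update
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ v
    P_new = (np.eye(len(x_upd)) - K @ H) @ P_pred

    # Steps 9-10: fading-memory update of the noise statistics (Eqs. 3-4)
    q_new, Q_new = update_noise_statistics(k, b, q_hat, Q_hat, x_new, x_upd,
                                           Phi, K, v, P_new, P_upd)
    return x_new, P_new, q_new, Q_new, V0
```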

3 Simulation and Comparison

The performance of the proposed algorithm is evaluated by a 50-run Monte Carlo simulation over 150 scans. The evaluation criterion is the RMSE of the position, velocity and acceleration components of the state vector, defined as

x̃(k) = √( (1/N) Σ_{i=1}^{N} [x_i(k) − x̂_i(k/k)]² ),

where x_i(k) and x̂_i(k/k) stand for the true state component of the maneuvering target and its estimate in the i-th run at instant k, N is the number of Monte Carlo runs, and x̃(k) is the RMSE for one of the state components of the maneuvering target.

For simplicity, only the target motion in one dimension (the x direction) is simulated. The target dynamic equation (1) used is the mean-adaptive acceleration model, that is,

X(k) = [1, T, (αT − 1 + e^(−αT))/α²; 0, 1, (1 − e^(−αT))/α; 0, 0, e^(−αT)] X(k−1)
       + ( [T²/2; T; 1] − [(αT − 1 + e^(−αT))/α²; (1 − e^(−αT))/α; e^(−αT)] ) a(k−1) + W(k−1)

where the sampling interval T = 1 s and the maneuver frequency α = 2 in the simulation. Only the position can be measured, and the measurement noise covariance matrix is R(k) = 10^4 m².
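For concreteness, the transition matrix, input gain and measurement matrix of this model can be built in Python as follows (a sketch with T and α as in the simulation; the function name is ours).

```python
import numpy as np

def mean_adaptive_model(T=1.0, alpha=2.0):
    """Transition matrix, input gain and measurement matrix of the
    mean-adaptive acceleration model of Section 3 (state: position,
    velocity, acceleration in one dimension)."""
    eaT = np.exp(-alpha * T)
    Phi = np.array([
        [1.0, T, (alpha * T - 1.0 + eaT) / alpha**2],
        [0.0, 1.0, (1.0 - eaT) / alpha],
        [0.0, 0.0, eaT],
    ])
    # Input gain multiplying the mean acceleration a(k-1)
    G = np.array([T**2 / 2.0, T, 1.0]) - np.array([
        (alpha * T - 1.0 + eaT) / alpha**2,
        (1.0 - eaT) / alpha,
        eaT,
    ])
    # Only the position is measured
    H = np.array([[1.0, 0.0, 0.0]])
    return Phi, G, H
```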

The following acceleration variance adaptation algorithm is introduced in [7]:

σ_a² = ((4 − π)/π) (a_max − ẍ̂(k/k))²    when ẍ̂(k/k) > 0,
σ_a² = ((4 − π)/π) (a_{−max} + ẍ̂(k/k))²  when ẍ̂(k/k) < 0,

where ẍ̂(k/k) is the estimated acceleration; the process noise covariance matrix can then be calculated as in [8]. The a priori maximum acceleration a_max is set to 8g m/s² in the simulation according to [9]. The performance of the algorithm proposed in this paper is compared with that of this algorithm.

The other parameters in the proposed algorithm are selected as follows: b = 0.97, c = 0.85, α_1 : α_2 : α_3 = 10 : 1 : 1.

Two typical target maneuver scenarios are simulated, designed as follows.

Case 1: X(0) = [30 km, 300 m/s, 20 m/s²]^T. The target moves with constant acceleration during 0~50 s, its acceleration jumps to 25 m/s² at instant k = 50, and then it reverts to CA motion during 50~150 s.

Case 2: X(0) = [30 km, 300 m/s, 0 m/s²]^T. The target moves with constant velocity during 0~50 s, its velocity jumps to 350 m/s at instant k = 50, and then it reverts to CV motion during 50~150 s.

Fig. 1: position, velocity and acceleration RMSE in Case 1, where the solid line stands for the proposed algorithm and the dotted line for the algorithm in [7]. (Plots not reproduced here.)

Fig. 2: position, velocity and acceleration RMSE in Case 2, where the solid line stands for the proposed algorithm and the dotted line for the algorithm in [7]. (Plots not reproduced here.)

Compared with the process noise covariance matrix adaptation algorithm introduced in [7], it is easy to see that the algorithm proposed in this paper not only has a smaller RMSE but is also strongly robust to sudden changes in the system states; furthermore, it has an acceptable RMSE peak value and convergence time. Yet the RMSE peak value at the maneuver onset instant is still a little large, so further improvements should be studied in this direction.
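For reference, the acceleration variance adaptation of the baseline algorithm [7] quoted above can be sketched as follows. The helper name is ours; the symmetric choice a_{−max} = a_max and the value returned when the acceleration estimate is exactly zero are assumptions, since the text does not specify them.

```python
import numpy as np

def acceleration_variance(acc_est, a_max, a_neg_max=None):
    """Acceleration variance adaptation of the baseline algorithm [7].

    acc_est   : current acceleration estimate (acceleration component of
                the updated state).
    a_max     : a priori maximum acceleration (e.g. 8g in the simulation).
    a_neg_max : maximum acceleration magnitude in the negative direction;
                assumed equal to a_max if not given.
    """
    if a_neg_max is None:
        a_neg_max = a_max
    factor = (4.0 - np.pi) / np.pi
    if acc_est > 0:
        return factor * (a_max - acc_est) ** 2
    elif acc_est < 0:
        return factor * (a_neg_max + acc_est) ** 2
    # acc_est == 0 is not covered by the quoted formula; a_max is used here
    return factor * a_max ** 2
```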

4 Conclusions

A novel adaptive Kalman filter is proposed in this paper to track a maneuvering target with unknown process noise statistics. The work is based on Sage's estimator for the unknown process noise statistics; once likely filter divergence is detected by the residual-based hypothesis testing method, a time-varying scale factor matrix, deduced under the optimality assumption of the Kalman filter, is used to modify the estimated value of the process noise covariance. The new method mainly aims at overcoming the divergence that can result from sudden changes of the system states or parameters. From the simulation results, we can see that the proposed algorithm is convergent and strongly robust to sudden changes in the acceleration and velocity components of the system state; furthermore, its RMSE peak value and convergence time are both acceptable.

References

[1] Zili Deng, Yixin Guo. Dynamic Prediction of Oil Output and Water Output in Oil Field. Acta Automatica Sinica, 1983, Vol. 9, No. 2, pp. 121-126.

[2] A. P. Sage, G. W. Husa. Adaptive Filtering with Unknown Prior Statistics. 1969 Joint Automatic Control Conference, pp. 760-769.

[3] Hanguo Zhang, Hongyue Zhang. Method to Prevent the Divergence of Adaptive Kalman Filtering. Control and Decision, 1991, Vol. 6, No. 1, pp. 53-56.

[4] Hongcai Zhang, Youmin Zhang, Zhibin He. A Robust Adaptive Extended Kalman Filter and Its Application in Flight State Estimation. Information and Control, 1992, Vol. 21, No. 6, pp. 343-348.

[5] Donghua Zhou, Yugeng Xi, Zhongjun Zhang. Suboptimal Fading Extended Kalman Filtering for Nonlinear Systems. Control and Decision, 1990, No. 5, pp. 1-6.

[6] Donghua Zhou, Yugeng Xi, Zhongjun Zhang. A Suboptimal Multiple Fading Extended Kalman Filter. Acta Automatica Sinica, 1991, Vol. 17, No. 6, pp. 689-695.

[7] X. R. Li, V. P. Jilkov. A Survey of Maneuvering Target Tracking: Dynamic Models. Proc. 2000 SPIE Conf. on Signal and Data Processing of Small Targets, Vol. 4048, Orlando, Florida, USA, April 2000, pp. 212-236.

[8] Robert A. Singer. Estimating Optimal Tracking Filter Performance for Manned Maneuvering Targets. IEEE Trans. on Aerospace and Electronic Systems, July 1970, Vol. AES-6, No. 4, pp. 473-483.

[9] Qingyu Cai, Yi Xue, Boyan Zhang. Data Processing and Simulation Technology for Phased-Array Radar. National Defence Industry Press, Beijing, 1997.