Practical Approaches to Kalman Filtering with Time-Correlated Measurement Errors

KEDONG WANG, Beihang University, China
YONG LI, CHRIS RIZOS, University of New South Wales, Australia

When the measurement errors in Kalman filtering are time correlated, time-differencing approaches are conventionally applied to deal with them, but these approaches are subject to practical limitations, such as time latency and numerical issues that stem from matrix inversion. This paper proposes two new algorithms that resolve these issues by augmenting the time-correlated elements of the measurement errors into the state vector. To avoid the singularity of the state error covariance matrix, the gain matrix is regularized in the first algorithm, and a small positive quantity is added to the diagonal elements of the state error covariance matrix in the second. The two new state-augmented algorithms are easier to keep convergent and have no time latency. Two simulations, with a one-dimensional model and a six degree-of-freedom model, demonstrate the proposed algorithms.

Manuscript received December 22, 2009; revised July 18, 2010 and August 2, 2010; released for publication October 12, 2011. IEEE Log No. T-AES/48/2/943842. Refereeing of this contribution was handled by D. Gebre-Egziabher. This research is sponsored by the China Aviation Science Foundation under grants No. 20100851017 and No. 20100818015. Part of this work was completed when the author visited the University of New South Wales with the support of the China Scholarship Council. Authors' addresses: K. Wang, GNC, School of Astronautics, Beihang University, Beijing 100191, China, E-mail: ([email protected]); Y. Li and C. Rizos, School of Surveying & Spatial Information Systems, University of New South Wales, Sydney NSW 2052, Australia. © 2012 IEEE 0018-9251/12/$26.00
I. INTRODUCTION

A prerequisite for optimal estimation using a standard Kalman filter (KF) is that the system has no time-correlated process or measurement errors. However, this is not always the case. The approach of augmenting a time-correlated process error into the state vector has proved effective both theoretically and practically [1, 2]. If the measurement errors are time correlated, however, the state vector augmentation approach is prone to divergence because of the singularity of the updated state error covariance matrix [2-5]. Therefore, a more complex approach based on time differencing, referred to here as the method of Bryson, has been proposed by Bryson and Henrikson [3]. This method has a 1-epoch latency in the measurement updating because the current measurement has to be used to update the state vector of the previous epoch. To resolve this problem, Petovello, et al. [6] recently proposed a modified time-differencing approach, here referred to as the method of Petovello, which is theoretically similar to the method of Bryson. However, the method of Bryson is more likely to converge than the method of Petovello because it does not need the time-consuming inverse of the transition matrix \Phi, using instead the inverse of the modified measurement error covariance matrix R_k^* (see (4) and (30)), which always exists because R_k^* is positive definite in most applications. Furthermore, it is difficult to calculate the inverse of \Phi if the state vector dimension n is very large or the transition matrix is ill conditioned. Both the Bryson and the Petovello methods require additional computations in comparison with the standard KF.

This paper proposes two new algorithms, the so-called Tikhonov KF and perturbed-P algorithms. They are derived by augmenting the time-correlated error into the state vector. To avoid the singularity problem of the matrix P, the Tikhonov KF algorithm regularizes the computation of the gain matrix K, whereas the perturbed-P algorithm adds a small quantity to the diagonal elements of the state error covariance matrix P to keep it positive definite. The effectiveness of the Tikhonov KF and the perturbed-P algorithms is demonstrated using simulated data. In comparison with the methods of Bryson and Petovello, the numerical stability of the Tikhonov KF and the perturbed-P algorithms is ensured because neither requires the inverse of \Phi or R_k^*. Compared with the Tikhonov KF algorithm, the computational burden of the perturbed-P algorithm is reduced because it avoids the iterative search for the regularization parameter.

In the following sections, the standard KF, the method of Bryson, the method of Petovello, the Tikhonov KF algorithm, and the perturbed-P algorithm are described for a system with a time-correlated measurement error. A concise derivation of the method of Bryson is provided, and

IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 48, NO. 2

APRIL 2012


the theoretical basis of the perturbed-P algorithm is discussed. Then simulations are conducted to compare the performance of the Tikhonov KF and perturbed-P algorithms with respect to the other three algorithms. Application of these algorithms to the case of loosely coupled global positioning system and inertial navigation system (GPS/INS) integration is investigated via simulations.

II. STATEMENT OF THE PROBLEM AND THE CURRENT APPROACHES

A. Statement of the Problem

The discrete system with the time-correlated measurement error can be expressed by the following equations:

    x_k = \Phi_{k-1} x_{k-1} + \Gamma_{k-1} w_{k-1}
    z_k = H_k x_k + v_k
    v_k = \Psi_{k-1} v_{k-1} + \zeta_{k-1}    (1)

    E(w_k) = E(\zeta_k) = 0
    E(w_k w_l^T) = Q_k \delta_{kl},  E(\zeta_k \zeta_l^T) = R_k \delta_{kl},  E(w_k \zeta_l^T) = 0

where x_k is the state vector at time t_k, \Phi_{k-1} is the state transition matrix from epoch t_{k-1} to t_k, \Gamma_{k-1} is the process noise distribution matrix, w_{k-1} is the process noise vector, z_k is the measurement vector, H_k is the measurement matrix, v_k is the measurement error, \Psi_{k-1} is the transition matrix of the time-correlated error, \zeta_{k-1} is Gaussian white noise with zero mean, E(x) is the expectation of x, \delta_{kl} is the Kronecker delta function, and Q_k and R_k are the covariance matrices of w_k and \zeta_k, respectively. Although w_k and \zeta_k are both zero-mean Gaussian white noise, the measurement error v_k is time correlated. Three current approaches to the solution of (1) are presented next: the standard KF, the method of Bryson, and the method of Petovello.
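Before turning to those approaches, the system of (1) is easy to simulate. The following sketch is our own illustration, not from the paper; the numbers (\Phi, H, \tau, \sigma) are arbitrary, and the error transition \Psi = e^{-\Delta t/\tau} anticipates the discretization in (23):

```python
import numpy as np

rng = np.random.default_rng(1)

# System (1): x_k = Phi x_{k-1} + Gamma w_{k-1}, z_k = H x_k + v_k,
# with time-correlated measurement error v_k = Psi v_{k-1} + zeta_{k-1}.
dt, tau, sigma = 1.0, 10.0, 0.5          # step, correlation time, error std (arbitrary)
Phi = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
Gamma = np.eye(2)
H = np.array([[0.0, 1.0]])               # velocity is observed
Q = np.diag([1e-4, 1e-4])                # process noise covariance
Psi = np.exp(-dt / tau)                  # scalar Gauss-Markov transition
R = sigma**2 * (1.0 - Psi**2)            # driving-noise variance of v_k

x = np.array([0.0, 1.0])
v = rng.normal(0.0, sigma)               # start the error in steady state
xs, zs = [], []
for _ in range(1000):
    x = Phi @ x + Gamma @ rng.multivariate_normal(np.zeros(2), Q)
    v = Psi * v + rng.normal(0.0, np.sqrt(R))
    xs.append(x.copy())
    zs.append(H @ x + v)

# The realized measurement errors should show lag-1 autocorrelation near Psi.
vs = np.array(zs).ravel() - np.array(xs)[:, 1]
rho1 = np.corrcoef(vs[:-1], vs[1:])[0, 1]
print(round(Psi, 3), round(rho1, 2))
```

The sample autocorrelation of the generated errors stays close to \Psi = e^{-1/10}, confirming that v_k is far from white.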

B. The Standard KF

Although the system of (1) cannot, in theory, be optimally processed directly by the standard KF, the standard KF may still be used to obtain a suboptimal solution when the measurement errors at different times are not strongly correlated, i.e., when v_k can be approximated by the white noise term \zeta_k. The approach is summarized as follows.

Prediction:

    \hat{x}_k(-) = \Phi_{k-1} \hat{x}_{k-1}(+)
    P_k(-) = \Phi_{k-1} P_{k-1}(+) \Phi_{k-1}^T + Q_{k-1}    (2)

Updating:

    K_k = P_k(-) H_k^T [H_k P_k(-) H_k^T + R_k]^{-1}
    \hat{x}_k(+) = \hat{x}_k(-) + K_k [z_k - H_k \hat{x}_k(-)]
    P_k(+) = (I - K_k H_k) P_k(-)    (3)

where \hat{x}_k is the estimate of x_k; (-) and (+) denote the value before and after the measurement updating, respectively; P_k is the state error covariance matrix; K_k is the gain matrix; and I is the identity matrix. The standard KF provides only an approximate solution to the problem because R_k is the covariance matrix of \zeta_k, not of v_k. If the measurement errors at different times are strongly correlated, the filtering error caused by this approximation can no longer be ignored, as demonstrated in Section IV. The following two algorithms have been proposed to solve this problem.

C. The Method of Bryson

Bryson and Henrikson [3] proposed a time-differencing algorithm, referred to here as the method of Bryson, as mentioned previously. A concise derivation of the method can be found in Appendix A. By time differencing the measurements and decoupling the time-differenced measurement errors from the process errors, parts of the system of (1) are rewritten, and the time prediction and measurement updating in the standard KF form are summarized as follows:

    z_k^* = z_{k+1} - \Psi_k z_k
    H_k^* = H_{k+1} \Phi_k - \Psi_k H_k
    S_k = Q_k \Gamma_k^T H_{k+1}^T
    R_k^* = H_{k+1} \Gamma_k Q_k \Gamma_k^T H_{k+1}^T + R_k    (4)

    J_k = \Gamma_k S_k (R_k^*)^{-1}
    Q_k^* = \Gamma_k [Q_k - S_k (R_k^*)^{-1} S_k^T] \Gamma_k^T
    \Phi_{k-1}^* = \Phi_{k-1} - J_{k-1} H_{k-1}^*

Prediction:

    \hat{x}_k(-) = \Phi_{k-1}^* \hat{x}_{k-1}(+) + J_{k-1} z_{k-1}^*
    P_k(-) = \Phi_{k-1}^* P_{k-1}(+) (\Phi_{k-1}^*)^T + Q_{k-1}^*    (5)

Updating:

    K_k = P_k(-) (H_k^*)^T [H_k^* P_k(-) (H_k^*)^T + R_k^*]^{-1}
    \hat{x}_k(+) = \hat{x}_k(-) + K_k [z_k^* - H_k^* \hat{x}_k(-)]
    P_k(+) = (I - K_k H_k^*) P_k(-)    (6)

The algorithm is similar in principle to the following method of Petovello except that the inverse of the transition matrix is not required; only the inverse of R_k^* is needed. According to (4), R_k^* is always positive definite if R_k is positive definite, which guarantees the existence of its inverse.

D. The Method of Petovello

Petovello, et al. [6] proposed a revised time-differencing algorithm. The time prediction


step is the same as in (2); however, the measurement updating step is different:

    z_k^* = z_k - \Psi_{k-1} z_{k-1}
    H_k^* = H_k - \Psi_{k-1} H_{k-1} \Phi_{k-1}^{-1}
    S_k = Q_{k-1} \Gamma_{k-1}^T (\Phi_{k-1}^{-1})^T H_{k-1}^T \Psi_{k-1}^T
    R_k^* = \Psi_{k-1} H_{k-1} \Phi_{k-1}^{-1} \Gamma_{k-1} Q_{k-1} \Gamma_{k-1}^T (\Phi_{k-1}^{-1})^T H_{k-1}^T \Psi_{k-1}^T + R_{k-1}    (7)

Updating:

    K_k = [P_k(-) (H_k^*)^T + S_k] [H_k^* P_k(-) (H_k^*)^T + R_k^* + H_k^* S_k + S_k^T (H_k^*)^T]^{-1}
    \hat{x}_k(+) = \hat{x}_k(-) + K_k [z_k^* - H_k^* \hat{x}_k(-)]
    P_k(+) = P_k(-) - K_k [H_k^* P_k(-) (H_k^*)^T + R_k^* + H_k^* S_k + S_k^T (H_k^*)^T] K_k^T    (8)
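As a concrete illustration of the time-differenced update above, the following sketch (ours, with invented time-invariant matrices and a single scalar measurement) builds the transformed quantities of (7) and performs one update of (8):

```python
import numpy as np

# Time-invariant model: x_k = Phi x_{k-1} + Gamma w_{k-1}, z_k = H x_k + v_k,
# v_k = Psi v_{k-1} + zeta_{k-1} (Psi and R are scalars for this sketch).
Phi = np.array([[1.0, 1.0], [0.0, 1.0]])
Gamma = np.eye(2)
H = np.array([[0.0, 1.0]])
Q = np.diag([1e-2, 1e-4])
Psi, R = 0.9, 0.5

Phi_inv = np.linalg.inv(Phi)                       # the inverse this method requires
Hs = H - Psi * H @ Phi_inv                         # H*_k from (7)
S = Q @ Gamma.T @ Phi_inv.T @ H.T * Psi            # S_k from (7)
Rs = Psi * H @ Phi_inv @ Gamma @ Q @ Gamma.T @ Phi_inv.T @ H.T * Psi + R  # R*_k

# One measurement update with the differenced measurement z*_k = z_k - Psi z_{k-1}.
P = np.diag([0.01, 0.01])
x = np.array([0.0, 1.0])
z_star = np.array([0.12])                          # invented differenced measurement
C = Hs @ P @ Hs.T + Rs + Hs @ S + S.T @ Hs.T       # bracketed innovation term in (8)
K = (P @ Hs.T + S) @ np.linalg.inv(C)
x = x + K @ (z_star - Hs @ x)
P = P - K @ C @ K.T
print(np.allclose(P, P.T))
```

The updated covariance stays symmetric and positive definite here; the practical weakness discussed next is the explicit inverse of \Phi.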

The inverse of \Phi_{k-1} is needed for this algorithm. This increases the computational burden if the dimension of \Phi_{k-1} is very large and may lead to divergence when \Phi_{k-1} is ill conditioned. Therefore, the algorithm may encounter practical problems even though its advantage over the standard KF has been proved. The methods of Bryson and Petovello have approximately the same computing burden, as shown by the simulations in Section IV. Because of the complexity of the two algorithms, both have a heavier computational burden than the standard KF. In particular, if the system is time varying or the dimension of the state is large, the computing resources required increase dramatically.

III. THE STATE-AUGMENTED APPROACHES

Bryson and Henrikson [3] discussed the approach of augmenting the time-correlated measurement error into the state vector, concluding that it is invalid because of the singularity of the state error estimate covariance matrix P_k(+). However, this approach leads to a comparatively simple algorithm if the singularity of P_k(+) can be overcome in some way, as in the Tikhonov KF and perturbed-P algorithms discussed in the following sections.

A. The State-Augmented Algorithm and Its Problem

The state vector is augmented with the time-correlated measurement error so that the system of (1) becomes

    x_k^a = \Phi_{k-1}^a x_{k-1}^a + \Gamma_{k-1}^a w_{k-1}^a
    z_k = H_k^a x_k^a    (9)

where

    x_k^a = \begin{bmatrix} x_k \\ v_k \end{bmatrix},
    \Phi_{k-1}^a = \begin{bmatrix} \Phi_{k-1} & 0 \\ 0 & \Psi_{k-1} \end{bmatrix},
    \Gamma_{k-1}^a = \begin{bmatrix} \Gamma_{k-1} & 0 \\ 0 & I \end{bmatrix},
    w_{k-1}^a = \begin{bmatrix} w_{k-1} \\ \zeta_{k-1} \end{bmatrix},
    H_k^a = [H_k \; I],
    Q_{k-1}^a = \Gamma_{k-1}^a \begin{bmatrix} Q_{k-1} & 0 \\ 0 & R_{k-1} \end{bmatrix} (\Gamma_{k-1}^a)^T.

There are no measurement errors in the augmented system, which means that, for the augmented state, the measurement errors can be ignored. If the standard KF is applied, the updating of the filter is difficult because P_k(+) converges to a singular matrix in the steady state. P_k(+) is immediately singular in the special case when all states are directly measured, as proved later in (12)-(15). A standard KF applied to such a system is as follows.

Prediction:

    \hat{x}_k^a(-) = \Phi_{k-1}^a \hat{x}_{k-1}^a(+)
    P_k(-) = \Phi_{k-1}^a P_{k-1}(+) (\Phi_{k-1}^a)^T + Q_{k-1}^a    (10)

Updating:

    K_k = P_k(-) (H_k^a)^T [H_k^a P_k(-) (H_k^a)^T]^{-1}
    \hat{x}_k^a(+) = \hat{x}_k^a(-) + K_k [z_k - H_k^a \hat{x}_k^a(-)]
    P_k(+) = (I - K_k H_k^a) P_k(-)    (11)

In (10), P_{k+1}(-) converges to P_k(+) if \Phi_k^a is near unity and Q_k^a is small [2, 3]. Then K_{k+1} is difficult to calculate if the matrix H_{k+1}^a P_{k+1}(-) (H_{k+1}^a)^T is singular according to (11), so the measurement-updated state \hat{x}_{k+1}^a(+) easily diverges and the iteration of the algorithm is difficult to carry out. Moreover, P_k(+) is always the zero matrix if all states are directly observable, no matter whether \Phi_k^a is near unity and/or Q_k^a is small, as proved next. If P_k(-) is singular, P_k(+) is also singular according to the last formula in (11). If P_k(-) is positive definite, then

    P_k(+) = (I - K_k H_k^a) P_k(-)
           = P_k(-) - P_k(-) (H_k^a)^T [H_k^a P_k(-) (H_k^a)^T]^{-1} H_k^a P_k(-).

    (12)

According to the matrix inversion lemma [7],

    (A - B C^{-1} D)^{-1} = A^{-1} + A^{-1} B (C - D A^{-1} B)^{-1} D A^{-1}.    (13)
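A quick numerical check of the identity (13) (our own illustration, with arbitrary well-conditioned matrices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary test matrices for (A - B C^{-1} D)^{-1}
#   = A^{-1} + A^{-1} B (C - D A^{-1} B)^{-1} D A^{-1}.
A = np.eye(3) * 2.0 + 0.1 * rng.standard_normal((3, 3))
B = rng.standard_normal((3, 2))
C = np.eye(2) * 5.0
D = rng.standard_normal((2, 3))

lhs = np.linalg.inv(A - B @ np.linalg.inv(C) @ D)
Ainv = np.linalg.inv(A)
rhs = Ainv + Ainv @ B @ np.linalg.inv(C - D @ Ainv @ B) @ D @ Ainv
print(np.allclose(lhs, rhs))
```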

Let A = P_k^{-1}(-), B = -(H_k^a)^T, D = H_k^a, and C = \gamma I (\gamma is a nonzero scalar). Then (13) becomes

    [P_k^{-1}(-) + \gamma^{-1} (H_k^a)^T H_k^a]^{-1}
      = P_k(-) - P_k(-) (H_k^a)^T [\gamma I + H_k^a P_k(-) (H_k^a)^T]^{-1} H_k^a P_k(-).    (14)

If \gamma \to 0, the right side of (14) approaches P_k(+), while the left side approaches the zero matrix if all states are measured, so that (H_k^a)^T H_k^a is invertible. That is, P_k(+) converges to the zero matrix. Therefore, P_k(+) is always singular if all states are directly observable, no matter whether P_k(-) is invertible or singular. Similarly, given that P_{k-1}(-) is invertible, then

    P_k(-) = \Phi_{k-1}^a P_{k-1}(+) (\Phi_{k-1}^a)^T + Q_{k-1}^a
           = \Phi_{k-1}^a \{P_{k-1}(-) - P_{k-1}(-) (H_{k-1}^a)^T [H_{k-1}^a P_{k-1}(-) (H_{k-1}^a)^T]^{-1} H_{k-1}^a P_{k-1}(-)\} (\Phi_{k-1}^a)^T + Q_{k-1}^a.    (15)

Substituting the result of (12) yields P_k(-) \to Q_{k-1}^a. Q_{k-1}^a is the covariance matrix of the augmented state errors, and it is symmetric and diagonal in shape. If there are no errors for some states, Q_{k-1}^a is singular; thus, the calculation of K_k in (11) is difficult to carry out. This is usually the case, as discussed in Chapter 9 of Grewal and Andrews [8]. Equations (12)-(15) imply that the algorithm diverges because of the singularity of the matrices P_k(+), Q_{k-1}^a, and H_k^a P_k(-) (H_k^a)^T. Because Q_{k-1}^a is a prior condition, effective methods should mitigate the influence of the singularity of P_k(+) and H_k^a P_k(-) (H_k^a)^T on the filtering. We explore two approaches to resolving these singularities: (1) the Tikhonov KF algorithm, which uses Tikhonov regularization to regularize H_k^a P_k(-) (H_k^a)^T and thus keep the calculation of K_k smooth, and (2) the perturbed-P algorithm, which adds a small quantity to the diagonal elements of P_k(+) to make it positive definite and so ensure that P_{k+1}(-) is positive definite.

B. Tikhonov KF Algorithm

Because the calculation of the inverse of the matrix H_k^a P_k(-) (H_k^a)^T causes divergence of the algorithm, one approach to avoiding divergence is to regularize H_k^a P_k(-) (H_k^a)^T before its inverse is calculated. According to the Tikhonov regularization criterion, K_k is modified as follows:

    W_k = H_k^a P_k(-) (H_k^a)^T
    K_k = P_k(-) (H_k^a)^T [\alpha I + W_k^T W_k]^{-1} W_k^T    (16)

where \alpha is the regularization parameter. There are many methods to determine an appropriate value of \alpha. In this paper, an iterative method is applied to search for the optimal value of \alpha [9-11]. The iteration is as follows:

1) Let y = [1 \; 1 \; \cdots \; 1]^T be an m \times 1 vector, where m is the row dimension of H_k^a (equivalently, the dimension of W_k). At the first step, let n = 0. The initial value of the vector u is calculated by the least squares method, that is, u_n = (W_k^T W_k)^{-1} W_k^T y.

2) Let \beta equal a small positive value, such as 1 \times 10^{-4}. Then the estimate of \alpha is calculated as follows:

    \alpha_n = \beta (1 + u_n^T u_n) - \frac{(y - W_k u_n)^T (y - W_k u_n)}{1 + u_n^T u_n}.    (17)

3) The vector u and \alpha are then updated:

    u_{n+1} = (W_k^T W_k + \alpha_n I)^{-1} W_k^T y
    \alpha_{n+1} = \beta (1 + u_{n+1}^T u_{n+1}) - \frac{(y - W_k u_{n+1})^T (y - W_k u_{n+1})}{1 + u_{n+1}^T u_{n+1}}.    (18)

4) If \|u_{n+1} - u_n\| < \epsilon or n > N, the iteration is terminated and \alpha_{n+1} is output as the value of \alpha; \epsilon is a small value, such as 1 \times 10^{-6}, and N is a threshold iteration number, such as 50. Otherwise, let u_n = u_{n+1}, \alpha_n = \alpha_{n+1}, and n = n + 1, and go back to step 3.

The Tikhonov regularization is an approximation method, although the approximation error can be kept within a reasonable range if a proper value of \alpha is selected. However, the estimation of \alpha is a time-consuming procedure, especially if a sophisticated iterative algorithm is applied. For example, the simulation in the succeeding section shows that the search for the value of \alpha usually converges within 20 iterations. If the value of u_0 cannot be acquired because of the singularity of (W_k^T W_k) (a scenario not encountered in the following simulations), the Tikhonov regularized solution (34) in Appendix B can be used as the initial value, i.e., u_0 = (W_k^T W_k + \beta I)^{-1} W_k^T y. The total least squares approach with the Tikhonov regularizing term [9] is presented in Appendix B. The KF with the Tikhonov regularization applied is referred to hereafter as the Tikhonov KF algorithm.
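The iterative search in steps 1)-4) can be sketched as follows. This is our own illustration; W below is an arbitrary, nearly singular stand-in for H_k^a P_k(-) (H_k^a)^T, and the tolerances follow the values suggested above:

```python
import numpy as np

def tikhonov_alpha(W, beta=1e-4, eps=1e-6, n_max=50):
    """Iteratively estimate the regularization parameter alpha per (17)-(18)."""
    m = W.shape[0]
    y = np.ones(m)

    def alpha_of(u):
        r = y - W @ u
        return beta * (1.0 + u @ u) - (r @ r) / (1.0 + u @ u)

    u = np.linalg.solve(W.T @ W, W.T @ y)      # step 1: least squares initialization
    alpha = alpha_of(u)                        # step 2
    for _ in range(n_max):                     # steps 3-4
        u_next = np.linalg.solve(W.T @ W + alpha * np.eye(m), W.T @ y)
        alpha = alpha_of(u_next)
        if np.linalg.norm(u_next - u) < eps:
            break
        u = u_next
    return alpha

# Example: a nearly singular innovation matrix W.
W = np.array([[1.0, 0.999], [0.999, 1.0]])
alpha = tikhonov_alpha(W)
K_core = np.linalg.inv(W.T @ W + alpha * np.eye(2)) @ W.T   # regularized inverse of W
print(float(alpha))
```

The returned \alpha is small and positive here, so the bracketed matrix in (16) is safely invertible even though W itself is close to singular.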

C. Perturbed-P Algorithm

Because the Tikhonov regularization method may be time consuming and may lead to a large regularization error, a computationally efficient method is proposed. We refer to this new method as the perturbed-P algorithm. By adding a small value to the diagonal elements of the matrix P_k(+), the perturbed P_k(+) becomes

    P_k(+) = (I - K_k H_k^a) P_k(-) + \lambda I    (19)

where \lambda is a small value (such as 1 \times 10^{-6}). The matrix P_k(+) thus becomes positive definite. To analyze the effect of the perturbation, the state error prediction and estimate covariance matrices are derived. The state error prediction covariance at epoch t_{k+1} is

    P_{k+1}(-) = \Phi_k^a P_k(+) (\Phi_k^a)^T + Q_k^a
               = \Phi_k^a \{P_k(-) - P_k(-) (H_k^a)^T [H_k^a P_k(-) (H_k^a)^T]^{-1} H_k^a P_k(-) + \lambda I\} (\Phi_k^a)^T + Q_k^a.    (20)

Similar to the analysis in Section IIIA, this matrix converges to the level of the process noise plus a perturbation, i.e., P_{k+1}(-) \to \lambda \Phi_k^a (\Phi_k^a)^T + Q_k^a. Therefore, P_{k+1}(-) is positive definite, so the gain matrix at epoch t_{k+1} can be calculated even though Q_k^a is small. Then the state error estimate covariance at epoch t_{k+1} is

    P_{k+1}(+) = (I - K_{k+1} H_{k+1}^a) P_{k+1}(-) + \lambda I
               = P_{k+1}(-) - P_{k+1}(-) (H_{k+1}^a)^T [H_{k+1}^a P_{k+1}(-) (H_{k+1}^a)^T]^{-1} H_{k+1}^a P_{k+1}(-) + \lambda I.    (21)

Similarly, P_{k+1}(+) \to \lambda I, which ensures the positive definiteness of P_{k+2}(-), and so on for the epochs that follow. In other words, the convergence of the whole procedure is ensured. However, \lambda should be small enough; otherwise, the residual error caused by the perturbation may accumulate with time, which may degrade the performance, or even cause the divergence, of the algorithm. If \lambda is small enough, P_{k+1}(+) converges to \lambda I and the algorithm error is controlled. On the other hand, P_{k+1}(+) is vulnerable to unexpected disturbances such as numerical error if the value of \lambda is too small.

A simple time-invariant scalar system can be used to demonstrate the effectiveness of the perturbation. In this system, \Phi_k^a = \phi, Q_k^a = q, and H_k^a = h. Given that P_k(-) \neq 0, K_k = 1/h, so that P_k(+) = 0 and P_{k+1}(-) = q according to (11). If q is very small or zero, the calculation of K_{k+1} is prone to divergence. With the perturbation, however, P_k(+) = \lambda according to (19). In the next iteration, P_{k+1}(-) = \lambda \phi^2 + q, K_{k+1} = 1/h, and P_{k+1}(+) = \lambda, which means that the state estimate covariance converges to \lambda. Substituting K_{k+1} into (11), we get \hat{x}_{k+1}^a(+) = z_{k+1}/h, which means that the state estimate depends on the current measurement. This is reasonable because measurement errors can be ignored in the state-augmented system, as mentioned previously. Because of the perturbation \lambda, the filtering process proceeds even though the process error covariance q is small or equals zero. P_k(+) converges to a small value, namely the perturbation parameter \lambda. In other words, the algorithm converges, and the singularity problem of the state error covariance has been overcome. The following simulations confirm that it is not difficult to determine an appropriate value of \lambda.

IV. SIMULATIONS

Two example systems are simulated to investigate the performance of the perturbed-P and Tikhonov KF algorithms compared with the methods of Bryson and Petovello. The first is a one-dimensional system, used to compare the approaches without coupled factors. The second is a six degree-of-freedom vehicle model, used to evaluate the approaches' performance in a more complex application. All of the algorithms were programmed in MATLAB (R2008b) and run on a computer with an Intel Core 2 Duo central processing unit (CPU) (2.4 GHz) and 3.25 GB of random access memory.

A. One-Dimensional Simulation

In the first simulation, a vehicle is assumed to be moving at a constant velocity on a straight road. It can be described as a time-invariant linear system. The states are the position and velocity of the vehicle. The observation of the system is the velocity, contaminated by a time-correlated error that is described as a first-order Gauss-Markov process. The discrete model of the system is defined in (1), where x = [x_1 \; x_2]^T; x_1 and x_2 are the position and velocity of the vehicle, respectively;

    \Phi_{k-1} = \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}, \quad
    \Gamma_k = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}, \quad
    E(w_k w_k^T) = Q_k = \begin{bmatrix} 0.01 & 0 \\ 0 & 1 \times 10^{-4} \end{bmatrix}, \quad
    H_k = [0 \; 1]

and z_k and v_k are the scalar measurement and its error, respectively. The continuous form of the measurement error is

    \dot{v}(t) = -\frac{1}{\tau} v(t) + \sigma \sqrt{\frac{2}{\tau}} \varsigma(t)    (22)

where v(t) is the scalar error at time t; \tau and \sigma are the time constant and the standard deviation of v(t), respectively; and \varsigma(t) is zero-mean Gaussian white noise with a standard deviation of 1. Equation (22) can be discretized as in [1]:

    v_k = \Psi_{k-1} v_{k-1} + \varsigma_{k-1}    (23)

where \Psi_{k-1} = e^{-\Delta t_{k-1}/\tau} and E(\varsigma_k \varsigma_l) = \sigma^2 (1 - \Psi_{k-1}^2) \delta_{kl}. The larger the time constant \tau, the more correlated the


measurement errors at different times. In particular, if \tau approaches infinity, the measurement errors become random constants that are fully time correlated; that is, v_k = v_{k-1}. On the other hand, if \tau is smaller, the measurement errors are less correlated and behave more like white noise. In the simulations, \sigma = 10 m/s, \tau = 10 s, \Delta t_k = \Delta T = 1 s, R_k = \sigma^2 (1 - \Psi_{k-1}^2), and T = 1024 s. The initial values of the state vector and its error covariance are

    \hat{x}_0(-) = [0 \; 1]^T \quad and \quad P_0(-) = \begin{bmatrix} 0.01 & 0 \\ 0 & 0.01 \end{bmatrix}.

\Delta T is the observation update period, and T is the simulation duration. All time-invariant parameters are calculated once at the beginning of the simulations. The simulation is run 102 times, and the estimated position errors are reported as circular error probability (CEP) (50%). Fig. 1(a) depicts the CEP errors of the four algorithms. The position errors of the perturbed-P algorithm in the 102 runs are depicted in different shades of grey and black in Fig. 1(b). Fig. 2(a) shows the operating times of the four algorithms and the standard KF over the runs, and Fig. 2(b) shows their average operating times in a histogram. The "operating time" here refers to the computing time to complete one run; there are in total 102 runs, with 1023 filtering epochs in each run.

The four algorithms have almost the same position accuracy, but the operating time of the perturbed-P algorithm is the lowest, about 32.5% less than that of the method of Bryson. The operating time of the standard KF should be the lowest, or at least no greater than that of the perturbed-P algorithm. In the MATLAB program, however, P_k(+) is implemented in the Joseph form (I - K_k H_k) P_k(-) (I - K_k H_k)^T + K_k R_k K_k^T instead of (I - K_k H_k) P_k(-) to keep P_k(+) symmetric. The computing time of K_k R_k K_k^T in the standard KF is much longer than that of adding the perturbation matrix \lambda I in the perturbed-P algorithm, even though both K_k and R_k are scalars in this simulation. Figs. 2(a) and (b) demonstrate that the perturbed-P algorithm has almost the same computing load as the standard KF, which is confirmed by the simulation results in Section IVB. The implication is that the perturbed-P algorithm is more practical; its most significant advantage over the other three algorithms is that it reduces the algorithm complexity without loss of convergence. The small fluctuations among the runs may be caused by the dynamic CPU speed.
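To make the singularity and its fix concrete, here is a sketch (ours, with invented numbers) of the augmented update (11) with and without the perturbation (19), in the special case where all augmented states are directly measured (H_k^a = I):

```python
import numpy as np

lam = 1e-6                                   # perturbation lambda from (19)
Phi_a = np.array([[1.0, 0.0], [0.0, 0.9]])   # augmented transition (state, error)
Q_a = np.diag([1e-4, 1e-3])                  # augmented process noise covariance
H_a = np.eye(2)                              # all augmented states measured

P = np.diag([0.01, 0.01])                    # P_k(-)
K = P @ H_a.T @ np.linalg.inv(H_a @ P @ H_a.T)        # gain in (11); here K = I

P_plain = (np.eye(2) - K @ H_a) @ P                   # update (11): the zero matrix
P_pert = (np.eye(2) - K @ H_a) @ P + lam * np.eye(2)  # update (19): lambda * I

# Next prediction (10): without the perturbation only Q_a survives;
# with it, P(-) = lam * Phi_a Phi_a^T + Q_a stays safely positive definite.
P_next = Phi_a @ P_pert @ Phi_a.T + Q_a
print(np.allclose(P_plain, 0), np.all(np.linalg.eigvalsh(P_next) > 0))  # -> True True
```

The unperturbed update collapses the covariance exactly as (12) predicts, while the \lambda I term keeps every subsequent prediction invertible.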

B. Six Degree-of-Freedom GPS/INS Simulation

All algorithms are applied in the integration of GPS and INS, with simulated time-correlated measurement errors. This is the case when the GPS position error is time correlated. The difference between the GPS and INS positions is used as the observation of the KF update step. The observation consists of components in three directions (east, north, and up), which are all contaminated by first-order Markov noise. The state and measurement equations in the local geographic frame are modeled as

    \dot{x} = F x + w
    z = H x + v    (24)

where x = [\Delta r \; \Delta v \; \varphi \; \eta \; \varepsilon]^T, comprising the position error \Delta r, the velocity error \Delta v, the attitude error \varphi, the accelerometer bias \eta, and the gyroscope bias \varepsilon. The biases \eta and \varepsilon are modeled as first-order Markov processes [12]. The details of (24) are given in Appendix C. The state equation is time varying because the matrix F is time varying. The time-correlated measurement error is modeled as

    \dot{v} = \Psi v + \zeta
    \Psi = -\frac{1}{\tau_{mea}} I_{3 \times 3}    (25)

where E[\zeta(t) \zeta^T(\tau)] = Q \delta(t - \tau), Q = \sigma_{mea}^2 I_{3 \times 3}, \tau_{mea} is the time constant of the GPS error, and \sigma_{mea} is the standard deviation of the GPS measurement noise. In the simulations, \tau_{mea} = 10000 s, \sigma_{mea} = 10 m, the standard deviations of the accelerometer noises in the three directions are 0.5 \times 10^{-4} g, the standard deviations of the gyroscope noises in the three directions are 0.5 deg/h, and \tau_{gyro} = \tau_{acc} = 10000 s. The vehicle trajectory is a level flight from east to west for 1024 s. The value of the correlated time constant \tau_{mea} is deliberately set as large as 10000 s to evaluate the effectiveness of the algorithms in the case of heavily correlated measurement errors.

Fig. 3(a) and (b) depict the level position errors from the standard KF, the method of Bryson, the method of Petovello, the Tikhonov KF algorithm, and the perturbed-P algorithm. The plots show that the standard KF has the lowest accuracy among the algorithms, while the Tikhonov KF and perturbed-P algorithms have accuracy similar to that of the methods of Bryson and Petovello. The time required for each algorithm is 93.6 ms (standard KF), 148.5 ms (Bryson), 169.8 ms (Petovello), 173.9 ms (Tikhonov KF), and 130.4 ms (perturbed-P). Therefore, the perturbed-P algorithm has the best overall performance in terms of accuracy and computation speed, consistent with the results in Section IVA.

Fig. 1. Estimated position errors with time. (a) CEP errors of the method of Bryson, the method of Petovello, the Tikhonov KF algorithm, and the perturbed-P algorithm. (b) Position errors of the perturbed-P algorithm in the 102 runs.

Fig. 2. Operating times of the standard KF, the method of Bryson, the method of Petovello, the Tikhonov KF algorithm, and the perturbed-P algorithm. (a) Operating times over the runs. (b) Average operating times in histogram form.

Fig. 3. Horizontal position errors with time for loosely coupled GPS/INS with observed positions. (a) Standard KF, method of Bryson, and perturbed-P algorithm. (b) Method of Petovello and Tikhonov KF algorithm.

V. CONCLUSION

Five algorithms, namely the standard KF, the method of Bryson, the method of Petovello, the Tikhonov KF algorithm, and the perturbed-P algorithm, have been compared as methods of solving the optimal state estimation problem when observations are contaminated by time-correlated errors. Although both the Bryson and the Petovello methods are theoretically optimal, each has its own defects. The method of Petovello may encounter a divergence problem in some situations because of the requirement to compute the inverse of \Phi. The method of Bryson has a 1-epoch latency. The Tikhonov KF algorithm solves the singularity problem of the augmented-state approach, but its computation time may be very large if a sophisticated method is applied to select the regularization parameter. By adding a small quantity to the diagonal elements of the matrix P, the perturbed-P algorithm overcomes the defects of the methods of Bryson and Petovello. The analyses show that the covariance of the perturbed-P algorithm converges to the perturbation quantity; therefore, the accuracy of the perturbed-P algorithm may be controlled within a reasonable range by appropriately selecting that quantity. In comparison with the methods of Bryson and Petovello, the perturbed-P algorithm requires fewer computing resources and is easier to keep convergent. Although the Tikhonov KF algorithm also solves the singularity problem encountered in the augmented system, the determination of its regularization parameter can be time consuming.

APPENDIX B. DERIVATION OF THE TOTAL LEAST SQUARES SOLUTION BASED ON THE TIKHONOV REGULARIZATION CRITERION [9]

APPENDIX A. DERIVATION OF METHOD OF BRYSON Time differencing of the measurements is performed to whiten the measurement errors:

Assume a linear ill-posed equation Au = y

z¤k = zk+1 ¡ ªk zk

where A is a n £ m ill-posed matrix and n > m, u is a n £ 1 vector, and y is a m £ 1 vector. The Tikhonov function for (32) is

= (Hk+1 ©k ¡ ªk Hk )xk + (Hk+1 ¡k wk + ³k ) = H¤k xk + v¤k

(26)

Efv¤k g = EfHk+1 ¡k wk + ³k g = 0

= (Hk+1 ¡k Qk ¡kT HTk+1 + Rk )±kj = R¤k ±kj :

The new measurement error v¤k is now whitened, but it is correlated with the process error wk . The correlated covariance is T Efwk v¤T j g = Efwk (Hj+1 ¡j wj + ³j ) g

(27)

Therefore, further treatment is needed to decorrelate the two errors: xk = ©k¡1 xk¡1 + ¡k¡1 wk¡1

= Efw¤k g

©¤k¡1 xk¡1

+ Jk¡1 z¤k¡1

+ w¤k¡1

(28)

= Q¤k ±kj :

(37)

where − denotes the Kronecker product of matrices. The necessary conditions for minimizing the Lagrange function are

¤ ¤T Efw¤k v¤T j g = Ef(¡k wk ¡ Jk vk )vj g

(29)

The following condition should be satisfied: (30)

@ T (u, », b) = 2(» ¡ ¸) = 0 @» ¯ @ T (u, », b) = 2[b + (u − I)¸] @b ¯

(38)

T

= 2vec(B + ¸u ) = 0

Substituting (30) into (28), Q¤k is derived as (31)

All equations in (4) correspond to (26)—(31). Now, the new process error w¤k and measurement error v¤k are zero mean and white and are independent of each other. The standard KF can be applied to obtain (5) and (6). 1678

Consider the linear system

\[
Au = y. \tag{32}
\]

Its Tikhonov-regularized solution minimizes the function

\[
T_\alpha(u) = \|Au - y\|^2 + \alpha\|Lu\|^2 \tag{33}
\]

where \(\alpha\) is a regularization parameter, \(L\) is a regularizing operator matrix, and \(\|\cdot\|\) is the 2-norm. The solution that minimizes the function is

\[
u = (A^T A + \alpha L^T L)^{-1} A^T y. \tag{34}
\]

If \(L = I\), the solution is determined by the regularization parameter \(\alpha\). An appropriate value of \(\alpha\) is important because its value determines the numerical stability of the regularization [11]. In this paper, the total least-squares approach based on the Lagrange method [13] is applied to determine the value of \(\alpha\) iteratively by assuming that (32) is randomly perturbed. The perturbed equation of (32) is

\[
(A - B)u = y - \xi \tag{35}
\]

where \(B\) and \(\xi\) are the perturbed matrix and vector, respectively. The Lagrange function with the Tikhonov regularization term is

\[
T_\beta(u, \xi, b) = \xi^T \xi + b^T b + \beta u^T u + 2\lambda^T (y - \xi - Au + Bu) \tag{36}
\]

where \(\lambda\) is the Lagrange multiplier vector and \(b\) is the column vector of the matrix \(B\). By denoting the column vector of a matrix as \(\mathrm{vec}(\cdot)\), the following identity holds:

\[
\mathrm{vec}(Bu) = (u^T \otimes I)\,\mathrm{vec}(B) = (u^T \otimes I)\,b \tag{37}
\]

where \(\otimes\) denotes the Kronecker product of matrices. The necessary conditions for minimizing the Lagrange function are

\[
\begin{aligned}
\frac{\partial T_\beta}{\partial \xi}(u, \xi, b) &= 2(\xi - \lambda) = 0 \\
\frac{\partial T_\beta}{\partial b}(u, \xi, b) &= 2[b + (u \otimes I)\lambda] = 2\,\mathrm{vec}(B + \lambda u^T) = 0 \\
\frac{\partial T_\beta}{\partial u}(u, \xi, b) &= 2(-A^T \lambda + B^T \lambda + \beta u) = 0.
\end{aligned}
\tag{38}
\]

The solution of the vector \(u\) from (38) is

\[
u = (A^T A + \alpha I)^{-1} A^T y, \qquad
\alpha = \beta(1 + u^T u) - \frac{(y - Au)^T (y - Au)}{1 + u^T u}. \tag{39}
\]
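Because \(u\) and \(\alpha\) in (39) are mutually coupled, they lend themselves to a fixed-point iteration: hold \(\beta\) fixed, solve for \(u\) with the current \(\alpha\), then refresh \(\alpha\). The sketch below illustrates this with a randomly generated system; the matrix sizes, \(\beta\), and the stopping tolerance are all assumptions for illustration.

```python
import numpy as np

# Illustrative overdetermined system A u = y with small noise.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
u_true = np.array([1.0, -2.0, 0.5])
y = A @ u_true + 0.01 * rng.standard_normal(20)

beta = 1e-3          # assumed fixed Lagrange/Tikhonov weight
alpha = beta         # initial guess for the regularization parameter

# Fixed-point iteration on (39): u depends on alpha, alpha depends on u.
for _ in range(100):
    u = np.linalg.solve(A.T @ A + alpha * np.eye(3), A.T @ y)
    r = y - A @ u
    alpha_new = beta * (1.0 + u @ u) - (r @ r) / (1.0 + u @ u)
    if abs(alpha_new - alpha) < 1e-12:
        break
    alpha = alpha_new

print(alpha)
```

On well-conditioned data the update to `alpha` depends only weakly on `u`, so the iteration settles in a few passes; for ill-posed problems a damped update or a cap on negative `alpha` values may be needed.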

IEEE TRANSACTIONS ON AEROSPACE AND ELECTRONIC SYSTEMS VOL. 48, NO. 2 APRIL 2012
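Once the decorrelated quantities \(\Phi^*\), \(J\), \(Q^*\), \(H^*\), and \(R^*\) are available, one filter cycle runs like a standard KF except that the prediction carries the deterministic \(J_{k-1} z_{k-1}^*\) term from (28). A minimal sketch of one cycle follows; every matrix and number here is an illustrative placeholder, not a value from the paper.

```python
import numpy as np

# Decorrelated model (all values illustrative):
Phis = np.array([[0.99, 0.10],
                 [0.00, 1.00]])   # Phi* = Phi - J H*
J    = np.array([[0.02],
                 [0.01]])         # decorrelation gain from (30)
Qs   = np.diag([1e-4, 1e-3])      # Q* from (31)
Hs   = np.array([[1.0, 0.05]])    # H* of the differenced measurement
Rs   = np.array([[0.01]])         # R*

x = np.array([[0.0], [0.0]])      # state estimate
P = np.eye(2)                     # state error covariance
z_prev, z_star = 0.3, 0.28        # z*_{k-1} and z*_k (illustrative)

# Prediction: note the extra J z*_{k-1} input introduced by (28).
x = Phis @ x + J * z_prev
P = Phis @ P @ Phis.T + Qs

# Update with the whitened measurement z*_k.
K = P @ Hs.T @ np.linalg.inv(Hs @ P @ Hs.T + Rs)
x = x + K @ (np.array([[z_star]]) - Hs @ x)
P = (np.eye(2) - K @ Hs) @ P
```

The differenced measurement \(z_k^* = z_{k+1} - \Psi_k z_k\) requires \(z_{k+1}\), which is the source of the one-epoch latency of the time-differencing approach noted in the paper.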

Equation (39) can be solved iteratively, e.g., by applying the procedure in Section IIIB.

APPENDIX C. DERIVATION OF THE MODEL FOR THE GPS/INS INTEGRATION

The equations of the KF for a loosely coupled GPS/INS system are composed of the position, velocity, and attitude equations and the error equations of the gyroscopes and the accelerometers. The linearized strapdown INS error equations in the local geographic frame (the east-north-up frame) and the gyroscope and accelerometer error equations are

\[
\begin{aligned}
\Delta\dot r &= F_1 \Delta r + F_2 \Delta v \\
\Delta\dot v &= F_3 \Delta r + F_4 \Delta v + F_5 \phi + F_6 \eta \\
\dot\phi &= F_7 \Delta r + F_8 \Delta v + F_9 \phi + F_{10}\varepsilon \\
\dot\eta &= F_{11}\eta + w_1 \\
\dot\varepsilon &= F_{12}\varepsilon + w_2
\end{aligned}
\tag{40}
\]

where \(\Delta x\) denotes the error of \(x\); \(\Delta r = [\Delta\varphi\ \Delta\lambda\ \Delta h]^T\), \(\Delta v = [\Delta v_e\ \Delta v_n\ \Delta v_u]^T\), \(\phi = [\phi_e\ \phi_n\ \phi_u]^T\), \(\eta = [\eta_x\ \eta_y\ \eta_z]^T\), \(\varepsilon = [\varepsilon_x\ \varepsilon_y\ \varepsilon_z]^T\), \(w_1 = [w_{1x}\ w_{1y}\ w_{1z}]^T\), and \(w_2 = [w_{2x}\ w_{2y}\ w_{2z}]^T\); \(\varphi\), \(\lambda\), and \(h\) are the latitude, the longitude, and the altitude of the vehicle, respectively; \(v_e\), \(v_n\), and \(v_u\) are the velocities of the vehicle in the east, north, and up directions, respectively; \(\phi_e\), \(\phi_n\), and \(\phi_u\) are the attitude errors of the vehicle in the east, north, and up directions, respectively; \(\eta_x\), \(\eta_y\), and \(\eta_z\) are the accelerometer errors in the body frame; \(\varepsilon_x\), \(\varepsilon_y\), and \(\varepsilon_z\) are the gyroscope errors in the body frame; \(w_1\) and \(w_2\) are the driving noises of the first-order Markov processes; and \(w_{ij}\) \((i = 1, 2;\ j = x, y, z)\) are zero-mean Gaussian white noises. The matrices \(F_i\) \((i = 1, \ldots, 12)\) are

\[
F_1 = \begin{bmatrix}
0 & 0 & -\dot\varphi/(R_M + h) \\
\dot\lambda\tan\varphi & 0 & -\dot\lambda/(R_N + h) \\
0 & 0 & 0
\end{bmatrix} \tag{41}
\]

\[
F_2 = \begin{bmatrix}
0 & 1/(R_M + h) & 0 \\
\sec\varphi/(R_N + h) & 0 & 0 \\
0 & 0 & 1
\end{bmatrix} \tag{42}
\]

\[
F_3 = \begin{bmatrix}
2\omega_{ie}(v_u\sin\varphi + v_n\cos\varphi) + v_n\dot\lambda\sec\varphi & 0 & 0 \\
-2\omega_{ie}v_e\cos\varphi - v_e\dot\lambda\sec\varphi & 0 & 0 \\
-2\omega_{ie}v_e\sin\varphi & 0 & 2g/R
\end{bmatrix} \tag{43}
\]

\[
F_4 = \begin{bmatrix}
\dfrac{\dot\varphi(R_M + h)\tan\varphi - \dot h}{R_N + h} & (2\omega_{ie} + \dot\lambda)\sin\varphi & -(2\omega_{ie} + \dot\lambda)\cos\varphi \\
-(2\omega_{ie} + \dot\lambda)\sin\varphi & -\dot h/(R_M + h) & -\dot\varphi \\
(2\omega_{ie} + \dot\lambda)\cos\varphi & 2\dot\varphi & 0
\end{bmatrix} \tag{44}
\]

\[
F_5 = -\mathrm{skewsymm}(f^b) \tag{45}
\]

\[
F_6 = F_{10} = C_b^n \tag{46}
\]

\[
F_7 = \begin{bmatrix}
0 & 0 & -\dot\varphi/(R_M + h) \\
\omega_{ie}\sin\varphi & 0 & \dot\lambda\cos\varphi/(R_N + h) \\
-\omega_{ie}\cos\varphi - \dot\lambda\sec\varphi & 0 & \dot\lambda\sin\varphi/(R_N + h)
\end{bmatrix} \tag{47}
\]

\[
F_8 = \begin{bmatrix}
0 & 1/(R_M + h) & 0 \\
-1/(R_N + h) & 0 & 0 \\
-\tan\varphi/(R_N + h) & 0 & 0
\end{bmatrix} \tag{48}
\]

\[
F_9 = \mathrm{skewsymm}([\dot\varphi\ \ -(\omega_{ie} + \dot\lambda)\cos\varphi\ \ -(\omega_{ie} + \dot\lambda)\sin\varphi]^T) \tag{49}
\]

\[
F_{11} = \mathrm{diag}([-1/\tau_{acc}\ \ -1/\tau_{acc}\ \ -1/\tau_{acc}]^T) \tag{50}
\]

\[
F_{12} = \mathrm{diag}([-1/\tau_{gyro}\ \ -1/\tau_{gyro}\ \ -1/\tau_{gyro}]^T) \tag{51}
\]

where \(R_M\), \(R_N\), and \(R\) are the Earth's meridian radius of curvature, prime-vertical radius of curvature, and mean radius, respectively; \(\omega_{ie} = [0\ \ \omega_e\cos\varphi\ \ \omega_e\sin\varphi]^T\); \(\omega_e\) is the angular rate of the Earth's rotation; \(g\) is the normal gravity; \(\dot\varphi = v_n/(R_M + h)\), \(\dot\lambda = v_e\sec\varphi/(R_N + h)\), and \(\dot h = v_u\); \(\mathrm{skewsymm}(\cdot)\) and \(\mathrm{diag}(\cdot)\) denote the skew-symmetric and diagonal matrices formed from a vector, respectively; \(f^b\) is the specific force in the body frame sensed by the accelerometers; and \(C_b^n\) is the rotation matrix from the body frame to the navigation frame (the local geographic frame). Therefore, \(F\), \(H\), and \(w\) in (24) can be derived as follows:

\[
F = \begin{bmatrix}
F_1 & F_2 & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} \\
F_3 & F_4 & F_5 & F_6 & 0_{3\times3} \\
F_7 & F_8 & F_9 & 0_{3\times3} & F_{10} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times3} & F_{11} & 0_{3\times3} \\
0_{3\times3} & 0_{3\times3} & 0_{3\times3} & 0_{3\times3} & F_{12}
\end{bmatrix} \tag{52}
\]

\[
H = [\,I_{3\times3}\ \ 0_{3\times12}\,] \tag{53}
\]

\[
w = [\,0_{1\times9}\ \ w_1^T\ \ w_2^T\,]^T. \tag{54}
\]
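The block structure of (52)-(53) maps directly onto code. The sketch below assembles the 15-state error model and discretizes it to first order; the specific force, attitude matrix, and Markov correlation times are assumed illustrative values, and the position/velocity/attitude blocks are left as zero placeholders that a real filter would evaluate from (41)-(49) at the current navigation solution.

```python
import numpy as np

def skewsymm(v):
    # skewsymm(a) @ b equals the cross product a x b
    x, y, z = v
    return np.array([[0.0,  -z,   y],
                     [  z, 0.0,  -x],
                     [ -y,   x, 0.0]])

fb = np.array([0.1, -0.2, 9.8])      # assumed specific force in the body frame [m/s^2]
Cbn = np.eye(3)                      # body-to-navigation rotation (assumed aligned)
tau_acc, tau_gyro = 1800.0, 3600.0   # assumed Markov correlation times [s]

Z = np.zeros((3, 3))
F5 = -skewsymm(fb)                   # (45)
F6 = F10 = Cbn                       # (46)
F11 = np.diag([-1.0 / tau_acc] * 3)  # (50)
F12 = np.diag([-1.0 / tau_gyro] * 3) # (51)
# Placeholders for (41)-(44) and (47)-(49), which depend on the nav solution.
F1 = F2 = F3 = F4 = F7 = F8 = F9 = Z

F = np.block([
    [F1, F2, Z,  Z,   Z  ],
    [F3, F4, F5, F6,  Z  ],
    [F7, F8, F9, Z,   F10],
    [Z,  Z,  Z,  F11, Z  ],
    [Z,  Z,  Z,  Z,   F12],
])                                            # (52): 15 x 15
H = np.hstack([np.eye(3), np.zeros((3, 12))]) # (53): position measurements only

dt = 0.1                                      # filter step [s] (assumed)
Phi = np.eye(15) + F * dt                     # first-order discretization
```

For short update intervals the first-order transition matrix `Phi` is usually adequate; higher-order terms of the matrix exponential can be added when `dt` is large relative to the system dynamics.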

REFERENCES

[1] Brown, R. G. and Hwang, P. Y. C. Introduction to Random Signals and Applied Kalman Filtering: With MATLAB Exercises and Solutions (3rd ed.). New York: John Wiley & Sons, 1997, pp. 94-96, 296-299.

WANG, ET AL.: KALMAN FILTERING WITH TIME-CORRELATED MEASUREMENT ERRORS

[2] Gelb, A. Applied Optimal Estimation. Cambridge, MA: MIT Press, 1974, pp. 133-136.

[3] Bryson, A. E., Jr. and Henrikson, L. J. Estimation using sampled data containing sequentially correlated noise. Journal of Spacecraft and Rockets, 5 (1968), 662-665.

[4] Popescu, D. C. and Zeljkovic, L. Kalman filtering of colored noise for speech enhancement. In Proceedings of the 1998 IEEE International Conference on Acoustics, Speech and Signal Processing, Seattle, WA, May 1998, 997-1000.

[5] Sun, S. L. and Deng, Z. L. Optimal fusion Kalman filter for systems with colored measurement noises. In Proceedings of the 5th World Congress on Intelligent Control and Automation, Hangzhou, China, June 2004, 1562-1566.

[6] Petovello, M. G., O'Keefe, K., Lachapelle, G., and Cannon, M. E. Consideration of time-correlated errors in a Kalman filter applicable to GNSS. Journal of Geodesy, 83 (2009), 51-56.

[7] Bierman, G. J. Factorization Methods for Discrete Sequential Estimation. New York: Academic Press, 1977, pp. 26-27.

[8] Grewal, M. S. and Andrews, A. P. Kalman Filtering: Theory and Practice Using MATLAB (3rd ed.). New York: John Wiley & Sons, 2008, pp. 427-508.

[9] Yuan, Z. C., Shen, Y. Z., and Zhou, Z. B. Regularized total least-squares solution to ill-posed error-in-variable model. Journal of Geodesy and Geodynamics, 29 (2009), 131-134.

[10] Renaut, R. A. and Guo, H. B. Efficient algorithms for solution of regularized total least squares. SIAM Journal on Matrix Analysis and Applications, 26 (2005), 457-476.

[11] Nair, M. T., Hegland, M., and Anderssen, R. S. The trade-off between regularity and stability in Tikhonov regularization. Mathematics of Computation, 66 (1997), 193-206.

[12] Nassar, S. Improving the inertial navigation system (INS) error model for INS and INS/DGPS applications. UCGE Report 20183, Ph.D. thesis, University of Calgary, Canada, 2003, pp. 83-98, 145-148.

[13] Schaffrin, B. and Felus, Y. A. An algorithmic approach to the total least-squares problem with linear and quadratic constraints. Studia Geophysica et Geodaetica, 53 (2009), 1-16.


Kedong Wang is an associate professor at the School of Astronautics, Beihang University, Beijing, China. He obtained a Ph.D. in precision instruments and machinery from Tsinghua University in 2003. From September 2008 to August 2009, he was a visiting fellow at the University of New South Wales. His research interests include GNSS/INS integration, terrain-aided navigation, INS, and optimal estimation and its applications.

Yong Li obtained a Ph.D. in flight dynamics in 1997. He is a senior research fellow at the School of Surveying & Spatial Information Systems, University of New South Wales (UNSW), Sydney, Australia. He worked on GPS aerospace applications at the Beijing Institute of Control Engineering from 1997 to 2000. From 2000 to 2002, he was a Science and Technology Agency fellow at the Japan Aerospace Exploration Agency (formerly the National Aerospace Laboratory of Japan). Before joining UNSW, he worked on GPS sports applications at the Royal Melbourne Institute of Technology under the Cooperative Research Centre for Micro Technology from 2002 to 2004. His current research interests include multisensor integration of GPS, INS, and pseudolites (Locata), attitude determination, GPS receiver technologies, and optimal estimation/filtering theory and applications.

Chris Rizos is currently head of the School of Surveying & Spatial Information Systems at UNSW, Sydney, Australia. He has been researching the technology and applications of GPS since 1985 and established the Satellite Navigation and Positioning group at UNSW more than a decade ago, today the largest academic GPS/global navigation satellite system (GNSS) and wireless location technology research laboratory in Australia. Rizos is the president of the International Association of Geodesy and a member of the Executive and Governing Board of the International GNSS Service. He is an Editor-in-Chief of the Journal of Applied Geodesy and a member of the editorial boards of several other navigation/GPS/GNSS journals and magazines. He is an author or coauthor of more than 500 journal and conference papers.
