The Optimality of a Class of Distributed Estimation Fusion Algorithm

Zhansheng Duan

X. Rong Li

Department of Electrical Engineering, University of New Orleans, New Orleans, LA 70148, U.S.A. Email: {zduan,xli}@uno.edu

Abstract: For the case in which the measurement noises across sensors at the same time may be correlated, a systematic way to handle the corresponding distributed estimation fusion problem in the sense of linear minimum mean-squared error (LMMSE) estimation is proposed in this paper, based on a unified data model for linear unbiased estimation. The optimality (equivalence to the optimal centralized estimation fusion) of the new optimal distributed estimation fusion algorithm is then analyzed. A necessary and sufficient condition of optimality for the general case and sufficient conditions for two special cases are given. Comparisons with existing distributed estimation fusion algorithms are also discussed.

(Research supported in part by NSFC grant 60602026, Project 863 through grant 2006AA01Z126, and the Navy through Planning Systems Contract # N68335-05-C-0382. The authors are also with the College of Electronic and Information Engineering, Xi'an Jiaotong University.)

Keywords: Estimation fusion, distributed fusion, centralized fusion, cross correlation, linear minimum mean-squared error.

I. INTRODUCTION

Estimation fusion, or data fusion for estimation, is the problem of how to best utilize the useful information contained in multiple sets of data for the purpose of estimating an unknown quantity, that is, a parameter or process (at a time) [1]. There are two basic estimation fusion architectures: centralized and decentralized/distributed (also referred to as measurement fusion and track fusion in target tracking, respectively), depending on whether the raw measurements are sent to the fusion center or not. In centralized fusion, all raw measurements are sent to the fusion center, while in distributed fusion each sensor sends in only processed data. With its reduced communication demands on the network, distributed fusion usually offers faster real-time processing and stronger fault tolerance. Centralized fusion, despite its heavy computational burden at the fusion center and poorer survivability, can always provide globally optimal fused estimates, provided that the processing ability of the fusion center and the communication bandwidth satisfy the requirements. This topic has been researched for many years and many results are available. For example, [2] proposed three types of centralized fusion methods: the parallel filter, the sequential filter and the data compression filter. [3] and [4] proposed a two-sensor track-to-track fusion algorithm which is optimal in the sense of maximum likelihood (ML) for the Gaussian case.

For more than two sensors, [5], [6] proposed a track-to-track fusion algorithm which is optimal in the sense of ML (for the Gaussian case) and weighted least squares (WLS). [7], [8] proposed a decentralized structure to reconstruct the optimal global estimate when the measurement noises across sensors are uncorrelated. [9] proposed a decentralized structure to reconstruct the optimal global estimate when the measurement noises across sensors are correlated. [1] proposed unified fusion rules in the sense of the best linear unbiased estimate (BLUE) and WLS for all fusion architectures with arbitrary correlation of local estimates or observation errors across sensors or across time. [10] proposed a state estimation fusion algorithm which is optimal in the sense of maximum a posteriori (MAP) estimation.

Theoretically speaking, centralized fusion is nothing but an estimation problem with data from multiple sources. Distributed fusion is usually more challenging in terms of performance, channel capacity, reliability, survivability, information sharing, etc. Most existing distributed fusion algorithms need to account for either the correlation among local estimation errors (due to the common process noise, the common prior, correlation among measurement noises across sensors, etc.), or the correlation between the estimatee and the local estimation errors, or both. The calculation of these two classes of correlation is usually very hard, and in some cases extra transmission would be necessary, which actually violates the original motivation for resorting to distributed fusion. Some other distributed fusion algorithms were obtained by ingenious manipulation of the centralized fusion algorithm, replacing the raw measurement information with the local estimates. This approach is not systematic and is not easily extendable to more general cases.

In this paper, we assume that the measurement noises across sensors at the same time are generally correlated, which is more general than the assumption in most existing work, where uncorrelatedness is widely assumed. In practice, as pointed out in [1], [11], [12], the measurement noises across sensors are often correlated, possibly due to the following facts: the measurement noises across sensors may depend on the common estimatee (the quantity to be estimated); all the sensors observe in the same noisy environment (such as noise jamming generated by a target or atmospheric disturbances); coordinate transformations; and so on.


So it is more general to assume that the measurement noises can be correlated across sensors, and this certainly increases the difficulty of obtaining optimal distributed estimation fusion.

In this paper, for the case in which the measurement noises across sensors at the same time can be correlated, a systematic way to handle the corresponding distributed fusion problem is proposed based on a unified data model for linear unbiased estimation. The optimality (equivalence to centralized estimation fusion) of the new algorithm is then analyzed. A necessary and sufficient condition of optimality for the general case and sufficient conditions of optimality for two special cases are given.

The paper is organized as follows. Sec. II formulates the multisensor estimation fusion problem for dynamic systems. Sec. III describes the local estimates and their unified linear data model. Secs. IV and V present the distributed fusion algorithm based on the local unified linear data model and analyze its optimality, respectively. Sec. VI provides conclusions and future work.

II. PROBLEM FORMULATION

Consider the following dynamic system

x_k = F_{k-1} x_{k-1} + G_{k-1} w_{k-1}

where x_k \in R^n, E[w_k] = 0_{n_w \times 1}, and

cov[w_k, w_j] = Q_k \delta_{kj}, \quad Q_k \ge 0
cov[x_0, w_k] = 0_{n \times n_w}
E[x_0] = \bar{x}_0, \quad cov[x_0] = P_0

Assume that altogether N sensors are used to observe the system state at the same time:

z_k^{(i)} = H_k^{(i)} x_k + v_k^{(i)}, \quad i = 1, 2, \ldots, N    (1)

where z_k^{(i)} \in R^{m_i}, E[v_k^{(i)}] = 0_{m_i \times 1}, and

cov[v_k^{(i)}, v_j^{(i)}] = R_k^{(i)} \delta_{kj}, \quad R_k^{(i)} > 0
cov[w_j, v_k^{(i)}] = 0_{n_w \times m_i}, \quad cov[x_0, v_k^{(i)}] = 0_{n \times m_i}
cov[v_k^{(i)}, v_l^{(j)}] = 0_{m_i \times m_j}, \quad i \ne j, \; k \ne l
cov[v_k^{(i)}, v_k^{(j)}] = R_k^{(i,j)}, \quad i \ne j

Let

z_k = [ (z_k^{(1)})^T \; (z_k^{(2)})^T \; \cdots \; (z_k^{(N)})^T ]^T
H_k = [ (H_k^{(1)})^T \; (H_k^{(2)})^T \; \cdots \; (H_k^{(N)})^T ]^T
v_k = [ (v_k^{(1)})^T \; (v_k^{(2)})^T \; \cdots \; (v_k^{(N)})^T ]^T

Then the stacked measurement equation at the fusion center w.r.t. all N sensors can be written as

z_k = H_k x_k + v_k    (2)

It can be easily seen that

E[v_k] = 0_{l \times 1}, \quad l = \sum_{i=1}^{N} m_i
cov[v_k, v_j] = R_k \delta_{kj}

where

R_k = cov[v_k] = \begin{bmatrix} R_k^{(1)} & R_k^{(1,2)} & \cdots & R_k^{(1,N)} \\ R_k^{(2,1)} & R_k^{(2)} & \cdots & R_k^{(2,N)} \\ \vdots & \vdots & \ddots & \vdots \\ R_k^{(N,1)} & R_k^{(N,2)} & \cdots & R_k^{(N)} \end{bmatrix}    (3)

We also assume that R_k > 0.

Remark: Since it is assumed that R_k > 0, it follows easily that R_k^{(i)} > 0, i = 1, 2, \ldots, N, as well.

In the sense of BLUE, the optimal centralized fusion using the raw measurements from all N sensors can be described as follows:

\hat{x}^c_{k|k-1} = F_{k-1} \hat{x}^c_{k-1|k-1}    (4)
P^c_{k|k-1} = F_{k-1} P^c_{k-1|k-1} F_{k-1}^T + G_{k-1} Q_{k-1} G_{k-1}^T    (5)
\hat{x}^c_{k|k} = \hat{x}^c_{k|k-1} + K^c_k ( z_k - H_k \hat{x}^c_{k|k-1} )    (6)
K^c_k = P^c_{k|k-1} H_k^T S_k^{-1}
P^c_{k|k} = P^c_{k|k-1} - P^c_{k|k-1} H_k^T S_k^{-1} H_k P^c_{k|k-1}    (7)

where

S_k = H_k P^c_{k|k-1} H_k^T + R_k

In optimal distributed estimation fusion, the fusion center tries to construct the optimal estimate of the system state from the locally processed data received from each sensor. Here by optimality we mean equivalence to centralized estimation fusion.

Remark: In this paper, by distributed estimation fusion, we mean that only processed observations are available at the fusion center, not necessarily the local estimates from each sensor. Systems with only local estimates available at the fusion center are referred to as standard distributed estimation fusion in [1].
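To make the centralized recursion concrete, the following is a minimal NumPy sketch of one step of the centralized BLUE fusion, Eqs. (4)-(7). It is an illustrative sketch under the stated model assumptions, not an implementation from the paper; all function and variable names are chosen here for readability.

import numpy as np

def centralized_fusion_step(x_prev, P_prev, F, G, Q, z_list, H_list, R_blocks):
    """One step of the centralized BLUE fusion recursion, Eqs. (4)-(7).

    z_list, H_list : per-sensor measurements z_k^(i) and matrices H_k^(i)
    R_blocks       : N x N nested list of noise covariance blocks, with
                     R_k^(i) on the diagonal and R_k^(i,j) off the diagonal
    """
    # Prediction, Eqs. (4)-(5)
    x_pred = F @ x_prev
    P_pred = F @ P_prev @ F.T + G @ Q @ G.T

    # Stack the raw sensor data as in Eqs. (2)-(3)
    z = np.concatenate(z_list)
    H = np.vstack(H_list)
    R = np.block(R_blocks)

    # Update, Eqs. (6)-(7)
    S = H @ P_pred @ H.T + R
    S_inv = np.linalg.inv(S)
    x_upd = x_pred + P_pred @ H.T @ S_inv @ (z - H @ x_pred)
    P_upd = P_pred - P_pred @ H.T @ S_inv @ H @ P_pred
    return x_upd, P_upd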

III. LOCAL ESTIMATES AND UNIFIED LINEAR DATA MODEL

Let the local estimate of sensor i at time instant k-1 be \hat{x}^{(i)}_{k-1|k-1} with the corresponding covariance matrix P^{(i)}_{k-1|k-1}. Then the local BLUE estimate of sensor i at time instant k can be computed recursively as

\hat{x}^{(i)}_{k|k-1} = F_{k-1} \hat{x}^{(i)}_{k-1|k-1}
P^{(i)}_{k|k-1} = F_{k-1} P^{(i)}_{k-1|k-1} F_{k-1}^T + G_{k-1} Q_{k-1} G_{k-1}^T
\hat{x}^{(i)}_{k|k} = \hat{x}^{(i)}_{k|k-1} + K^{(i)}_k ( z^{(i)}_k - H^{(i)}_k \hat{x}^{(i)}_{k|k-1} )    (8)
K^{(i)}_k = P^{(i)}_{k|k-1} (H^{(i)}_k)^T ( S^{(i)}_k )^{-1}
P^{(i)}_{k|k} = ( I - K^{(i)}_k H^{(i)}_k ) P^{(i)}_{k|k-1}    (9)
S^{(i)}_k = H^{(i)}_k P^{(i)}_{k|k-1} (H^{(i)}_k)^T + R^{(i)}_k

Eq. (8) can be rewritten as

\hat{x}^{(i)}_{k|k} = ( I - K^{(i)}_k H^{(i)}_k ) \hat{x}^{(i)}_{k|k-1} + K^{(i)}_k z^{(i)}_k
              = ( I - K^{(i)}_k H^{(i)}_k ) \hat{x}^{(i)}_{k|k-1} + K^{(i)}_k ( H^{(i)}_k x_k + v^{(i)}_k )
              = ( I - K^{(i)}_k H^{(i)}_k ) \hat{x}^{(i)}_{k|k-1} + K^{(i)}_k H^{(i)}_k x_k + K^{(i)}_k v^{(i)}_k

Let

\bar{z}^{(i)}_k = \hat{x}^{(i)}_{k|k} - ( I - K^{(i)}_k H^{(i)}_k ) \hat{x}^{(i)}_{k|k-1}    (10)
\bar{H}^{(i)}_k = K^{(i)}_k H^{(i)}_k    (11)
\bar{v}^{(i)}_k = K^{(i)}_k v^{(i)}_k

Then the unified linear data model for sensor i at the fusion center is

\bar{z}^{(i)}_k = \bar{H}^{(i)}_k x_k + \bar{v}^{(i)}_k    (12)

where

E[\bar{v}^{(i)}_k] = 0
\bar{R}^{(i)}_k = cov[\bar{v}^{(i)}_k] = K^{(i)}_k R^{(i)}_k (K^{(i)}_k)^T    (13)
\bar{R}^{(i,j)}_k = cov[\bar{v}^{(i)}_k, \bar{v}^{(j)}_k] = K^{(i)}_k R^{(i,j)}_k (K^{(j)}_k)^T, \quad i \ne j    (14)

It can be seen that Eq. (12) has a form similar to the original measurement equation (1). Also, from [1] we know that any local estimate can be written in a similar form. That is why Eq. (12) is said to be a unified linear data model.

If we further assume that P^{(i)}_{k|k-1} > 0, it follows from Eq. (9) that

I - K^{(i)}_k H^{(i)}_k = P^{(i)}_{k|k} ( P^{(i)}_{k|k-1} )^{-1}
K^{(i)}_k H^{(i)}_k = I - P^{(i)}_{k|k} ( P^{(i)}_{k|k-1} )^{-1}

Thus \bar{z}^{(i)}_k in Eq. (10) and \bar{H}^{(i)}_k in Eq. (11) can be equivalently calculated by

\bar{z}^{(i)}_k = \hat{x}^{(i)}_{k|k} - P^{(i)}_{k|k} ( P^{(i)}_{k|k-1} )^{-1} \hat{x}^{(i)}_{k|k-1}
\bar{H}^{(i)}_k = I - P^{(i)}_{k|k} ( P^{(i)}_{k|k-1} )^{-1}

From the matrix inversion lemma [13], it follows that

( P^{(i)}_{k|k} )^{-1} = ( P^{(i)}_{k|k-1} )^{-1} + (H^{(i)}_k)^T ( R^{(i)}_k )^{-1} H^{(i)}_k > 0    (15)

due to the assumption that P^{(i)}_{k|k-1} > 0. Then from the theory of the Kalman filter, we have

K^{(i)}_k = P^{(i)}_{k|k} (H^{(i)}_k)^T ( R^{(i)}_k )^{-1}    (16)
(H^{(i)}_k)^T ( R^{(i)}_k )^{-1} H^{(i)}_k = ( P^{(i)}_{k|k} )^{-1} - ( P^{(i)}_{k|k-1} )^{-1}

Thus \bar{R}^{(i)}_k can be equivalently calculated by

\bar{R}^{(i)}_k = P^{(i)}_{k|k} (H^{(i)}_k)^T ( R^{(i)}_k )^{-1} R^{(i)}_k ( R^{(i)}_k )^{-1} H^{(i)}_k P^{(i)}_{k|k}
            = P^{(i)}_{k|k} (H^{(i)}_k)^T ( R^{(i)}_k )^{-1} H^{(i)}_k P^{(i)}_{k|k}
            = P^{(i)}_{k|k} [ ( P^{(i)}_{k|k} )^{-1} - ( P^{(i)}_{k|k-1} )^{-1} ] P^{(i)}_{k|k}
            = P^{(i)}_{k|k} - P^{(i)}_{k|k} ( P^{(i)}_{k|k-1} )^{-1} P^{(i)}_{k|k}    (17)

Remark: We can see that, in general, each local sensor needs to transmit { \bar{z}^{(i)}_k, \bar{H}^{(i)}_k, \bar{R}^{(i)}_k, K^{(i)}_k } to the fusion center in order to obtain the optimal fused estimate. But if P^{(i)}_{k|k-1} > 0, we can instead choose to transmit { \hat{x}^{(i)}_{k|k-1}, P^{(i)}_{k|k-1}, \hat{x}^{(i)}_{k|k}, P^{(i)}_{k|k}, K^{(i)}_k } from each local sensor to the fusion center, because in this way the fusion center can not only obtain the optimal fused estimate but also have the local estimates available for other purposes (e.g., comparison). In both cases, R^{(i,j)}_k, i, j = 1, 2, \ldots, N, i \ne j, are supposed to be available at the fusion center in advance. It can easily be seen that the second case requires the transmission of one additional n x 1 variable compared with the first case.

Remark: If the measurement noise v^{(i)}_k of sensor i is uncorrelated with that of any other sensor, then from Eq. (14) we have \bar{R}^{(i,j)}_k = \bar{R}^{(j,i)}_k = 0_{n \times n}, j \ne i. That is, in this case there is no need to transmit K^{(i)}_k to the fusion center or to store R^{(i,j)}_k and R^{(j,i)}_k at the fusion center.
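The local processing described above can be summarized in a short sketch. The code below is a hypothetical illustration (names are ours, not from the paper): it runs one local Kalman filter step at sensor i and returns both transmission options discussed in the remarks, i.e., the unified-data-model quantities of Eqs. (10)-(13) as well as the local estimates themselves.

import numpy as np

def local_sensor_step(x_prev, P_prev, F, G, Q, z_i, H_i, R_i):
    """Local BLUE (Kalman filter) step of sensor i, Eqs. (8)-(9), and the
    unified linear data model quantities of Eqs. (10)-(13)."""
    n = x_prev.shape[0]

    # Local prediction
    x_pred = F @ x_prev
    P_pred = F @ P_prev @ F.T + G @ Q @ G.T

    # Local update, Eqs. (8)-(9)
    S_i = H_i @ P_pred @ H_i.T + R_i
    K_i = P_pred @ H_i.T @ np.linalg.inv(S_i)
    x_upd = x_pred + K_i @ (z_i - H_i @ x_pred)
    P_upd = (np.eye(n) - K_i @ H_i) @ P_pred

    # Quantities of the unified linear data model sent to the fusion center
    z_bar = x_upd - (np.eye(n) - K_i @ H_i) @ x_pred   # Eq. (10)
    H_bar = K_i @ H_i                                  # Eq. (11)
    R_bar = K_i @ R_i @ K_i.T                          # Eq. (13)

    # Either {z_bar, H_bar, R_bar, K_i} or {x_pred, P_pred, x_upd, P_upd, K_i}
    # can be transmitted, as discussed in the remarks above
    return x_upd, P_upd, z_bar, H_bar, R_bar, K_i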

IV. DISTRIBUTED BLUE FUSION

Let

\bar{z}_k = [ (\bar{z}^{(1)}_k)^T \; (\bar{z}^{(2)}_k)^T \; \cdots \; (\bar{z}^{(N)}_k)^T ]^T
\bar{H}_k = [ (\bar{H}^{(1)}_k)^T \; (\bar{H}^{(2)}_k)^T \; \cdots \; (\bar{H}^{(N)}_k)^T ]^T    (18)
\bar{v}_k = [ (\bar{v}^{(1)}_k)^T \; (\bar{v}^{(2)}_k)^T \; \cdots \; (\bar{v}^{(N)}_k)^T ]^T

Then the stacked unified linear data model at the fusion center w.r.t. all N sensors can be written as

\bar{z}_k = \bar{H}_k x_k + \bar{v}_k

with

E[\bar{v}_k] = 0

\bar{R}_k = cov[\bar{v}_k] = \begin{bmatrix} \bar{R}^{(1)}_k & \bar{R}^{(1,2)}_k & \cdots & \bar{R}^{(1,N)}_k \\ \bar{R}^{(2,1)}_k & \bar{R}^{(2)}_k & \cdots & \bar{R}^{(2,N)}_k \\ \vdots & \vdots & \ddots & \vdots \\ \bar{R}^{(N,1)}_k & \bar{R}^{(N,2)}_k & \cdots & \bar{R}^{(N)}_k \end{bmatrix}    (19)

Correspondingly, the optimal distributed fused BLUE estimate of the system state at time instant k can be recursively computed as

\hat{x}^d_{k|k-1} = F_{k-1} \hat{x}^d_{k-1|k-1}    (20)
P^d_{k|k-1} = F_{k-1} P^d_{k-1|k-1} F_{k-1}^T + G_{k-1} Q_{k-1} G_{k-1}^T    (21)
\hat{x}^d_{k|k} = \hat{x}^d_{k|k-1} + K^d_k ( \bar{z}_k - \bar{H}_k \hat{x}^d_{k|k-1} )    (22)
K^d_k = P^d_{k|k-1} \bar{H}_k^T \bar{S}_k^+
P^d_{k|k} = P^d_{k|k-1} - P^d_{k|k-1} \bar{H}_k^T \bar{S}_k^+ \bar{H}_k P^d_{k|k-1}    (23)
\bar{S}_k = \bar{H}_k P^d_{k|k-1} \bar{H}_k^T + \bar{R}_k

where A^+ stands for the unique Moore-Penrose pseudoinverse (MP inverse for short) of A.
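Below is a hedged NumPy sketch of one step of the distributed recursion (20)-(23), assuming the fusion center has received {z_bar, H_bar, K} from every sensor and knows the cross-covariances R^(i,j) a priori; the function and variable names are ours, and the Moore-Penrose pseudoinverse plays the role of S_bar^+.

import numpy as np

def distributed_fusion_step(x_prev, P_prev, F, G, Q, z_bars, H_bars, K_list, R_blocks):
    """One step of the distributed BLUE fusion recursion, Eqs. (20)-(23).

    z_bars, H_bars : per-sensor z_bar_k^(i) and H_bar_k^(i) from Eqs. (10)-(11)
    K_list         : local gains K_k^(i), used to build R_bar via Eqs. (13)-(14)
    R_blocks       : N x N nested list of the a priori known blocks R_k^(i,j)
    """
    N = len(z_bars)

    # Prediction, Eqs. (20)-(21)
    x_pred = F @ x_prev
    P_pred = F @ P_prev @ F.T + G @ Q @ G.T

    # Stack the unified linear data models, Eqs. (18)-(19)
    z_bar = np.concatenate(z_bars)
    H_bar = np.vstack(H_bars)
    R_bar = np.block([[K_list[i] @ R_blocks[i][j] @ K_list[j].T
                       for j in range(N)] for i in range(N)])

    # Update, Eqs. (22)-(23), with the Moore-Penrose pseudoinverse of S_bar
    S_bar = H_bar @ P_pred @ H_bar.T + R_bar
    S_pinv = np.linalg.pinv(S_bar)
    x_upd = x_pred + P_pred @ H_bar.T @ S_pinv @ (z_bar - H_bar @ x_pred)
    P_upd = P_pred - P_pred @ H_bar.T @ S_pinv @ H_bar @ P_pred
    return x_upd, P_upd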

V. OPTIMALITY ANALYSIS

The optimality condition for general distributed BLUE fusion was given in [14]. In the following, the optimality (equivalence to the optimal centralized estimation fusion) of the optimal distributed estimation fusion algorithm in Section IV is discussed in detail by following the same ideas as in [14]. Note that two LMMSE estimators of the same estimatee (quantity to be estimated) using the same set of data are almost surely identical if and only if their MSE matrices are equal [13], [14]. Then the following necessary and sufficient condition holds.

Theorem 1: Given \hat{x}^c_{k-1|k-1} = \hat{x}^d_{k-1|k-1} and P^c_{k-1|k-1} = P^d_{k-1|k-1}, the optimal distributed and optimal centralized estimation fusion algorithms are identical, that is, \hat{x}^c_{k|k} = \hat{x}^d_{k|k} and P^c_{k|k} = P^d_{k|k}, if and only if

P^c_{k|k-1} H_k^T S_k^{-1} H_k P^c_{k|k-1} = P^c_{k|k-1} \bar{H}_k^T \bar{S}_k^+ \bar{H}_k P^c_{k|k-1}    (24)

Proof: Since \hat{x}^c_{k-1|k-1} = \hat{x}^d_{k-1|k-1}, it follows from Eqs. (4) and (20) that

\hat{x}^c_{k|k-1} = \hat{x}^d_{k|k-1}

Also, since P^c_{k-1|k-1} = P^d_{k-1|k-1}, it follows from Eqs. (5) and (21) that

P^c_{k|k-1} = P^d_{k|k-1}

This means that the a priori first two moments of the system state at time instant k for centralized estimation fusion, Eq. (6), and for distributed estimation fusion, Eq. (22), are exactly the same in the sense of LMMSE. It then follows from the above almost sure uniqueness of LMMSE estimators that \hat{x}^c_{k|k} = \hat{x}^d_{k|k} if and only if

P^c_{k|k} = P^d_{k|k}

Since P^c_{k|k-1} = P^d_{k|k-1} as shown above, it follows from Eqs. (23) and (7) that P^c_{k|k} = P^d_{k|k} if and only if

P^c_{k|k-1} H_k^T S_k^{-1} H_k P^c_{k|k-1} = P^c_{k|k-1} \bar{H}_k^T \bar{S}_k^+ \bar{H}_k P^c_{k|k-1}

This completes the proof.
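Theorem 1 can be illustrated numerically. The sketch below is a self-contained toy example with assumed numbers (not from the paper): it builds a two-sensor system with cross-correlated measurement noises and full-row-rank H^(i) (the setting of Theorem 2 below), runs one centralized and one distributed update from the same prior, and checks that the two sides of condition (24), and hence the fused estimates and covariances, agree to numerical precision.

import numpy as np

rng = np.random.default_rng(0)
n = 4                                   # state: 2-D position and velocity

# Common prior at the fusion center and at both sensors
x_prior = np.zeros(n)
P_prior = np.eye(n)

# Sensor models: position-only measurements, so H1 and H2 have full row rank
H1 = np.array([[1., 0., 0., 0.],
               [0., 1., 0., 0.]])
H2 = np.array([[1., 0., 0., 0.],
               [0., 1., 0., 0.]])
R1 = 0.5 * np.eye(2)
R2 = 0.4 * np.eye(2)
R12 = 0.1 * np.eye(2)                   # cross-correlation R_k^(1,2)
R = np.block([[R1, R12], [R12.T, R2]])  # positive definite by construction

z1 = rng.standard_normal(2)             # arbitrary measurement values
z2 = rng.standard_normal(2)

# Centralized update, Eqs. (6)-(7)
H = np.vstack([H1, H2])
z = np.concatenate([z1, z2])
S = H @ P_prior @ H.T + R
Si = np.linalg.inv(S)
xc = x_prior + P_prior @ H.T @ Si @ (z - H @ x_prior)
Pc = P_prior - P_prior @ H.T @ Si @ H @ P_prior

# Local filters and unified data models, Eqs. (8)-(14)
def local(zi, Hi, Ri):
    Ki = P_prior @ Hi.T @ np.linalg.inv(Hi @ P_prior @ Hi.T + Ri)
    xi = x_prior + Ki @ (zi - Hi @ x_prior)
    z_bar = xi - (np.eye(n) - Ki @ Hi) @ x_prior       # Eq. (10)
    return z_bar, Ki @ Hi, Ki                          # Eqs. (10)-(11)

zb1, Hb1, K1 = local(z1, H1, R1)
zb2, Hb2, K2 = local(z2, H2, R2)

# Distributed update, Eqs. (22)-(23)
Hb = np.vstack([Hb1, Hb2])
zb = np.concatenate([zb1, zb2])
Rb = np.block([[K1 @ R1 @ K1.T, K1 @ R12 @ K2.T],
               [K2 @ R12.T @ K1.T, K2 @ R2 @ K2.T]])   # Eqs. (13)-(14)
Sb_pinv = np.linalg.pinv(Hb @ P_prior @ Hb.T + Rb)
xd = x_prior + P_prior @ Hb.T @ Sb_pinv @ (zb - Hb @ x_prior)
Pd = P_prior - P_prior @ Hb.T @ Sb_pinv @ Hb @ P_prior

# Condition (24) and the resulting equivalence
lhs = P_prior @ H.T @ Si @ H @ P_prior
rhs = P_prior @ Hb.T @ Sb_pinv @ Hb @ P_prior
print(np.allclose(lhs, rhs), np.allclose(xc, xd), np.allclose(Pc, Pd))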

In the following, we provide two sufficient conditions for the equivalence between the optimal distributed and optimal centralized estimation fusion algorithms.

Theorem 2: If P^{(i)}_{k|k-1} > 0, i = 1, 2, \ldots, N, given \hat{x}^c_{k-1|k-1} = \hat{x}^d_{k-1|k-1} and P^c_{k-1|k-1} = P^d_{k-1|k-1}, the optimal distributed and optimal centralized estimation fusion algorithms are identical if H^{(i)}_k, i = 1, 2, \ldots, N, are all of full row rank.

Proof: Substituting Eqs. (13), (14) and (3) into Eq. (19), we have

\bar{R}_k = \breve{K}_k R_k \breve{K}_k^T    (25)

where

\breve{K}_k = diag\{ K^{(1)}_k, K^{(2)}_k, \ldots, K^{(N)}_k \}    (26)

From Eqs. (9) and (11), we have

\bar{H}^{(i)}_k = K^{(i)}_k H^{(i)}_k

Thus Eq. (18) can be rewritten as

\bar{H}_k = \breve{K}_k H_k    (27)

Substituting Eqs. (25) and (27) into Eq. (24), we have

P^c_{k|k-1} \bar{H}_k^T \bar{S}_k^+ \bar{H}_k P^c_{k|k-1} = P^c_{k|k-1} H_k^T \breve{K}_k^T ( \breve{K}_k S_k \breve{K}_k^T )^+ \breve{K}_k H_k P^c_{k|k-1}    (28)

Substituting Eq. (16) into Eq. (26), we have

\breve{K}_k = \breve{P}_{k|k} \breve{H}_k^T \breve{R}_k^{-1}    (29)

where

\breve{P}_{k|k} = diag\{ P^{(1)}_{k|k}, P^{(2)}_{k|k}, \ldots, P^{(N)}_{k|k} \}
\breve{H}_k = diag\{ H^{(1)}_k, H^{(2)}_k, \ldots, H^{(N)}_k \}    (30)
\breve{R}_k = diag\{ R^{(1)}_k, R^{(2)}_k, \ldots, R^{(N)}_k \}

If H^{(i)}_k, i = 1, 2, \ldots, N, are all of full row rank, then from Eq. (30) we can easily see that \breve{H}_k^T is of full column rank. Also, since it is assumed that P^{(i)}_{k|k-1} > 0, i = 1, 2, \ldots, N, it follows from Eq. (15) that \breve{P}_{k|k} > 0. Furthermore, from Eq. (29) we can see that \breve{K}_k is then also of full column rank. Then from the property of the MP inverse that

( U A V )^+ = V^+ A^{-1} U^+

if U is of full column rank, V is of full row rank and A is nonsingular, we have

P^c_{k|k-1} \bar{H}_k^T \bar{S}_k^+ \bar{H}_k P^c_{k|k-1} = P^c_{k|k-1} H_k^T \breve{K}_k^T ( \breve{K}_k^T )^+ S_k^{-1} \breve{K}_k^+ \breve{K}_k H_k P^c_{k|k-1}
                                                  = P^c_{k|k-1} H_k^T S_k^{-1} H_k P^c_{k|k-1}

where use has been made of the properties of the MP inverse that U^+ U = I and V V^+ = I if U is of full column rank and V is of full row rank. That is, the sufficient condition given in Eq. (24) is satisfied if H^{(i)}_k, i = 1, 2, \ldots, N, are all of full row rank. This completes the proof.
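The key tool in the proof above is the Moore-Penrose identity (UAV)^+ = V^+ A^{-1} U^+ for U of full column rank, A nonsingular and V of full row rank, together with U^+ U = I and V V^+ = I. A quick numerical sanity check of these generic properties, with arbitrarily chosen matrices unrelated to the paper, can be done as follows.

import numpy as np

rng = np.random.default_rng(1)
U = rng.standard_normal((5, 3))                    # full column rank (almost surely)
A = rng.standard_normal((3, 3)) + 3 * np.eye(3)    # nonsingular (almost surely)
V = rng.standard_normal((3, 5))                    # full row rank (almost surely)

lhs = np.linalg.pinv(U @ A @ V)
rhs = np.linalg.pinv(V) @ np.linalg.inv(A) @ np.linalg.pinv(U)

print(np.allclose(lhs, rhs))                           # (U A V)^+ = V^+ A^-1 U^+
print(np.allclose(np.linalg.pinv(U) @ U, np.eye(3)))   # U^+ U = I
print(np.allclose(V @ np.linalg.pinv(V), np.eye(3)))   # V V^+ = I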

Remark: Actually, the sufficient condition of Theorem 2, namely that H^{(i)}_k, i = 1, 2, \ldots, N, are all of full row rank, is not hard to satisfy in practical applications. For example, in most target tracking scenarios the sensors (radar, sonar, camera, etc.) can only measure the relative position (e.g., range and angles) and velocity (e.g., range rate) of the target, while the target motion is modeled in terms of position, velocity and even acceleration components. This is also a critical assumption in [9], which handled the same problem as that of Theorem 2.

Remark: In general, the optimal distributed estimation fusion algorithm in [9] requires each local sensor to transmit { \hat{x}^{(i)}_{k|k-1}, P^{(i)}_{k|k-1}, \hat{x}^{(i)}_{k|k}, P^{(i)}_{k|k}, H^{(i)}_k } to the fusion center if R^{(i)}_k is available at the fusion center in advance. If each local sensor transmits { \hat{x}^{(i)}_{k|k-1}, P^{(i)}_{k|k-1}, \hat{x}^{(i)}_{k|k}, P^{(i)}_{k|k}, K^{(i)}_k } to the fusion center, then our proposed optimal distributed estimation fusion algorithm has almost the same transmission requirements for each local sensor (H^{(i)}_k and K^{(i)}_k cost the same amount of communication), except that R^{(i)}_k does not need to be available at the fusion center. The reason R^{(i)}_k is not needed is that \bar{R}^{(i)}_k can be reconstructed from P^{(i)}_{k|k-1} and P^{(i)}_{k|k}, as shown in Eq. (17). But if each local sensor transmits { \bar{z}^{(i)}_k, \bar{H}^{(i)}_k, \bar{R}^{(i)}_k, K^{(i)}_k } to the fusion center, then our proposed optimal distributed estimation fusion algorithm reduces the transmission of each local sensor by n x 1 at each time instant. Also, R^{(i)}_k does not need to be available at the fusion center, because its effect is included in \bar{R}^{(i)}_k.

Remark: Another implicit assumption in [9] is that both P^d_{k|k-1} and P^d_{k|k} are invertible, while the optimal distributed estimation fusion algorithm in Section IV does not have this requirement.

Theorem 3: If P^{(i)}_{k|k-1} > 0, i = 1, 2, \ldots, N, given \hat{x}^c_{k-1|k-1} = \hat{x}^d_{k-1|k-1} and P^c_{k-1|k-1} = P^d_{k-1|k-1}, the optimal distributed and optimal centralized estimation fusion algorithms are identical if the measurement noises across sensors are uncorrelated, that is, R^{(i,j)}_k = 0, i, j = 1, 2, \ldots, N, i \ne j.

Proof: Substituting Eq. (29) into Eq. (28), we have

P^c_{k|k-1} \bar{H}_k^T \bar{S}_k^+ \bar{H}_k P^c_{k|k-1}
  = P^c_{k|k-1} H_k^T \breve{R}_k^{-1} \breve{H}_k \breve{P}_{k|k} ( \breve{P}_{k|k} \breve{H}_k^T \breve{R}_k^{-1} S_k \breve{R}_k^{-1} \breve{H}_k \breve{P}_{k|k} )^+ \breve{P}_{k|k} \breve{H}_k^T \breve{R}_k^{-1} H_k P^c_{k|k-1}

We further have

H_k^T \breve{R}_k^{-1} \breve{H}_k \breve{P}_{k|k} = H_k^T S_k^{-1} S_k \breve{R}_k^{-1} \breve{H}_k \breve{P}_{k|k}

where, from the matrix inversion lemma [13],

S_k^{-1} = R_k^{-1} - R_k^{-1} H_k M_k^{-1} P^c_{k|k-1} H_k^T R_k^{-1}

with

M_k = P^c_{k|k-1} H_k^T R_k^{-1} H_k + I

Thus

H_k^T S_k^{-1} = H_k^T R_k^{-1} - H_k^T R_k^{-1} H_k M_k^{-1} P^c_{k|k-1} H_k^T R_k^{-1}
             = ( I - H_k^T R_k^{-1} H_k M_k^{-1} P^c_{k|k-1} ) H_k^T R_k^{-1}

Then

H_k^T \breve{R}_k^{-1} \breve{H}_k \breve{P}_{k|k} = ( I - H_k^T R_k^{-1} H_k M_k^{-1} P^c_{k|k-1} ) H_k^T R_k^{-1} S_k \breve{R}_k^{-1} \breve{H}_k \breve{P}_{k|k}

Note that

H_k^T = [ (H_k^{(1)})^T \; (H_k^{(2)})^T \; \cdots \; (H_k^{(N)})^T ]
     = [ I_{n \times n} \; I_{n \times n} \; \cdots \; I_{n \times n} ] \, diag\{ (H_k^{(1)})^T, (H_k^{(2)})^T, \ldots, (H_k^{(N)})^T \}

Let

I_N = [ I_{n \times n} \; I_{n \times n} \; \cdots \; I_{n \times n} ]

Then

H_k^T = I_N \breve{H}_k^T

Also, under the assumption that R^{(i,j)}_k = 0, i, j = 1, 2, \ldots, N, i \ne j, it follows that

\breve{R}_k = R_k

Thus

H_k^T \breve{R}_k^{-1} \breve{H}_k \breve{P}_{k|k}
  = ( I - H_k^T R_k^{-1} H_k M_k^{-1} P^c_{k|k-1} ) I_N \breve{H}_k^T \breve{R}_k^{-1} S_k \breve{R}_k^{-1} \breve{H}_k \breve{P}_{k|k}
  = ( I - H_k^T R_k^{-1} H_k M_k^{-1} P^c_{k|k-1} ) I_N \breve{P}_{k|k}^{-1} \breve{P}_{k|k} \breve{H}_k^T \breve{R}_k^{-1} S_k \breve{R}_k^{-1} \breve{H}_k \breve{P}_{k|k}

Taking the transpose on both sides, we have

\breve{P}_{k|k} \breve{H}_k^T \breve{R}_k^{-1} H_k = \breve{P}_{k|k} \breve{H}_k^T \breve{R}_k^{-1} S_k \breve{R}_k^{-1} \breve{H}_k \breve{P}_{k|k} \breve{P}_{k|k}^{-1} I_N^T ( I - P^c_{k|k-1} N_k^{-1} H_k^T R_k^{-1} H_k )

where

N_k = H_k^T R_k^{-1} H_k P^c_{k|k-1} + I

Then

P^c_{k|k-1} \bar{H}_k^T \bar{S}_k^+ \bar{H}_k P^c_{k|k-1}
  = P^c_{k|k-1} ( I - H_k^T R_k^{-1} H_k M_k^{-1} P^c_{k|k-1} ) I_N \breve{P}_{k|k}^{-1} \breve{P}_{k|k} \breve{H}_k^T \breve{R}_k^{-1} S_k \breve{R}_k^{-1} \breve{H}_k \breve{P}_{k|k}
    \cdot ( \breve{P}_{k|k} \breve{H}_k^T \breve{R}_k^{-1} S_k \breve{R}_k^{-1} \breve{H}_k \breve{P}_{k|k} )^+
    \cdot \breve{P}_{k|k} \breve{H}_k^T \breve{R}_k^{-1} S_k \breve{R}_k^{-1} \breve{H}_k \breve{P}_{k|k} \breve{P}_{k|k}^{-1} I_N^T ( I - P^c_{k|k-1} N_k^{-1} H_k^T R_k^{-1} H_k ) P^c_{k|k-1}
  = P^c_{k|k-1} ( I - H_k^T R_k^{-1} H_k M_k^{-1} P^c_{k|k-1} ) I_N \breve{P}_{k|k}^{-1} \breve{P}_{k|k} \breve{H}_k^T \breve{R}_k^{-1} S_k \breve{R}_k^{-1} \breve{H}_k \breve{P}_{k|k} \breve{P}_{k|k}^{-1} I_N^T ( I - P^c_{k|k-1} N_k^{-1} H_k^T R_k^{-1} H_k ) P^c_{k|k-1}
  = P^c_{k|k-1} H_k^T S_k^{-1} S_k S_k^{-1} H_k P^c_{k|k-1}
  = P^c_{k|k-1} H_k^T S_k^{-1} H_k P^c_{k|k-1}

This means Eq. (24) is satisfied, which completes the proof.

Remark: If the measurement noises across sensors are uncorrelated, an optimal distributed estimation fusion algorithm equivalent to the optimal centralized estimation fusion algorithm can easily be obtained as in [7], [8], [15], [16]. Here, by Theorem 3, it is shown that the optimal distributed estimation fusion algorithm in Section IV is also equivalent to the optimal centralized estimation fusion algorithm at least for this well-known and relatively simple case, although the proof is somewhat harder. That is, the optimal distributed estimation fusion algorithm in Section IV provides at least one more candidate optimal distributed estimation fusion algorithm that is equivalent to the optimal centralized estimation fusion algorithm in practical applications.

Remark: For the case in which the measurement noises across sensors are uncorrelated, the optimal distributed estimation fusion algorithms in [7], [8], [15], [16] assume that both P^d_{k|k-1} and P^d_{k|k} are invertible, while the optimal distributed estimation fusion algorithm in Section IV does not have this requirement at all.

VI. CONCLUSIONS AND FUTURE WORK

For the case in which the measurement noises across sensors at the same time may be correlated, a systematic way to handle the corresponding distributed fusion problem has been proposed in this paper, based on a unified linear data model for linear unbiased estimation. The optimality (equivalence to optimal centralized estimation fusion) of the optimal distributed estimation fusion algorithm has been analyzed. A necessary and sufficient condition for optimality in the general case and sufficient conditions for optimality in two special cases have been given. Future research includes finding more sufficient conditions for the optimality of the distributed estimation fusion algorithm.

REFERENCES

[1] X. R. Li, Y. M. Zhu, J. Wang, and C. Z. Han, "Optimal linear estimation fusion - Part I: Unified fusion rules," IEEE Transactions on Information Theory, vol. 49, no. 9, pp. 2192-2208, September 2003.
[2] D. Willner, C. B. Chang, and K. P. Dunn, "Kalman filter algorithms for a multi-sensor system," in Proceedings of the 1976 IEEE Conference on Decision and Control, December 1976, pp. 570-574.
[3] Y. Bar-Shalom and L. Campo, "The effect of the common process noise on the two-sensor fused-track covariance," IEEE Transactions on Aerospace and Electronic Systems, vol. 22, no. 6, pp. 803-805, November 1986.
[4] K. C. Chang, R. K. Saha, and Y. Bar-Shalom, "On optimal track-to-track fusion," IEEE Transactions on Aerospace and Electronic Systems, vol. 33, no. 4, pp. 1271-1276, October 1997.
[5] H. M. Chen, T. Kirubarajan, and Y. Bar-Shalom, "Performance limits of track-to-track fusion versus centralized estimation: Theory and application," IEEE Transactions on Aerospace and Electronic Systems, vol. 39, no. 2, pp. 386-400, April 2003.
[6] K. H. Kim, "Development of track to track fusion algorithms," in Proceedings of the 1994 American Control Conference, Baltimore, MD, June 1994, pp. 1037-1041.
[7] C. Y. Chong, "Hierarchical estimation," in Proceedings of the MIT/ONR Workshop on C3, Monterey, CA, 1979.
[8] H. R. Hashemipour, S. Roy, and A. J. Laub, "Decentralized structures for parallel Kalman filtering," IEEE Transactions on Automatic Control, vol. 33, no. 1, pp. 88-94, January 1988.
[9] E. B. Song, Y. M. Zhu, J. Zhou, and Z. S. You, "Optimal Kalman filtering fusion with cross-correlated sensor noises," Automatica, vol. 43, no. 8, pp. 1450-1456, August 2007.
[10] K. C. Chang, Z. Tian, and S. Mori, "Performance evaluation for MAP state estimate fusion," IEEE Transactions on Aerospace and Electronic Systems, vol. 40, no. 2, pp. 706-714, April 2004.
[11] X. R. Li, "Optimal linear estimation fusion - Part VII: Dynamic systems," in Proceedings of the 6th International Conference on Information Fusion, Cairns, Australia, July 2003, pp. 455-462.
[12] S. Roy and R. A. Iltis, "Decentralized linear estimation in correlated measurement noise," IEEE Transactions on Aerospace and Electronic Systems, vol. 27, no. 6, pp. 939-941, November 1991.
[13] X. R. Li, Applied Estimation and Filtering. New Orleans, LA: Course Notes, University of New Orleans, 2006.
[14] X. R. Li and K. S. Zhang, "Optimal linear estimation fusion - Part IV: Optimality and efficiency of distributed fusion," in Proceedings of the 4th International Conference on Information Fusion, Montreal, QC, Canada, August 2001, pp. WeB1-19-WeB1-26.
[15] Y. Bar-Shalom and X. R. Li, Multitarget-Multisensor Tracking: Principles and Techniques. Storrs, CT: YBS Publishing, 1995.
[16] Y. M. Zhu, Z. S. You, J. Zhao, K. S. Zhang, and X. R. Li, "The optimality for the distributed Kalman filtering fusion with feedback," Automatica, vol. 37, no. 9, pp. 1489-1493, 2001.


