Neurocomputing 243 (2017) 115–124


Distributed cubature information filtering based on weighted average consensus

Qian Chen a,∗, Wancheng Wang a, Chao Yin b, Xiaoxiang Jin c, Jun Zhou a

a College of Energy and Electrical Engineering, Hohai University, Nanjing 211100, China
b School of Electronic and Information Engineering, Southwest University, Chongqing 400715, China
c Department of Electrical Engineering, Southeast University Chengxian College, Nanjing 210088, China

Article history: Received 18 September 2016; Revised 19 January 2017; Accepted 2 March 2017; Available online 8 March 2017. Communicated by Wei Guoliang.

Keywords: Distributed state estimation; Weighted average consensus; Distributed cubature information filters; Sensor networks; Stability; Consistency

Abstract: In this paper, the distributed state estimation (DSE) problem for a class of discrete-time nonlinear systems over sensor networks is investigated. First, based on weighted average consensus, a new DSE algorithm named the distributed cubature information filtering (DCIF) algorithm is developed to address the high-dimensional nonlinear DSE problem. The proposed filtering algorithm not only has advantages such as easy initialization and a low computational burden, but also possesses guaranteed stability regardless of the number of consensus steps. Moreover, it is proved that the corresponding estimation is consistent and that its mean-squared estimation errors are exponentially bounded. Finally, numerical simulations are given to verify the effectiveness of the DCIF.

∗ Corresponding author. E-mail address: [email protected] (Q. Chen).
http://dx.doi.org/10.1016/j.neucom.2017.03.004
0925-2312/© 2017 Elsevier B.V. All rights reserved.

1. Introduction

With the broad application of large-scale sensor networks in fields such as target tracking, environment monitoring and wireless camera networking, the study of distributed state estimation (DSE) over sensor networks has become highly popular owing to its distinct advantages over most centralized estimation techniques [1]. The major advantages of DSE lie in scalability, low communication burden, and robustness to individual sensor failures [2,3]. During the past two decades, many techniques have been developed to address a variety of DSE problems (see, e.g., [4–21]). Among the existing techniques, the consensus-based methodology is the most popular. For example, for linear discrete-time Gaussian systems, Olfati-Saber et al. have proposed the distributed Kalman filtering (DKF) algorithm by exploiting average consensus algorithms [7–9]. Furthermore, based on the extended Kalman filtering (EKF) algorithm, the DKF algorithm has been extended directly to nonlinear Gaussian systems, yielding the distributed extended Kalman filtering (DEKF) algorithm [10]. However, the DEKF algorithm has inherent drawbacks, such as instability and low-order accuracy, when systems exhibit strong nonlinearities. Compared with the EKF algorithm, the unscented Kalman filtering (UKF) algorithm turns out to be more robustly stable and more accurate. By exploiting a statistical linear regression approach and reconstructing a pseudo measurement matrix, Li and Jia have presented a distributed UKF algorithm for jump Markov nonlinear systems [11]. Lately, without approximating any pseudo measurement matrices, a weighted average consensus-based UKF algorithm has been developed in [12]. A crucial limitation of UKF-like techniques is that a non-positive definite covariance matrix may arise, especially when systems are high-dimensional [22,23]. To overcome this, the cubature Kalman filter (CKF) has been proposed for high-dimensional nonlinear state estimation [22]. Furthermore, the cubature information filter (CIF), an algebraic equivalent of the CKF, is developed in [24]. More precisely, the CKF is a Gaussian approximation of a Bayesian filter, but it provides estimates that are more precise and stable than those of most existing Gaussian filters [24]. Thus, it is no wonder that the CKF has been broadly studied in various settings [25–27]. Recently, with rapid developments of sensor technologies and increasing demands of large-scale sensor networking, researchers have turned their attention to designing distributed CKF (DCKF) algorithms in networked environments [13–17]. Unfortunately, several consensus filtering problems encountered in the DCKF setting have not been examined satisfactorily despite their practical importance. In general, consensus-based DSE approaches can be classified into three groups. The first group is consensus on estimates


(CE), which performs an average consensus on local state estimates [7]. In this view, the work of [13] belongs to this group. However, CE involves no error covariance matrices; as is well known, error covariance matrices contain information useful for improving filter performance. The second group is consensus on measurements (CM) [8,9], which performs consensus on the local innovation terms [12]. It has been shown that when the number of consensus steps during each sampling interval is sufficiently large, CM can approximate the correction step of the centralized Kalman-like filter. In this sense, the works [14–16] fall into this group; more specifically, these techniques adopt different forms of the CKF to achieve CM. Indeed, the CIF is used in [14], while the square-root CIF is employed in [15,16] to avoid numerically sensitive matrix operations. The third group is consensus on information (CI) [28]. From an algorithmic viewpoint, CI amounts to reaching a local average on information vectors and information matrices. Stability of CI algorithms can be guaranteed for any number of consensus steps (even a single one). As mentioned above, the achievements in the DCKF setting have been obtained either in the CE or the CM paradigm. In this paper, we focus our attention on embedding the CI architecture into the DCKF to extract possible benefits from the former's positive features. The resulting algorithm is named the distributed cubature information filtering (DCIF) algorithm.

A fundamental property of an estimator is consistency [29], which is significant for information fusion over sensor networks. When fusing information with unknown correlations, simply neglecting those correlations may cause inconsistency in estimation [30]. Ref. [28] has reported results on consistency analysis, and consistency features have also been investigated more recently in [30]. However, these consistency results have been obtained only in the linear setting. Hence, it is also our goal to give a rigorous proof of the consistency of our proposed DCIF algorithm, but in the nonlinear setting. By constructing a collective Lyapunov function, it has been shown that, under network connectivity and collective observability, the distributed EKF algorithm in [31] achieves local stability. From a different viewpoint, the stochastic boundedness of estimation errors for a class of consensus-based UKF algorithms has been verified by means of the stochastic stability lemma in [12]. However, stochastic stability analysis remains unsolved in the DCKF setting.

Motivated by the above research, we develop our proposed DCIF algorithm by exploiting a weighted average consensus approach. Furthermore, we analyze the consistency of the proposed DCIF algorithm, and the boundedness of its estimation errors is also addressed. The main contributions include: (1) a more accurate and stable distributed nonlinear filtering algorithm is developed, which applies to a wide range (from low to high dimensions) of nonlinear DSE problems; (2) by deriving a pseudo system matrix and a pseudo measurement matrix, the consistency of estimates for a class of consensus-based CIF algorithms is proven; (3) by means of the stochastic stability lemma, the stochastic boundedness of estimation errors for the proposed DCIF algorithm is investigated.

The outline is as follows. Section 2 models the sensor network in nonlinear dynamic systems. The general CIF algorithm is presented in Section 3. Section 4 develops our proposed DCIF algorithm, while its stability analysis is presented in Section 5. Furthermore, Section 6 illustrates numerical simulations. Finally, Section 7 concludes this paper.

Throughout the paper, we write $XX^T = X(*)^T$, $X^T Y X = (*)^T Y X$ and $X Y X^T = X Y (*)^T$ to save space. $\mathbb{R}^n$ represents the n-dimensional Euclidean space and $\mathbb{R}^{n\times m}$ the set of all n × m real matrices. $\|\cdot\|$ is the Euclidean norm on $\mathbb{R}^n$. $I_n$ is the n × n identity matrix and diag(B_1, B_2, …, B_n) refers to a block-diagonal matrix with main diagonal blocks B_1, B_2, …, B_n. For an arbitrary matrix A, $A^T$ and $A^{-1}$ denote its transpose and inverse, respectively; tr{A} represents the trace of A and A > 0 means A is a positive definite matrix. E{·} denotes the expectation operator, and $E^{-1}\{\cdot\} = (E\{\cdot\})^{-1}$ is used for brevity.

2. Sensor network modeling

Consider a discrete-time dynamic system observed by an N-sensor network (N is the total number of sensor nodes; for N ≥ 2 the sensors are spatially distributed), described collectively by the following discrete-time nonlinear model subject to additive Gaussian noise:



$$x_k = f(x_{k-1}) + \omega_{k-1}, \qquad z_k^s = h^s(x_k) + \nu_k^s, \quad s = 1, 2, \ldots, N \tag{1}$$

where $x_k \in \mathbb{R}^n$ is the state vector of the dynamic system at discrete-time instant k and $z_k^s \in \mathbb{R}^r$ is the measurement vector of the sth sensor. The process noise $\omega_{k-1} \in \mathbb{R}^n$ and the measurement noise $\nu_k^s \in \mathbb{R}^r$ are uncorrelated zero-mean Gaussian white sequences with covariance matrices $Q_{k-1} \in \mathbb{R}^{n\times n}$ and $R_k^s \in \mathbb{R}^{r\times r}$, respectively. The first and second equations in (1) are the process equation and the measurement equation, respectively. $f : \mathbb{R}^n \to \mathbb{R}^n$ is the nonlinear state transition function and $h^s : \mathbb{R}^n \to \mathbb{R}^r$ is the nonlinear measurement function of the sth sensor; both functions are assumed known.

The communication topology of the sensor network is described by an undirected graph $G(\mathcal{N}, \mathcal{E})$, where $\mathcal{N} = \{1, 2, \ldots, N\}$ is the sensor node set and $\mathcal{E}$ is the edge set. An edge $(s, j) \in \mathcal{E}$ means that node j can receive data from its neighbor node s, and vice versa. For each node $s \in \mathcal{N}$, $N_s = \{j \mid (j, s) \in \mathcal{E}\}$ denotes the set of its neighbors, and $J_s = N_s \cup \{s\} \neq \emptyset$.

3. Cubature information filtering algorithms

For each sensor node s, the general CIF algorithm is summarized as follows as a two-stage procedure consisting of a time update and a measurement update.

(1) Time update: Let m (= 2n) cubature points $\chi_{k-1|k-1}^{s,i} \in \mathbb{R}^n$ be generated from the state estimate $\hat{x}_{k-1|k-1}^{s}$ and the square-root matrix $S_{k-1|k-1}^{s}$ at time step k − 1:

$$\chi_{k-1|k-1}^{s,i} = S_{k-1|k-1}^{s}\,\xi_i + \hat{x}_{k-1|k-1}^{s}, \quad i = 1, \ldots, m \tag{2}$$

where

$$\xi_i = \begin{cases} \sqrt{n}\,e_i, & 1 \le i \le n \\ -\sqrt{n}\,e_{i-n}, & n+1 \le i \le m. \end{cases} \tag{3}$$

Here $e_i$ is the n-dimensional unit vector with the ith element equal to 1, and $S_{k-1|k-1}^{s}$ is the square-root matrix of $(Y_{k-1|k-1}^{s})^{-1}$. Each cubature point $\chi_{k-1|k-1}^{s,i}$ is then propagated through the nonlinear state transition function f(·):
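As a concrete illustration, the cubature-point construction (2)–(3) can be sketched in a few lines of NumPy (a minimal sketch; the function and variable names are ours, not the paper's):

```python
import numpy as np

def cubature_points(x_hat, P):
    """Generate m = 2n cubature points from a state estimate x_hat and
    covariance P, as in Eqs. (2)-(3): chi_i = S @ xi_i + x_hat, where
    S is a square root of P and xi_i = +/- sqrt(n) * e_i."""
    n = x_hat.shape[0]
    S = np.linalg.cholesky(P)                             # S @ S.T = P
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])  # n x 2n point set
    return S @ xi + x_hat[:, None]                        # n x 2n cubature points

# The points are symmetric about the estimate and reproduce P exactly
x_hat = np.array([1.0, 2.0])
P = np.array([[2.0, 0.3], [0.3, 1.0]])
chi = cubature_points(x_hat, P)
print(np.allclose(chi.mean(axis=1), x_hat))      # sample mean equals x_hat
print(np.allclose(np.cov(chi, bias=True), P))    # sample covariance equals P
```

This illustrates why the rule is exact for the first two moments: the m points carry the mean and covariance of the Gaussian without any Jacobian.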

$$\chi_{k|k-1}^{*s,i} = f\big(\chi_{k-1|k-1}^{s,i}\big) \in \mathbb{R}^n, \quad i = 1, \ldots, m. \tag{4}$$

Next, the predicted state $\hat{x}_{k|k-1}^{s}$, the predicted information matrix $Y_{k|k-1}^{s}$ and the predicted information state vector $\hat{y}_{k|k-1}^{s}$ are determined by

$$\begin{cases} \hat{x}_{k|k-1}^{s} = \frac{1}{m}\sum_{i=1}^{m} \chi_{k|k-1}^{*s,i} \in \mathbb{R}^n \\[2pt] Y_{k|k-1}^{s} = \big[\frac{1}{m}\sum_{i=1}^{m} \chi_{k|k-1}^{*s,i}(*)^T - \hat{x}_{k|k-1}^{s}(*)^T + Q_{k-1}\big]^{-1} \in \mathbb{R}^{n\times n} \\[2pt] \hat{y}_{k|k-1}^{s} = Y_{k|k-1}^{s}\,\hat{x}_{k|k-1}^{s} \in \mathbb{R}^n. \end{cases} \tag{5}$$
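The time update (4)–(5) can be sketched as follows (our own toy example; `np.linalg.inv` is used for readability where a linear solver would normally be preferred):

```python
import numpy as np

def time_update(chi, f, Q):
    """CIF time update, Eqs. (4)-(5): propagate cubature points through f,
    then form the predicted state and the predicted information pair."""
    m = chi.shape[1]
    chi_star = np.column_stack([f(chi[:, i]) for i in range(m)])     # Eq. (4)
    x_pred = chi_star.mean(axis=1)                                   # Eq. (5)
    P_pred = chi_star @ chi_star.T / m - np.outer(x_pred, x_pred) + Q
    Y_pred = np.linalg.inv(P_pred)        # predicted information matrix
    y_pred = Y_pred @ x_pred              # predicted information state vector
    return x_pred, Y_pred, y_pred

# Sanity check: a linear f recovers the Kalman prediction F P F^T + Q exactly
F = np.array([[1.0, 0.5], [0.0, 1.0]])
Q = 0.01 * np.eye(2)
x0, P0, n = np.array([1.0, -1.0]), np.eye(2), 2
S = np.linalg.cholesky(P0)
xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
chi = S @ xi + x0[:, None]
x_pred, Y_pred, y_pred = time_update(chi, lambda x: F @ x, Q)
print(np.allclose(x_pred, F @ x0))
print(np.allclose(np.linalg.inv(Y_pred), F @ P0 @ F.T + Q))
```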

(2) Measurement update: First, a new set of cubature points $\chi_{k|k-1}^{s,i} \in \mathbb{R}^n$ is produced from the predicted state $\hat{x}_{k|k-1}^{s}$ and the square-root matrix $S_{k|k-1}^{s}$ satisfying $S_{k|k-1}^{s}(*)^T = (Y_{k|k-1}^{s})^{-1}$:

$$\chi_{k|k-1}^{s,i} = S_{k|k-1}^{s}\,\xi_i + \hat{x}_{k|k-1}^{s}, \quad i = 1, \ldots, m. \tag{6}$$

Secondly, the cubature points are propagated through the nonlinear measurement function $h^s(\cdot)$:

$$Z_{k|k-1}^{s,i} = h^s\big(\chi_{k|k-1}^{s,i}\big) \in \mathbb{R}^r, \quad i = 1, \ldots, m \tag{7}$$

which gives the predicted measurement

$$\hat{z}_{k|k-1}^{s} = \frac{1}{m}\sum_{i=1}^{m} Z_{k|k-1}^{s,i} \in \mathbb{R}^r. \tag{8}$$

Thirdly, the information state contribution $i_k^s$ and its associated information matrix $I_k^s$ are computed as

$$\begin{cases} i_k^s = Y_{k|k-1}^{s} P_{xz,k|k-1}^{s}(R_k^s)^{-1}\big[\upsilon_k^s + (P_{xz,k|k-1}^{s})^T Y_{k|k-1}^{s}\hat{x}_{k|k-1}^{s}\big] \in \mathbb{R}^n \\[2pt] I_k^s = Y_{k|k-1}^{s} P_{xz,k|k-1}^{s}(R_k^s)^{-1}(P_{xz,k|k-1}^{s})^T Y_{k|k-1}^{s} \in \mathbb{R}^{n\times n} \\[2pt] P_{xz,k|k-1}^{s} = \frac{1}{m}\sum_{i=1}^{m} \chi_{k|k-1}^{s,i}(Z_{k|k-1}^{s,i})^T - \hat{x}_{k|k-1}^{s}(\hat{z}_{k|k-1}^{s})^T \in \mathbb{R}^{n\times r} \\[2pt] \upsilon_k^s = z_k^s - \hat{z}_{k|k-1}^{s} \in \mathbb{R}^r. \end{cases} \tag{9}$$
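The measurement update (6)–(9) reduces each sensor's data to the pair $(i_k^s, I_k^s)$; a sketch with a toy linear "sensor" so the result can be checked in closed form (names and the example sensor are ours):

```python
import numpy as np

def measurement_contribution(x_pred, Y_pred, z, h, R):
    """Information contributions of one sensor, Eqs. (6)-(9):
    i_k = Y P_xz R^{-1} [ (z - z_hat) + P_xz^T Y x_pred ],
    I_k = Y P_xz R^{-1} P_xz^T Y."""
    n = x_pred.shape[0]
    m = 2 * n
    S = np.linalg.cholesky(np.linalg.inv(Y_pred))          # Eq. (6) square root
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    chi = S @ xi + x_pred[:, None]
    Z = np.column_stack([h(chi[:, i]) for i in range(m)])  # Eq. (7)
    z_hat = Z.mean(axis=1)                                 # Eq. (8)
    P_xz = chi @ Z.T / m - np.outer(x_pred, z_hat)         # Eq. (9)
    Rinv = np.linalg.inv(R)
    nu = z - z_hat                                         # innovation
    i_k = Y_pred @ P_xz @ Rinv @ (nu + P_xz.T @ Y_pred @ x_pred)
    I_k = Y_pred @ P_xz @ Rinv @ P_xz.T @ Y_pred
    return i_k, I_k

# For a linear sensor h(x) = H x, I_k must equal H^T R^{-1} H
H = np.array([[1.0, 0.0]])
R = np.array([[0.04]])
x_pred = np.array([2.0, 0.5])
Y_pred = np.linalg.inv(np.diag([1.0, 4.0]))
i_k, I_k = measurement_contribution(x_pred, Y_pred, np.array([2.1]),
                                    lambda x: H @ x, R)
print(np.allclose(I_k, H.T @ np.linalg.inv(R) @ H))
```

The closed-form check works because, for linear h, the cubature cross-covariance satisfies $P_{xz} = P H^T$ exactly.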


Finally, using $i_k^s$ and $I_k^s$, the estimated information state vector $\hat{y}_{k|k}^{s}$, the estimated information matrix $Y_{k|k}^{s}$ and the state estimate $\hat{x}_{k|k}^{s}$ are obtained as

$$\begin{cases} \hat{y}_{k|k}^{s} = \hat{y}_{k|k-1}^{s} + i_k^s \in \mathbb{R}^n \\[2pt] Y_{k|k}^{s} = Y_{k|k-1}^{s} + I_k^s \in \mathbb{R}^{n\times n} \\[2pt] \hat{x}_{k|k}^{s} = (Y_{k|k}^{s})^{-1}\hat{y}_{k|k}^{s} \in \mathbb{R}^n. \end{cases} \tag{10}$$

4. Distributed cubature information filtering

This section develops a weighted average consensus algorithm such that all sensor nodes reach an agreement on the estimated information state vectors and the estimated information matrices. By a consensus algorithm we refer to a distributed communication rule specifying that: (1) information exchange happens only between each individual node and its neighbors; (2) the update of the local estimated information state vectors and matrices is based on the received information. To formulate the weighted average consensus, $\hat{y}_{k|k}^{s}$ and $Y_{k|k}^{s}$ are written as the information pairs $(\hat{y}_{k|k}^{s}, Y_{k|k}^{s})$, $s \in \mathcal{N}$. Then, the following definition can be introduced.

Definition 1 ([12]). Given the information pairs $(\hat{y}_{k|k}^{s}, Y_{k|k}^{s})$, $s \in \mathcal{N}$, their weighted average consensus is said to be achieved if the following limit exists for each and all $s \in \mathcal{N}$:

$$(\hat{y}_{k|k}^{*}, Y_{k|k}^{*}) = \lim_{l\to\infty}(\hat{y}_{k,l}^{s}, Y_{k,l}^{s}) \tag{11}$$

where $(\hat{y}_{k,l}^{s}, Y_{k,l}^{s})$, $s \in \mathcal{N}$ denotes the information pair of node s available at time step k after the lth internal iteration, satisfying

$$\hat{y}_{k,l+1}^{s} = \sum_{j\in J_s}\pi^{s,j}\hat{y}_{k,l}^{j}, \qquad Y_{k,l+1}^{s} = \sum_{j\in J_s}\pi^{s,j}Y_{k,l}^{j} \tag{12}$$

with $\pi^{s,j} \ge 0$, $j \in J_s$ the weighting coefficients and $\sum_{j\in J_s}\pi^{s,j} = 1$ for all $s \in \mathcal{N}$. The initial conditions are $\hat{y}_{k,0}^{s} = \hat{y}_{k|k}^{s}$ and $Y_{k,0}^{s} = Y_{k|k}^{s}$, and l is the consensus step index in our proposed algorithm.

In the sequel, let $\Pi$ denote the consensus weight matrix, whose elements are the consensus weighting coefficients $\pi^{s,j}$, $j \in J_s$ for any $s \in \mathcal{N}$, and let $\Pi^L$ be the Lth power of $\Pi$.

Remark 1. The weighted average consensus depends on the topology of the network. When the undirected network is fully connected, a small number of consensus steps L can already achieve a satisfactory consensus level; otherwise, a larger L is required [12]. Further, as L increases, each element of $\Pi^L$ approaches 1/N, so the limit in Definition 1 is well defined. Specifically, the information pairs are said to be of weighted average consensus if each information pair approaches the same limit as L tends to ∞.

Theorem 1. Consider the sensor network with topology $G(\mathcal{N}, \mathcal{E})$. Suppose that the consensus weight matrix $\Pi = \{\pi^{s,j}\} \in \mathbb{R}^{N\times N}$ is primitive. Then each and every information pair $(\hat{y}_{k|k}^{s}, Y_{k|k}^{s})$, $s \in \mathcal{N}$ achieves a weighted average consensus; namely, the limit (11) holds over $s \in \mathcal{N}$ with the same pair $(\hat{y}_{k|k}^{*}, Y_{k|k}^{*})$.

Proof. The proof of Theorem 1 is similar to that of Theorem 1 in [12]; it is omitted here, and more details can be found in [12]. □

By exploiting the CIF algorithm described above together with Theorem 1, the suggested DCIF algorithm is summarized as Algorithm 1 below.

Algorithm 1 Distributed cubature information filtering.

1) For each node $s \in \mathcal{N}$, obtain the measurement $z_k^s$ and use (10) to compute

$$\hat{y}_{k,0}^{s} = \hat{y}_{k|k-1}^{s} + Y_{k|k-1}^{s}P_{xz,k|k-1}^{s}(R_k^s)^{-1}\big[\upsilon_k^s + (P_{xz,k|k-1}^{s})^T Y_{k|k-1}^{s}\hat{x}_{k|k-1}^{s}\big]$$
$$Y_{k,0}^{s} = Y_{k|k-1}^{s} + Y_{k|k-1}^{s}P_{xz,k|k-1}^{s}(R_k^s)^{-1}(P_{xz,k|k-1}^{s})^T Y_{k|k-1}^{s}.$$

2) For l = 0, 1, …, L − 1, perform the following consensus steps:
a) Broadcast the message $(\hat{y}_{k,l}^{s}, Y_{k,l}^{s})$ to the neighbors $j \in N_s$.
b) Receive the messages $(\hat{y}_{k,l}^{j}, Y_{k,l}^{j})$ from all neighbors $j \in N_s$.
c) Fuse the information according to

$$\hat{y}_{k,l+1}^{s} = \sum_{j\in J_s}\pi^{s,j}\hat{y}_{k,l}^{j}, \qquad Y_{k,l+1}^{s} = \sum_{j\in J_s}\pi^{s,j}Y_{k,l}^{j}.$$

3) Update the estimated information state vector, the estimated information matrix and the state estimate:

$$\hat{y}_{k|k}^{s} = \hat{y}_{k,L}^{s}, \qquad Y_{k|k}^{s} = Y_{k,L}^{s}, \qquad \hat{x}_{k|k}^{s} = (Y_{k|k}^{s})^{-1}\hat{y}_{k|k}^{s}.$$

4) Implement the prediction step:

$$\hat{x}_{k+1|k}^{s} = \frac{1}{m}\sum_{i=1}^{m}\chi_{k+1|k}^{*s,i}, \quad Y_{k+1|k}^{s} = \Big[\frac{1}{m}\sum_{i=1}^{m}\chi_{k+1|k}^{*s,i}(*)^T - \hat{x}_{k+1|k}^{s}(*)^T + Q_k\Big]^{-1}, \quad \hat{y}_{k+1|k}^{s} = Y_{k+1|k}^{s}\hat{x}_{k+1|k}^{s}.$$
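The consensus sweep of steps 2a–2c is just repeated weighted averaging of neighbors' information pairs; a minimal sketch with Metropolis weights on a toy three-node path graph (the graph and all names are our own example, not the paper's simulation):

```python
import numpy as np

# Toy path graph 0-1-2 with Metropolis weights: pi[s,j] = 1/(1 + max(deg_s, deg_j))
neighbors = {0: [1], 1: [0, 2], 2: [1]}
N = len(neighbors)
deg = {s: len(js) for s, js in neighbors.items()}
Pi = np.zeros((N, N))
for s in range(N):
    for j in neighbors[s]:
        Pi[s, j] = 1.0 / (1 + max(deg[s], deg[j]))
    Pi[s, s] = 1.0 - Pi[s].sum()        # each row sums to one (stochastic)

# Local information "pairs"; scalars here so convergence is easy to read off
y = np.array([1.0, 2.0, 6.0])           # local information state vectors
Y = np.array([1.0, 1.0, 1.0])           # local information matrices
for l in range(50):                      # Eq. (12) with L = 50 consensus steps
    y = Pi @ y
    Y = Pi @ Y
# As L grows, every node approaches the same network-wide value (Remark 1)
print(np.allclose(y, np.mean([1.0, 2.0, 6.0]), atol=1e-6))
```

On this connected undirected graph the Metropolis matrix is doubly stochastic and primitive, so every node converges to the simple average, matching Theorem 1.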

Remark 2. About Algorithm 1, we have the following observations: (1) The algorithm has low computational cost and good real-time performance, which means low energy cost and high numerical efficiency; these are important in practical applications, especially in large-scale sensor networks. (2) Regarding the consensus weight matrix $\Pi$, the Metropolis weights ensure that $\Pi$ is stochastic. Furthermore, according to [32, Theorem A.1], $\Pi$ is primitive if and only if the undirected sensor network is connected. When the sensor network is directed, a necessary condition for $\Pi$ to be primitive is that the associated graph is strongly connected [28]. (3) The proposed algorithm belongs to the CI paradigm, whose stability is guaranteed for any number of consensus steps (even a single one, i.e., L = 1). For a single consensus step, the CI algorithm reduces to the covariance intersection [29]; multiple-step consensus (L > 1) generalizes the covariance intersection to the CI algorithm, which can be used to improve performance.

5. Stability analysis

Stability analysis of the proposed DCIF concerns two major problems: estimation consistency and estimation error boundedness. To streamline the analysis, we adopt the statistical linear error propagation methodology [33] to construct a pseudo system matrix $F_{k-1}^{s}$ and a pseudo measurement matrix $H_k^s$ for each filter node. By using the filter node notation rather than the sensor node notation, we mean that each sensor node is equipped with a corresponding filter.

5.1. Linearization approximation and basic relationships

By the error propagation notion, the cross-covariance matrix can be obtained as follows [33,34]:

$$P_{xz,k|k-1}^{s} = E\big\{(x_k - \hat{x}_{k|k-1}^{s})(z_k - \hat{z}_{k|k-1}^{s})^T\big\} \approx (Y_{k|k-1}^{s})^{-1}(\tilde{H}_k^s)^T \in \mathbb{R}^{n\times r} \tag{13}$$

where $\tilde{H}_k^s \triangleq \partial h^s(x)/\partial x\,\big|_{x=\hat{x}_{k|k-1}^{s}}$. Via (13), one gets

$$\tilde{H}_k^s \approx (P_{xz,k|k-1}^{s})^T Y_{k|k-1}^{s}$$

which says that the linearized measurement matrix $\tilde{H}_k^s$ can be approximated from $P_{xz,k|k-1}^{s}$ and $Y_{k|k-1}^{s}$. In view of this, based on $Y_{k|k-1}^{s}$ and $P_{xz,k|k-1}^{s}$ calculated from (5) and (9), respectively, we define the pseudo measurement matrix $H_k^s$ as

$$H_k^s \triangleq (P_{xz,k|k-1}^{s})^T Y_{k|k-1}^{s}. \tag{14}$$
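Eq. (14) means the pseudo measurement matrix needs no analytic Jacobian: it is assembled entirely from quantities the cubature rule already produces. A sketch, verified against a linear toy sensor whose true Jacobian is known (the helper name is ours):

```python
import numpy as np

def pseudo_measurement_matrix(x_pred, Y_pred, h):
    """H_k^s := (P_xz)^T Y_pred, Eq. (14), built purely from cubature points."""
    n = x_pred.shape[0]
    m = 2 * n
    S = np.linalg.cholesky(np.linalg.inv(Y_pred))
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    chi = S @ xi + x_pred[:, None]
    Z = np.column_stack([h(chi[:, i]) for i in range(m)])
    z_hat = Z.mean(axis=1)
    P_xz = chi @ Z.T / m - np.outer(x_pred, z_hat)
    return P_xz.T @ Y_pred                                 # Eq. (14)

# For a linear h(x) = H x the pseudo matrix coincides with the Jacobian H
H_true = np.array([[1.0, 2.0], [0.0, 1.0]])
x_pred = np.array([0.5, -0.3])
Y_pred = np.linalg.inv(np.array([[1.0, 0.2], [0.2, 2.0]]))
H_pseudo = pseudo_measurement_matrix(x_pred, Y_pred, lambda x: H_true @ x)
print(np.allclose(H_pseudo, H_true))
```

For a nonlinear h the result is the statistical linear regression of h over the cubature points rather than the pointwise Jacobian, which is exactly the distinction drawn below between $H_k^s$ and $\tilde{H}_k^s$.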

Clearly, $H_k^s$ can be calculated by means of the CIF algorithm. By the above definitions, $H_k^s$ and $\tilde{H}_k^s$ have the same numerical interpretation; however, their system modeling implications are essentially different. Furthermore, to linearize the process equation in (1), a pseudo system matrix $F_{k-1}^{s}$ is derived. To this end, by the error propagation approach, the cross-covariance matrix between the latest previous estimate and the current prediction can be written as follows [12]:

$$\begin{aligned} P_{x_{k-1},x_{k|k-1}}^{s} &= E\big\{(x_{k-1} - \hat{x}_{k-1|k-1}^{s})(x_k - \hat{x}_{k|k-1}^{s})^T\big\} \\ &\approx E\big\{(x_{k-1} - \hat{x}_{k-1|k-1}^{s})\big(\tilde{F}_{k-1}^{s}(x_{k-1} - \hat{x}_{k-1|k-1}^{s}) + \omega_{k-1}\big)^T\big\} = P_{k-1|k-1}^{s}(\tilde{F}_{k-1}^{s})^T \end{aligned} \tag{15}$$

where $\tilde{F}_{k-1}^{s} \triangleq \partial f(x)/\partial x\,\big|_{x=\hat{x}_{k-1|k-1}^{s}}$. Similarly, we define the pseudo system matrix

$$F_{k-1}^{s} \triangleq (P_{x_{k-1},x_{k|k-1}}^{s})^T Y_{k-1|k-1}^{s} \tag{16}$$

where $Y_{k-1|k-1}^{s} = (P_{k-1|k-1}^{s})^{-1}$ is obtained from (10) and $P_{x_{k-1},x_{k|k-1}}^{s}$ is given by

$$P_{x_{k-1},x_{k|k-1}}^{s} = \frac{1}{m}\sum_{i=1}^{m}\big(\chi_{k-1|k-1}^{s,i} - \hat{x}_{k-1|k-1}^{s}\big)\big(\chi_{k|k-1}^{*s,i} - \hat{x}_{k|k-1}^{s}\big)^T.$$
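The pseudo system matrix in (16) is obtained the same way, from the sample cross-covariance between the posterior cubature points and their propagated images; a sketch with a linear toy dynamics for verification (our own example):

```python
import numpy as np

def pseudo_system_matrix(x_est, Y_est, f):
    """F_{k-1}^s := (P_{x_{k-1},x_k})^T Y_{k-1|k-1}, Eqs. (15)-(16)."""
    n = x_est.shape[0]
    m = 2 * n
    P = np.linalg.inv(Y_est)
    S = np.linalg.cholesky(P)
    xi = np.sqrt(n) * np.hstack([np.eye(n), -np.eye(n)])
    chi = S @ xi + x_est[:, None]                        # posterior points
    chi_star = np.column_stack([f(chi[:, i]) for i in range(m)])
    x_pred = chi_star.mean(axis=1)
    # sample cross-covariance (1/m) sum (chi_i - x_est)(chi*_i - x_pred)^T
    P_cross = (chi - x_est[:, None]) @ (chi_star - x_pred[:, None]).T / m
    return P_cross.T @ Y_est                             # Eq. (16)

# Linear f(x) = F x gives P_cross = P F^T, so (P F^T)^T P^{-1} = F exactly
F_true = np.array([[1.0, 1.0], [0.0, 1.0]])
x_est = np.array([0.2, 0.7])
Y_est = np.linalg.inv(np.array([[1.5, 0.1], [0.1, 0.8]]))
F_pseudo = pseudo_system_matrix(x_est, Y_est, lambda x: F_true @ x)
print(np.allclose(F_pseudo, F_true))
```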

Thus, the discrete-time nonlinear network (1) can be linearized as

$$x_k = \alpha_{k-1}^{s}F_{k-1}^{s}x_{k-1} + \omega_{k-1}, \qquad z_k^s = \beta_k^s H_k^s x_k + \nu_k^s, \quad s = 1, \ldots, N \tag{17}$$

where the unknown instrumental matrices $\alpha_k^s = \mathrm{diag}(\alpha_{k,1}^{s}, \alpha_{k,2}^{s}, \ldots, \alpha_{k,n}^{s})$ and $\beta_k^s = \mathrm{diag}(\beta_{k,1}^{s}, \beta_{k,2}^{s}, \ldots, \beta_{k,r}^{s})$ are utilized to model the possible approximation errors introduced by the linearization. Based on the linear approximation model (17), the corresponding modified CIF recursion is rewritten as

$$\begin{cases} \hat{y}_{k|k}^{s} = \hat{y}_{k|k-1}^{s} + (\beta_k^s H_k^s)^T(R_k^s)^{-1}z_k^s \\[2pt] Y_{k|k}^{s} = Y_{k|k-1}^{s} + (\beta_k^s H_k^s)^T(R_k^s)^{-1}\beta_k^s H_k^s \\[2pt] \hat{y}_{k+1|k}^{s} = (\alpha_k^s F_k^s)^{-T}\big[I_n - Y_{k|k}^{s}\big(Y_{k|k}^{s} + (\alpha_k^s F_k^s)^T Q_k^{-1}\alpha_k^s F_k^s\big)^{-1}\big]\hat{y}_{k|k}^{s} \\[2pt] Y_{k+1|k}^{s} = (\alpha_k^s F_k^s)^{-T}Y_{k|k}^{s}(\alpha_k^s F_k^s)^{-1} - (\alpha_k^s F_k^s)^{-T}Y_{k|k}^{s}\big[Y_{k|k}^{s} + (\alpha_k^s F_k^s)^T Q_k^{-1}\alpha_k^s F_k^s\big]^{-1}Y_{k|k}^{s}(\alpha_k^s F_k^s)^{-1}. \end{cases} \tag{18}$$

For brevity, we further write the predicted information matrix in (18) as

$$Y_{k+1|k}^{s} = \psi(Y_{k|k}^{s}), \quad \psi(Y) = (\alpha_k^s F_k^s)^{-T}Y(\alpha_k^s F_k^s)^{-1} - (\alpha_k^s F_k^s)^{-T}Y\big[Y + (\alpha_k^s F_k^s)^T Q_k^{-1}\alpha_k^s F_k^s\big]^{-1}Y(\alpha_k^s F_k^s)^{-1}.$$

With the above relations in mind, we are ready to tackle the stability analysis of the suggested DCIF algorithm in the following subsections.

5.2. Consistency of estimates

Consistency is one of the crucial properties in data fusion processes [28–30]. For DSE algorithms in linear dynamic systems, the consistency of estimates is proven in [28,30]; however, no proof in the nonlinear setting has been reported in most of the existing literature. In the sequel, we prove the consistency of our proposed DCIF algorithm by working with the linear approximation model (17).

Definition 2 ([29]). Consider a random vector x. Let $\hat{x}$ be an estimate of x and P an estimate of the associated error covariance. The pair $(\hat{x}, P)$ is said to be consistent if $E\{(x - \hat{x})(*)^T\} \le P$ holds. By Definition 2, consistency means the estimated error covariance P is always an upper bound of the true error covariance. In terms of the information pair $(y, Y) = (P^{-1}\hat{x}, P^{-1})$, the pair (y, Y) is consistent if

$$Y \le E^{-1}\big\{(x - Y^{-1}y)(*)^T\big\}. \tag{19}$$

Before verifying the consistency of our proposed DCIF algorithm, the following lemma is required.

Lemma 1 ([28]). The function ψ(·) is monotone non-decreasing, i.e., for any two positive semi-definite matrices $Y_1$ and $Y_2$ with $Y_1 \le Y_2$, it holds that $0 \le \psi(Y_1) \le \psi(Y_2)$.

Theorem 2. If the initial predicted estimates $\{\hat{x}_{1|0}^{s}\}_{s=1}^{N}$ are consistent in the sense of

$$Y_{1|0}^{s} \le E^{-1}\big\{(x_1 - \hat{x}_{1|0}^{s})(*)^T\big\} \tag{20}$$

then, for each time step k > 1 and $s \in \mathcal{N}$, $Y_{k|k-1}^{s} \le E^{-1}\{(x_k - \hat{x}_{k|k-1}^{s})(*)^T\}$ and $Y_{k|k}^{s} \le E^{-1}\{(x_k - \hat{x}_{k|k}^{s})(*)^T\}$. That is, Algorithm 1 preserves consistency.

Proof. Denote the prior estimate error and the true prior error covariance at node s by $\tilde{x}_{k|k-1}^{s} \triangleq x_k - \hat{x}_{k|k-1}^{s}$ and $\tilde{P}_{k|k-1}^{s} \triangleq E\{\tilde{x}_{k|k-1}^{s}(*)^T\}$, respectively; define the posterior estimate error and the true posterior error covariance as $\tilde{x}_k^s \triangleq x_k - \hat{x}_{k|k}^{s}$ and $\tilde{P}_{k|k}^{s} \triangleq E\{\tilde{x}_k^s(*)^T\}$, respectively. According to (17), we have

$$\tilde{P}_{k|k}^{s} = (I_n - W_k^s\beta_k^s H_k^s)\tilde{P}_{k|k-1}^{s}(*)^T + W_k^s R_k^s(*)^T, \qquad P_{k|k}^{s} = (I_n - W_k^s\beta_k^s H_k^s)P_{k|k-1}^{s}(*)^T + W_k^s R_k^s(*)^T$$

Q. Chen et al. / Neurocomputing 243 (2017) 115–124

where Wks is the CKF gain and Pks|k = (Yks|k )−1 . Assume at time step k that

Yks|k−1 ≤ E−1 {(xk − xˆsk|k−1 )(∗ )T }

(21)

Theorem 3. Consider the nonlinear system (1), its linearized model (17) and Algorithm 1. The estimation error x˜sk+1 = xk+1 − xˆsk+1|k+1 is exponentially bounded in mean square for any s ∈ N under the following assumptions:

for any s ∈ N . Then, (21) implies that

1) Real numbers α , f, β , h =0 and α , f , β , h = 0 exist such that for each k ≥ 0

E−1 {(xk − xˆsk,0 )(∗ )T }

⎧ ⎨α 2 In ≤ α s (∗ )T ≤ α 2 In , k



= (In − Wks βks Hks )E{(xk − xˆsk|k−1 )(∗ )T }(In − Wks βks Hks )T + Wks E{νks (∗ )T }(Wks ) ≥ [(

In − Wks

=(

)

Pk,s 0 −1

β

=

s s k Hk

)

T −1

Pks|k−1

(∗ )

T

+ Wks Rsk

119

2

f In ≤ Fks (∗ )T ≤ f In 2

⎩β 2 I ≤ β s (∗ )T ≤ β 2 I , h2 I ≤ Hs (∗ )T ≤ h2 I . r r r r k k

(∗ ) ]

T −1

2) Real numbers pmax ≥ pmin > 0, q ≥ q > 0, r ≥ r > 0, and p ≥ p > 0 exist such that for each k > 0

Yk,s 0



−1 where xˆsk,0 = (Yk,s 0 ) yˆsk,0 and Pks|k−1 = (Yks|k−1 )−1 . Note that the covariance intersection fusion preserves consiss , which tells us that tency [29], i.e., E−1 {(xk − xˆsk,l )(∗ )T } ≥ Yk,l s E−1 {(xk − xˆsk,l+1 )(∗ )T } ≥ Yk,l+1

for any l = 0, 1, . . . , L − 1. Bearing this in mind, it is immediate to see that s E−1 {(xk − xˆsk,L )(∗ )T } ≥ Yk,L

s . Further, according to Lemma 1, we with xˆsk|k = xˆsk,L and Yks|k = Yk,L

pmin ≤ ps ≤ pmax ,

rIr ≤ Rsk ≤ rIr ,

qIn ≤ Qk ≤ qIn

(23)

pIn ≤ (Yks|k )−1 ≤ pIn .

3) The consensus weight matrix is stochastic and primitive. Proof. For brevity, define x˜k = col(x˜sk , s ∈ N ) and x˜k+1|k = col(x˜sk+1|k , s ∈ N ). Here, col(x˜sk , s ∈ N ) means to vertically concatenate all the vectors x˜1k , . . . , x˜sk , . . . , x˜N into a single column k vector. Let P = ( p1 , . . . , ps , . . . , pN )T be the Perron–Frobenius left eigen-

Yks+1|k =

vector of the matrix L (more specifically, L = (πL )n×n ). According to Assumption 3) in Theorem 3, ps is strictly positive (i.e., ps > 0). And we have PT L = PT . Or equivalently, we write  j∈N p j πLj,s = ps . Construct the following stochastic function in terms of x˜k+1|k :

The proof is completed since the initial predicted estimates {xˆs1|0 }Ns=1 are consistent. 

V (x˜k+1|k ) = s∈N ps (∗ )T Yks+1|k x˜sk+1|k .

Remark 3. Eq. (20) can be easily satisfied in general. The prior information on the state vector can be obtained in an off-line manner before the data fusion process. In the worst case when no prior information is available, we can let the initializations be Y1s|0 = 0,

Yks|k−1 = [αks −1 Fks−1 (Yks−1|k−1 )−1 (∗ )T + Qk−1 ]−1

have







ψ Yks|k ≤ ψ E−1 xk − xˆsk|k (∗ )T 

 = E−1 xk+1 − xˆsk+1|k (∗ )T .

yˆs1|0 = 0 for all s ∈ N .

s, j

Note that the modified predicted information matrix can be written as

and according to the assumptions in Theorem 3, we have

( p¯ α¯ 2 f¯2 + q¯ )−1 In ≤ Yks+1|k ≤ ( pα 2 f 2 + q )−1 In .

5.3. Boundedness of estimation errors

Further, from (23), it can be obtained that

Boundedness of estimation errors in mean square is a criterion to test the filter performance. For a single CKF, estimation errors are proven to be exponentially bounded in mean square [23,35]. To the authors’ best knowledge, no proof has yet been investigated in the distributed setting. In what follows, we discuss the boundedness of estimation errors for our proposed DCIF. As preliminaries to boundedness analysis, the following lemmas are reviewed.

pmin 2 2

pα f + q

vmin ξk 2 ≤ V (ξk ) ≤ vmax ξk 2 E{V (ξk )|ξk−1 } − V (ξk−1 ) ≤ μ − λV (ξk−1 )

E{ξk 2 } ≤

vmax μ k−1 E{ξ0 2 }(1 − λ )k +  ( 1 − λ )i . vmin vmin i=1

Lemma 3. ([28]) Given an integer N ≥ 2, for positive definite matrices M1 , . . . , MN and vectors v1 , . . . , vN , the following inequality holds

(∗ )T (iN=1 Mi )−1 (iN=1 Mi vi ) ≤ iN=1 (∗ )T Mi vi . With Lemmas 2 and 3 and Algorithm 1, it is ready to state and verify the following result.

pmax pα 2 f + q 2

x˜k+1|k 2

x˜sk+1|k = xk+1 − xˆsk+1|k = αks Fks (xk − xˆsk|k ) + ωk =

 αks Fks  j∈N πLs, j (Yks|k )−1Ykj|k−1 (xk − xˆkj |k−1 )

−  j∈N πLs, j (Yks|k )−1 (βkj Hkj )T (Rkj )−1 νkj + ωk

=  j∈N πLs, j αks Fks (Yks|k )−1Ykj|k−1 (xk − xˆkj |k−1 ) − j∈N πLs, j αks Fks (Yks|k )−1 (βkj Hkj )T (Rkj )−1 νkj + ωk

(22)

is fulfilled for all k. Then, ξ k is exponentially bounded in mean square, i.e.,

x˜k+1|k 2 ≤ V (x˜k+1|k ) ≤

which meets the first condition of (22) for the application of Lemma 2. Then, it follows that

Lemma 2. ([36]) Assume that ξ k is a stochastic process. If a stochastic function V(ξ k ), scalars vmin , vmax > 0, μ > 0 and 0 < λ ≤ 1 exist such that



(24)

=  j∈N ks, j x˜kj |k−1 +  j∈N ϒks, j νkj + ωk

(25)

where



ks, j = πLs, j αks Fks (Yks|k )−1Ykj|k−1 ϒks, j = −πLs, j αks Fks (Yks|k )−1 (βkj Hkj )T (Rkj )−1 .

Next, inserting (25) to (24) and taking the conditional expectation, we have

E{V (x˜k+1|k ) | x˜k|k−1 } = E{s∈N ps (∗ )T Yks+1|k x˜sk+1|k | x˜k|k−1 } = ϕkx+1 + ϕkν+1 + ϕkω+1

(26)

120

Q. Chen et al. / Neurocomputing 243 (2017) 115–124

Since 0 < β˜ < 1, setting λ = 1 − β˜ , we have 0 < λ < 1. Consequently, from Lemma 2, x˜k+1|k is exponentially bounded in mean square. This in turn implies x˜sk+1|k is exponentially bounded in mean square. In the following, we attempt to prove that x˜sk+1 is exponentially bounded in mean square as well. To see this, noting that

where

⎧ ⎪ ϕ x = E{s∈N ps (∗ )T Yks+1|k ( j∈N ks, j x˜kj |k−1 )|x˜k|k−1 } ⎪ ⎨ k+1 ϕkν+1 = E{s∈N ps (∗ )T Yks+1|k ( j∈N ϒks, j νkj )|x˜k|k−1 } ⎪ ⎪ ⎩ϕ ω = E{ ps (∗ )T Y s ω |x˜ }. s∈N

k+1

k+1|k

k

k|k−1

To complete the proof, let us examine each of the three terms in (26). Firstly, let us focus on ϕkx+1 of (26). Assumption 1) of Theorem 3 ensures that αks Fks is non-singular. Based on Y s ≤ β˜ (α s F s )−T Y s (α s F s )−1 for some 0 < β˜ < 1 (see

x˜sk+1|k = αks Fks (xk − xˆsk|k ) + ωk

Lemma 1 (iii) in [28]), it is immediate to see that

ωk is exponentially bounded in mean-square sense. Hence, we con-

k+1|k

k

k|k

k

k

and taking expectation from both sides, one has

E{x˜sk 2 } ≤ α −2 f

clude that the estimation error x˜sk+1 is exponentially bounded in mean square. 

× ( j∈N ks, j x˜kj |k−1 )|x˜k|k−1 }

Remark 4. About Theorem 3, we observe:

= β˜ E{s∈N ps (∗ )T (αks Fks )−T Yks|k (αks Fks )−1

(1) Though αks and βks are unknown, neither boundedness of the estimation errors nor consistency of the proposed DCIF algorithm depends on the exact magnitude of αks and βks . (2) The stochastic process (24) plays a key role in proving the stability of the consensus-based CKF, which can be extended to stability analysis of other Kalman-like consensus filters both in linear and nonlinear settings. (3) Here, we employ the linearization approximation to analyze the boundedness of estimation errors. In order to take the residuals into account, we utilize instrumental matrices developed in [37] to model the possible approximation errors during the linearization approximation process. It turns out to be easier for the later proof process. In addition, we investigate this technique in a more complex setting, i.e., the distributed filtering framework.

By exploiting the same technique as before, it is easy to prove

$$
\begin{aligned}
\varphi^{x}_{k+1}
&\le \tilde{\beta}\, \mathbb{E}\Big\{ \sum_{s\in\mathcal{N}} p^{s} (\ast)^{T} (\alpha^{s}_{k} F^{s}_{k})^{-T} Y^{s}_{k|k} (\alpha^{s}_{k} F^{s}_{k})^{-1} \Big[ \sum_{j\in\mathcal{N}} \pi^{s,j}_{L} \alpha^{s}_{k} F^{s}_{k} (Y^{s}_{k|k})^{-1} Y^{j}_{k|k-1} \tilde{x}^{j}_{k|k-1} \Big] \,\Big|\, \tilde{x}_{k|k-1} \Big\} \\
&= \tilde{\beta}\, \mathbb{E}\Big\{ \sum_{s\in\mathcal{N}} p^{s} (\ast)^{T} (Y^{s}_{k|k})^{-1} \Big( \sum_{j\in\mathcal{N}} \pi^{s,j}_{L} Y^{j}_{k|k-1} \tilde{x}^{j}_{k|k-1} \Big) \,\Big|\, \tilde{x}_{k|k-1} \Big\} \\
&\le \tilde{\beta}\, \mathbb{E}\Big\{ \sum_{s\in\mathcal{N}} p^{s} (\ast)^{T} \Big( \sum_{j\in\mathcal{N}} \pi^{s,j}_{L} Y^{j}_{k|k-1} \Big)^{-1} \Big( \sum_{j\in\mathcal{N}} \pi^{s,j}_{L} Y^{j}_{k|k-1} \tilde{x}^{j}_{k|k-1} \Big) \,\Big|\, \tilde{x}_{k|k-1} \Big\}
\end{aligned}
\tag{27}
$$

where the last inequality follows from the fact that $Y^{s}_{k|k} \ge \sum_{j\in\mathcal{N}} \pi^{s,j}_{L} Y^{j}_{k|k-1}$. Applying Lemma 3 to (27), we have

$$
\varphi^{x}_{k+1}
\le \tilde{\beta}\, \mathbb{E}\Big\{ \sum_{s\in\mathcal{N}} p^{s} \sum_{j\in\mathcal{N}} \pi^{s,j}_{L} (\ast)^{T} Y^{j}_{k|k-1} \tilde{x}^{j}_{k|k-1} \,\Big|\, \tilde{x}_{k|k-1} \Big\}
= \tilde{\beta}\, \mathbb{E}\Big\{ \sum_{j\in\mathcal{N}} p^{j} (\ast)^{T} Y^{j}_{k|k-1} \tilde{x}^{j}_{k|k-1} \,\Big|\, \tilde{x}_{k|k-1} \Big\}
= \tilde{\beta}\, V(\tilde{x}_{k|k-1}).
$$

Now consider the noise-related term $\varphi^{\nu}_{k+1} + \varphi^{\omega}_{k+1}$ of (26). It is easy to see that

$$
\begin{aligned}
\varphi^{\nu}_{k+1} + \varphi^{\omega}_{k+1}
&= \mathbb{E}\Big\{ \sum_{s\in\mathcal{N}} p^{s} (\ast)^{T} Y^{s}_{k+1|k} \Big( \sum_{j\in\mathcal{N}} \Upsilon^{s,j}_{k} \nu^{j}_{k} \Big) + \sum_{s\in\mathcal{N}} p^{s} (\ast)^{T} Y^{s}_{k+1|k} \omega_{k} \,\Big|\, \tilde{x}_{k|k-1} \Big\} \\
&\le (p\alpha^{2} f^{2} + q)^{-1}\, \mathbb{E}\Big\{ \sum_{s\in\mathcal{N}} p^{s} (\ast)^{T} \Big( \sum_{j\in\mathcal{N}} \Upsilon^{s,j}_{k} \nu^{j}_{k} \Big) + \sum_{s\in\mathcal{N}} p^{s} (\ast)^{T} \omega_{k} \,\Big|\, \tilde{x}_{k|k-1} \Big\} \\
&= (p\alpha^{2} f^{2} + q)^{-1}\, \mathbb{E}\Big\{ \mathrm{tr}\Big\{ \sum_{s\in\mathcal{N}} p^{s} \sum_{j\in\mathcal{N}} (\ast)^{T} \big( \Upsilon^{s,j}_{k} \nu^{j}_{k} \big) \Big\} + \mathrm{tr}\Big\{ \sum_{s\in\mathcal{N}} p^{s} (\ast)^{T} \omega_{k} \Big\} \,\Big|\, \tilde{x}_{k|k-1} \Big\} \\
&= (p\alpha^{2} f^{2} + q)^{-1} \Big[ \sum_{s\in\mathcal{N}} p^{s} \sum_{j\in\mathcal{N}} \mathrm{tr}\big\{ (\Upsilon^{s,j}_{k})^{T} \big( \Upsilon^{s,j}_{k} R^{j}_{k} \big) \big\} + \sum_{s\in\mathcal{N}} p^{s}\, \mathrm{tr}\{Q_{k}\} \Big] \triangleq \mu_{k}
\end{aligned}
$$

where $\Upsilon^{s,j}_{k} \triangleq -\pi^{s,j}_{L}\, \alpha^{s}_{k} F^{s}_{k} (Y^{s}_{k|k})^{-1} (\beta^{j}_{k} H^{j}_{k})^{T} (R^{j}_{k})^{-1}$, by which we see that $\mu_{k} > 0$. Also, from the assumptions in Theorem 3, we have

$$
\mu_{k}
= (p\alpha^{2} f^{2} + q)^{-1} \sum_{s\in\mathcal{N}} p^{s} \Big[ \sum_{j\in\mathcal{N}} \mathrm{tr}\big\{ (\ast)^{T} \big( -\pi^{s,j}_{L}\, \alpha^{s}_{k} F^{s}_{k} (Y^{s}_{k|k})^{-1} (\beta^{j}_{k} H^{j}_{k})^{T} (R^{j}_{k})^{-T} \big) \big\} + \mathrm{tr}\{Q_{k}\} \Big]
\le (p\alpha^{2} f^{2} + q)^{-1} \sum_{s,j\in\mathcal{N}} p^{s} \big[ (\pi^{s,j}_{L})^{2}\, \bar{\alpha}^{2} \bar{f}^{2} \bar{\beta}^{2} \bar{h}^{2} \bar{p}^{2}\, r + \bar{q}\, n \big] \triangleq \mu.
$$

Summarizing the above results, we are ready to claim that

$$
\mathbb{E}\{ V(\tilde{x}_{k+1|k}) \,|\, \tilde{x}_{k|k-1} \} - V(\tilde{x}_{k|k-1}) \le \mu - (1 - \tilde{\beta})\, V(\tilde{x}_{k|k-1}).
$$

Applying the stochastic stability lemma of [36] to this inequality then shows that the estimation error is exponentially bounded in mean square, which completes the proof.

6. Simulation results

The nonlinear system for a typical air-traffic control scenario is considered. The target executes maneuvering turns in an x–y plane at a fixed turn rate $\Omega$. The turning dynamics can be depicted by the nonlinear process equation [11]:

$$
x_{k} =
\begin{bmatrix}
1 & \dfrac{\sin \Omega h}{\Omega} & 0 & -\dfrac{1-\cos \Omega h}{\Omega} \\
0 & \cos \Omega h & 0 & -\sin \Omega h \\
0 & \dfrac{1-\cos \Omega h}{\Omega} & 1 & \dfrac{\sin \Omega h}{\Omega} \\
0 & \sin \Omega h & 0 & \cos \Omega h
\end{bmatrix} x_{k-1}
+
\begin{bmatrix}
\dfrac{h^{2}}{2} & 0 \\
h & 0 \\
0 & \dfrac{h^{2}}{2} \\
0 & h
\end{bmatrix} \omega_{k-1}
\tag{28}
$$

where the state vector is xk = [ξ , ξ˙ , η, η˙ ]T ; ξ and η represent the positions; ξ˙ and η˙ represent velocities in the x and y directions, respectively; h = 1 is the sampling period and ωk−1 ∼ N (0, Qk−1 ) with Qk−1 = 0.01I2 . There are 12 radars fixed in the horizontal plane to measure the range and bearing of the target. Their coordinates are (0, 30), (80, 30), (160, 30), (240, 30), (0, 60), (80, 60), (160, 60), (240, 60), (0, 90), (80, 90), (160, 90), (240, 90), respectively. Hence, we can write the sensor measurement equation as



$$
z^{s}_{k} =
\begin{bmatrix} r^{s}_{k} \\ \theta^{s}_{k} \end{bmatrix}
=
\begin{bmatrix}
\sqrt{(\xi_{k} - x^{s}_{0})^{2} + (\eta_{k} - y^{s}_{0})^{2}} \\[4pt]
\arctan\!\Big( \dfrac{\eta_{k} - y^{s}_{0}}{\xi_{k} - x^{s}_{0}} \Big)
\end{bmatrix}
+ \nu^{s}_{k}
\tag{29}
$$

where $s = 1, \ldots, 12$ and $(x^{s}_{0}, y^{s}_{0})$ represents the position of the $s$th radar sensor; $\nu^{s}_{k} \sim N(0, R^{s}_{k})$ with $R^{s}_{k} = \mathrm{diag}(\sigma^{2}_{r}, \sigma^{2}_{\theta})$. We set the system parameters $\sigma_{r} = 0.2$ m, $\sigma_{\theta} = 0.015$ rad, $\Omega = 10^{\circ}\,\mathrm{s}^{-1}$, the true initial state $x_{0} = [-40\ \mathrm{m}, 3\ \mathrm{m\,s^{-1}}, 10\ \mathrm{m}, 1\ \mathrm{m\,s^{-1}}]^{T}$ and the associated covariance $P_{0} = \mathrm{diag}(5^{2}\ \mathrm{m}^{2}, 0.5^{2}\ \mathrm{m^{2}\,s^{-2}}, 4^{2}\ \mathrm{m}^{2}, 0.5^{2}\ \mathrm{m^{2}\,s^{-2}})$.
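As an illustrative sketch of this setup (not the paper's code; all function names are ours), the coordinated-turn dynamics (28) and the range-bearing measurement (29) can be written as:

```python
import numpy as np

def ct_transition(omega, h):
    """State-transition matrix of the coordinated-turn model (28).

    omega: turn rate (rad/s); h: sampling period (s).
    State ordering: [xi, xi_dot, eta, eta_dot].
    """
    s, c = np.sin(omega * h), np.cos(omega * h)
    return np.array([
        [1, s / omega,       0, -(1 - c) / omega],
        [0, c,               0, -s],
        [0, (1 - c) / omega, 1, s / omega],
        [0, s,               0, c],
    ])

def noise_input(h):
    """Process-noise input matrix of (28)."""
    return np.array([[h**2 / 2, 0],
                     [h,        0],
                     [0, h**2 / 2],
                     [0,        h]])

def radar_measurement(x, sensor_pos):
    """Range-bearing measurement (29) for a radar located at sensor_pos."""
    xi, eta = x[0], x[2]
    x0, y0 = sensor_pos
    r = np.hypot(xi - x0, eta - y0)
    theta = np.arctan2(eta - y0, xi - x0)
    return np.array([r, theta])

# One propagation and measurement step with the paper's parameters.
h, omega = 1.0, np.deg2rad(10.0)           # sampling period, turn rate
Q = 0.01 * np.eye(2)                        # process-noise covariance
x0 = np.array([-40.0, 3.0, 10.0, 1.0])      # true initial state
w = np.random.multivariate_normal(np.zeros(2), Q)
x1 = ct_transition(omega, h) @ x0 + noise_input(h) @ w
z1 = radar_measurement(x1, sensor_pos=(0.0, 30.0))
```

Note that `arctan2` is used in place of the plain arctangent of (29); it resolves the bearing quadrant correctly and avoids division by zero when the target and sensor share an x-coordinate.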
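The Metropolis consensus weights of (30) below, and the repeated weighted averaging they drive, can be sketched as follows (Python/NumPy; a small 4-node path graph stands in for the 12-sensor topology of Fig. 1, and the function names are ours):

```python
import numpy as np

def metropolis_weights(adj):
    """Metropolis consensus weights of (30) for a symmetric adjacency matrix."""
    n = adj.shape[0]
    deg = adj.sum(axis=1)                      # node degrees d_s
    W = np.zeros((n, n))
    for s in range(n):
        for j in range(n):
            if s != j and adj[s, j]:
                W[s, j] = 1.0 / (1.0 + max(deg[s], deg[j]))
        W[s, s] = 1.0 - W[s].sum()             # each row sums to one
    return W

def consensus(W, values, L):
    """Run L weighted-average consensus iterations on per-node values."""
    for _ in range(L):
        values = W @ values
    return values

# Example: a 4-node path graph; repeated averaging drives all nodes
# toward a common value.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])
W = metropolis_weights(adj)
y = np.array([1.0, 2.0, 3.0, 4.0])
y_mixed = consensus(W, y, L=10)
```

Because the Metropolis matrix is symmetric and row-stochastic, it is doubly stochastic, so the iterates converge to the network-wide average; in the DCIF algorithm this mixing is applied to the local information pairs rather than to raw scalars as in this sketch.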

Q. Chen et al. / Neurocomputing 243 (2017) 115–124


The initial state estimates $\hat{x}^{s}_{0|0}$ are chosen randomly from $N(x_{0}, P_{0})$ in each run; the total number of scans per run is 100. The communication topology among the sensors is presented in Fig. 1, where a line between two nodes indicates that they can communicate with each other. The consensus weights adopted in the subsequent simulations are set equal to the Metropolis weights:

$$
\pi^{s,j} =
\begin{cases}
(1 + \max\{d_{s}, d_{j}\})^{-1}, & \text{if } (s, j) \in \mathcal{E}, \\
1 - \sum_{(s,j)\in\mathcal{E}} \pi^{s,j}, & \text{if } s = j, \\
0, & \text{otherwise.}
\end{cases}
\tag{30}
$$

Fig. 1. Communication topology among sensors.

In the following numerical simulations, we note that: (1) as for estimation, only the simulations of 4 sensors in the 12-sensor network are depicted; (2) for consistency of the DCIF algorithm, only the performance of Sensor 2 is plotted, since all sensors in the network are equivalent; (3) the position root mean square error (PRMSE), averaged over all the network nodes, is computed as the performance metric. The PRMSE at time step $k$ is

$$
\mathrm{PRMSE}(k) = \bigg[ \frac{1}{M} \sum_{i=1}^{M} \Big( (\xi^{i}_{k} - \hat{\xi}^{i}_{k})^{2} + (\eta^{i}_{k} - \hat{\eta}^{i}_{k})^{2} \Big) \bigg]^{1/2}
\tag{31}
$$

where $(\xi^{i}_{k}, \eta^{i}_{k})$ and $(\hat{\xi}^{i}_{k}, \hat{\eta}^{i}_{k})$ are the true and estimated positions at the $i$th Monte Carlo run, and $M$ is the number of Monte Carlo runs. For a comprehensive comparison, 100 independent Monte Carlo trials under the same conditions are performed in the following simulations.

Fig. 2 shows the true state $x_{k}$ and the estimates $\hat{x}^{s}_{k|k}$ of the system. The blue line depicts the true state trajectory, whereas the other four lines show the individual filter performance. Each local filter reaches an agreement on the actual state of the system. The results validate that our proposed weighted average consensus is effective and that the consensus-based CIF algorithm exhibits a satisfactory estimation performance.

Fig. 2. True and estimated states with L = 10 consensus steps.

For Sensor 2, we also present the local posterior estimate errors and the associated estimates of the two-standard-deviation bounds, with $\sigma$ being the standard deviation. These bounds are obtained as twice the square root of the diagonals of the approximated error covariance in the DCIF algorithm. The results are shown in Fig. 3. According to [38], if the filter is consistent, the estimate errors should lie within these bounds 95% of the time. Obviously, the errors always lie well within the two standard deviations, which implies that our proposed DCIF algorithm is consistent.

Fig. 3. $x_{k} - \hat{x}^{2}_{k|k}$ and the $2\sigma$ bounds with L = 10 consensus steps.

Fig. 4 depicts comparisons between our proposed DCIF algorithm and the existing CM algorithm within the CKF framework for L = 2 and L = 12 consensus steps. As expected, CM requires a minimal number of consensus steps to achieve stable behavior (in this case, L = 12 consensus steps are needed); in contrast, our proposed filtering algorithm exhibits a satisfactory performance with fewer consensus steps.

Fig. 4. RMSE in position.

Furthermore, Table 1 shows the mean PRMSE and running time for L = 2 and L = 12 consensus steps.

Table 1
Performance comparison in mean time cost and PRMSE.

                 L = 2                       L = 12
  Algorithm      CM            Proposed     CM        Proposed
  Time (s)       0.2749        0.2597       0.2798    0.2623
  PRMSE (m)      1.5841 × 10⁴  24.1083      0.3627    0.3207

As shown, our proposed algorithm has a lower computational cost, which implies better real-time performance. The reason is that the CM method needs one more multiplication operation to obtain $\hat{y}^{s}_{k|k}$ and $Y^{s}_{k|k}$ after consensus

steps, whereas our proposed algorithm obtains $\hat{y}^{s}_{k|k}$ and $Y^{s}_{k|k}$ directly after the consensus steps.

7. Conclusions

The paper is devoted to addressing the DSE problem in the nonlinear setting. The proposed filtering algorithm is derived from CIF and the weighted average consensus approach, and it has been successfully applied to estimate the true state of the target system. Our proposed DCIF algorithm can be easily initialized, and its update step is computationally simpler owing to the employment of CIF. Moreover, it guarantees stability for any number of consensus steps. Further, we have given a proof showing that consistency of the estimates is preserved. Finally, with the assistance of the stochastic stability lemma, it is proven that the estimation error of our proposed DCIF algorithm is bounded in mean square. As a side note, the proof procedure developed here can be extended to more general cases, such as distributed unscented information filters.

Acknowledgments

This work was supported jointly by the National Natural Science Foundation of China under Grants 61573001 and 61104081, the Fundamental and Frontier Research Project of Chongqing under Grant cstc2014jcyjA40020, and the Fundamental Research Funds for the Central Universities under Grant XDJK2014B001.

References

[1] D. Ding, Z. Wang, B. Shen, Recent advances on distributed filtering for stochastic systems over sensor networks, Int. J. Gen. Syst. 43 (3–4) (2014) 372–386.
[2] W. Yang, H. Shi, Sensor selection schemes for consensus based distributed estimation over energy constrained wireless sensor networks, Neurocomputing 87 (2012) 132–137.
[3] W. Li, Z. Wang, G. Wei, L. Ma, J. Hu, D. Ding, A survey on multi-sensor fusion and consensus filtering for sensor networks, Discrete Dyn. Nat. Soc. 2015 (2015), Article ID 683701.
[4] W. Zhang, G. Feng, L. Yu, Multi-rate distributed fusion estimation for sensor networks with packet losses, Automatica 48 (9) (2012) 2016–2028.
[5] C. Wen, Y. Cai, Y. Liu, C. Wen, A reduced-order approach to filtering for systems with linear equality constraints, Neurocomputing 193 (2016) 219–226.
[6] D. Ding, Z. Wang, B. Shen, H. Dong, H∞ state estimation with fading measurements, randomly varying nonlinearities and probabilistic distributed delays, Int. J. Robust Nonlinear Control 25 (13) (2015) 2180–2195.
[7] R. Olfati-Saber, J.S. Shamma, Consensus filters for sensor networks and distributed sensor fusion, in: Proceedings of the Conference on Decision and Control, and the European Control Conference, Seville, Spain, 2005, pp. 6698–6703.
[8] R. Olfati-Saber, Distributed Kalman filter with embedded consensus filters, in: Proceedings of the Conference on Decision and Control, and the European Control Conference, Seville, Spain, 2005, pp. 8179–8184.
[9] R. Olfati-Saber, Distributed Kalman filtering for sensor networks, in: Proceedings of the Conference on Decision and Control, New Orleans, USA, 2007, pp. 5492–5498.
[10] H. Long, Z. Qu, X. Fan, S. Liu, Distributed extended Kalman filter based on consensus filter for wireless sensor network, in: Proceedings of the World Congress on Intelligent Control and Automation, Beijing, China, 2012, pp. 4315–4319.
[11] W. Li, Y. Jia, Consensus-based distributed multiple model UKF for jump Markov nonlinear systems, IEEE Trans. Autom. Control 57 (1) (2012) 227–233.


[12] W. Li, G. Wei, F. Han, Y. Liu, Weighted average consensus-based unscented Kalman filtering, IEEE Trans. Cybern. 46 (2) (2016) 558–567.
[13] V.P. Bhuvana, M. Schranz, M. Huemer, B. Rinner, Distributed object tracking based on cubature Kalman filter, in: Proceedings of the Asilomar Conference on Signals, Systems and Computers, Pacific Grove, USA, 2013, pp. 423–427.
[14] J. Ding, J. Xiao, Y. Zhang, Distributed algorithm-based CKF and its applications to target tracking, Control Decis. 30 (2) (2015) 296–302.
[15] Y. Liu, Y. He, H. Wang, Squared-root cubature information consensus filter for non-linear decentralised state estimation in sensor networks, IET Radar Sonar Navig. 8 (8) (2014) 931–938.
[16] Y. Chen, Q. Zhao, A novel square-root cubature information weighted consensus filter algorithm for multi-target tracking in distributed camera networks, Sensors 15 (5) (2015) 10526–10546.
[17] Y. Chen, Q. Zhao, Z. An, P. Lv, L. Zhao, Distributed multi-target tracking based on the K-MTSCF algorithm in camera networks, IEEE Sensors J. 16 (13) (2016) 5481–5490.
[18] Q. Liu, Z. Wang, X. He, D. Zhou, Event-based distributed filtering with stochastic measurement fading, IEEE Trans. Ind. Inform. 11 (6) (2015) 1643–1652.
[19] Q. Liu, Z. Wang, X. He, D. Zhou, Event-based recursive distributed filtering over wireless sensor networks, IEEE Trans. Autom. Control 60 (9) (2015) 2470–2475.
[20] D. Ding, Z. Wang, B. Shen, H. Dong, Event-triggered distributed H∞ state estimation with packet dropouts through sensor networks, IET Control Theory Appl. 9 (13) (2015) 1948–1955.
[21] F. Han, Y. Song, S. Zhang, W. Li, Local condition-based finite-horizon distributed H∞-consensus filtering for random parameter system with event-triggering protocols, Neurocomputing 219 (2017) 221–231.
[22] I. Arasaratnam, S. Haykin, Cubature Kalman filters, IEEE Trans. Autom. Control 54 (6) (2009) 1254–1269.
[23] B. Xu, P. Zhang, H. Wen, X. Wu, Stochastic stability and performance analysis of cubature Kalman filter, Neurocomputing 186 (2016) 218–227.
[24] K.P.B. Chandra, D.W. Gu, I. Postlethwaite, Cubature information filter and its applications, in: Proceedings of the American Control Conference, San Francisco, USA, 2011, pp. 3609–3614.
[25] M. Havlicek, K.J. Friston, J. Jan, M. Brazdil, V.D. Calhoun, Dynamic modeling of neuronal responses in fMRI using cubature Kalman filtering, NeuroImage 56 (4) (2011) 2109–2128.
[26] Q. Li, Y. Song, Z. Hou, Neural network based FastSLAM for autonomous robots in unknown environments, Neurocomputing 165 (2015) 99–110.
[27] Y. Zhao, Performance evaluation of cubature Kalman filter in a GPS/IMU tightly-coupled navigation system, Signal Process. 119 (2016) 67–79.
[28] G. Battistelli, L. Chisci, Kullback–Leibler average, consensus on probability densities, and distributed state estimation with guaranteed stability, Automatica 50 (3) (2014) 707–718.
[29] S.J. Julier, J.K. Uhlmann, A non-divergent estimation algorithm in the presence of unknown correlations, in: Proceedings of the American Control Conference, Albuquerque, USA, 1997, pp. 2369–2373.
[30] S. Wang, W. Ren, On the consistency and confidence of distributed dynamic state estimation in wireless sensor networks, in: Proceedings of the Conference on Decision and Control, Osaka, Japan, 2015, pp. 3069–3074.
[31] G. Battistelli, L. Chisci, Stability of consensus extended Kalman filter for distributed state estimation, Automatica 68 (2016) 169–178.
[32] G.C. Calafiore, F. Abrate, Distributed linear estimation over sensor networks, Int. J. Control 82 (5) (2009) 868–882.
[33] T. Lefebvre, H. Bruyninckx, J.D. Schuller, Comment on "A new method for the nonlinear transformation of means and covariances in filters and estimators" [with authors' reply], IEEE Trans. Autom. Control 47 (8) (2002) 1406–1409.
[34] D.J. Lee, Nonlinear estimation and multiple sensor fusion using unscented information filtering, IEEE Signal Process. Lett. 15 (2008) 861–864.
[35] T.R. Wanasinghe, G.K. Mann, R.G. Gosine, Stability analysis of the discrete-time cubature Kalman filter, in: Proceedings of the Conference on Decision and Control, Osaka, Japan, 2015, pp. 5031–5036.
[36] K. Reif, S. Günther, E. Yaz, R. Unbehauen, Stochastic stability of the discrete-time extended Kalman filter, IEEE Trans. Autom. Control 44 (4) (1999) 714–728.
[37] K. Xiong, H. Zhang, C. Chan, Performance evaluation of UKF-based nonlinear filtering, Automatica 42 (2) (2006) 261–270.
[38] S. Julier, J. Uhlmann, H.F. Durrant-Whyte, A new method for the nonlinear transformation of means and covariances in filters and estimators, IEEE Trans. Autom. Control 45 (3) (2000) 477–482.

Qian Chen received the B.S. degree in automation from Hohai University, Nanjing, China, in 2014. She is currently pursuing the M.S. degree in control theory and control engineering at Hohai University, Nanjing, China. Her current research interests include distributed state estimation and the cubature Kalman filter.

Wancheng Wang was born in Shandong, China, on January 21, 1976. He received the Ph.D. degree from Southeast University, Nanjing, China, in 2007. He is now an Associate Professor with the College of Energy and Electrical Engineering, Hohai University, Nanjing, China. His work has been in measurement and signal processing, nonlinear control, state estimation, and power system control. Email: [email protected].

Chao Yin is with the College of Electronic and Information Engineering, Southwest University, Chongqing, 400715, China, and also with the Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Chongqing, 400715, China (e-mail: [email protected]. edu.cn).

Xiaoxiang Jin received the Master's degree from Southeast University, Jiangsu, China, in 2013. She is currently a teaching assistant of Electrical Engineering at Southeast University Chengxian College. Her research interests include control systems of electric vehicles, permanent-magnet motor design, and filtering.

Jun Zhou received the B.S. degree in Radio and Electronics from Sichuan University, China, the M.S. degree in Information and Control from Lanzhou University, China, and the Ph.D. degree in Electrical Engineering from Kyoto University, Japan. Currently, he is a professor at the Department of Automatic Control Engineering, Hohai University, China. His research topics include nonlinear/hybrid systems and control, robustness performance synthesis, multi-agent and sensor networks, and stabilization of power systems; in particular, theoretical contributions have been established in periodic systems and control via harmonic analysis.
