Nonlinear Dyn (2013) 73:551–563 DOI 10.1007/s11071-013-0808-9
ORIGINAL PAPER
Accelerated consensus to accurate average in multi-agent networks via state prediction Huiwei Wang · Xiaofeng Liao · Tingwen Huang
Received: 23 September 2012 / Accepted: 30 January 2013 / Published online: 12 February 2013 © Springer Science+Business Media Dordrecht 2013
Abstract This paper considers the double-integrator consensus speeding-up problem for multi-agent networks (MANs) that asymptotically achieve a distributed weighted average. First, a basic theoretical analysis is carried out and several necessary and sufficient conditions are derived to ensure convergence to the weighted average for both directed and undirected networks; the resulting convergence, however, is generally slow. To improve the rate of convergence, an approach is proposed that accelerates consensus by using a linear predictor to predict the future node state on the basis of the current and outdated node states. The local iterative algorithm then becomes a convex weighted sum of the original consensus update iteration and the prediction, which allows for a significant increase in the rate of convergence towards weighted average consensus because redundant states are bypassed. Additionally, the feasible region of the mixing parameter and the optimal mixing parameter are determined for undirected networks. It is worth pointing out that the accelerated framework fully exploits both the current and outdated states stored in memory to improve the rate of convergence without increasing the computational and memory burden. Finally, a simulation example is provided to demonstrate the effectiveness of our theoretical results.

Keywords Accelerated convergence · Average consensus · Multi-agent networks (MANs) · State prediction

H. Wang (✉) · X. Liao
State Key Laboratory of Power Transmission Equipment & System Security and New Technology, College of Computer Science, Chongqing University, Chongqing 400044, P.R. China
e-mail: [email protected]

T. Huang
Texas A&M University at Qatar, P.O. Box 23874, Doha, Qatar
1 Introduction

In the past few years, many important results have appeared on collective behaviors in multi-agent dynamical networks, including, but not limited to, consensus [1], synchronization [2], and swarming or flocking [3]. Consensus has fascinated many scholars over the last decade due to its wide applications in distributed coordination and control, distributed averaging in sensor networks, unmanned air vehicle (UAV) formations, and mobile robotic teams; the reader may refer to the recent survey [1], the monograph [4], and the references therein. The investigation of consensus behavior has mainly focused on analyzing how multi-agent networks agree on a common value through the exchange of local messages, whether with single- or double-integrator dynamics. Roughly speaking, the single-integrator consensus problem of MANs can be treated as a special
case of the synchronization problem of complex dynamical networks [5], which has been widely studied in the past few decades [1, 2]. However, many single-integrator consensus criteria fail to handle the consensus problem for double-integrator MANs [6]. Usually, some oscillators are governed by double-integrator dynamics with both position and velocity terms, such as simple pendulums and harmonic oscillators [7]. To date, many results have appeared on consensus behaviors in double-integrator MANs [8–15] and higher-order MANs [16].

Literature review  The convergence rate, or speed, is an important performance index in the analysis of the consensus problem, and it is attracting more and more researchers to this field. In [17], Xiao et al. have been the main contributors of methods that strive to accelerate consensus algorithms through optimization of the weight matrix; they formulated the optimization as a semidefinite program and described a decentralized algorithm using distributed orthogonal iterations. Further theoretical extensions of this work were presented in [18] with a view toward the treatment of second-order neighbors. Unlike [17, 18], which directly optimize the weight matrix, a modified consensus protocol based on a prediction mechanism was proposed in [19, 20], which constructs a convex weighted sum of local neighbor averaging and a linear predictor by picking up a short node memory. Building on the earlier work and combining it with the generalized convex weighted sum, an augmented update framework was studied in [21–23] to improve the convergence rate of the original single-integrator linear iterative algorithm. Recently, motivated by the prediction intelligence of the individual, a predictive pinning control framework was addressed in [24] for both single- and double-integrator MANs to accelerate convergence. Furthermore, a linear quadratic regulator (LQR)-based method was proposed in [25], and outdated information is reused in [26, 27] to increase the convergence rate towards consensus. In summary, most of the previous works focused on improving the convergence speed by optimizing the iterative weight matrix, designing a prediction framework, or reusing outdated state estimates. It should be noted that there are, more or less, some defects in most of the former works. First, optimization of the weight matrix fails in the case where the parameters are fixed; second, the augmented update framework aggravates the computational and memory burden; third, the prediction of the future state fails under time-varying interaction. Therefore, it is quite challenging to design an effective collective predictive mechanism to improve the consensus performance of MANs.

Statement of contributions  Based on the above discussion, we are motivated to design an effective collective predictive framework that fully exploits both the current and outdated states stored in memory to improve the rate of convergence without increasing the computational and memory burden. Moreover, to the best of the authors' knowledge, few authors have considered the double-integrator consensus speeding-up problem except for [24]. In this paper, we propose an approach to accelerate double-integrator consensus for MANs asymptotically achieving a distributed weighted average, which overcomes the drawbacks listed above. First, several necessary and sufficient conditions are derived to ensure convergence to the weighted average for both directed and undirected networks. By utilizing a linear predictor to predict the future node state on the basis of both the current and outdated node states, the local iterative algorithm then becomes a convex weighted sum of the original consensus update iteration and the prediction, which allows for a significant increase in the rate of convergence towards weighted average consensus because redundant states are bypassed. Additionally, the feasible bound of the mixing parameter that guarantees convergence of the consensus protocol and the optimal mixing parameter are determined for undirected networks. Finally, a simulation result is reported to evaluate the consensus speeding-up behavior.

The remainder of this paper is organized as follows. Section 2 introduces some basic concepts of algebraic graph theory and formulates the problem to be solved in this paper.
Section 3 establishes several necessary and sufficient conditions on the consensus problem for both directed and undirected networks. A novel accelerated framework is proposed in Sect. 4, where the feasible region of the parameters for the consensus speeding-up problem is derived and the mixing parameter is also optimized. Section 5 gives a simulation example to demonstrate the performance of the proposed accelerated consensus algorithms. Section 6 concludes the paper.
Notation The notation is standard. Throughout the paper, Z+ and C represent, respectively, the set of positive integers and the set of complex numbers. Denote z = Re(z) + ι Im(z) ∈ C, where Re(z), Im(z), and ι are, respectively, the real part, the imaginary part, and the imaginary unit. 1 and 0 are, respectively, the vectors of all ones and all zeros. ρ(A) denotes the spectral radius of the matrix A.
2 Problem formulation

In this paper, we model a MAN as a weighted directed graph G = (V, E, A) of order N, consisting of a set of nodes V = {1, 2, . . . , N}, a set of directed edges E ⊆ V × V, and a weighted adjacency matrix A = (a_ij)_{N×N}. A directed edge in the graph G is denoted by e_ij = (i, j). If there is an edge from node j to node i, then node j can reach node i and a_ij > 0 is the weight associated with the edge e_ij; otherwise, a_ij = 0. As usual, we assume there is no self-loop in G. A graph G is strongly connected if between any pair of distinct nodes i and j in G there exists a directed path from i to j, i, j = 1, 2, . . . , N. The Laplacian matrix L = (L_ij)_{N×N} of the graph G is defined by L_ij = −a_ij for i ≠ j, i, j ∈ {1, . . . , N}, and L_ii = −Σ_{j=1, j≠i}^N L_ij, the sum of the weights of the edges ending at node i. It is easy to check that Σ_{j=1}^N L_ij = 0 for all i = 1, 2, . . . , N, that is, L1_N = 0_N. Also, μ = [μ_1, μ_2, . . . , μ_N]^T is the unique nonnegative left eigenvector of L associated with the eigenvalue 0 satisfying μ^T 1_N = 1, that is, μ^T L = 0_N^T. In particular, for a connected graph, the rank of L is N − 1. For an undirected network, the presence of an edge (i, j) indicates that nodes i and j can communicate with each other reliably. Under this condition, we also have 1_N^T L = 0_N^T and μ = 1_N/N. Additionally, in this case L is a symmetric positive semidefinite matrix (implying its eigenvalues are nonnegative), and its eigenvalues can be arranged in increasing order as 0 = λ_1(L) < λ_2(L) ≤ · · · ≤ λ_N(L). In the sequel, for the sake of simplicity, we simply write λ_i(L) as λ_i.

In the following, we will present the consensus speeding-up problems concerning networks with a distributed discrete-time double-integrator consensus protocol [10]:

x_i(k + 1) = x_i(k) + ε v_i(k),
v_i(k + 1) = v_i(k) + ε u_i(k),   (1)

where x_i ∈ R^n is the information state, v_i ∈ R^n is the information state derivative, and u_i ∈ R^n is the information control input associated with the ith agent; ε > 0 is analogous to a sampling interval or a step-size [10]. We will employ the following distributed consensus algorithm:
u_i(k) = −Σ_{j=1}^N a_ij [(x_i(k) − x_j(k)) + γ(v_i(k) − v_j(k))],

where γ > 0 is analogous to a coupling strength parameter. To conveniently utilize the current and outdated state information, we need to transform the network (1) into a simpler form that involves only one evolving state. By a simple calculation, the network (1) is transformed into the following distributed discrete-time double-integrator protocol [24]:

X(k) = W X(k − 1),   (2)

where X(k) := [x^T(k), x^T(k − 1)]^T and

W := [2I_N − εγL, −I_N − ε²L + εγL; I_N, O_N].   (3)
Let λ_i be the ith eigenvalue of L. Then some relationships between the eigenvalues of W and those of L are reviewed [24] as follows:

det(ηI_2N − W) = det((η − 1)²I_N + (εγη + ε² − εγ)L)
= Π_{i=1}^N [(η − 1)² + (εγη + ε² − εγ)λ_i] = 0.

Therefore, it follows that

η_i± = (2 − εγλ_i ± ε√(γ²λ_i² − 4λ_i)) / 2.   (4)
From (4), it is easy to see that W has an eigenvalue 1 of algebraic multiplicity 2m if and only if L has a 0 eigenvalue of algebraic multiplicity m. Since L has one simple eigenvalue at 0, W has the eigenvalue 1 with geometric multiplicity 1 and algebraic multiplicity 2. Note that Yu et al. considered the distributed continuous-time consensus problem for a second-order protocol [8] and a high-order protocol [16], and some necessary and sufficient conditions were obtained. However, those conditions cannot be directly applied to the state-update iteration (1) due to discrete sampling, which inspires us to investigate some new conditions for the sample-based state-update iteration (1).
3 Convergence analysis

In this section, we investigate the convergence properties of the double-integrator consensus algorithm in both directed and undirected networks. We first state a lemma that provides the platform for the proofs of the subsequent lemmas and theorems.

Lemma 1 (Lemma 8.5 in [4]) The polynomial z² + az + b = 0, where a, b ∈ C, has all roots within the unit circle if and only if all roots of (1 + a + b)s² + 2(1 − b)s + (1 − a + b) = 0 are in the open left half plane.

3.1 Directed interaction

The main result regarding the convergence properties of the double-integrator consensus algorithm in directed networks is stated in this subsection.
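Lemma 1 is the bilinear correspondence z = (s + 1)/(s − 1) between the unit disc and the open left half plane; a quick randomized sanity check (an illustrative sketch, not part of the paper's development) confirms it:

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(200):
    # draw two roots strictly inside the unit circle and form z^2 + a z + b
    r = rng.uniform(0, 0.99, 2) * np.exp(1j*rng.uniform(0, 2*np.pi, 2))
    a, b = -(r[0] + r[1]), r[0]*r[1]
    # roots of (1 + a + b)s^2 + 2(1 - b)s + (1 - a + b) must satisfy Re(s) < 0
    s = np.roots([1 + a + b, 2*(1 - b), 1 - a + b])
    assert all(x.real < 0 for x in s)
print("ok")
```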
Lemma 2 For the double-integrator network (2), weighted average consensus lim_{k→∞} x(k) = 1_N μ^T x(0) can be reached if and only if W in (2) has exactly two eigenvalues at 1 (or the network has a rooted directed spanning tree over time) and all the other eigenvalues inside the unit circle.

Proof (Sufficiency) Note from (4) that if W has exactly two eigenvalues equal to 1 (i.e., η_1+ = η_1− = 1), then L has exactly one eigenvalue equal to 0. Let [p^T, q^T]^T, where p, q ∈ R^N, be a right eigenvector of W associated with the eigenvalue 1. It follows that

[2I_N − εγL, −I_N − ε²L + εγL; I_N, O_N] [p; q] = [p; q].   (5)

Based on the analysis of the previous section, we know that the eigenvalue 1 has geometric multiplicity equal to one. Without loss of generality, we can choose [1_N^T, 1_N^T]^T and [0_N^T, −1_N^T]^T as, respectively, a right eigenvector and a generalized right eigenvector associated with the eigenvalue 1. Similarly, it can be shown that [μ^T, −μ^T]^T and [0_N^T, μ^T]^T are, respectively, a left eigenvector and a generalized left eigenvector associated with the eigenvalue 1. As a result, W can be written in the Jordan canonical form W = PΛP^{−1}, where the columns of P, denoted by p_r, r = 1, . . . , 2N, can be chosen to be the right eigenvectors or generalized right eigenvectors of W, the rows of P^{−1}, denoted by q_r^T, r = 1, . . . , 2N, can be chosen to be the left eigenvectors or generalized left eigenvectors of W such that q_r^T p_r = 1 and q_r^T p_l = 0 for r ≠ l, and Λ is the Jordan block diagonal matrix with the eigenvalues of W as the diagonal entries. Note that η_1+ = η_1− = 1 and |η_i±| < 1, i = 2, . . . , N. Also note that we can choose p_1 = [1_N^T, 1_N^T]^T, p_2 = [0_N^T, −1_N^T]^T, q_1 = [μ^T, −μ^T]^T and q_2 = [0_N^T, μ^T]^T. Because X(k) = W^k X(0) = PΛ^k P^{−1} X(0) and

lim_{k→∞} ( PΛ^k P^{−1} − [1_N, 0_N; 1_N, −1_N][1, k; 0, 1][μ^T, −μ^T; 0_N^T, μ^T] )
= lim_{k→∞} ( PΛ^k P^{−1} − [1_N μ^T, (k − 1)1_N μ^T; 1_N μ^T, (k − 2)1_N μ^T] ) = 0,   (6)

it follows that |x(k) − 1_N μ^T x(0)| → 0 as k → ∞.

(Necessity) Note that W has at least two eigenvalues equal to 1. If |x(k) − 1_N μ^T x(0)| → 0 as k → ∞, it follows that W^k has rank two as k → ∞, which in turn implies that Λ^k has rank two as k → ∞. It follows that
W has exactly two eigenvalues equal to 1 and all the other eigenvalues inside the unit circle. This completes the proof.

Lemma 3 Suppose that Re(λ_i) > 0 for i = 2, . . . , N. The roots of (4) are within the unit circle if and only if γ > ε and Υ_i > 0, where

Υ_i = (1 − γ/ε)² [1 − 2γ/ε + 4Re(λ_i)/(ε²|λ_i|²)] − 4Im(λ_i)²/(ε⁴|λ_i|⁴).

Proof As shown in Lemma 1, the roots of (4) are within the unit circle if and only if the roots of the following equation are in the open left half plane:

s² + 2(γ/ε − 1)s + 1 − 2γ/ε + 4/(ε²λ_i) = 0.   (7)

Letting s_1, s_2 ∈ C denote the roots of (7), it follows that

s_1 + s_2 = 2 − 2γ/ε,   (8)
s_1 s_2 = 1 − 2γ/ε + 4/(ε²λ_i).   (9)

It follows from (8) that Im(s_1) + Im(s_2) = 0. Let s_1 = a_1 + bι and s_2 = a_2 − bι. Note that s_1 and s_2 have negative real parts if and only if a_1 + a_2 < 0 and a_1 a_2 > 0. Also note from (8) that a_1 + a_2 < 0 is equivalent to γ > ε. We next derive conditions on the parameters γ and ε such that a_1 a_2 > 0 holds. Substituting the definitions of s_1 and s_2 into (9) gives a_1 a_2 + b² + (a_2 − a_1)bι = 1 − 2γ/ε + 4/(ε²λ_i), which implies

a_1 a_2 + b² = 1 − 2γ/ε + 4Re(λ_i)/(ε²|λ_i|²),   (10)
(a_2 − a_1)b = −4Im(λ_i)/(ε²|λ_i|²).   (11)

It follows from (11) that b = −4Im(λ_i)/(ε²|λ_i|²(a_2 − a_1)). Consider also the fact that (a_2 − a_1)² = (a_2 + a_1)² − 4a_1 a_2 = 4(1 − γ/ε)² − 4a_1 a_2. After some simple computation, (10) can be rewritten as

(a_1 a_2)² − (Q_1 + Q_2)a_1 a_2 + Q_1 Q_2 − 4Im(λ_i)²/(ε⁴|λ_i|⁴) = 0,   (12)

where Q_1 := (1 − γ/ε)² and Q_2 := 1 − 2γ/ε + 4Re(λ_i)/(ε²|λ_i|²). It follows that the discriminant of (12) is equivalent to

(Q_1 + Q_2)² − 4Q_1 Q_2 + 16Im(λ_i)²/(ε⁴|λ_i|⁴) = (Q_1 − Q_2)² + 16Im(λ_i)²/(ε⁴|λ_i|⁴) ≥ 0,

which implies that (12) has two real roots. Therefore, the necessary and sufficient conditions for a_1 a_2 > 0 are Q_1 + Q_2 > 0 and Q_1 Q_2 − 4Im(λ_i)²/(ε⁴|λ_i|⁴) > 0. Because 4Im(λ_i)²/(ε⁴|λ_i|⁴) ≥ 0 and Q_1 > 0, if Q_1 Q_2 − 4Im(λ_i)²/(ε⁴|λ_i|⁴) > 0, then Q_2 > 0, which implies Q_1 + Q_2 > 0 as well. Combining the previous arguments proves the lemma.

Theorem 1 The weighted average consensus in the directed double-integrator network (2) can be achieved if and only if the underlying graph of the network contains a directed spanning tree and the parameters are chosen in the following set:

Ψ_c = { (γ, ε) | γ > ε and (1 − γ/ε)² [1 − 2γ/ε + 4Re(λ_i)/(ε²|λ_i|²)] − 4Im(λ_i)²/(ε⁴|λ_i|⁴) > 0, i = 2, . . . , N }.

In particular, x_i(k) → μ^T x(0) as k → ∞, where μ is the unique nonnegative left eigenvector of L associated with the eigenvalue 0 satisfying μ^T 1_N = 1.

Proof The proof follows directly from Lemmas 2 and 3.

3.2 Undirected interaction

The main result regarding the convergence properties of the double-integrator consensus algorithm in undirected networks is stated in this subsection.
Lemma 4 Suppose that λ_i > 0 for i = 2, . . . , N. The roots of (4) are within the unit circle if and only if

Ψ_c = { (γ, ε) | ε < γ < ε/2 + 2/(ελ_N) and 0 < ε < 2/√λ_N }.   (13)

Proof By Lemma 1, it suffices to require the roots s_1, s_2 of (7) to satisfy

s_1 + s_2 = 2 − 2γ/ε < 0,   (15)
s_1 s_2 = 1 − 2γ/ε + 4/(ε²λ_i) > 0.   (16)

Noting that λ_i > 0 for i = 2, . . . , N, it is easy to obtain γ > ε and γ < ε/2 + 2/(ελ_i). Then, such a γ exists if ε < ε/2 + 2/(ελ_i), that is, ε < 2/√λ_i. Since the underlying graph of the considered multi-agent network is undirected, one can obtain the convergence region given in (13). This completes the proof.

Theorem 2 The average consensus in the undirected double-integrator network (2) can be achieved if and only if the underlying graph of the network is connected and the parameters are chosen in the following set:

Ψ_c = { (γ, ε) | ε < γ < ε/2 + 2/(ελ_N) and 0 < ε < 2/√λ_N }.   (17)

4 Accelerated consensus framework based on state prediction

To accelerate convergence, each node mixes the original update with a linear prediction of its own state built from the samples stored in memory, so that the state update becomes a convex combination of the two. Here, σ ∈ [0, 1] is the mixing parameter of the convex sum and Θ = [θ_1, θ_2, . . . , θ_M] is the vector of predictor coefficients. Denote X(k − 1) := [x^T(k − 1), x^T(k − 2), . . . , x^T(k − M)]^T, and suppose that M, τ ∈ Z+ with M > 1.

We are now in a position to present the rationale behind the design of the weights Θ. Denoting x_i(k, M) := [x_i(k − 1), x_i(k − 2), . . . , x_i(k − M)]^T, we would like to design the best linear least-squares approximation to match the available data stored in the memory; then, using the approximate model, we would like to extrapolate the current state τ time steps forward. The easiest way to do this is the approximate linear model x̂_i(k) = ak + b, with a and b being the parameters of the model, which can be rewritten in matrix form for the set of available data:

x̂_i(k, M) = A_(k,M) ψ,
(23)
where

A_(k,M) = [k − 1, 1; k − 2, 1; . . . ; k − M, 1]   and   ψ = [a; b].   (24)
By recalling the standard least-squares technique, we define the cost function as

J(ψ) = (x_i(k, M) − x̂_i(k, M))^T (x_i(k, M) − x̂_i(k, M))
     = (x_i(k, M) − A_(k,M)ψ)^T (x_i(k, M) − A_(k,M)ψ)   (25)

and find the optimal approximate linear model ψ̂ as the global minimizer of the cost function:

ψ̂ = arg min_ψ J(ψ).   (26)
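The minimizer (26) is the ordinary least-squares solution, so it can be cross-checked against a library routine; the data series below is an illustrative assumption:

```python
import numpy as np

k, M = 12, 6
t = k - np.arange(1, M + 1)                  # sample times k-1, ..., k-M
A = np.column_stack([t, np.ones(M)])         # the matrix A_(k,M)
x = 0.5*t + 1.0 + 0.01*np.sin(t)             # stored samples (illustrative)

psi_hat = np.linalg.inv(A.T @ A) @ A.T @ x   # normal-equations minimizer of J
psi_ref = np.linalg.lstsq(A, x, rcond=None)[0]
print(np.allclose(psi_hat, psi_ref))         # True
```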
Taking into account the convexity of the cost function and equating the derivative of J(ψ) with respect to ψ to zero, we get the solution

ψ̂ = (A_(k,M)^T A_(k,M))^{−1} A_(k,M)^T x_i(k, M).   (27)

Now, given the linear approximation of the model generating the current data, we extrapolate the current state τ steps forward using A_(k,τ) = [k − 1 + τ, 1]:

x_i^P(k + τ) = A_(k,τ) (A_(k,M)^T A_(k,M))^{−1} A_(k,M)^T x_i(k, M).   (28)

From (24), we obtain

A_(k,M)^T A_(k,M) = [Ω_1(k), Ω_2(k); Ω_2(k), M],   (29)

where Ω_1(k) = Σ_{j=1}^M (k − j)² = Mk² − M(M + 1)k + M(M + 1)(2M + 1)/6 and Ω_2(k) = Σ_{j=1}^M (k − j) = Mk − M(M + 1)/2. Since MΩ_1(k) − [Ω_2(k)]² = M²(M² − 1)/12 ≠ 0 due to M > 1, we have

(A_(k,M)^T A_(k,M))^{−1} = 12/(M²(M² − 1)) [M, −Ω_2(k); −Ω_2(k), Ω_1(k)].   (30)

This results in the following expression for the predictor weights:

Θ = A_(k,τ) (A_(k,M)^T A_(k,M))^{−1} A_(k,M)^T
  = 12[M(k − 1 + τ) − Ω_2(k)]/(M²(M² − 1)) · [(k − 1), (k − 2), . . . , (k − M)]
  + 12[Ω_1(k) − Ω_2(k)(k − 1 + τ)]/(M²(M² − 1)) · 1_M^T.   (31)
Proposition 1 Suppose that M, τ ∈ Z+ and M > 1. Then Θ1_M = 1.

Proof From (31), it is easy to see that

Θ1_M = 12/(M²(M² − 1)) { Ω_2(k)[M(k − 1 + τ) − Ω_2(k)] + M[Ω_1(k) − Ω_2(k)(k − 1 + τ)] }
     = 12[MΩ_1(k) − Ω_2(k)²]/(M²(M² − 1)) = 1.

This completes the proof.
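The closed form (31) and Proposition 1 can be verified numerically. In the sketch below, `predictor_weights` is our own helper name (an assumption, not the paper's notation); it confirms that M = 2, τ = 1 gives Θ = [2, −1], that the weights sum to one, and that for exactly linear data the weights reproduce the extrapolated line value:

```python
import numpy as np

def predictor_weights(k, M, tau):
    """Predictor coefficients Θ from the closed form (31)."""
    Omega1 = M*k**2 - M*(M + 1)*k + M*(M + 1)*(2*M + 1)/6   # sum_j (k-j)^2
    Omega2 = M*k - M*(M + 1)/2                               # sum_j (k-j)
    c = 12.0/(M**2*(M**2 - 1))
    j = np.arange(1, M + 1)
    return c*(M*(k - 1 + tau) - Omega2)*(k - j) + c*(Omega1 - Omega2*(k - 1 + tau))

print(predictor_weights(10, 2, 1))        # [ 2. -1.]
print(predictor_weights(7, 5, 3).sum())   # 1.0 — Proposition 1

# exactly linear data: the prediction equals the line value tau steps
# past the newest stored sample (time k - 1 + tau)
t = 9 - np.arange(1, 5)                   # k = 9, M = 4
print(predictor_weights(9, 4, 2) @ (0.7*t + 0.3))   # 0.7*(9 - 1 + 2) + 0.3 = 7.3
```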
4.2 Consensus in MANs based on state prediction

The aim of this subsection is to analyze the convergence properties of the modified state-update iteration. Note that the compute and memory resources available at the nodes are often scarce, and it is desirable that the algorithms be computationally inexpensive and save as much memory as possible. We are therefore motivated to use a linear predictor containing only the current state and the previous sampled state, thereby retaining the linear nature of the consensus algorithm and efficiently exploiting the state information already stored by the original update iteration. The predictor under consideration is thus the weighted sum of the current state and the previous sampled state, i.e., M = 2 and τ = 1. It follows from the preceding subsection that Θ = [2, −1], and the update equation is then simply

X(k) = W̃ X(k − 1),   W̃ := W + W_+,   W_+ := σ [εγL, (ε² − εγ)L; O_N, O_N].   (33)
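One can check that (33) really encodes the convex combination of the original update and the predictor x_i^P(k) = 2x_i(k − 1) − x_i(k − 2); the complete-graph Laplacian and the parameter values below are illustrative assumptions:

```python
import numpy as np

L = np.array([[2., -1., -1.], [-1., 2., -1.], [-1., -1., 2.]])  # K3, illustrative
N, eps, gamma, sigma = 3, 0.4, 1.5, 0.2
I, O = np.eye(N), np.zeros((N, N))
W  = np.block([[2*I - eps*gamma*L, -I - eps**2*L + eps*gamma*L], [I, O]])
Wp = sigma*np.block([[eps*gamma*L, (eps**2 - eps*gamma)*L], [O, O]])   # W_+

rng = np.random.default_rng(1)
x1, x2 = rng.random(N), rng.random(N)          # x(k-1) and x(k-2)
X = np.concatenate([x1, x2])

pred  = 2*x1 - x2                              # linear predictor, Θ = [2, −1]
mixed = np.concatenate([sigma*pred + (1 - sigma)*(W @ X)[:N], x1])
print(np.allclose((W + Wp) @ X, mixed))        # True
```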
Then some relationships between the eigenvalues of W̃ and those of L follow:

det(η̃I_2N − W̃) = det((η̃ − 1)²I_N + (1 − σ)(εγη̃ + ε² − εγ)L)
= Π_{i=1}^N [(η̃ − 1)² + (1 − σ)(εγη̃ + ε² − εγ)λ_i] = 0.

Thus, one has

η̃_i± = (2 − (1 − σ)εγλ_i ± ε√((1 − σ)²γ²λ_i² − 4(1 − σ)λ_i)) / 2.

Proceeding exactly as in Lemma 4 with λ_i replaced by (1 − σ)λ_i, the roots η̃_i± lie within the unit circle if and only if the parameters are chosen in the set

Ψ_c* = { (γ, ε) | ε < γ < ε/2 + 2/((1 − σ)ελ_N) and 0 < ε < 2/√((1 − σ)λ_N) }.

Lemma 5 Suppose that the parameters are chosen in the set

Ψ_c• = { (γ, ε) | 0 < γ < ε or γ > ε + 2/((2 − σ)ελ_2) }.   (37)

Then ρ(I_2N + W_+W^{−1}) ≤ 1; in particular, I_2N + W_+W^{−1} has an eigenvalue at 1 of algebraic multiplicity N + 1 and all the other eigenvalues inside the unit circle.
Proof From (4), we know that W is nonsingular if γ ≠ ε + 1/(ελ_i) for i = 2, . . . , N. Then, it is easy to see that

W^{−1} = [O_N, I_N; −(I_N + ε²L − εγL)^{−1}, W_[22]^{−1}],   (38)

where W_[22]^{−1} := (I_N + ε²L − εγL)^{−1}(2I_N − εγL). To calculate the eigenvalues of φI_2N − (I_2N + W_+W^{−1}), one obtains

det(φI_2N − (I_2N + W_+W^{−1})) = Π_{i=1}^N [ (φ − 1)² + σ(ε² − εγ)λ_i(φ − 1)/(1 + (ε² − εγ)λ_i) ] = 0.   (39)

From (39), it is clear that the eigenvalue φ = 1 has algebraic multiplicity N + 1 (including λ_1 = 0 with algebraic multiplicity 1), and the other eigenvalues are

φ = 1 − σ(ε² − εγ)λ_i/(1 + (ε² − εγ)λ_i) = 1 − σ + σ/(1 + (ε² − εγ)λ_i).   (40)

Next, we derive sufficient conditions to ensure |φ| < 1. On one hand, it is clear that φ < 1 if 1 + (ε² − εγ)λ_i > 1 or 1 + (ε² − εγ)λ_i < 0. Obviously, if γ < ε, then 1 + (ε² − εγ)λ_i > 1 for all λ_i > 0; if γ > ε + 1/(ελ_i), then 1 + (ε² − εγ)λ_i < 0. On the other hand, φ > −1 splits into two cases. Case 1: 1 + (ε² − εγ)λ_i > 0, which holds if γ < ε + 1/(ελ_i). In this case, φ > −1 is automatically satisfied because of 0 < σ < 1. Case 2: 1 + (ε² − εγ)λ_i < 0, which holds if γ > ε + 1/(ελ_i). In this case, φ > −1 is equivalent to 1 + (ε² − εγ)λ_i < −σ/(2 − σ), which holds if γ > ε + 2/((2 − σ)ελ_i). Given the above, it can therefore be concluded that ρ(I_2N + W_+W^{−1}) ≤ 1 if the parameters are chosen in the set Ψ_c•. This completes the proof.

For convenience of presentation, we denote

J_N := 11^T/N,   J* := [2J_N, −J_N; J_N, O_N],   J** := [J_N, −J_N; J_N, −J_N],   I*_2N := [O_N, O_N; O_N, I_N].
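Equations (39)–(40) can be verified numerically. Below, the complete graph K3 (λ_2 = λ_N = 3) and the parameter values are illustrative assumptions chosen inside Ψ_c•; the spectrum of I_2N + W_+W^{−1} then consists of 1 with multiplicity N + 1 = 4 together with φ from (40):

```python
import numpy as np

L = np.array([[2., -1., -1.], [-1., 2., -1.], [-1., -1., 2.]])  # K3, illustrative
N, eps, gamma, sigma = 3, 0.4, 1.5, 0.2   # gamma > eps + 2/((2-sigma)*eps*lam2)
I, O = np.eye(N), np.zeros((N, N))
W  = np.block([[2*I - eps*gamma*L, -I - eps**2*L + eps*gamma*L], [I, O]])
Wp = sigma*np.block([[eps*gamma*L, (eps**2 - eps*gamma)*L], [O, O]])

vals = np.sort(np.linalg.eigvals(np.eye(2*N) + Wp @ np.linalg.inv(W)).real)
c = eps**2 - eps*gamma
phi = 1 - sigma + sigma/(1 + c*3.0)       # formula (40) at lambda = 3
print(vals, phi)                          # vals ≈ [phi, phi, 1, 1, 1, 1]
```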
Theorem 4 Consider the double-integrator network (2) and the modified update iteration with state prediction given by (32). Then the spectrum of the iteration matrix is strictly compressed, i.e.,

ρ(W + W_+ − J*) < ρ(W − J*),   (41)

if the parameters are chosen in the set Ψ_c ∩ Ψ_c* ∩ Ψ_c•.

Proof Since WJ* = J** + J*, and taking into account the fact that J** = WJ**, we can obtain W^{−1}J* = J* − W^{−1}J** = J* − J**, which leads to W_+W^{−1}J* = W_+(J* − J**) = O_2N. Therefore, it is easy to obtain

(I_2N + W_+W^{−1})(W − J*) = W + W_+ − J* − W_+W^{−1}J* = W + W_+ − J*.   (42)

Now, let us introduce a matrix (in which a suitable matrix Γ can be solved for)

Ξ := [O_N, O_N; O_N, I_N] + [J_N, O_N; Γ, O_N]   (43)

such that Ξ(W − J*) = 0. Combining this equation with (42), one has W + W_+ − J* = (I_2N + W_+W^{−1} − Ξ)(W − J*). Therefore, if

ρ(I_2N + W_+W^{−1} − Ξ) < 1,   (44)

it then follows immediately that

ρ(W + W_+ − J*) = ρ((I_2N + W_+W^{−1} − Ξ)(W − J*))
≤ ρ(I_2N + W_+W^{−1} − Ξ) ρ(W − J*) < ρ(W − J*).   (45)

Recalling W_+ in (33), we know that the last N rows of W_+ are zeros, which implies that the last N rows of I_2N + W_+W^{−1} − I*_2N are also equal to 0_2N^T. Noting that W_+1_2N = 0_2N, 1_2N = W1_2N and 1_2N^T W_+ = 0_2N^T, one has (I_2N + W_+W^{−1})1_2N = 1_2N and 1_2N^T(I_2N + W_+W^{−1}) = 1_2N^T. Thus, I_2N + W_+W^{−1} − I*_2N has just a simple eigenvalue at 1 and all the other eigenvalues inside the unit circle. From Lemma 5, we know that I_2N + W_+W^{−1} has an eigenvalue at 1 of algebraic multiplicity N + 1 and all the other eigenvalues inside the unit circle if the parameters are chosen in the set Ψ_c ∩ Ψ_c* ∩ Ψ_c•. It then can be concluded that ρ(I_2N + W_+W^{−1} − Ξ) < 1. This completes the proof.

Remark 4 Through the proof of Theorem 4, a question naturally arises: why does the iteration matrix eventually converge to J* rather than diag{J_N, J_N}? The reason is as follows. The consensus protocol eventually converges to the average of the initial measurements, which implies that the eventual convergence matrix in (6) can be expressed as a constant matrix even though it appears time-varying. Observe that the right-bottom N × N block of W does not contribute to the convergence of the iterative update, which implies that the right-bottom N × N block of the eventual convergence matrix should be O_N, that is, k = 2 in (6); this answers the question. Note that J* has two eigenvalues at 1 and all the other eigenvalues at 0.

Remark 5 Note that the set Ψ_c ∩ Ψ_c* ∩ Ψ_c• in Theorem 4 is not always nonempty. It is nonempty when ε satisfies ε + 2/((2 − σ)ελ_2) < ε/2 + 2/(ελ_N), that is, ε² < 4[(2 − σ)λ_2 − λ_N]/((2 − σ)λ_2λ_N) and 0 < σ < 2 − λ_N/λ_2.
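The spectrum compression (41) can be observed numerically; the complete-graph Laplacian and the parameter values below are illustrative assumptions lying in Ψ_c ∩ Ψ_c* ∩ Ψ_c•:

```python
import numpy as np

L = np.array([[2., -1., -1.], [-1., 2., -1.], [-1., -1., 2.]])  # K3, illustrative
N, eps, gamma, sigma = 3, 0.4, 1.5, 0.2
I, O = np.eye(N), np.zeros((N, N))
W  = np.block([[2*I - eps*gamma*L, -I - eps**2*L + eps*gamma*L], [I, O]])
Wp = sigma*np.block([[eps*gamma*L, (eps**2 - eps*gamma)*L], [O, O]])
J  = np.ones((N, N))/N
Js = np.block([[2*J, -J], [J, O]])            # the matrix J*

rho = lambda A: max(abs(np.linalg.eigvals(A)))
print(rho(W - Js), rho(W + Wp - Js))          # second value strictly smaller
```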
Fig. 1 A sketch of the compression ratio
4.4 Optimization of the mixing parameter

The spectral radius of the weight matrix governs the asymptotic convergence rate, so optimizing the weight matrix corresponds to minimizing its spectral radius subject to connectivity constraints. Similarly, the spectral radius of the compression coefficient matrix governs the compression ratio, so optimizing the compression coefficients corresponds to minimizing the compression ratio. In this subsection, we therefore optimize the compression coefficients to obtain a good compression ratio instead of optimizing the weight matrix, whose calculation is intractable. Suppose that the set Ψ_c ∩ Ψ_c* ∩ Ψ_c• is nonempty, which implies that γ > ε + (2/(2 − σ))(1/(ελ_i)) and 2 − λ_N/λ_2 > 0. Defining the function φ_i(σ) = 1 − σ + σ/(1 + (ε² − εγ)λ_i), it is concluded that φ_i(0) = 1 > 0 and φ_i(1) = 1/(1 + (ε² − εγ)λ_i) < 0, due to the fact that γ > ε + (2/(2 − σ))(1/(ελ_i)) > ε + 1/(ελ_i) for each λ_i, i = 2, 3, . . . , N. Thus, it is clear that φ_i(σ) has a unique zero between 0 and 1, namely

σ_i = 1 + 1/((ε² − εγ)λ_i),   i = 2, 3, . . . , N.   (46)

It is easy to see that σ_i is a monotonically increasing function of λ_i, that is, 0 < σ_2 ≤ σ_3 ≤ · · · ≤ σ_N < 1, as sketched in Fig. 1. In order to decrease the compression ratio |φ_i(σ)|, we need to find an optimal σ* such that max_i |φ_i(σ*)| is minimized. Noting that φ_i(σ) is a monotonically decreasing function of σ and φ_i(1) < 0 for i = 2, . . . , N, there exists a constant σ* such that φ_2(σ*) = −φ_N(σ*), that is,

1 − σ* + σ*/(1 + (ε² − εγ)λ_2) = −(1 − σ* + σ*/(1 + (ε² − εγ)λ_N)).

By simple computation, one obtains

σ* = 2[1 + (ε² − εγ)λ_2][1 + (ε² − εγ)λ_N] / (2(ε² − εγ)²λ_2λ_N + (ε² − εγ)(λ_2 + λ_N)).

Denoting σ̄ := 2 − λ_N/λ_2, if σ̄ > σ*, then the optimal value of σ is σ*, and the minimum of max_i |φ_i(σ)| is |φ_N(σ*)|; if σ̄ < σ*, then the optimal value of σ is σ̄, and the minimum is |φ_N(σ̄)|.
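The balancing value σ* can be computed and checked against a grid search; the eigenvalues λ_2 = 3, λ_N = 4 and the parameters ε = 0.4, γ = 1.5 below are illustrative assumptions with 1 + (ε² − εγ)λ_i < 0:

```python
import numpy as np

eps, gamma = 0.4, 1.5
c = eps**2 - eps*gamma                  # (ε² − εγ) < 0
lam2, lamN = 3.0, 4.0                   # smallest/largest nonzero eigenvalues (assumed)

def phi(sig, lam):
    return 1 - sig + sig/(1 + c*lam)

sig_opt = (2*(1 + c*lam2)*(1 + c*lamN)
           / (2*c**2*lam2*lamN + c*(lam2 + lamN)))
print(sig_opt, phi(sig_opt, lam2), phi(sig_opt, lamN))  # φ2(σ*) = −φN(σ*)

# σ* minimizes the worst-case compression ratio max(|φ2|, |φN|)
grid = np.linspace(0.01, 0.99, 981)
ratios = np.maximum(abs(phi(grid, lam2)), abs(phi(grid, lamN)))
print(abs(grid[np.argmin(ratios)] - sig_opt) < 0.01)    # True
```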
5 Numerical simulation

In this section, an example is given to illustrate the potential benefits and effectiveness of the developed designs on the consensus speeding-up problems concerning double-integrator MANs.

The interaction topology shown in Fig. 2 is considered for the double-integrator network (2), with the weights on the connections. It is easy to see that the topology contains a directed spanning tree. By Theorem 2, we then know that consensus can be achieved if and only if ε < γ < ε/2 + 2/(ελ_N) for ε ∈ (0, 0.5557). Let ε = 0.3 and γ = 0.65; it is clear that these parameters satisfy the conditions of Theorem 2, and the evolution of the states of all the agents is shown in the upper panels of Fig. 3. Next, we analyze the consensus speeding-up problem under the same circumstances. The accelerated consensus can be achieved if and only if ε + 2/((2 − σ)ελ_2) < γ < ε/2 + 2/(ελ_N) for ε ∈ (0, 0.3320) and σ ∈ (0, 0.8328). It is clear that ε = 0.3 and γ = 0.65 still satisfy the conditions of Theorems 3 and 4. Let σ = 0.2; the consensus speeding up is then effective by Theorem 4. Hence, accelerated double-integrator consensus can be achieved in the multi-agent network (2). The evolution of the states of all the agents is shown in the lower panels of Fig. 3. Comparing the two panels of Fig. 3, it is easy to see that the consensus is clearly accelerated. Finally, we optimize the mixing parameter. It is easy to show that the best accelerated performance is achieved when σ = 0.1847.
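A comparison in the spirit of Fig. 3 can be scripted by counting iterations until the positions reach the initial average; the complete-graph Laplacian and parameter values below are illustrative assumptions (not the network of Fig. 2):

```python
import numpy as np

L = np.array([[2., -1., -1.], [-1., 2., -1.], [-1., -1., 2.]])  # K3, illustrative
N, eps, gamma, sigma = 3, 0.4, 1.5, 0.2
I, O = np.eye(N), np.zeros((N, N))
W  = np.block([[2*I - eps*gamma*L, -I - eps**2*L + eps*gamma*L], [I, O]])
Wt = W + sigma*np.block([[eps*gamma*L, (eps**2 - eps*gamma)*L], [O, O]])

x0 = np.array([0.1, 0.8, 0.6])
avg = x0.mean()

def steps_to_consensus(M, tol=1e-8):
    X, k = np.concatenate([x0, x0]), 0      # zero initial velocity
    while abs(X[:N] - avg).max() > tol:
        X, k = M @ X, k + 1
    return k

print(steps_to_consensus(W), steps_to_consensus(Wt))  # predicted iteration is faster
```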
Fig. 2 Interaction of a simple double-integrator network
Fig. 4 The evolution of χ (σ )
Fig. 3 Average consensus in double-integrator networks (2). The parameters of predicted acceleration: = 0.3, γ = 0.65, and σ = 0.1847; the initial states xi (0), i = 1, . . . , N are selected randomly in [0, 1]
Denoting

χ(σ) = 1 / max{|φ_2(σ)|, |φ_N(σ)|},

the reciprocal of the compression ratio, we can obtain the evolution of χ(σ) shown in Fig. 4. It is also demonstrated through Fig. 4 that the best acceleration performance is achieved when σ = 0.1847.
6 Conclusion

Motivated by the convergence rate of distributed consensus algorithms, we have investigated the consensus speeding-up problem concerning networks with double-integrator iterations. First, we considered iterations that allow nodes to maintain the current and outdated states and provided necessary and sufficient conditions for convergence through the exchange of local information. Each node employs a linear predictor to predict its future state on the basis of both the current and outdated states. The local, linear iterative algorithm then becomes a convex weighted sum of the original consensus update iteration and the prediction, which in some cases converges faster than the original algorithm. Additionally, the feasible interval of the mixing parameter that guarantees convergence of the consensus protocol and the optimal mixing parameter are determined for undirected networks. Finally, we presented a numerical example to evaluate and compare the convergence rates of the original and modified update iterations. It is worth pointing out that the prediction framework in this paper is able to extrapolate the future state more than one time step ahead, which might further improve the rate of convergence but imposes a larger memory burden; a trade-off is thus needed. In addition, the rate of convergence of the update iteration is not formally optimized in this paper, as this optimization problem is computationally intractable for complicated update algorithms. These conditions are not amenable to optimization in their present form, but we are looking into this in our future work.
Acknowledgements This work was supported in part by the National Natural Science Foundation of China under Grant 60973114, Grant 61170249, and Grant 61273021, in part by the Research Fund of Preferential Development Domain for the Doctoral Program of Ministry of Education of China under Grant 201101911130005, in part by the State Key Laboratory of Power Transmission Equipment & System Security and New Technology, Chongqing University, under Grant 2007DA10512709207, and in part by the Program for Changjiang Scholars. This work was also supported by NPRP Grant 4-1162-1-181 from the Qatar National Research Fund (a member of the Qatar Foundation).