A consensus algorithm for networks with process noise and quantization error

Francisco F. C. Rego, Ye Pu, Andrea Alessandretti, A. Pedro Aguiar, Colin N. Jones

May 29, 2018

Abstract

In this paper we address the problem of quantized consensus where process noise or external inputs corrupt the state of each agent at each iteration. We propose a quantized consensus algorithm with progressive quantization, where the quantization interval changes in length at each iteration by a pre-specified value. We derive conditions on the design parameters of the algorithm to guarantee ultimate boundedness of the deviation from the average of each agent. Moreover, we determine explicitly the bounds of the consensus error under the assumption that the process disturbances are ultimately bounded within known bounds. A numerical example of cooperative path-following of a network of single integrators illustrates the performance of the proposed algorithm.

1

Introduction

Motivated by advances in wireless sensor networks, there has been a flurry of activity on the topic of distributed consensus, where the objective is the computation of the average of the states of all the agents in a distributed fashion. For certain applications, such as localization of underwater vehicles [1], the available bandwidth can be limited and therefore we must consider theoretical frameworks that take into account that the number of bits each sensor transmits to its neighbours at a given instant should not exceed a fixed number. One possible way to tackle this issue is to consider the theory of quantized consensus, see for example [2–5]. In the next subsection we provide a more detailed review of the quantized consensus literature. The main novelty introduced in this paper is that we provide conditions for guaranteed ultimate boundedness of a distributed quantized consensus algorithm when we consider that the state at each agent is subject to an ultimately bounded external input or process disturbance. Therefore the method presented in this paper is suitable for cases of limited data rate between nodes and known disturbance bounds where one wants to achieve consensus in the presence of disturbances. Moreover, we determine explicitly the bounds of the consensus error.

1.1

Literature survey on quantized consensus

The work in [6] addresses a class of gossip algorithms to achieve consensus on integer numbers and proves probabilistic convergence to consensus of those algorithms. Since that paper deals with consensus of integers, it also covers the case where those integer numbers represent quantized real numbers. However, after convergence, a quantization error still exists. Moreover, since this is a probabilistic method it is not possible to determine, at each agent, how close to the average its own state is. In [7] discrete-time consensus algorithms are addressed, where at each update the state is corrupted by an additive noise of zero mean. That paper also addresses the question of computing the optimal weight matrix such that the least mean-square deviation is achieved in steady state. Following the same framework, the work in [8] provides a method of computing an optimal stepsize for distributed average consensus with noisy communications. This approach might be applied to quantized consensus if we consider that the quantization error can be approximated by a zero-mean additive noise. This is the principle behind the work in [9], where the authors present a proof of probabilistic convergence to consensus of a quantized consensus algorithm with probabilistic or dithered quantization. In that work it is proven that the values of all the agents converge to the same quantization level and that value is, in expectation, the average of the initial values. In [10] the authors propose a quantized consensus scheme with the same probabilistic guarantees as [9]; however, in this case the authors consider a vanishing communication rate. This is achieved through the design of an optimal predictive coding which uses information from the previously received messages. The authors of [10] also address the design of an optimal Wyner-Ziv decoder, which uses each agent's own state sequence at the decoder.

In [11] the authors prove that, using the dithered quantization scheme proposed by [9] and performing a temporal average of the values at each agent, the sequence of averaged values converges to the average of the initial values. This is stronger than proving that the expectation of the limit value is equal to the average of the initial values.

The work in [12] provides a quantized consensus algorithm where the average of all the agents' states is preserved. The worst-case and probabilistic performance of the proposed quantized consensus algorithm is also addressed. The quantization interval is considered to be constant. In [13] the authors propose a modification of the classic consensus algorithm in order to achieve consensus if the variance of the quantization error converges to zero. They propose an adaptive quantization scheme that drives the quantization interval to zero while satisfying the requirements for convergence. Another work by the same authors [14] deals with distributed estimation of Gauss-Markov random fields, i.e. the case where each node makes a noisy scalar measurement and the values being measured are correlated with each other. That work uses one-bit quantized data with adaptive quantization, using the same principles as [13]. The work in [15] deals with the quantized consensus algorithm with dithered quantization, and it is proven that when the quantizer range is unbounded, consensus is achieved asymptotically to some value whose expected value is the average of the initial values. Moreover, it is shown that the stepsize can be made arbitrarily small for a smaller variance. The paper also deals with the case of a bounded quantization interval and provides probability bounds as a function of the quantization levels and the quantization interval. In the papers [3, 16] the authors provide a progressive quantizer, which exploits the increasing correlation between the values exchanged by the sensors throughout the iterations of the consensus algorithm by progressively reducing the quantization intervals, and derive conditions on the quantizer parameters to guarantee deterministic convergence to consensus. The paper [3] provides an asymptotic convergence rate. However, no performance bounds are provided.

In [2] the authors describe a method of computing the parameters of a progressive quantizer, depending on the network topology and the communication rate, such that convergence is guaranteed if the values at each agent always fall inside the quantization interval. That paper also provides conditions that guarantee that, in expectation, the values at each agent always fall inside the quantization interval. Finally, [17] presents two schemes of quantized average consensus. The first is the zoom-in zoom-out method, where the quantization interval shrinks or increases depending on whether a saturation constraint is active. The second is a logarithmic quantizer, i.e. the value quantized is the logarithm of the state of the agent. The paper [17] also provides conditions on the parameters of the logarithmic quantizer that guarantee convergence of the algorithm.

In this paper we propose a quantized consensus algorithm with progressive quantization, i.e., where the quantization interval decreases in size at a pre-defined rate, based on the work in [2]. However, the main novelty in this paper is that we deal with the case where we are unable to perfectly control the agents' states due to the presence of a process noise. To the best of the authors' knowledge, this situation has never been considered in the quantized consensus literature. Moreover, we present a numerical example of cooperative path-following of a network of single integrators which illustrates the performance of the proposed algorithm.

1.2

Paper structure

This paper is organised as follows. In the next section we describe the problem to be addressed in this paper, and introduce the uniform quantizer, the network and the system considered. The following section describes the proposed quantized consensus algorithm and provides conditions on the quantizer parameters under which the deviation from the average of the value of each agent is ultimately bounded. The next section provides an application of the quantized consensus to the problem of cooperative path-following. Next, we present numerical results of the cooperative path-following application. Finally, we provide the conclusions of this paper.

2

Problem Definition

2.1

Notation

Throughout this paper we will use the symbol $\otimes$ for the Kronecker product. The symbol $\|\cdot\|$ represents the $L_2$ norm, and $\|\cdot\|_\infty$ represents the $L_\infty$ norm. The notation $|\cdot|$ represents the cardinality of a set. The notation $\lfloor\cdot\rfloor$ represents the floor operator, i.e. rounding down to the nearest integer, and the function $\operatorname{sgn}(\cdot)$ is the sign function. $I_M$ represents an $M \times M$ identity matrix, and $1$ represents an $N \times 1$ vector with ones in every entry.
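For concreteness, the notation above maps directly onto standard NumPy operations; the following snippet is an illustrative aside and is not part of the paper.

```python
import numpy as np

# Illustrative mapping of the paper's notation to NumPy.
A = np.array([[1.0, 2.0], [3.0, 4.0]])
kron = np.kron(A, np.eye(2))                          # Kronecker product A (x) I_2
l2 = np.linalg.norm(np.array([3.0, 4.0]))             # L2 norm ||.||
linf = np.linalg.norm(np.array([3.0, -4.0]), np.inf)  # L-infinity norm ||.||_inf
ones = np.ones((5, 1))                                # the N x 1 all-ones vector (N = 5)
floor_sgn = (np.floor(2.9), np.sign(-0.5))            # floor and sign functions
```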

2.2

Uniform quantizer

Consider the quantization interval $\big[\bar x_t - \frac{L_t}{2},\ \bar x_t + \frac{L_t}{2}\big]$ of size $L_t$ centred at the mid-value $\bar x_t$. A uniform quantizer with a quantization step-size $\Delta_t$ can be expressed as
$$Q_t(x) := \bar x_t + \operatorname{sgn}(x - \bar x_t)\,\Delta_t\left(\left\lfloor\frac{\|x - \bar x_t\|}{\Delta_t}\right\rfloor + \frac{1}{2}\right). \qquad (1)$$
The parameter $\Delta_t$ is determined by the number $n_b$ of bits of the quantizer as $\Delta_t := \frac{L_t}{2^{n_b}}$. From (1), the quantization error is upper-bounded by
$$\|x - Q_t(x)\| \le \frac{\Delta_t}{2} = \frac{L_t}{2^{n_b+1}}. \qquad (2)$$

For the case where the input of the quantizer and the mid-value are vectors with the same dimension, the quantizer $Q_t$ is applied element-wise.
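As a minimal sketch (the function name is ours, not the paper's), the quantizer (1) can be written directly in NumPy; for inputs inside the interval the error bound (2) holds by construction.

```python
import numpy as np

def uniform_quantize(x, mid, L, nb):
    """Uniform quantizer of (1): interval [mid - L/2, mid + L/2] with
    2**nb levels of step delta = L / 2**nb; applied element-wise."""
    x = np.asarray(x, dtype=float)
    delta = L / 2**nb                 # step size Delta_t = L_t / 2**nb
    d = x - mid
    # Snap |d| to the centre of its quantization bin, keeping the sign.
    return mid + np.sign(d) * delta * (np.floor(np.abs(d) / delta) + 0.5)
```

For any `x` in `[mid - L/2, mid + L/2]`, the reconstruction error is at most `L / 2**(nb + 1)`, matching (2).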

2.3

Network

Let $\mathcal N$ denote the set of nodes, with cardinality $N := |\mathcal N|$, $\mathcal A \subseteq \mathcal N \times \mathcal N$ the set of node pairs denoting the directed connections between the nodes, and $\mathcal N^i$ the set of in-neighbours of $i$, i.e., $\mathcal N^i := \{j : (j,i) \in \mathcal A\}$.

2.4

System

In this paper we consider a set of agents or nodes $\mathcal N$ such that each agent, identified by $i \in \mathcal N$, contains a state $x^i_t \in \mathbb R^n$ evolving in time, where $t \in \mathbb N$ is the time. We therefore consider a quantizer $Q^i_t(\cdot)$ at each node which evolves in time. The quantizer $Q^i_t(\cdot)$ is defined by the node-dependent mid-value $\bar x^i_t$ and, for simplicity, we consider a quantization interval size $L_t$ equal for all the nodes. We consider that each agent updates its state according to a discrete-time dynamics of the following form:
$$x^i_{t+1} = x^{d,i}_{t+1}\big(x^i_t,\ Q^i_t(x^i_t),\ \{Q^j_t(x^j_t),\ j \in \mathcal N^i\}\big) + v^i_t, \qquad (3)$$
where $x^{d,i}_t : (\mathbb R^n)^{2+|\mathcal N^i|} \to \mathbb R^n$ is a linear law to be designed corresponding to the desired next state and $v^i_t \in \mathbb R^n$ is an external input or process disturbance. The function $x^{d,i}_t$ will be defined later in (4).

We consider that the process disturbance is bounded with an a-priori known bound. Therefore the following assumption is needed in the rest of the document:

Assumption A1. The disturbances satisfy
$$\|v^i_t\| \le \delta_v + \epsilon_v k_v^t, \qquad i \in \mathcal N,\ t \in \mathbb N_0,$$
for some constants $\epsilon_v, \delta_v \ge 0$ and $1 > k_v \ge 0$, where $k_v^t$ is $k_v$ to the power of $t$.

This assumption covers the cases of a vanishing process noise when $\delta_v = 0$ and $\epsilon_v > 0$, of a uniformly bounded noise when $\delta_v > 0$ and $\epsilon_v = 0$, or a combination of the two. These assumptions are suitable for scenarios where we cannot have absolute control over the local variable of interest, on which we desire to perform consensus, but we can set it up to an ultimately bounded disturbance. One example where these assumptions apply can be seen in the application section.

2.5

Standard consensus algorithm

Let the information of the generic node $i$ at time $t$ be described by the vector $x^i_t \in \mathbb R^n$. A standard consensus algorithm consists of updating the internal vector $x^i_t$ at each iteration with a weighted sum of the values of the neighbours as follows:
$$x^i_{t+1} = \sum_{j \in \mathcal N^i} \pi^{i,j} x^j_t,$$
where we denote by $\pi^{i,j}$ the weight that node $i$ uses for $j$, with $\pi^{i,j} = 0$ if $(i,j) \notin \mathcal A$.

The matrix $\Pi$ whose component $(i,j)$ is equal to $\pi^{i,j}$ is termed the consensus matrix and is assumed to satisfy the following standard assumption in consensus design:

Assumption A2. The consensus matrix $\Pi$ is doubly stochastic and primitive.¹

Note that if the graph is bidirectional, i.e. if $(i,j) \in \mathcal A$ implies $(j,i) \in \mathcal A$, then assumption A2 can be satisfied by designing $\Pi$ with Metropolis local-degree weights [5]. Given assumption A2 we can define $\lambda_2$ as the modulus of the second-largest eigenvalue of $\Pi$, where $0 < \lambda_2 < 1$.

¹A doubly stochastic matrix is a square matrix of nonnegative real numbers whose rows and columns sum to 1. A primitive matrix is a nonnegative square matrix $A$ such that there exists a positive integer $k$ such that all elements of $A^k$ are strictly positive.

2.6

Problem definition

Assume the disturbances $v^i_t$ to be ultimately bounded over time with a-priori known bounds, and consider that at each time step the nodes are allowed to communicate quantized messages, according to the network structure defined by $\mathcal A$. This paper addresses the problem of quantized consensus where we want the state at each node to be driven and maintained inside a bound from the average of the states of all the agents, according to a discrete-time dynamics of the form in (3), with an ultimate bound proportional to the ultimate magnitude of the disturbances.

3

Quantized consensus

3.1

Main result

This section addresses the convergence analysis of a consensus algorithm where the communications among nodes are subject to a quantization error. The quantized consensus considered in this section uses the progressive quantization scheme considered in [2, 3, 16]. However, in this paper we provide performance bounds, i.e. bounds on the norm of the estimation error at each iteration, which tend to zero. Here we consider the case where the messages exchanged are quantized, therefore the sent messages are denoted by $\hat x^i_t := Q^i_t(x^i_t)$.

Before proceeding to our main result we need to define the quantization error $w^i_t := Q^i_t(x^i_t) - x^i_t$, and the variables $x_t := \operatorname{col}(x^i_t, i \in \mathcal N)$, $w_t := \operatorname{col}(w^i_t, i \in \mathcal N)$, $\hat x_t := \operatorname{col}(\hat x^i_t, i \in \mathcal N)$, $v_t := \operatorname{col}(v^i_t, i \in \mathcal N)$, $x^{avg}_t := \big(\frac{1}{N} 11^T \otimes I_n\big) x_t$, and $e_t := x_t - x^{avg}_t$. Since we consider quantized messages, our desired consensus dynamics, i.e. the function $x^{d,i}_t$, is of the following form:
$$x^{d,i}_{t+1} := \sum_{j \in \mathcal N^i} \pi^{i,j} \hat x^j_t - (\hat x^i_t - x^i_t). \qquad (4)$$

The consensus step can be written in a compact form as
$$x_{t+1} = (\Pi \otimes I_n)\hat x_t - (\hat x_t - x_t) + v_t = (\Pi \otimes I_n) x_t + \big((\Pi - I_N) \otimes I_n\big) w_t + v_t. \qquad (5)$$

Note that, using the property $\frac{1}{N} 11^T \Pi = \frac{1}{N} 11^T$ from assumption A2 and the update (5), the vector of averages follows the dynamics $x^{avg}_{t+1} = x^{avg}_t + \big(\frac{1}{N} 11^T \otimes I_n\big) v_t$. Combining (5) with this fact, we can write
$$e_{t+1} = \Big(\big(\Pi - \tfrac{1}{N} 11^T\big) \otimes I_n\Big) e_t + \big((\Pi - I_N) \otimes I_n\big) w_t + \Big(\big(I_N - \tfrac{1}{N} 11^T\big) \otimes I_n\Big) v_t. \qquad (6)$$
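The compact recursion (5), together with the mid-value recursion used later ($\bar x_{t+1} = \hat x_t$) and a shrinking interval $L_t = Ck^t + D$, can be exercised numerically. The sketch below is illustrative only: scalar states, a 4-node path graph with Metropolis weights (as suggested under assumption A2), no process noise, and example values of $C$, $D$, $k$, $n_b$ that are not taken from the paper.

```python
import numpy as np

def metropolis(adj):
    """Doubly stochastic consensus matrix from an undirected adjacency matrix
    (Metropolis local-degree weights)."""
    N = adj.shape[0]
    deg = adj.sum(axis=1)
    P = np.zeros((N, N))
    for i in range(N):
        for j in range(N):
            if adj[i, j]:
                P[i, j] = 1.0 / (1.0 + max(deg[i], deg[j]))
        P[i, i] = 1.0 - P[i].sum()
    return P

def quantize(x, mid, L, nb):
    """Uniform nb-bit quantizer (1) centred at mid with interval length L."""
    d = x - mid
    delta = L / 2**nb
    return mid + np.sign(d) * delta * (np.floor(np.abs(d) / delta) + 0.5)

# Illustrative run: scalar states (n = 1), a path graph, v_t = 0.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
Pi = metropolis(adj)
x = np.array([0.0, 2.0, 5.0, 9.0])
mid = x.copy()                       # initial mid-values (assumed known here)
C, D, k, nb = 10.0, 0.0, 0.9, 6      # L_t = C * k**t + D (example values)
e0 = np.linalg.norm(x - x.mean())
for t in range(60):
    L = C * k**t + D
    xhat = quantize(x, mid, L, nb)   # quantized messages x_hat_t
    x = Pi @ xhat - (xhat - x)       # compact update (5) with v_t = 0
    mid = xhat                       # mid-value recursion: xbar_{t+1} = xhat_t
e_final = np.linalg.norm(x - x.mean())
```

Because the update subtracts each agent's own quantization error, the average of the states is preserved when $v_t = 0$, and the consensus error shrinks at the rate $k^t$ predicted by the analysis.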

If the time-dependent quantization interval is selected as $L_t := C k^t + D$, with $0 < \lambda_2 < k < 1$ and $k_v < k$, and the mid-value of the quantizer is recursively chosen to be $\bar x_t = \hat x_{t-1}$, the convergence of the quantized consensus can be established, as stated in the following theorem.

Theorem 1 (Convergence of quantized consensus). Consider a quantizer $Q_t$ from (1) with $n_b$ bits and where $L_t = C k^t + D$, with $0 < \lambda_2 < k < 1$ and $k_v < k$. For the recursion (5), if assumptions A1 and A2 hold and if the number of bits $n_b$, the initial quantizer parameters $C$ and $D$, and the initial mid-value $\bar x_0$ satisfy
$$a_1 + a_2 \frac{C}{2^{n_b+1}} \le \frac{C}{2}, \qquad b_1 + b_2 \frac{D}{2^{n_b+1}} \le \frac{D}{2}, \qquad (7)$$
where $a_1$, $a_2$, $b_1$ and $b_2$ are defined in (16) in the appendix, then for any $t \ge 0$ the values of $e_t$ satisfy
$$\|e_t\| \le k^t \left[\|e_0\| + \frac{\sqrt{Nn}}{k-\lambda_2}\Big(\frac{C}{2^{n_b}} + \epsilon_v\Big)\right] + \frac{\sqrt{Nn}}{1-\lambda_2}\Big(\frac{D}{2^{n_b}} + \delta_v\Big). \qquad (8)$$

The proof is given in the appendix. It is worth noticing that the assumption in (7) can be satisfied by properly designing the quantizer. More specifically, choose $n_b$ such that
$$n_b > \log_2(\max(a_2, b_2))$$
and $C$ and $D$ such that
$$a_1 \le \Big(1 - \frac{a_2}{2^{n_b}}\Big)\frac{C}{2}, \qquad b_1 \le \Big(1 - \frac{b_2}{2^{n_b}}\Big)\frac{D}{2}.$$

It is worth noticing that $a_1$, $a_2$, $b_1$ and $b_2$ depend on the network through $\lambda_2$ and on the decrease rate $k$. From (16), the parameter $a_1$ is also proportional to the initial parameters $\epsilon_v$, $\|e_0\|$ and $\|\bar x_0 - x^{avg}_0\|$, and $b_1$ is proportional to the ultimate bound of the noise $\delta_v$.

4

Application: Cooperative Path Following

4.1

Problem description

As an application we consider the problem of cooperative path-following of a network of single integrators with process noise, i.e. where the dynamics of each agent is of the form
$$\dot p = u + \omega, \qquad (9)$$
where $p \in \mathbb R^m$ is the agent's state, $u \in \mathbb R^m$ is the agent's input and $\omega \in \mathbb R^m$ is the process noise, which is bounded by $\|\omega\| \le \epsilon_\omega$. We now assign to each agent a desired path $p_d : \mathbb R \to \mathbb R^m$ and a path-following variable $\gamma$ that parameterizes the path. The objective is to derive control laws for $u$ and $\ddot\gamma$ to steer the state of each agent $p$ to its assigned path $p_d(\gamma)$ and the time derivative of the path-following variable $\dot\gamma$ to its assigned value $\nu_r$. Since we are considering multiple agents we can make explicit the agent's index on the path-following variable as $\gamma^i$, $i \in \mathcal N$, and we will omit the agent's index when it is clear from the context that we are referring to a generic agent.

We also want the formation to be coordinated, i.e. to desirably converge the bound $\|\gamma^i - \gamma^j\|$ for $i \ne j$ to zero (or at least to a small value). Defining $\Gamma := \operatorname{col}(\gamma^i, i \in \mathcal N)$, the above is the same as converging $\|(I_N - \frac{1}{N} 11')\Gamma\|$ to zero (or a small value). In order to enforce consensus we introduce a desired velocity $\nu_d(t)$, which is a perturbation of $\nu_r$ where $\|\nu_d(t) - \nu_r\|$ is bounded for a bounded deviation from the average of the path-following variables $\|(I_N - \frac{1}{N} 11')\Gamma\|$. Therefore we want to drive $\dot\gamma$ to $\nu_d(t)$ and we design $\nu_d(t)$ to achieve consensus.

In summary, defining the path-following error as $\xi := p - p_d(\gamma)$ and the velocity error as $\eta := \dot\gamma - \nu_d$, the objective of the path-following controller is to drive the agent's error vector, defined as $z := [\xi'\ \eta]'$, to an ultimate bound. The objective of the consensus algorithm is to ultimately bound the consensus error $\|(I_N - \frac{1}{N} 11')\Gamma\|$. Moreover, we also want the deviation from the reference velocity $\|\nu_d(t) - \nu_r\|$ to be ultimately bounded.

4.2

Algorithm description

The coordinated path-following algorithm considered in this section is depicted in Figure 1. It is composed mainly of the following three components:

• A consensus law which provides the desired path-following variable at the next communication time,

• A desired velocity generator which provides the reference velocity to the path-following controller,

• A path-following control law which provides the control input $u$ and the second derivative of the path-following variable $\ddot\gamma$.

We will now describe each main component in detail.

4.2.1

Consensus

In this application we consider that the agents communicate at discrete instants of time $t_k := k\Delta_t$, where $k \in \mathbb N$

and $\Delta_t$ is the time between communications. To achieve coordination we set a desired path-following variable at each communication instant according to the quantized consensus algorithm
$$\gamma^{d,i}_{k+1} = \sum_{j \in \mathcal N^i} \pi^{i,j} Q^j_k(\gamma^j(t_k)) - \big(Q^i_k(\gamma^i(t_k)) - \gamma^i(t_k)\big) + \nu_r \Delta_t, \qquad (10)$$
where the mid-value of the quantizer $Q^i_{k+1}(\cdot)$ is set as $\bar\gamma^i_{k+1} = Q^i_k(\gamma^i(t_k)) + \nu_r \Delta_t$ and its interval length is set as $L_k = C k_\gamma^k + D$, for an appropriate decrease rate $k_\gamma$ and parameters $C$ and $D$ selected under the conditions of Theorem 1. Therefore we want the values of the path-following variables to evolve according to $\gamma^i(t_{k+1}) = \gamma^{d,i}_{k+1} + v^i_k$, where the disturbance $v^i_k$ satisfies assumption A1. It can be observed that the introduction of the reference velocity in the algorithm does not change the properties of the algorithm, and Theorem 1 still applies.

Figure 1: Diagram of the coordinated path-following algorithm. Blocks in grey correspond to continuous-time systems and blocks in orange correspond to discrete-time systems.

4.2.2

Desired velocity

Given the desired path-following variable at time $t_{k+1}$ we can define the desired velocity for the time segment $t_k < t \le t_{k+1}$ as
$$\nu_d(t) = \begin{cases} \nu_r + 2\nu^c_k \dfrac{t - t_k}{\Delta_t}, & t_k \le t \le t_k + \dfrac{\Delta_t}{2} \\[1ex] \nu_r + 2\nu^c_k \dfrac{t_{k+1} - t}{\Delta_t}, & t_k + \dfrac{\Delta_t}{2} \le t \le t_{k+1} \end{cases} \qquad (11)$$
where the impulse perturbation velocity $\nu^c_k$ is defined as
$$\nu^c_k := 2\left(\frac{\gamma^d_{k+1} - \gamma(t_k)}{\Delta_t} - \nu_r\right). \qquad (12)$$

Given this desired velocity we can observe that the values of the path-following variable at the communication instants evolve as follows:
$$\gamma(t_{k+1}) = \gamma(t_k) + \int_{t_k}^{t_{k+1}} \dot\gamma\, dt = \gamma(t_k) + \int_{t_k}^{t_{k+1}} \nu_d(t)\, dt + \int_{t_k}^{t_{k+1}} \eta(t)\, dt = \gamma^d_{k+1} + \int_{t_k}^{t_{k+1}} \eta(t)\, dt.$$
Therefore the values of the path-following variable at the communication instants evolve according to $\gamma(t_{k+1}) = \gamma^d_{k+1} + v_k$, where $v_k := \int_{t_k}^{t_{k+1}} \eta(t)\, dt$.

4.2.3

Path-following controller

We now consider a path-following scheme as described in [18] for the case of single integrators. The path-following control law is the following:
$$\ddot\gamma = -k_\eta \eta + \dot\nu_d + \xi' \frac{\partial p_d}{\partial\gamma}, \qquad (13)$$
$$u = -K_\xi \xi + \frac{\partial p_d}{\partial\gamma} \nu_d, \qquad (14)$$
where $K_\xi := I_m k_\xi$ with $k_\xi > 0$ and $k_\eta > 0$. Since this is a path-following control law, there exists a feedback from the path-following error $p - p_d(\gamma)$ to the dynamics of the path-following variable $\gamma$. This provides a faster convergence of the vehicles to their assigned paths with less use of the control action $u$, since the path-following variable dynamics drives $p_d(\gamma)$ closer to the agent's position $p$. It should be noted that the derivative of the desired velocity $\dot\nu_d$ is not defined at all points. To overcome this problem we must replace $\dot\nu_d$ in the control law for $\ddot\gamma$ with some signal which is defined everywhere and is equal to $\dot\nu_d$ where it is defined.

4.3

Design and theoretical guarantees

The path-following control law drives the agent's error vector $z$ to an ultimate bound proportional to $\epsilon_\omega$, as stated by the following theorem.

Theorem 2. Given the system (9) and the path-following control law (13)-(14), the norm of the agent's error vector $\|z(t)\|$ satisfies
$$\|z(t)\| \le \|z(0)\| e^{-(1-\theta)k_m t} + \frac{\epsilon_\omega}{\theta k_m}, \qquad (15)$$
where $k_m := \min(k_\xi, k_\eta)$ and $0 < \theta < 1$.
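As a numerical sanity check of the desired-velocity construction above: integrating the triangular profile (11), with $\nu^c_k$ from (12), over one communication period recovers exactly the desired increment $\gamma^d_{k+1} - \gamma(t_k)$ when $\eta = 0$. All numerical values in this sketch are illustrative, not from the paper.

```python
import numpy as np

# Check that the triangular desired-velocity profile (11)-(12) steers
# gamma(t_k) to gamma^d_{k+1} over one period when the velocity error is zero.
nu_r, dt = 1.0, 1.0                  # reference velocity and period Delta_t
gamma_k, gamma_d = 0.3, 1.7          # current and desired next path variable
nu_c = 2.0 * ((gamma_d - gamma_k) / dt - nu_r)   # impulse perturbation (12)

def nu_d(tau):
    """Desired velocity (11), written on [0, dt] with tau = t - t_k."""
    ramp = tau if tau <= dt / 2 else dt - tau
    return nu_r + 2.0 * nu_c * ramp / dt

taus = np.linspace(0.0, dt, 100001)
vals = np.array([nu_d(tau) for tau in taus])
integral = np.sum(0.5 * (vals[1:] + vals[:-1]) * np.diff(taus))  # trapezoid rule
gamma_next = gamma_k + integral      # gamma(t_{k+1}) when eta = 0
```

The triangle above the constant level $\nu_r$ has area $\nu^c_k \Delta_t / 2$, which is exactly the extra increment needed beyond $\nu_r \Delta_t$; this is the design choice behind (12).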

The proof is given in the appendix. From Theorem 2, we can bound $\|v_k\|$ as follows:
$$\|v_k\| \le \int_{t_k}^{t_{k+1}} \|\eta(t)\|\, dt \le \int_{t_k}^{t_{k+1}} \left(\|z(0)\| e^{-(1-\theta)k_m t} + \frac{\epsilon_\omega}{\theta k_m}\right) dt = \frac{\|z(0)\|}{(1-\theta)k_m}\big(1 - e^{-(1-\theta)k_m \Delta_t}\big)\, e^{-(1-\theta)k_m \Delta_t k} + \frac{\epsilon_\omega \Delta_t}{\theta k_m}.$$

With this bound we can observe that assumption A1 is satisfied with
$$\delta_v := \frac{\epsilon_\omega \Delta_t}{\theta k_m}, \qquad \epsilon_v := \max_{i \in \mathcal N} \frac{\|z^i(0)\|}{(1-\theta)k_m}\big(1 - e^{-(1-\theta)k_m \Delta_t}\big), \qquad k_v := e^{-(1-\theta)k_m \Delta_t}.$$
Therefore we can apply Theorem 1 to derive lower-bound conditions for the number of bits $n_b$ and the quantizer parameters $k_\gamma$, $C$ and $D$. Since we can apply Theorem 1, we can also establish ultimate bounds on the consensus error $\|(I_N - \frac{1}{N} 11')\Gamma\|$ using continuity arguments. Moreover, since the consensus error is ultimately bounded, we can also establish ultimate bounds on the impulse perturbation velocity $\nu^c_k$ and therefore ultimately bound the deviation from the reference velocity $\|\nu_d(t) - \nu_r\|$.

Figure 2: Agents' trajectories. The black markers represent the agents' positions every 75 seconds and the black dashed lines represent communication links.
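The mapping above from Theorem 2 to the assumption-A1 parameters can be computed directly. In the sketch below, the gains and noise bound match the simulation section of the paper, while $\theta$ and the initial error magnitude `z0_max` are assumed values for illustration.

```python
import math

# Assumption-A1 parameters (delta_v, eps_v, k_v) implied by Theorem 2.
k_xi, k_eta = 0.08, 2.0              # controller gains (from the simulation section)
eps_omega, dt = 0.2, 1.0             # process-noise bound and period Delta_t
theta = 0.5                          # assumed value in (0, 1)
z0_max = 1.0                         # assumed max_i ||z^i(0)||

k_m = min(k_xi, k_eta)
delta_v = eps_omega * dt / (theta * k_m)
k_v = math.exp(-(1.0 - theta) * k_m * dt)
eps_v = z0_max / ((1.0 - theta) * k_m) * (1.0 - k_v)
# Assumption A1 then reads ||v_k|| <= delta_v + eps_v * k_v**k.
```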

5

Simulation Results

As an example we considered a fleet of six two-dimensional vehicles with single-integrator dynamics performing a lawnmower mission. The desired shape of the fleet is composed of equilateral triangles and the agents communicate each second with their immediate neighbours only. We consider a coordinated path-following controller with gains $k_\xi = 0.08$ and $k_\eta = 2$, and the process noise is bounded with $\epsilon_\omega = 0.2$. The parameters $\epsilon_v$, $k_v$ and $\delta_v$ were adjusted manually to provide a tight bound on the process noise $\|v^i_k\|$. It was found that the conditions of Theorem 1 are satisfied if six bits are transmitted each second, and in this example we used eight bits per second.

The obtained trajectories of the agents are represented in Figure 2, where it can be observed that the positions of the agents converge to the desired paths. We can also observe that, as expected, the formation acquires the desired shape, since the path-following variables approach consensus. The quantization level transmitted by each agent, $Q^i_k(\gamma^i(t_k)) L_k / 2^{n_b}$, is shown in Figure 3. It can be observed that, for this case, the theoretical guarantees are quite conservative and we could use a much lower number of transmitted bits, since only 9 of the 511 quantization levels were used. The evolution of the difference of the path-following variables to their average, i.e. of $\gamma^i - (1/N)\sum_{j \in \mathcal N} \gamma^j$, can be observed in Figure 4, the derivative of the path-following variables $\dot\gamma^i$ is shown in Figure 5, and the norms of the path-following error $\|p^i - p^i_d(\gamma^i)\|$ in Figure 6. From Figures 4, 5 and 6 we can observe that all the quantities that we wanted to regulate, i.e. the deviation of the path-following variables from the average, the deviation of the time derivative of the path-following variables from the reference velocity (one in this case), and the norm of the path-following error, decrease from their initial values until they reach an ultimate bound and remain within that bound, as was expected from the theoretical analysis.

Figure 3: Evolution of the difference of the path-following variables to $\nu_r t$.

Figure 4: Evolution of the difference of the path-following variables to their average $\gamma^i - (1/N)\sum_{j \in \mathcal N} \gamma^j$.

Figure 5: Time derivative of the path-following variables $\dot\gamma^i$.

Figure 6: Evolution of the norm of the path-following error $\|p^i - p^i_d(\gamma^i)\|$.

6

Conclusions

We addressed the problem of quantized consensus where a process noise or an external input corrupts the state of each agent at each iteration, according to a discrete-time dynamics of the form in (3). We assumed the disturbances $v^i_t$ to be ultimately bounded over time with a-priori known bounds, and that at each time step the nodes are allowed to communicate quantized messages, according to the network structure defined by $\mathcal A$. To solve this problem we proposed a quantized consensus algorithm with progressive quantization, where the quantization interval changes in length at each iteration at a pre-specified rate. We derived conditions on the design parameters of the algorithm to guarantee ultimate boundedness of the deviation from the average of each agent, i.e. the state at each node is driven and maintained inside a bound from the average of the states of all the agents, with an ultimate bound proportional to the magnitude of the disturbances. Moreover, we determined explicitly the bounds of the consensus error. Finally, we presented a numerical example of cooperative path-following of a network of single integrators which illustrates the performance of the proposed algorithm and its usefulness.

7

Appendix

7.1

Design parameters

$$a_1 := \frac{(k+1)\|e_0\| + \|\bar x_0 - x^{avg}_0\|}{k} + \frac{\sqrt{Nn}(k+1) + k - \lambda_2}{k(k-\lambda_2)}\,\epsilon_v \qquad (16a)$$
$$a_2 := \frac{2\sqrt{Nn}(k+1) + k - \lambda_2}{k(k-\lambda_2)} \qquad (16b)$$
$$b_1 := \frac{2\sqrt{Nn} + 1 - \lambda_2}{1-\lambda_2}\,\delta_v \qquad (16c)$$
$$b_2 := \frac{4\sqrt{Nn} + 1 - \lambda_2}{1-\lambda_2} \qquad (16d)$$

7.2

Proof of Theorem 1

Before proving the main result we first need the following result:

Lemma 1. Consider a quantizer $Q_t$ from (1) with $n_b$ bits and where $L_t = C k^t + D$, with $0 < \lambda_2 < k < 1$ and $k_v < k$. Let assumptions A1 and A2 hold. Given the linear consensus system with quantized communications (5), if, for a given $t$ and all $0 \le p \le t$, the values of $x_p$ fall inside the quantization interval, i.e. $\|x_p - \bar x_p\|_\infty \le \frac{L_p}{2}$, then $e_t$ satisfies
$$\|e_t\| \le k^t \left[\|e_0\| + \frac{\sqrt{Nn}}{k-\lambda_2}\Big(\frac{C}{2^{n_b}} + \epsilon_v\Big)\right] + \frac{\sqrt{Nn}}{1-\lambda_2}\Big(\frac{D}{2^{n_b}} + \delta_v\Big). \qquad (17)$$

Proof. Given that assumption A2 holds, from (6) we have that
$$e_{p+1} = \Big(\big(\Pi - \tfrac{1}{N} 11^T\big)^{p+1} \otimes I_n\Big) e_0 + \sum_{i=0}^{p} \Big(\big(\Pi - \tfrac{1}{N} 11^T\big)^i (\Pi - I_N) \otimes I_n\Big) w_{p-i} + \sum_{i=0}^{p} \Big(\big(\Pi - \tfrac{1}{N} 11^T\big)^i \big(I_N - \tfrac{1}{N} 11^T\big) \otimes I_n\Big) v_{p-i}.$$

Noting that, from assumption A2, $\|\Pi - \frac{1}{N} 11^T\| = \lambda_2 < 1$, $\|\Pi - I_N\| \le \|\Pi\| + \|I_N\| \le 2$, and that $\|I_N - \frac{1}{N} 11^T\| = 1$, we have
$$\|e_{p+1}\| \le \lambda_2^{p+1} \|e_0\| + \sum_{i=0}^{p} \lambda_2^i \big(2\|w_{p-i}\| + \|v_{p-i}\|\big).$$

From the assumption that $x_p$ falls inside the quantization interval, (2) gives $\|w_p\|_\infty \le \frac{L_p}{2^{n_b+1}}$, and hence
$$\|w_p\| \le \sqrt{Nn}\,\|w_p\|_\infty \le \frac{\sqrt{Nn}\, L_p}{2^{n_b+1}} = \frac{\sqrt{Nn}\, C}{2^{n_b+1}} k^p + \frac{\sqrt{Nn}\, D}{2^{n_b+1}}.$$
Also, from assumption A1 we have
$$\|v_p\| \le \sqrt{Nn}\,\|v_p\|_\infty \le \sqrt{Nn}\,\epsilon_v k_v^p + \sqrt{Nn}\,\delta_v \le \sqrt{Nn}\,\epsilon_v k^p + \sqrt{Nn}\,\delta_v,$$
where we used the fact that $k_v \le k$. Therefore
$$\|e_{p+1}\| \le \lambda_2^{p+1}\|e_0\| + \sqrt{Nn}\Big(\frac{C}{2^{n_b}} + \epsilon_v\Big) \sum_{i=0}^{p} \lambda_2^i k^{p-i} + \sqrt{Nn}\Big(\frac{D}{2^{n_b}} + \delta_v\Big) \sum_{i=0}^{p} \lambda_2^i.$$
Since $0 < \lambda_2 < k < 1$, using $\lambda_2^{p+1} \le k^{p+1}$, the geometric-series bounds $\sum_{i=0}^{p}\lambda_2^i k^{p-i} = k^p \frac{1 - (\lambda_2/k)^{p+1}}{1 - \lambda_2/k} \le \frac{k^{p+1}}{k - \lambda_2}$ and $\sum_{i=0}^{p}\lambda_2^i \le \frac{1}{1-\lambda_2}$, we obtain (17). □

Proof of Theorem 1. We prove by induction that $x_t$ falls inside the quantization interval of $Q_t$, i.e. $\|x_t - \bar x_t\|_\infty \le \frac{L_t}{2}$ for $t \ge 0$, which combined with Lemma 1 concludes the proof of Theorem 1. The base case is given by assumption, since from equation (7) and the definitions of $e_t$ and $x^{avg}_t$ we can state
$$\|x_0 - \bar x_0\|_\infty = \|e_0 + x^{avg}_0 - \bar x_0\|_\infty \le \|e_0\| + \|x^{avg}_0 - \bar x_0\| \le a_1 \le \frac{C}{2} \le \frac{C k^0 + D}{2} = \frac{L_0}{2}.$$

We now have to prove the induction step, that is, given that $\|x_t - \bar x_t\|_\infty \le \frac{L_t}{2}$, we have $\|x_{t+1} - \bar x_{t+1}\|_\infty \le \frac{L_{t+1}}{2}$. From (6), and the fact that the vector of averages follows the dynamics $x^{avg}_{t+1} = x^{avg}_t + (\frac{1}{N} 11^T \otimes I_n) v_t$, we have
$$\|x_{t+1} - \bar x_{t+1}\|_\infty = \|x_{t+1} - \hat x_t\|_\infty = \big\|e_{t+1} - e_t + \big(\tfrac{1}{N} 11^T \otimes I_n\big) v_t - w_t\big\|_\infty \le \|e_{t+1}\| + \|e_t\| + \|v_t\|_\infty + \|w_t\|_\infty. \qquad (18)$$

Combining (18) with Lemma 1, assumption A1 and the induction hypothesis, i.e. the assumption that $\|x_t - \bar x_t\|_\infty \le \frac{L_t}{2}$, it follows that
$$\|x_{t+1} - \bar x_{t+1}\|_\infty \le (k^{t+1} + k^{t})\left[\|e_0\| + \frac{\sqrt{Nn}}{k-\lambda_2}\Big(\frac{C}{2^{n_b}} + \epsilon_v\Big)\right] + \frac{2\sqrt{Nn}}{1-\lambda_2}\Big(\frac{D}{2^{n_b}} + \delta_v\Big) + \epsilon_v k^t + \delta_v + \frac{C}{2^{n_b+1}} k^t + \frac{D}{2^{n_b+1}}.$$
Rearranging the right-hand side of the last inequality, one obtains
$$\|x_{t+1} - \bar x_{t+1}\|_\infty \le k^{t+1}\left[\frac{(k+1)\|e_0\|}{k} + \frac{\sqrt{Nn}(k+1) + k - \lambda_2}{k(k-\lambda_2)}\,\epsilon_v + \frac{2\sqrt{Nn}(k+1) + k - \lambda_2}{k(k-\lambda_2)}\,\frac{C}{2^{n_b+1}}\right] + \frac{2\sqrt{Nn} + 1 - \lambda_2}{1-\lambda_2}\,\delta_v + \frac{4\sqrt{Nn} + 1 - \lambda_2}{1-\lambda_2}\,\frac{D}{2^{n_b+1}} \le k^{t+1}\Big(a_1 + a_2\frac{C}{2^{n_b+1}}\Big) + b_1 + b_2\frac{D}{2^{n_b+1}},$$
where the last inequality stems from the definitions of $a_1$, $a_2$, $b_1$, $b_2$ in (16) and the fact that $\|\bar x_0 - x^{avg}_0\| \ge 0$. From the assumptions (7) of Theorem 1 we then have
$$\|x_{t+1} - \bar x_{t+1}\|_\infty \le k^{t+1}\frac{C}{2} + \frac{D}{2} \le \frac{C k^{t+1} + D}{2} = \frac{L_{t+1}}{2}. \qquad \square$$

7.3

Proof of Theorem 2

Proof. First, we introduce the Lyapunov function $V(z) = \frac12 z'z = \frac12\|z\|^2$. We can bound the derivative of the Lyapunov function as
$$\dot V(z) = -k_\xi \xi'\xi - k_\eta \eta^2 + \omega'\xi = -(1-\theta)(k_\xi \xi'\xi + k_\eta \eta^2) + (\omega'\xi - \theta k_\xi \xi'\xi - \theta k_\eta \eta^2) \le -(1-\theta)k_m \|z\|^2 + \big(\|\omega\|\|z\| - \theta k_m \|z\|^2\big) = -(1-\theta)k_m\, 2V(z) + \Big(\|\omega\|\sqrt{2V(z)} - \theta k_m\, 2V(z)\Big).$$

Noting that $\|\omega\|\sqrt{2V(z)} - \theta k_m\, 2V(z)$ is always negative for $V(z) \ge \frac12\big(\frac{\epsilon_\omega}{\theta k_m}\big)^2$, we have that
$$\dot V(z) \le -(1-\theta)k_m\, 2V(z) \qquad \forall\, V(z) \ge \frac12\Big(\frac{\epsilon_\omega}{\theta k_m}\Big)^2.$$
Applying the comparison lemma (see e.g. [19]) we have
$$V(z) \le \max\left(V(z(0))\, e^{-(1-\theta)k_m 2t},\ \frac12\Big(\frac{\epsilon_\omega}{\theta k_m}\Big)^2\right),$$
and finally we can bound $\|z(t)\|$ as
$$\|z(t)\| \le \|z(0)\| e^{-(1-\theta)k_m t} + \frac{\epsilon_\omega}{\theta k_m}. \qquad (19) \qquad \square$$

Acknowledgment

The first author benefited from grant SFRH/BD/51929/2012 of the Foundation for Science and Technology (FCT), Portugal, and the third author benefited from grant SFRH/BD/51450/2011 of FCT, Portugal.

References

[1] Alexander Bahr, John J Leonard, and Maurice F Fallon. Cooperative localization for autonomous underwater vehicles. The International Journal of Robotics Research, 28(6):714–728, 2009.

[2] Tao Li, Minyue Fu, Lihua Xie, and Ji-Feng Zhang. Distributed consensus with limited communication data rate. Automatic Control, IEEE Transactions

[8] Carlos Mosquera, Roberto López-Valcarce, and Sudharman K Jayaweera. Stepsize sequence design for distributed average consensus. Signal Processing Letters, IEEE, 17(2):169–172, 2010.

[9] Tuncer C Aysal, Mark J Coates, and Michael G Rabbat. Distributed average consensus with dithered quantization. Signal Processing, IEEE Transactions on, 56(10):4905–4918, 2008.

[10] Mehmet E Yildiz and Anna Scaglione. Coding with side information for rate-constrained consensus. Signal Processing, IEEE Transactions on, 56(8):3753–3764, 2008.

[11] Jun Fang and Hongbin Li. Distributed consensus with quantized data via sequence averaging. Signal Processing, IEEE Transactions on, 58(2):944–948, 2010.

[12] Paolo Frasca, Ruggero Carli, Fabio Fagnani, and Sandro Zampieri. Average consensus on networks with quantized communication. International Journal of Robust and Nonlinear Control, 19(16):1787–1816, 2009.

[13] Jun Fang and Hongbin Li. An adaptive quantization scheme for distributed consensus. In Acoustics, Speech and Signal Processing, 2009. ICASSP 2009. IEEE International Conference on, pages 2777–2780. IEEE, 2009.

[14] Jun Fang and Hongbin Li. Distributed estimation of Gauss-Markov random fields with one-bit quantized data. Signal Processing Letters, IEEE, 17(5):449–452, 2010.

References

on, 56(2):279–292, 2011.

[15] Soummya Kar and Jos´e MF Moura. Distributed consensus algorithms in sensor networks: Quantized [3] Dorina Thanou, Effrosini Kokiopoulou, Ye Pu, and data and random link failures. Signal Processing, Pascal Frossard. Distributed average consensus with IEEE Transactions on, 58(3):1383–1400, 2010. quantization refinement. Signal Processing, IEEE Transactions on, 61(1):194–205, 2013. [16] Dorina Thanou, Effrosini Kokiopoulou, and Pascal [4] Ye Pu, Melanie Nicole Zeilinger, and Colin N. Jones. Frossard. Progressive quantization in distributed Quantization design for unconstrained distributed average consensus. In Acoustics, Speech and Sigoptimization. In The 2015 American Control Connal Processing (ICASSP), 2012 IEEE International ference, 2015. Conference on, pages 2677–2680. IEEE, 2012. [5] Lin Xiao and Stephen Boyd. Fast linear iterations [17] Ruggero Carli, Francesco Bullo, and Sandro for distributed averaging. Systems & Control LetZampieri. Quantized average consensus via dynamic ters, 53(1):65–78, 2004. coding/decoding schemes. International Journal of Robust and Nonlinear Control, 20(2):156–175, 2010. [6] Akshay Kashyap, Tamer Ba¸sar, and Ramakrishnan Srikant. Quantized consensus. Automatica, [18] Francesco Vanni, A Pedro Aguiar, and Ant´ onio M 43(7):1192–1203, 2007. Pascoal. Cooperative path-following of underactu[7] Lin Xiao, Stephen Boyd, and Seung-Jean Kim. Disated autonomous marine vehicles with logic-based tributed average consensus with least-mean-square communication. In Proc. of NGCUV08-IFAC Workdeviation. Journal of Parallel and Distributed Comshop on Navigation, Guidance and Control of Unputing, 67(1):33–46, 2007. derwater Vehicles, pages 1–6, 2008. 9

[19] Hassan K Khalil and JW Grizzle. Nonlinear systems, volume 3. Prentice hall New Jersey, 1996.
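As a complementary illustration of the mechanism the proofs analyze — consensus iterations over quantized states with a progressively shrinking quantization range $L_t = C k^t + D$ and process noise satisfying an envelope like assumption A1 — the following sketch simulates such a scheme numerically. All numerical values, the ring communication graph, and the simplified unbounded-range quantizer are illustrative assumptions of this sketch, not the paper's actual design; it only shows the qualitative behavior (deviation from the average decays to a small residual rather than to zero).

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters (assumptions for this sketch, not the paper's values).
N = 6                       # number of agents
n_b = 8                     # bits per transmitted value
C, D, k = 4.0, 0.5, 0.9     # quantization-range schedule L_t = C*k**t + D
v_bar, delta_v = 0.2, 0.02  # noise envelope ||v_t||_inf <= v_bar*k**t + delta_v

# Doubly stochastic averaging matrix Pi on a ring; its second-largest
# eigenvalue modulus (2/3 here) lies below k, as the contraction argument needs.
Pi = np.zeros((N, N))
for i in range(N):
    for j in (i - 1, i, i + 1):
        Pi[i, j % N] = 1.0 / 3.0

def quantize(x, L):
    # Uniform quantizer with step L / 2**n_b, so the per-entry quantization
    # error is at most L / 2**(n_b+1); saturation outside the quantization
    # interval is ignored in this simplified sketch.
    step = L / 2**n_b
    return step * np.round(x / step)

x = rng.uniform(-1.0, 1.0, size=N)
for t in range(60):
    L_t = C * k**t + D                                         # shrinking range
    v = rng.uniform(-1.0, 1.0, N) * (v_bar * k**t + delta_v)   # bounded noise
    x = Pi @ quantize(x, L_t) + v          # consensus step on quantized states

print("final deviation from average:", np.max(np.abs(x - x.mean())))
```

Consistent with the ultimate-boundedness statement, the deviation does not vanish: it settles at a small residual driven by the persistent terms $\delta_v$ and $D/2^{n_b+1}$, while the transient decays geometrically.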
