
Consensus in Networks of Multiagents With Cooperation and Competition Via Stochastically Switching Topologies

Bo Liu and Tianping Chen

Abstract—In this brief, we provide a theoretical analysis of consensus for networks of agents with stochastically switching topologies. We consider both the discrete-time and the continuous-time case. The main contribution of this brief is that, in both cases, the underlying graph topology is more general than those that appeared in previous papers. The weight matrix of the coupling graph is not assumed to be nonnegative or Metzler; that is, in the model discussed here, the off-diagonal entries of the weight matrix may be negative. This means that the coupling may sometimes hinder, rather than promote, consensus of the coupled agents. In the continuous-time case, the switching time intervals also take a more general form of random variables than in previous works. We focus our study on such networks and give sufficient conditions that ensure almost sure consensus in both the discrete-time and the continuous-time case. As applications, we give several corollaries under more specific assumptions, i.e., the switching sequence can be a series of independent and identically distributed (i.i.d.) random variables or a Markov chain. Numerical examples are also provided in both the discrete-time and continuous-time cases to demonstrate the validity of our theoretical results.

Index Terms—Almost sure, consensus, stochastic, switching topology.

I. INTRODUCTION

In a network of dynamical agents, groups of agents need to agree upon certain quantities of interest in order to achieve coordination among them; this is the so-called "consensus problem." Consensus problems often arise in applications of multiagent systems [1]–[4] and have received much attention in recent years. There is a large number of papers concerning such problems (see [5]–[15], [17], [19] and references therein).

To achieve consensus, there should be some information flow from agent to agent, which may be directed or undirected. The agents together with their information flows can be described by a graph topology. The topology may be static, which means that it does not change with time. In many cases, however, it may change dynamically, often as a result of unreliable transmission or a limited communication/sensing range. One important class of dynamically changing network topologies is the so-called "switching topology," where the network topology switches at a sequence of time points, randomly or according to a given rule. Consensus problems with switching topologies have been addressed in several papers, such as [7], [9], [15], [17], [19], and others.

The weighted directed graph is an important class in modeling the network topology, where a directed information flow is modeled as a directed edge. When the information flow plays a positive role in reaching consensus between the agents, the corresponding edge is assigned a positive weight; otherwise, it is assigned a negative weight. In the real world, it is possible that the interactions among agents play either a positive or a negative role in achieving consensus. This results in a graph topology with arbitrarily weighted edges. Therefore, it is meaningful to investigate consensus problems for such network topologies in both theory and applications.

Manuscript received April 23, 2008; revised July 15, 2008; accepted August 5, 2008. First published September 30, 2008; current version published November 5, 2008. This work was supported by the National Science Foundation (NSF) of China under Grants 60574044 and 60774074. The authors are with the Key Laboratory of Nonlinear Science of Chinese Ministry of Education, Institute of Mathematics, Fudan University, Shanghai 200433, P. R. China (e-mail: [email protected]; [email protected]). Color versions of one or more of the figures in this brief are available online at http://ieeexplore.ieee.org. Digital Object Identifier 10.1109/TNN.2008.2004404


Yet in almost all previous works, even in the latest results such as [17] and [19], the underlying graph of the network topology was always described by a weight matrix with nonnegative off-diagonal elements, which in the discrete-time case is a stochastic matrix and in the continuous-time case a Metzler matrix. To the best of our knowledge, [15] is among the few papers in which consensus problems on arbitrarily weighted graph topologies are considered; there, the switching time intervals are assumed to be identical and small enough and the switching sequence is independent and identically distributed (i.i.d.).

In this brief, we investigate consensus problems for networks of agents with switching topologies, where the weight matrix of the underlying graph is not necessarily nonnegative. This implies that the network includes both cooperation and competition among agents, and the situation is more complicated than the case in which the weights are always positive. Because of the existence of competitive tendencies, a stronger connection does not always result in better consensus stability, and the connection structure alone cannot determine consensus stability. This can be seen from the fact that, even if the network is completely connected, it is not difficult to find examples in which consensus is not achieved. Therefore, a new method should be proposed. Here, we use a random variable derived from the connection structure and the weights of the network, and sufficient conditions for almost sure consensus are obtained in terms of this random variable. Moreover, both discrete-time and continuous-time networks are studied. We first discuss the consensus problem under a general stochastic framework and give a sufficient condition for almost sure consensus. Then, using the strong law of large numbers for i.i.d. random variables and for Markov chains, we deduce sufficient conditions for almost sure consensus when the switching sequence consists of i.i.d. random matrices or forms a Markov chain.

The rest of this brief is organized as follows. In Section II, we discuss the consensus problem on stochastically switching networks and give some theoretical results. Numerical examples with simulations are given to illustrate the theoretical results in Section III. The brief is concluded in Section IV.

II. CONSENSUS ANALYSIS

In this section, we will investigate consensus problems in networks with stochastically switching topologies. This includes both discrete-time and continuous-time dynamical systems. In both cases, we first discuss the most general stochastic framework and then deduce some corollaries under more specific assumptions.

A. Discrete-Time Case

Let $\bar\Omega = \{A = [a_{ij}]_{i,j=1}^n \mid \sum_{j=1}^n a_{ij} = c_A,\ i = 1,\ldots,n\}$, where $c_A$ is a constant associated with $A$; let $\bar{\mathcal B}$ be the Borel $\sigma$-algebra of $\bar\Omega$; let $\{(\Omega_k, \mathcal B_k)\}_{k=1}^{+\infty}$ be a series of measurable spaces, with $\Omega_k \subseteq \bar\Omega$ uniformly bounded, and let $\mathcal B_k = \bar{\mathcal B} \cap \Omega_k$. Construct a product probability space $\{\Omega, \mathcal F, P\}$, where $\Omega = \{(\omega_1, \omega_2, \ldots) : \omega_k \in \Omega_k\}$, $\mathcal F = \mathcal B_1 \times \mathcal B_2 \times \cdots$, and $P$ is a probability measure defined on $\{\Omega, \mathcal F\}$. Consider the following random discrete-time dynamical linear system:

$$x(k+1) = A_k(\omega)\,x(k), \qquad k = 1, 2, \ldots \tag{1}$$

where $x(k) \in \mathbb R^n$ is the state vector at time $k$, $\omega = (\omega_1, \omega_2, \ldots) \in \Omega$, and $A_k(\omega) = \omega_k$.

Suppose $A = [a_{ij}]_{n\times n} \in \bar\Omega$. Define a function $\delta(A)$ as

$$\delta(A) = \frac12 \max_{l,m=1,2,\ldots,n}\ \max_{i,j=1,2,\ldots,n}\left( a_{li} - a_{mi} - a_{lj} + a_{mj} + \sum_{k\ne i,j} |a_{lk} - a_{mk}| \right). \tag{2}$$

As a preparation, we first prove the following.

Lemma 1: Let $A = [a_{ij}]_{n\times n} \in \bar\Omega$ and let $\delta(A)$ be defined in (2); for $x = (x_1, x_2, \ldots, x_n)^T \in \mathbb R^n$, denote $x^* = \max_{i=1,2,\ldots,n}(x_i)$, $x_* = \min_{i=1,2,\ldots,n}(x_i)$, and $V(x) = x^* - x_*$; then the following statements are true.

1) $\delta(A) \ge 0$, and $\delta(A) = 0$ if and only if $A = \mathbf 1 \cdot d^T$, where $\mathbf 1 = (1,1,\ldots,1)^T \in \mathbb R^n$ and $d = (d_1, d_2, \ldots, d_n)^T \in \mathbb R^n$.

2) $V(Ax) \le \delta(A)V(x)$.

Proof: 1) For fixed $i,j,l,m \in \{1,2,\ldots,n\}$, $\sum_{k\ne i,j}|a_{lk} - a_{mk}| \ge 0$, and if $a_{li} - a_{mi} - a_{lj} + a_{mj} < 0$, then $a_{mi} - a_{li} - a_{mj} + a_{lj} > 0$. Therefore, $\delta(A) \ge 0$. It is clear that $\delta(A) = 0$ implies that for any $l \ne m$, $a_{lk} = a_{mk}$, $k = 1, \ldots, n$. Therefore, $A = \mathbf 1 \cdot d^T$ for some $d \in \mathbb R^n$.

2) Denote $y = Ax$. Without loss of generality, let $y_l = y^*$, $y_m = y_*$, $x_i = x^*$, and $x_j = x_*$. Then

$$V(y) = y_l - y_m = \sum_{k=1}^n (a_{lk} - a_{mk})x_k \tag{3}$$

$$= \frac12\sum_{k=1}^n (a_{lk} - a_{mk})\big[(x_k - x_j) - (x_i - x_k)\big] \tag{4}$$

$$= \frac12\big[(a_{li} - a_{mi}) - (a_{lj} - a_{mj})\big](x_i - x_j) \tag{5}$$

$$\quad + \frac12\sum_{k\ne i,j}(a_{lk} - a_{mk})\big[(x_k - x_j) - (x_i - x_k)\big] \tag{6}$$

$$\le \frac12\Big(a_{li} - a_{mi} - a_{lj} + a_{mj} + \sum_{k\ne i,j}|a_{lk} - a_{mk}|\Big)(x_i - x_j) \tag{7}$$

$$\le \delta(A)V(x) \tag{8}$$

where (3) $\Rightarrow$ (4) comes from $\sum_{k=1}^n (a_{lk} - a_{mk}) = 0$ due to the equal row sums of $A$, and (6) $\Rightarrow$ (7) comes from the fact that if $a_{lk} - a_{mk} \ge 0$, then $(a_{lk} - a_{mk})(x_k - x_j) \le (a_{lk} - a_{mk})(x_i - x_j)$ and $-(a_{lk} - a_{mk})(x_i - x_k) \le 0$; otherwise, $(a_{lk} - a_{mk})(x_k - x_j) \le 0$ and $-(a_{lk} - a_{mk})(x_i - x_k) \le -(a_{lk} - a_{mk})(x_i - x_j)$. The proof is completed.
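The contraction coefficient in (2) is easy to evaluate numerically. The following is a minimal NumPy sketch (not part of the brief) of $\delta(A)$ and of the bound $V(Ax) \le \delta(A)V(x)$ from Lemma 1; the helper names `delta` and `V` and the randomly generated equal-row-sum matrix are assumptions made only for this illustration.

```python
import numpy as np

def delta(A):
    """Contraction coefficient delta(A) of (2) for a matrix with equal row sums."""
    n = A.shape[0]
    best = 0.0
    for l in range(n):
        for m in range(n):
            d = A[l] - A[m]                       # a_{lk} - a_{mk}, k = 1..n
            for i in range(n):
                for j in range(n):
                    if i != j:
                        rest = np.abs(np.delete(d, [i, j])).sum()
                        best = max(best, 0.5 * (d[i] - d[j] + rest))
    return best

def V(x):
    """Diameter V(x) = max_i x_i - min_i x_i used to measure disagreement."""
    return x.max() - x.min()

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
A -= A.mean(axis=1, keepdims=True)                # every row now sums to zero
A += 1.0 / 5                                      # shift so every row sums to one
x = rng.normal(size=5)
print(delta(A) >= 0.0)                            # Lemma 1, part 1
print(V(A @ x) <= delta(A) * V(x) + 1e-12)        # Lemma 1, part 2
```

For a matrix whose rows are identical, i.e., $A = \mathbf 1 \cdot d^T$, this routine returns $\delta(A) = 0$, consistent with part 1) of the lemma.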

Based on this lemma, we now give the following result concerning consensus of (1).

Theorem 1: If

$$P\left(\left\{\omega \in \Omega : \prod_{k=1}^{+\infty}\delta(\omega_k) = 0\right\}\right) = 1 \tag{9}$$

then the system (1) will achieve consensus almost surely, i.e., for any initial value $x(1) \in \mathbb R^n$, the solution $\{x(k;\omega)\}_{k=1}^{+\infty}$ of (1) satisfies $P(\{\omega : \lim_{k\to+\infty}\max_{i,j}|x_i(k;\omega) - x_j(k;\omega)| = 0\}) = 1$.

Proof: From Lemma 1, for any initial value $x(1) \in \mathbb R^n$, $V(x(k+1)) \le \delta(\omega_k)\cdots\delta(\omega_1)V(x(1))$, which implies that $\lim_{k\to+\infty}V(x(k)) = 0$ if $\prod_{k=1}^{+\infty}\delta(\omega_k) = 0$, and the conclusion is obvious.

If we define $\ln 0 = -\infty$, then condition (9) in Theorem 1 is equivalent to $P(\{\omega\in\Omega : \sum_{k=1}^{+\infty}\ln\delta(\omega_k) = -\infty\}) = 1$. In the following, we will consider two special cases: the i.i.d. case and the Markov process.

Assumption 1 (i.i.d.):
1) $\Omega_1 = \Omega_2 = \cdots$.
2) $P = \mu\times\mu\times\cdots$ for some probability measure $\mu$ defined on $\Omega_1$.


Under Assumption 1, $A_1(\omega), A_2(\omega), \ldots$ are i.i.d. Thus, as a direct consequence of Theorem 1, we obtain the following.

Corollary 1: Under Assumption 1, if the expectation $E\ln\delta(A_1(\omega)) < 0$ (it is possible that $E\ln\delta(A_1(\omega)) = -\infty$), then system (1) will achieve consensus almost surely.

Proof: Because $\Omega_k$ is bounded, it is easy to see that $\delta(\cdot)$ is also bounded. Thus, $\ln\delta(\cdot)$ is upper bounded, which implies that $E(\ln^+\delta(\cdot)) < +\infty$, where for a random variable $X$, $X^+ = \max\{0, X\}$. From the strong law of large numbers (see [16]), we have

$$\lim_{n\to+\infty}\frac1n\sum_{k=1}^n \ln\delta(A_k(\omega)) = E\ln\delta(A_1(\omega)) < 0, \qquad \text{a.s.} \tag{10}$$

This means $P(\{\omega\in\Omega : \sum_{k=1}^{+\infty}\ln\delta(\omega_k) = -\infty\}) = 1$, and the result comes from Theorem 1.

Assumption 2 (Markov Chain):
1) $\Omega_1 = \Omega_2 = \cdots$ are compact subsets of $\bar\Omega$, and $\{\omega_k\}_{k=1}^{+\infty}$ forms a Markov chain.
2) $P(A\mid x)$ are Markov transition probabilities such that there is a unique probability $\pi$ on $\mathcal B_1$ satisfying

$$\pi(A) = \int_{\Omega_1} P(A\mid x)\,\pi(dx), \qquad \forall A\in\mathcal B_1.$$

Under Assumption 2, denote the product probability space by $(\Omega_1^{(\infty)}, \mathcal B_1^{(\infty)}, P_x) = \big(\prod_{k=1}^{+\infty}\Omega_k, \prod_{k=1}^{+\infty}\mathcal B_k, P_x\big)$, where $P_x$ is a probability measure on $\mathcal B_1^{(\infty)}$ induced by $P(A\mid x)$ when using the initial distribution $\omega_1 = x$.

Corollary 2: Under Assumption 2, if $E_\pi\ln\delta(A_1(\omega)) < 0$, then system (1) will achieve consensus almost surely, where $E_\pi(\cdot)$ is the expectation induced by $\pi$.

Proof: Obviously, $\ln\delta(\cdot)$ is continuous on $\Omega_1$ except for those $x$ satisfying $\delta(x) = 0$. Then, we can find a continuous function $f(\cdot)$ on $\Omega_1$ satisfying $f(x) \ge \ln\delta(x)$ for all $x\in\Omega_1$ and $-\infty < E_\pi f < 0$. By the strong law of large numbers for Markov chains (see [18]), we have

$$\lim_{n\to+\infty}\frac1n\sum_{k=1}^n f(\omega_k) = E_\pi f(\omega_1), \qquad \text{a.s.}$$

then $\sum_{k=1}^{+\infty} f(\omega_k) = -\infty$, a.s., which implies $\sum_{k=1}^{+\infty}\ln\delta(\omega_k) = -\infty$, a.s., and the proof is completed.
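As an illustration of Corollary 1 (not taken from the brief), the sketch below draws $A_k$ i.i.d. from a two-element set of stand-in equal-row-sum matrices, estimates $E\ln\delta(A_1(\omega))$, and iterates (1). It reuses the `delta` and `V` helpers from the sketch after Lemma 1; the matrix construction and all parameter values are assumptions made only for this example.

```python
import numpy as np

# Reuses delta(...) and V(...) from the sketch after Lemma 1.
rng = np.random.default_rng(1)

def random_equal_row_sum(n, scale=0.1):
    """Stand-in weight matrix: averaging matrix plus noise, rows re-normalized to sum to 1.
    Off-diagonal entries may be negative (cooperation and competition)."""
    B = np.full((n, n), 1.0 / n) + scale * rng.normal(size=(n, n))
    return B - (B.sum(axis=1, keepdims=True) - 1.0) / n

n = 5
Omega1 = [random_equal_row_sum(n), random_equal_row_sum(n)]   # mu puts mass 1/2 on each
E_ln_delta = 0.5 * sum(np.log(delta(B)) for B in Omega1)
print("E ln delta =", E_ln_delta)                  # Corollary 1 asks for this to be < 0

x = rng.normal(size=n)                             # x(1)
for k in range(200):
    x = Omega1[rng.integers(2)] @ x                # x(k+1) = A_k(omega) x(k)
print("V(x) after 200 steps =", V(x))              # tends to 0 when E ln delta < 0
```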

B. Continuous-Time Case

Let $\bar\Omega^c = \{A = [a_{ij}]_{i,j=1}^n \mid \sum_{j=1}^n a_{ij} = 0,\ i = 1,\ldots,n\}$; let $\bar{\mathcal B}^c$ be the Borel $\sigma$-algebra on $\bar\Omega^c$, and let $\{(\Omega_k^c, \mathcal B_k^c)\}_{k=1}^{+\infty}$ be a series of measurable spaces with $\Omega_k^c \subseteq \bar\Omega^c$ being uniformly bounded and $\mathcal B_k^c = \bar{\mathcal B}^c \cap \Omega_k^c$. Construct a product probability space $(\Omega^c, \mathcal F^c, P^c)$, where $(\Omega^c, \mathcal F^c) = \prod_{k=1}^{+\infty}\big((\Omega_k^c\times\mathbb R^+), (\mathcal B_k^c\times\mathcal B^+)\big)$, with $\mathcal B^+$ being the Borel $\sigma$-algebra on $\mathbb R^+$, and $P^c$ being a probability measure defined on $(\Omega^c, \mathcal F^c)$. Consider the following switched linear dynamical system:

$$\dot x(t) = L_k(\omega)\,x(t), \qquad t\in[t_{k-1}, t_k) \tag{11}$$

where $\omega = (\omega_1, \omega_2, \ldots)$ with $\omega_k\in\Omega_k^c$, and $L_k(\omega) = \omega_k$. The switching time sequence $\{t_k\}_{k=0}^{+\infty}$ forms a partition of $[0, +\infty)$ with $0 = t_0 < t_1 < t_2 < \cdots$. Denote $\Delta t_i = t_i - t_{i-1}$; then $((\omega_1, \Delta t_1), (\omega_2, \Delta t_2), \ldots)\in\Omega^c$. (If there is an $i$ such that $t_i = +\infty$, then we write $\Delta t_i = +\infty$ and $\Delta t_k = 0$ for $k > i$.) Thus, (11) describes a dynamical system with stochastically switched topologies and switching time points.

Define a function $\bar\delta : \bar\Omega^c \to \mathbb R$ as

$$\bar\delta(A) = \max_{i,j}\Big\{-(a_{ij} + a_{ji}) - \sum_{k\ne i,j}\min(a_{ik}, a_{jk})\Big\}$$

where $A = [a_{ij}]_{n\times n}$. Then $\bar\delta(L_k(\omega))\Delta t_k$ is a random variable on $(\Omega^c, \mathcal F^c, P^c)$. As a preparation for our main results, we first consider the following autonomous system:

$$\dot x(t) = Ax \tag{12}$$

where $A = [a_{ij}]_{n\times n}\in\bar\Omega^c$ has zero row sums. Then, we have the following.

Lemma 2: Define $V : \mathbb R^n\to\mathbb R$ as in Lemma 1; let $x(t)$ be the solution of (12) with initial value $x(0)\in\mathbb R^n$ such that $V(x(0)) > 0$; then the following relation will hold:

$$\ln V(x(t)) \le \bar\delta(A)t + \ln V(x(0)). \tag{13}$$

Proof: For any given time $t$, let $i^*$ and $i_*$ be indexes satisfying $x_{i^*}(t) = \max_{i\in\{1,2,\ldots,n\}}(x_i(t))$ and $x_{i_*}(t) = \min_{i\in\{1,2,\ldots,n\}}(x_i(t))$. Then, we have

$$\frac{d}{ds}\big(x_{i^*}(s) - x_{i_*}(s)\big)\Big|_{s=t} = \sum_{j=1}^n a_{i^*j}x_j(t) - \sum_{j=1}^n a_{i_*j}x_j(t) \tag{14}$$

$$= -\sum_{j=1,\,j\ne i^*}^n a_{i^*j}\big[x_{i^*}(t) - x_j(t)\big] - \sum_{j=1,\,j\ne i_*}^n a_{i_*j}\big[x_j(t) - x_{i_*}(t)\big] \tag{15}$$

$$= -(a_{i^*i_*} + a_{i_*i^*})\big[x_{i^*}(t) - x_{i_*}(t)\big] - \sum_{j=1,\,j\ne i^*,i_*}^n a_{i^*j}\big[x_{i^*}(t) - x_j(t)\big] - \sum_{j=1,\,j\ne i^*,i_*}^n a_{i_*j}\big[x_j(t) - x_{i_*}(t)\big]$$

$$\le -(a_{i^*i_*} + a_{i_*i^*})\big[x_{i^*}(t) - x_{i_*}(t)\big] + \sum_{j=1,\,j\ne i^*,i_*}^n \max(-a_{i^*j}, -a_{i_*j})\big[x_{i^*}(t) - x_{i_*}(t)\big]$$

$$= -(a_{i^*i_*} + a_{i_*i^*})\big[x_{i^*}(t) - x_{i_*}(t)\big] - \sum_{j=1,\,j\ne i^*,i_*}^n \min(a_{i^*j}, a_{i_*j})\big[x_{i^*}(t) - x_{i_*}(t)\big]$$

$$\le \bar\delta(A)\big[x_{i^*}(t) - x_{i_*}(t)\big].$$

This implies that

$$D^+V \le \bar\delta(A)V \tag{16}$$

where

$$D^+f(t) = \limsup_{h>0,\,h\to0}\frac{f(t+h) - f(t)}{h}. \tag{17}$$

Therefore

$$V(x(t)) \le V(x(0))\,e^{\bar\delta(A)t}. \tag{18}$$


Equivalently, we conclude

$$\ln V(x(t)) \le \bar\delta(A)t + \ln V(x(0)) \tag{19}$$

which completes the proof.
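A small numerical check of Lemma 2 (again not part of the brief): the sketch below evaluates $\bar\delta(A)$ for a randomly generated zero-row-sum matrix and compares $V(x(t))$, obtained by crude forward-Euler integration of (12), with the bound $V(x(0))e^{\bar\delta(A)t}$ of (18). The function name `delta_bar`, the step size, and the random matrix are assumptions of this illustration.

```python
import numpy as np

def delta_bar(A):
    """bar-delta(A) = max_{i,j} { -(a_ij + a_ji) - sum_{k != i,j} min(a_ik, a_jk) }."""
    n = A.shape[0]
    vals = []
    for i in range(n):
        for j in range(n):
            if i != j:
                others = [k for k in range(n) if k != i and k != j]
                vals.append(-(A[i, j] + A[j, i])
                            - sum(min(A[i, k], A[j, k]) for k in others))
    return max(vals)

rng = np.random.default_rng(2)
A = rng.normal(size=(5, 5))
np.fill_diagonal(A, 0.0)
np.fill_diagonal(A, -A.sum(axis=1))        # zero row sums: diagonal balances each row
x = rng.normal(size=5)
V0, h, T = x.max() - x.min(), 1e-3, 2.0
for _ in range(int(T / h)):                # forward Euler for x' = A x on [0, T]
    x = x + h * (A @ x)
print("V(x(T))        =", x.max() - x.min())
print("V(x(0)) e^{dT} =", V0 * np.exp(delta_bar(A) * T))   # bound (18) of Lemma 2
```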

Based on Lemma 2, we give the following theoretical results on the consensus analysis of (11).

Theorem 2: The switched system (11) will achieve consensus almost surely if

$$P^c\left(\left\{\limsup_{n\to+\infty}\frac1n\sum_{k=1}^n \bar\delta(L_k(\omega))\Delta t_k < 0\right\}\right) = 1 \tag{20}$$

$$P^c(B_n^c) = 0 \tag{21}$$

$$\lim_{n\to+\infty}\frac{\Delta t_n}{n} = 0, \qquad P^c\text{-a.s.} \tag{22}$$

where $B_n^c = \{((\omega_1,\Delta t_1),(\omega_2,\Delta t_2),\ldots)\in\Omega^c : \Delta t_n = +\infty\}$.

Proof: Equation (21) implies that $P^c\big(\cup_{n=1}^{+\infty} B_n^c\big) = 0$. Thus, we can restrict our analysis to $\Omega^c \setminus \cup_{n=1}^{+\infty} B_n^c$. For any given initial value $x(0) \in \mathbb R^n$ and $((\omega_1,\Delta t_1),(\omega_2,\Delta t_2),\ldots) \in \Omega^c$, let $x(t)$ be the solution of (11) from $x(0)$, and define $V(x)$ as in Lemma 2. Because $\bar\delta(\cdot)$ is continuous on $\bar\Omega^c$ and $\{\Omega_k^c\}_{k=1}^{+\infty}$ are uniformly bounded, there exists a constant $M > 0$ such that $|\bar\delta| < M$ on $\cup_{k=1}^{+\infty}\Omega_k^c$.

Denote $V_k = V(x(t_k))$ and $\bar V_k = \max_{t \in [t_k, t_{k+1}]} V(x(t))$. Then, by Lemma 2, we conclude

$$\ln \bar V_n \le \ln V_n + M\Delta t_{n+1} \le \bar\delta(L_n(\omega))\Delta t_n + \ln V_{n-1} + M\Delta t_{n+1} \le \cdots \le \sum_{k=1}^n \bar\delta(L_k(\omega))\Delta t_k + \ln V_0 + M\Delta t_{n+1}.$$

On the other hand, by (22), we have

$$P^c\left(\left\{\lim_{n\to+\infty}\frac1n M\Delta t_{n+1} = 0\right\}\right) = 1.$$

Combining with (20), we have

$$P^c\left(\left\{\limsup_{n\to+\infty}\frac1n\left(\sum_{k=1}^n \bar\delta(L_k(\omega))\Delta t_k + M\Delta t_{n+1}\right) < 0\right\}\right) = 1.$$

This implies that

$$P^c\left(\left\{\lim_{n\to+\infty}\left(\sum_{k=1}^n \bar\delta(L_k(\omega))\Delta t_k + M\Delta t_{n+1}\right) = -\infty\right\}\right) = 1$$

and therefore

$$P^c\left(\left\{\lim_{n\to+\infty}\ln\bar V_n = -\infty\right\}\right) = P^c\left(\left\{\lim_{t\to+\infty}V(x(t)) = 0\right\}\right) = 1$$

and the proof is completed.

Similar to the discrete-time case, we now consider the i.i.d. case and the Markov process.

Assumption 3 (i.i.d.):
1) $\Omega_1^c = \Omega_2^c = \cdots$ and $\omega_1, \omega_2, \ldots$ are i.i.d.
2) $\Delta t_1, \Delta t_2, \ldots$ are i.i.d. with $E\Delta t_1 < +\infty$ and $E(\Delta t_1 - E\Delta t_1)^2 < +\infty$.
3) All $\omega_k$ and $\Delta t_k$ are independent.

Corollary 3: Under Assumption 3, the switched system (11) will achieve consensus almost surely if

$$E\bar\delta(\omega_1) < 0. \tag{23}$$

Proof: From Theorem 2, we only need to verify (20), because (21) and (22) follow directly from Chebyshev's inequality (see [16]). Because $\{\bar\delta(\omega_k)\Delta t_k\}_{k=1}^{+\infty}$ are i.i.d., and $\omega_k$ and $\Delta t_k$ are independent, by the strong law of large numbers, we have

$$\lim_{n\to+\infty}\frac1n\sum_{k=1}^n \bar\delta(\omega_k)\Delta t_k = E\bar\delta(\omega_1)\,E\Delta t_1, \qquad \text{a.s.} \tag{24}$$

From (23) and because $E\Delta t_1 > 0$, we have

$$P^c\left(\left\{\limsup_{n\to+\infty}\frac1n\sum_{k=1}^n \bar\delta(\omega_k)\Delta t_k < 0\right\}\right) = 1 \tag{25}$$

which implies (20).

Assumption 4 (Markov Chain):
1) $\Omega_1^c = \Omega_2^c = \cdots$ are compact subsets of $\bar\Omega^c$, and $\{\omega_k\}_{k=1}^{+\infty}$ forms a Markov chain.
2) $P^c(A\mid x)$ are Markov transition probabilities such that there is a unique probability $\pi^c$ on $\mathcal B_1^c$ satisfying

$$\pi^c(A) = \int_{\Omega_1^c} P^c(A\mid x)\,\pi^c(dx), \qquad \forall A\in\mathcal B_1^c.$$

3) $\Delta t_1, \Delta t_2, \ldots$ are i.i.d. with $E\Delta t_1 < +\infty$ and $E(\Delta t_1 - E\Delta t_1)^2 < +\infty$.
4) All $\omega_k$ and $\Delta t_k$ are independent.

Under Assumption 4, the product probability space can also be written as $(\Omega_1^{c(\infty)}, \mathcal B_1^{c(\infty)}, P^c_x)$, where $P^c_x$ is a probability measure on $\mathcal B_1^{c(\infty)}$ induced by $P^c(A\mid x)$ when using the initial distribution $\omega_1 = x$.

Corollary 4: Under Assumption 4, the switched system (11) will achieve consensus almost surely if the expectation $E_{\pi^c}\bar\delta(\omega_1) < 0$.

Proof: It is similar to Corollary 3; we only need to verify (20). For a given $N > 0$, define $\Delta t_k^N = \min\{\Delta t_k, N\}$. Then, $\{\Delta t_k^N\}$ is a series of i.i.d. random variables satisfying $\Delta t_k^N \to \Delta t_k$ and $E\Delta t_k^N \to E\Delta t_1$ as $N \to +\infty$. Moreover, $\{(\omega_k, \Delta t_k^N)\}_{k=1}^{+\infty}$ forms a Markov chain on $\big((\Omega_1^c \times [0,N])^{(\infty)}, (\mathcal B_1^c \times \mathcal B^N)^{(\infty)}\big)$ with Markov transition probabilities $P^N((A,B)\mid(x,t)) = P^c(A\mid x)P(\Delta t_1^N \in B)$, where $\mathcal B^N$ is the Borel $\sigma$-algebra on $[0,N]$ and $B \in \mathcal B^N$. By the strong law of large numbers for Markov chains, we have

$$\lim_{n\to+\infty}\frac1n\sum_{k=1}^n \bar\delta(\omega_k)\Delta t_k^N = E_{\pi^N}\big(\bar\delta(\omega_1)\Delta t_1^N\big), \qquad P^N_{x,t}\text{-a.s.} \tag{26}$$

where $\pi^N$ is the unique invariant probability on $\Omega_1^c \times [0,N]$ induced by the Markov transition probabilities $P^N((A,B)\mid(x,t))$, and $P^N_{x,t}$ is the probability on $(\mathcal B_1^c \times \mathcal B^N)^{(\infty)}$ when using the initial distribution $(\omega_1, \Delta t_1) = (x,t)$. Obviously, $\pi^N((A,B)) = \pi^c(A)P(\Delta t_1^N \in B)$. Furthermore, we have

$$\lim_{n\to+\infty}\frac1n\sum_{k=1}^n \bar\delta(\omega_k)\Delta t_k = \lim_{n\to+\infty}\frac1n\left(\sum_{k=1}^n \bar\delta(\omega_k)\Delta t_k^N + \sum_{k=1,\,\Delta t_k > N}^n \bar\delta(\omega_k)(\Delta t_k - N)\right) \tag{27}$$

and

$$-\frac{M}{n}\sum_{k=1,\,\Delta t_k>N}^n (\Delta t_k - N) \le \frac1n\sum_{k=1,\,\Delta t_k>N}^n \bar\delta(\omega_k)(\Delta t_k - N) \le \frac{M}{n}\sum_{k=1,\,\Delta t_k>N}^n (\Delta t_k - N) \tag{28}$$

where $M$ is the upper bound of $|\bar\delta|$ on $\Omega_1^c$. Applying the strong law of large numbers to $\frac1n\sum_{k=1,\,\Delta t_k>N}^n (\Delta t_k - N)$ and using (26)–(28), we have

$$E_{\pi^c}\bar\delta(\omega_1)E\Delta t_1^N - M E_{\{\Delta t_1>N\}}(\Delta t_1 - N) \le \limsup_{n\to+\infty}\frac1n\sum_{k=1}^n \bar\delta(\omega_k)\Delta t_k \le E_{\pi^c}\bar\delta(\omega_1)E\Delta t_1^N + M E_{\{\Delta t_1>N\}}(\Delta t_1 - N), \qquad P^N_{x,t}\text{-a.s.}$$

where $E_{\{\Delta t_1>N\}}(\Delta t_1 - N)$ is the mathematical expectation of $\Delta t_1 - N$ restricted to $\Delta t_1 > N$. Because

$$\lim_{N\to+\infty} E_{\{\Delta t_1>N\}}(\Delta t_1 - N) = \lim_{N\to+\infty}\big(E\Delta t_1 - E\Delta t_1^N\big) = 0 \quad\text{and}\quad \lim_{N\to+\infty} P^N_{x,t} = P^c_x$$

we have

$$\lim_{n\to+\infty}\frac1n\sum_{k=1}^n \bar\delta(\omega_k)\Delta t_k = \lim_{N\to+\infty} E_{\pi^c}\bar\delta(\omega_1)E\Delta t_1^N = E_{\pi^c}\bar\delta(\omega_1)E\Delta t_1 < 0, \qquad P^c_x\text{-a.s.}$$

The proof is completed.

III. NUMERICAL EXAMPLES

In this section, we give two simple examples with numerical simulations to verify the theoretical results.

Example 1: This example is for the discrete-time network under Assumption 1. Select $n = 5$ and $\Omega_1 = \{A_1, A_2\}$ with the probability measure $\mu(\{A_1\}) = \mu(\{A_2\}) = 0.5$, where $A_1$ and $A_2$ are $5\times5$ weight matrices with equal row sums. By calculation, $\delta(A_1) = 0.8464$, $\delta(A_2) = 1.0107$, and $E\delta(A) = (0.8464 + 1.0107)/2 = 0.9285 < 1$; hence $E\ln\delta(A_1(\omega)) \le \ln E\delta(A_1(\omega)) < 0$ by Jensen's inequality. From Corollary 1, system (1) will achieve consensus almost surely. The simulation results are shown in Fig. 1, where the initial value is arbitrarily selected as $x(1) = (-1.2006, -2.5490, -0.1506, 0.1477, -0.3619)^T$.

Fig. 1. Almost sure consensus of five agents (discrete-time, i.i.d. case).
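A quick arithmetic check of the Example 1 condition (an illustration, using only the $\delta$ values reported above):

```python
import numpy as np

d = np.array([0.8464, 1.0107])           # delta(A1), delta(A2) as reported in Example 1
print("E delta    =", d.mean())           # 0.9285 < 1, as stated
print("E ln delta =", np.log(d).mean())   # approx -0.078 < 0, so Corollary 1 applies
```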

Example 2: This example is for the continuous-time network under Assumption 4. Select $n = 5$ and $\Omega_1^c = \{A_1, A_2, A_3, A_4, A_5\}$, where $A_1, \ldots, A_5 \in \bar\Omega^c$ are $5\times5$ zero-row-sum weight matrices; the last three are

$$A_3 = \begin{pmatrix} -1.5318 & -0.6406 & 1.3228 & 1.5032 & -0.6536\\ -1.2980 & -0.6675 & 1.8629 & 0.2492 & -0.1466\\ 0.1015 & 1.0761 & -1.2440 & -0.4164 & 0.4828\\ -0.5744 & 1.9329 & 2.2284 & -4.3556 & 0.7687\\ 0.3918 & 1.1601 & 0.3388 & -0.1255 & -1.7651 \end{pmatrix}$$

$$A_4 = \begin{pmatrix} -1.1817 & 0.4922 & 0.1401 & 0.2705 & 0.2789\\ 0.1379 & -1.9543 & 0.8214 & 0.9185 & 0.0765\\ 0.1962 & 0.3235 & -1.0775 & 0.5250 & 0.0328\\ 0.0520 & 0.8984 & 0.4161 & -1.5133 & 0.1468\\ 0.1634 & 0.9331 & 0.5448 & 0.2135 & -1.8548 \end{pmatrix}$$

$$A_5 = \begin{pmatrix} -1.7444 & 0.6849 & 0.7608 & 0.1214 & 0.1772\\ 0.5096 & -1.4082 & 0.0714 & 0.5674 & 0.2599\\ 0.1906 & 0.7857 & -2.2616 & 0.7927 & 0.4925\\ 0.4906 & 0.2406 & 0.4327 & -1.9608 & 0.7968\\ 0.5648 & 0.6619 & 0.4369 & 0.0871 & -1.7508 \end{pmatrix}.$$

The switching is governed by the transition matrix

$$T = \begin{pmatrix} 0 & 0.6526 & 0 & 0.3474 & 0\\ 0 & 0 & 0.6918 & 0 & 0.3082\\ 0 & 0.0198 & 0 & 0.9802 & 0\\ 0.3970 & 0 & 0.6030 & 0 & 0\\ 0 & 0.3323 & 0 & 0.6677 & 0 \end{pmatrix}.$$

Because $T$ is irreducible, there is a unique invariant probability measure $\pi^c$ on $\Omega_1^c$ with $\pi^c(A_1) = 0.1519$, $\pi^c(A_2) = 0.1173$, $\pi^c(A_3) = 0.3119$, $\pi^c(A_4) = 0.3827$, and $\pi^c(A_5) = 0.0362$. By the definition of the function $\bar\delta$, it is easy to calculate $\bar\delta(A_1) = 1.2527$, $\bar\delta(A_2) = -1.5640$, $\bar\delta(A_3) = 1.0202$, $\bar\delta(A_4) = -0.9631$, $\bar\delta(A_5) = -1.4626$, and $E_{\pi^c}\bar\delta = -0.0964 < 0$. The switching time interval $\Delta t_k$ is uniformly distributed on $(0, 1)$. By Corollary 4, system (11) will achieve consensus almost surely. The simulation results are shown in Fig. 2, with arbitrarily selected initial value $x(0) = (-0.0998, -1.4343, 0.8033, 0.0237, -1.2390)^T$.

Fig. 2. Almost sure consensus of five agents (continuous-time, Markov chain case).

IV. CONCLUSION

In this brief, we investigated the consensus problem in networks of agents with stochastically switching topologies and arbitrary weights, which may play a positive or a negative role among the agents in the consensus process. We first carried out a theoretical analysis under a general stochastic framework, and then specialized the general results into corollaries for two important stochastic cases: the i.i.d. case and the Markov-chain case. Numerical examples with simulations are provided to show the effectiveness of the theoretical results.

ACKNOWLEDGMENT

The authors would like to thank the reviewers and the editor for their valuable comments and suggestions that improved the presentation of this brief. They would also like to thank Dr. W. Lu for his helpful discussions during the revision of this brief.

REFERENCES [1] A. Fax and R. M. Murray, “Information flow and cooperative control of vehicle formations,” IEEE Trans. Autom. Control, vol. 49, no. 9, pp. 1465–1476, Sep. 2004. [2] R. Olfati and R. M. Murray, “Distributed cooperative control of multiple vehicle formations using structural potential functions,” presented at the 15th IFAC World Congr., Barcelona, Spain, Jun. 2002, unpublished. [3] C. W. Reynolds, “Flocks, herds, and schools: A distributed behavioral model,” in Proc. Comput. Graphics Conf., Jul. 1987, vol. 21, pp. 25–34. [4] J. Cortes and F. Bullo, “Coordination and geometric optimization via distributed dynamical systems,” SIAM J. Control Optim., vol. 44, pp. 1543–1574, 2005. [5] R. Olfati-Saber, J. A. Fax, and R. M. Murray, “Consensus and cooperation in networked multi-agent systems,” Proc. IEEE, vol. 95, no. 1, pp. 215–233, Jan. 2007. [6] R. Olfati-Saber and R. M. Murray, “Consensus protocols for networks of dynamic agents,” in Proc. Amer. Control Conf., 2003, pp. 951–956.


[7] R. Olfati-Saber and R. M. Murray, “Consensus problems in networks of agents with switching topology and time-delays,” IEEE Trans. Autom. Control, vol. 49, no. 9, pp. 1520–1533, Sep. 2004. [8] W. Ren, R. W. Beard, and T. W. McLain, , V. Kumar, N. E. Leonard, and A. S. Morse, Eds., “Coordination variables and consensus building in multiple vehicle systems,” in Cooperative Control, ser. Lecture Notes in Control and Information Sciences. Berlin, Germany: Springer-Verlag, 2004, vol. 309, pp. 171–188. [9] W. Ren and R. W. Beard, “Consensus seeking in multi-agent systems under dynamically changing interaction topologies,” IEEE Trans. Autom. Control, vol. 50, no. 5, pp. 655–661, May 2005. [10] J. A. Fax and R. M. Murray, “Information flow and cooperative control of vehicle formations,” IEEE Trans. Autom. Control, vol. 49, no. 9, pp. 1465–1476, Sep. 2004. [11] A. Jadbabaie, J. Lin, and A. S. Morse, “Coordination of groups of mobile autonomous agents using nearest neighbour rules,” IEEE Trans. Autom. Control, vol. 49, no. 6, pp. 988–1001, Jun. 2003. [12] L. Moreau, “Stability of multiagent systems with time-dependent communication links,” IEEE Trans. Autom. Control, vol. 50, no. 2, pp. 169–182, Feb. 2005. [13] Z. Lin, M. Broucke, and B. Francis, “Local control strategies for groups of mobile autonomous agents,” IEEE Trans. Autom. Control, vol. 49, no. 4, pp. 622–629, Apr. 2005. [14] L. Xiao and S. Boyd, “Fast linear iterations for distributed averaging,” Syst. Control Lett., vol. 53, pp. 65–78, 2004. [15] M. Porfiri and J. D. Stilwell, “Consensus seeking over random weighted directed graphs,” IEEE Trans. Autom. Control, vol. 52, no. 9, pp. 1767–1773, Sep. 2007. [16] R. Durrett, Probability: Theory and Examples, 3rd ed. Belmont, CA: Duxbury, 2005. [17] A. T. Salehi and A. Jadbabaie, “Necessary and sufficient conditions for consensus over random independent and identically distributed switching graphs,” in Proc. 46th IEEE Conf. Decision Control, Dec. 2007, pp. 4209–4214. [18] L. Breiman, “The strong law of large numbers for a class of Markov chains,” Ann. Math. Statist., vol. 31, no. 3, pp. 801–803, 1960. [19] F. Fagnani and S. Zampieri, “Randomized consensus algorithms over large scale networks,” IEEE J. Sel. Area Commun., vol. 26, no. 4, pp. 634–649, May 2008.
