
IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 21, NO. 10, OCTOBER 2010

Adaptive Lag Synchronization for Competitive Neural Networks with Mixed Delays and Uncertain Hybrid Perturbations

Xinsong Yang, Jinde Cao, Senior Member, IEEE, Yao Long, and Weiguo Rui

Abstract— This paper investigates the problem of adaptive lag synchronization for a class of competitive neural networks with discrete and distributed delays (mixed delays), as well as uncertain nonlinear external and stochastic perturbations (hybrid perturbations). A simple but robust adaptive controller is designed such that the response system can lag-synchronize with a drive system. Based on the Lyapunov stability theory and some suitable Lyapunov–Krasovskii functionals, several sufficient conditions ensuring lag synchronization are developed. Our synchronization criteria are easily verified and do not require solving any linear matrix inequality. Some existing results are improved and extended. Moreover, the designed adaptive controller has better anti-interference capacity and is more practical than the usual adaptive controller. Numerical simulations are exploited to show the effectiveness of the theoretical results.

Index Terms— Competitive neural networks, lag synchronization, mixed delay, nonlinear perturbations, time scale, vector-form noise.

I. INTRODUCTION

NEURAL networks (NNs) have drawn the attention of many researchers from different areas since they have been fruitfully applied in signal and image processing, associative memories, combinatorial optimization, automatic control, and so on (see [1]–[4] for a survey). In 1983, Cohen and Grossberg [5] proposed competitive neural networks (CNNs). Later, Meyer-Bäse et al. proposed in [6]–[8] the so-called CNNs with different time scales, which can be seen as extensions of Hopfield NNs [9], cellular networks [10], Cohen and Grossberg's CNNs [5], and Amari's model for primitive neuronal competition [11]. In the CNN model, there are two types of state variables: the short-term memory (STM) variable describing the fast neural activity, and the long-term memory (LTM) variable describing the slow unsupervised synaptic modifications. Therefore, there are two time scales in the CNN model, one of which corresponds to the fast change of the state, and the other

Manuscript received April 16, 2010; revised August 4, 2010; accepted August 5, 2010. Date of publication August 30, 2010; date of current version October 6, 2010. This work was supported in part by the National Natural Science Foundation of China under Grant 11072059 and Grant 10801056, and in part by the Natural Science Foundation of Jiangsu Province of China under Grant BK2009271, and in part by the Scientific Research Fund of Yunnan Province under Grant 2008CD186. X. Yang, Y. Long, and W. Rui are with the Department of Mathematics, Honghe University, Mengzi, Yunnan 661100, China (e-mail: [email protected]; [email protected]; [email protected]). J. Cao is with the Department of Mathematics, Southeast University, Nanjing 210096, China (e-mail: [email protected]; [email protected]). Digital Object Identifier 10.1109/TNN.2010.2068560

to the slow change of the synapse caused by external stimuli. In [6]–[8], Meyer-Bäse et al. studied the stability of nondelayed CNNs with different time scales. In [12], Meyer-Bäse et al. further studied local uniform stability of CNNs with different time scales under vanishing perturbations. However, time delays always exist in real NNs because of the finite speed of transmission between neurons as well as traffic congestion, and they are often the source of oscillation and instability in NNs. Therefore, global exponential stability of delayed CNNs with different time scales was investigated in [13]–[17]. In the past decades, since the concept of drive–response synchronization for coupled chaotic systems was proposed in [18], much attention has been paid to control and chaos synchronization [19]–[21] because of its potential applications in, e.g., secure communication, biological systems, and information science [22]–[25]. In [18], a chaotic system, called the driver (or master), generates a signal sent over a channel to a responder (or slave), which uses this signal to synchronize itself with the driver. In other words, in drive–response (or master–slave) systems, the response (or slave) system is influenced by the behavior of the drive (or master) system, but the drive (or master) system is independent of the response (or slave) system. After complete synchronization (CS) was proposed in [18], many other types of synchronization have been presented, such as lag synchronization (LS) [26], anticipated synchronization [19], projective synchronization [27], phase synchronization [28], and generalized synchronization [29]. CS is characterized by the convergence of the two chaotic trajectories, y(t) = x(t), where x(t) is the state of the driver and y(t) that of the responder. However, from the viewpoint of engineering applications and the characteristics of the channel, a time delay always exists.
When the unavoidable delay is taken into account, CS turns into LS, which means a coincidence of shifted-in-time states of two coupled systems, i.e., the state variable of the drive system is delayed by a positive τ in comparison with that of the response (or slave) system: y(t) = x(t − τ), τ > 0. Therefore, LS has become a hot topic and has attracted much attention in many fields [26], [30], [31]. In the research area of NN synchronization, several results have been obtained in the literature (see [23], [32]–[36]). However, results on synchronization of CNNs are few; we could find only [37] and [38]. In [38], Lou and Cui studied exponential synchronization for a class of CNNs by a state feedback control scheme. The synchronization

1045–9227/$26.00 © 2010 IEEE


criteria were given in terms of a linear matrix inequality (LMI). By combining the adaptive control scheme and the LMI approach, Gu [37] investigated synchronization for CNNs with stochastic perturbations. However, [37] and [38] did not consider distributed delays, and the stochastic perturbations in [37] did not include the LTM state. Moreover, the authors of [37] and [38] did not consider external perturbations. In practice, NNs usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths [16], [17], [39], [40]. Therefore, it is more practical to consider synchronization of coupled CNNs with both discrete and distributed delays, i.e., mixed delays. Effects of perturbations should also be taken into account when studying synchronization of NNs. White noise, brought about by random fluctuations in the course of transmission and other probable causes, has received considerable attention in the literature (see [4], [37], [41]–[45]). Unlike [37], this paper will show that stochastic perturbations that exclude the LTM state are much easier to handle than those that include it. On the other hand, from a practical point of view, CNNs operate in a changing environment and may therefore be disturbed by unknown environmental factors. Such perturbations may be non-stochastic, i.e., their average values are not zero. Moreover, one important motivation for drive–response chaos synchronization is the application of chaos to secure communication [22]. A small perturbation to a chaotic system will result in a drastic change of its chaotic behavior. Furthermore, artificially adding some disturbances can make the message transmission more complicated, and hence it is more difficult to realize drive–response synchronization than in the perturbation-free case, which in turn strengthens the security.
However, to the best of our knowledge, the synchronization problem of CNNs with mixed delays and both uncertain external and stochastic perturbations is still open. We call this kind of perturbation a hybrid perturbation. Based on the above analysis, in this paper we investigate the LS of CNNs with mixed delays and uncertain hybrid perturbations. The stochastic perturbation takes the form of a multidimensional Wiener process (or Brownian motion, see [4], [41]). In the drive–response CNNs, not only the Lipschitz constants of the activation functions but also the bounds of the external perturbations are unknown. A simple but robust adaptive controller is designed to overcome these uncertainties and synchronize the coupled systems. Compared with the adaptive controller in [37], which is well known in the literature [21]–[23], [36], [41], our adaptive controller has better anti-interference capability and thus is more practical. Moreover, no LMI needs to be solved in our synchronization criteria. Numerical simulations demonstrate the effectiveness of the new adaptive controller. The rest of this paper is organized as follows. In Section II, the model of CNNs with mixed delays and hybrid perturbations is presented, together with some necessary assumptions, definitions, and lemmas. Our main results and their rigorous proofs are given in Section III. In Section IV, numerical simulations are offered to show the effectiveness of our results. Conclusions are given in Section V.


Notations: The notations are quite standard. Throughout this paper, R+ and R^n denote, respectively, the set of nonnegative real numbers and the n-dimensional Euclidean space. The superscript T denotes matrix or vector transposition. A vector x = (x_1, x_2, ..., x_n)^T ∈ R^n is said to be bounded if every |x_i|, i = 1, 2, ..., n, is bounded. The notation X ≤ Y (respectively, X < Y), where X and Y are symmetric matrices, means that X − Y is negative semidefinite (respectively, negative definite). I_n is the n × n identity matrix. ‖·‖ is the Euclidean norm in R^n. If A is a symmetric matrix, λ_max(A) denotes its largest eigenvalue. Moreover, let (Ω, F, {F_t}_{t≥0}, P) be a complete probability space with a filtration {F_t}_{t≥0} satisfying the usual conditions (i.e., the filtration contains all P-null sets and is right continuous). Denote by L^p_{F_0}([−κ, 0]; R^n) the family of all F_0-measurable C([−κ, 0]; R^n)-valued random variables ξ = {ξ(s) : −κ ≤ s ≤ 0} such that sup_{−κ≤s≤0} E‖ξ(s)‖^p < ∞, where E{·} stands for the mathematical expectation operator with respect to the given probability measure P. The shorthand diag(w_1, w_2, ..., w_n) denotes a diagonal matrix with diagonal elements w_1, w_2, ..., w_n. Sometimes, the arguments of a function or a matrix will be omitted in the analysis when no confusion can arise.

II. PRELIMINARIES

The mixed time-delayed CNN with different time scales and external perturbations is described as

$$
\begin{cases}
\varepsilon\dot{x}_i(t) = -c_i x_i(t) + \displaystyle\sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t-\theta)) \\
\qquad\qquad + \displaystyle\sum_{j=1}^{n} d_{ij}\int_{t-\eta}^{t} f_j(x_j(s))\,ds + E_i\sum_{l=1}^{p} m_{il}(t)F_l + \sigma_i^{x}(t), \quad i = 1, 2, \ldots, n,\\
\dot{m}_{il}(t) = -\alpha_i m_{il}(t) + \beta_i F_l f_i(x_i(t)), \quad l = 1, 2, \ldots, p
\end{cases}
\tag{1}
$$

where the first equation denotes the STM and the second the LTM, n denotes the number of neurons, p denotes the number of constant external stimuli, x_i(t) is the neuron current activity level, c_i > 0 is the time constant of the neuron, f_j(x_j(t)) is the output of the j-th neuron, m_il(t) is the synaptic efficiency, F_l is the constant external stimulus, a_ij represents the connection weight between the i-th and the j-th neuron, E_i is the strength of the external stimulus, ε is the time scale of the STM state, b_ij and d_ij represent the synaptic weights of delayed feedback, scalars θ > 0 and η > 0 are the discrete and the distributed time delay, respectively, and α_i and β_i denote disposable scaling constants with α_i > 0. Moreover, F = (F_1, F_2, ..., F_p)^T, x(t) = (x_1(t), x_2(t), ..., x_n(t))^T ∈ R^n, and σ_i^x(t) ≜ σ_i^x(t, S(t), x(t), x(t − θ), ∫_{t−η}^t x(s)ds) represents a nonlinear scalar which may include parametric perturbations and other external disturbances, where m_i(t) = (m_{i1}(t), m_{i2}(t), ..., m_{ip}(t))^T, S_i(t) = Σ_{l=1}^p m_{il}(t)F_l = m_i^T(t)F, i = 1, 2, ..., n, and S(t) = (S_1(t), S_2(t), ..., S_n(t))^T ∈ R^n.
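The STM/LTM dynamics above can be integrated numerically with a simple Euler scheme once the delayed and distributed terms are buffered. The sketch below is a minimal illustration for n = 2 neurons, with made-up parameter values (none of them come from the paper's example), the LTM kept in its aggregated S_i(t) form, and the perturbation σ^x set to zero:

```python
import numpy as np

# Euler-scheme sketch of the CNN with mixed delays; parameters are illustrative.
n = 2
eps, theta, eta, dt = 1.0, 0.1, 0.2, 0.001
c = np.array([1.0, 1.0])
a = np.array([[2.0, -0.1], [0.5, 1.5]])
b = np.array([[-1.0, 0.1], [0.2, -1.0]])
d = np.array([[-0.5, 0.0], [0.1, -0.5]])
E = np.array([0.5, 0.5])
alpha = np.array([1.0, 1.0])
beta = np.array([0.3, 0.3])
f = np.tanh                              # activation, 1-Lipschitz

steps = 5000
d_theta, d_eta = int(theta / dt), int(eta / dt)
x = np.zeros((steps, n))
x[: d_theta + 1] = np.array([0.4, 0.6])  # constant initial history on [-theta, 0]
S = np.array([0.1, 0.6])                 # aggregated LTM state S_i(t) = m_i(t)^T F
for k in range(d_theta, steps - 1):
    lo = max(0, k - d_eta)
    integ = f(x[lo:k]).sum(axis=0) * dt  # ~ int_{t-eta}^{t} f(x(s)) ds
    stm = (-c * x[k] + a @ f(x[k]) + b @ f(x[k - d_theta]) + d @ integ
           + E * S) / eps                # sigma^x omitted (taken as zero here)
    x[k + 1] = x[k] + dt * stm
    S = S + dt * (-alpha * S + beta * f(x[k]))
print(x[-1], S)
```

The inner rectangle rule approximates the distributed-delay integral; a smaller dt or a trapezoidal rule would reduce the discretization error.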

From (1), we obtain the following CNN:

$$
\begin{cases}
\varepsilon\dot{x}_i(t) = -c_i x_i(t) + \displaystyle\sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t-\theta)) \\
\qquad\qquad + \displaystyle\sum_{j=1}^{n} d_{ij}\int_{t-\eta}^{t} f_j(x_j(s))\,ds + E_i S_i(t) + \sigma_i^{x}(t),\\
\dot{S}_i(t) = -\alpha_i S_i(t) + \beta_i |F|^2 f_i(x_i(t)), \quad i = 1, 2, \ldots, n
\end{cases}
\tag{2}
$$

where |F|² = F_1² + F_2² + ⋯ + F_p² is a constant. Without loss of generality, the input stimulus vector F is assumed to be normalized with unit magnitude, |F|² = 1. Then (2) turns into the following network:

$$
\begin{cases}
\varepsilon\dot{x}_i(t) = -c_i x_i(t) + \displaystyle\sum_{j=1}^{n} a_{ij} f_j(x_j(t)) + \sum_{j=1}^{n} b_{ij} f_j(x_j(t-\theta)) \\
\qquad\qquad + \displaystyle\sum_{j=1}^{n} d_{ij}\int_{t-\eta}^{t} f_j(x_j(s))\,ds + E_i S_i(t) + \sigma_i^{x}(t),\\
\dot{S}_i(t) = -\alpha_i S_i(t) + \beta_i f_i(x_i(t)), \quad i = 1, 2, \ldots, n
\end{cases}
\tag{3}
$$

or, in compact form,

$$
\begin{cases}
\dot{x}(t) = -\frac{1}{\varepsilon}Cx(t) + \frac{1}{\varepsilon}Af(x(t)) + \frac{1}{\varepsilon}Bf(x(t-\theta)) + \frac{1}{\varepsilon}D\displaystyle\int_{t-\eta}^{t} f(x(s))\,ds + \frac{1}{\varepsilon}ES(t) + \frac{1}{\varepsilon}\sigma^{x}(t),\\
\dot{S}(t) = -\alpha S(t) + \beta f(x(t))
\end{cases}
\tag{4}
$$

where C = diag(c_1, c_2, ..., c_n), A = (a_ij)_{n×n}, B = (b_ij)_{n×n}, D = (d_ij)_{n×n}, E = diag(E_1, E_2, ..., E_n), α = diag(α_1, α_2, ..., α_n), β = diag(β_1, β_2, ..., β_n), and σ^x(t) = (σ_1^x(t), σ_2^x(t), ..., σ_n^x(t))^T. The initial condition of (4) is given as x(t) = φ^x(t) ∈ C([−κ, 0], R^n), S(t) = φ^S(t) ∈ C([−κ, 0], R^n), where κ = max{θ, η}.

Based on the concept of drive–response synchronization, we take (4) as the drive system and design the following response system:

$$
\begin{cases}
dy(t) = \Big[-\frac{1}{\varepsilon}Cy(t) + \frac{1}{\varepsilon}Af(y(t)) + \frac{1}{\varepsilon}Bf(y(t-\theta)) + \frac{1}{\varepsilon}D\displaystyle\int_{t-\eta}^{t} f(y(s))\,ds \\
\qquad\qquad + \frac{1}{\varepsilon}ER(t) + \frac{1}{\varepsilon}\sigma^{y}(t) + U\Big]dt + h(t)\,d\omega(t),\\
dR(t) = \left[-\alpha R(t) + \beta f(y(t))\right]dt
\end{cases}
\tag{5}
$$

where σ^y(t) = (σ_1^y(t), σ_2^y(t), ..., σ_n^y(t))^T with σ_i^y(t) ≜ σ_i^y(t, R(t), y(t), y(t − θ), ∫_{t−η}^t y(s)ds) representing a nonlinear vector that may include parametric perturbations and other external disturbances to (5), U = (u_1, u_2, ..., u_n)^T is the controller to be designed, and ω(t) = (ω_1(t), ..., ω_n(t))^T is an n-dimensional Wiener process defined on (Ω, F, {F_t}_{t≥0}, P). Here, the white noise dω_i(t) is independent of dω_j(t) for i ≠ j, and h(t) ≜ h(t, z(t), e(t), e(t − θ), ∫_{t−η}^t e(s)ds) : R+ × R^n × R^n × R^n × R^n → R^{n×n} is called the noise intensity function matrix. This type of stochastic perturbation can be regarded as a result of the occurrence of random uncertainties during the process of transmission. The initial condition of (5) is given by y(t) = ϕ^y(t) ∈ C([−κ, 0], R^n), R(t) = ϕ^R(t) ∈ C([−κ, 0], R^n), where κ = max{θ, η}. We assume that solutions of networks (4) and (5) are bounded and that the output signals of the NN (4) can be received by (5) with transmission delay τ ≥ 0.

Definition 1 (LS): Systems (4) and (5) are called globally lag-synchronized in mean square if for any given initial condition there exists a constant τ > 0 such that the states of the two systems satisfy

$$
\lim_{t\to+\infty} E\|y(t) - x(t-\tau)\|^2 = 0 \quad\text{and}\quad \lim_{t\to+\infty} E\|R(t) - S(t-\tau)\|^2 = 0.
$$
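Definition 1 suggests a direct empirical check: sample both trajectories on a common grid, shift the drive by τ, and average the squared error. The trajectories below are synthetic stand-ins (a hypothetical pair already in perfect lag synchrony), not solutions of (4)–(5):

```python
import numpy as np

# Estimate the mean-square lag error E||y(t) - x(t - tau)||^2 on a uniform grid.
dt, tau = 0.01, 0.8
t = np.arange(0.0, 20.0, dt)
x = np.stack([np.sin(t), np.cos(t)], axis=1)              # "drive" state, shape (T, n)
y = np.stack([np.sin(t - tau), np.cos(t - tau)], axis=1)  # responder lagging by tau

shift = int(round(tau / dt))
err = y[shift:] - x[:-shift]                  # samples of y(t) - x(t - tau)
ms_err = np.mean(np.sum(err**2, axis=1))      # mean-square lag error
print(ms_err)
```

For solutions of the stochastic pair (4)–(5), the average would additionally run over noise realizations, matching the expectation in Definition 1.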

Our main purpose in this paper is to design a suitable robust adaptive controller U such that systems (4) and (5) can realize globally asymptotic LS. In order to study the LS between (4) and (5) with lag time τ ≥ 0, we define the error states e(t) = y(t) − x(t − τ) and z(t) = R(t) − S(t − τ). Subtracting (4) from (5) yields the following error system:

$$
\begin{cases}
de(t) = \Big[-\frac{1}{\varepsilon}Ce(t) + \frac{1}{\varepsilon}Ag(e(t)) + \frac{1}{\varepsilon}Bg(e(t-\theta)) + \frac{1}{\varepsilon}D\displaystyle\int_{t-\eta}^{t}g(e(s))\,ds \\
\qquad\qquad + \frac{1}{\varepsilon}Ez(t) + \frac{1}{\varepsilon}\sigma^{y}(t) - \frac{1}{\varepsilon}\sigma^{x}(t-\tau) + U\Big]dt + h(t)\,d\omega(t),\\
dz(t) = \left[-\alpha z(t) + \beta g(e(t))\right]dt
\end{cases}
\tag{6}
$$

where g(e(t)) ≜ f(y(t)) − f(x(t − τ)) and g(e(t − θ)) ≜ f(y(t − θ)) − f(x(t − θ − τ)). The initial condition of (6) is e(t) = ξ(t) = ϕ^y(t) − φ^x(t − τ), z(t) = ζ(t) = ϕ^R(t) − φ^S(t − τ), where ξ(t), ζ(t) ∈ L²_{F_0}([−κ, 0], R^n).

In order to get our main results in the next section, we state here some needed properties of the Wiener process (or Brownian motion) [4], [41]:

1) E(h dω) = 0 and (h dω)^T (h dω) = trace(h^T h)dt.

2) Suppose that V = V(x, t) is a scalar function, where x = (x_1, x_2, ..., x_n)^T is obtained from an Itô differential equation. The differential of V is

$$
dV = \frac{\partial V}{\partial t}\,dt + \sum_{i=1}^{n}\frac{\partial V}{\partial x_i}\,dx_i + \frac{1}{2}\sum_{i,j=1}^{n}\frac{\partial^2 V}{\partial x_i\,\partial x_j}\,dx_i\,dx_j.
$$

In the expansion of the above equation, the following algebraic operations are used: dt · dt = 0, dt · dω_i = 0, dω_i · dω_i = dt, and dω_i · dω_j = 0 (i ≠ j).

Throughout this paper, we make the following assumptions.

(H1) There exist unknown positive constants δ_i, i = 1, 2, ..., n, such that |f_i(x) − f_i(y)| ≤ δ_i|x − y| for all x, y ∈ R, x ≠ y.
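Property 2) can be made concrete on the quadratic functions used in the proofs below. Assuming a generic Itô differential de(t) = μ(t)dt + h(t)dω(t) with some drift μ(t) (a sketch, not a system from this paper), applying the rule to V(e) = ½e^T e gives

```latex
dV = \sum_{i=1}^{n} e_i \, de_i
   + \frac{1}{2}\sum_{i,j=1}^{n} \delta_{ij}\, de_i\, de_j
   = e^{T}\mu(t)\,dt + e^{T}h(t)\,d\omega(t)
   + \frac{1}{2}\operatorname{trace}\!\big(h^{T}(t)h(t)\big)\,dt
```

where δ_ij is the Kronecker delta: ∂²V/∂e_i∂e_j = δ_ij, the dt · dω cross terms vanish by the algebraic rules above, and de_i de_j contributes (h(t)h^T(t))_{ij} dt. This is exactly the pattern that produces the ½ trace(h^T(t)h(t)) term in the proof of Theorem 1.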

(H2) σ_i^x(t, 0, 0, 0, 0) = 0, σ_i^y(t, 0, 0, 0, 0) = 0 and, if u, v, w, s ∈ R^n are bounded and t ∈ R+, then |σ_i^x(t, u, v, w, s)| and |σ_i^y(t, u, v, w, s)|, i = 1, 2, ..., n, are bounded.

(H3) h(t, 0, 0, 0, 0) = 0 and there exist unknown positive constants µ_1, µ_2, µ_3, and µ_4 such that

$$
\mathrm{trace}\big(h^T(t)h(t)\big) \le \mu_1\|e(t)\|^2 + \mu_2\|e(t-\theta)\|^2 + \mu_3\int_{t-\eta}^{t}\|e(s)\|^2\,ds + \mu_4\|z(t)\|^2
\tag{7}
$$

and µ_4 < 2 min_{1≤i≤n}{α_i}.

Remark 1: The models of this paper are general. In order to avoid overly complex notation, we do not consider time-varying mixed delays. The models in [37] and [38] are special cases of those of this paper. Note that the noise intensity function matrix h(t, z(t), e(t), e(t − θ), ∫_{t−η}^t e(s)ds), containing z(t) and ∫_{t−η}^t e(s)ds, is more general than h(t, e(t), e(t − θ)) in [37]. One can see from the subsequent proofs in this paper that z(t) has a major effect on the synchronization of CNNs.

Remark 2: Note that condition (H2) is very mild. We do not impose the usual conditions, such as the Lipschitz condition or differentiability, on the external perturbation functions. They can be discontinuous or even impulsive functions [2], [3]. Note that, if systems (4) and (5) have equilibrium points, nontrivial periodic orbits, limit cycles, or even chaotic orbits, then solutions of (4) and (5) are bounded; hence S(t), x(t), x(t − θ), ∫_{t−η}^t x(s)ds, R(t), y(t), y(t − θ), and ∫_{t−η}^t y(s)ds are bounded. Therefore, (H2) is satisfied.

Lemma 1 [5]: For any vectors x, y ∈ R^n and positive definite matrix G ∈ R^{n×n}, the following matrix inequality holds: 2x^T y ≤ x^T Gx + y^T G^{−1}y.

Lemma 2 [46]: For any constant matrix D ∈ R^{n×n}, D^T = D > 0, scalar σ > 0, and vector function ω : [0, σ] → R^n, one has

$$
\sigma\int_{0}^{\sigma}\omega^T(s)D\,\omega(s)\,ds \;\ge\; \left(\int_{0}^{\sigma}\omega(s)\,ds\right)^{T} D \left(\int_{0}^{\sigma}\omega(s)\,ds\right)
$$

provided that the integrals are all well defined, where D > 0 denotes that D is a positive definite matrix.

Definition 2: The trivial solution of system (6) is said to be globally asymptotically stable in mean square if for any given initial condition

$$
\lim_{t\to+\infty} E\|e(t)\|^2 = 0 \quad\text{and}\quad \lim_{t\to+\infty} E\|z(t)\|^2 = 0.
$$

According to assumptions (H1)–(H3), system (6) admits a trivial solution. Obviously, if the trivial solution of system (6) is globally asymptotically stable in mean square for any given initial condition, then, by virtue of Definition 1, the global LS in mean square between (4) and (5) is achieved.

III. MAIN RESULTS

Our main objective in this section is to design a powerful adaptive feedback controller, added to the infrastructure of (5), such that the states of (5) globally asymptotically lag-synchronize in mean square with those of (4), i.e., the trivial solution of system (6) is globally

asymptotically stable in mean square. Moreover, when there is no perturbation in (4) or (5), they can also be lag-synchronized by the same adaptive controller.

Theorem 1: Suppose that assumptions (H1)–(H3) hold. Then, under the controller

$$
u_i = -l_i e_i(t) - \omega\rho_i\,\mathrm{sign}(e_i(t)), \quad i = 1, 2, \ldots, n,
\tag{8}
$$

and the adaptive law

$$
\dot{l}_i = \varepsilon_i e_i^2(t), \qquad \dot{\rho}_i = p_i|e_i(t)|, \quad i = 1, 2, \ldots, n,
\tag{9}
$$

the trivial solution of system (6) is globally asymptotically stable in mean square for any given initial condition, where ω > 1/ε and ε_i > 0, p_i > 0, i = 1, 2, ..., n, are arbitrary constants.

Proof: Since solutions of networks (4) and (5) are bounded, one can see from assumption (H2) that there exist positive constants M_{i1} and M_{i2} such that |σ_i^x(t − τ)| ≤ M_{i1} and |σ_i^y(t)| ≤ M_{i2} for t ∈ R+, i = 1, 2, ..., n. Let M_i = M_{i1} + M_{i2} and define the following Lyapunov–Krasovskii functional candidate:

$$
V(t) = \sum_{i=1}^{4} V_i(t)
\tag{10}
$$

where

$$
V_1(t) = \frac{1}{2}\left[e^T(t)e(t) + z^T(t)z(t)\right],
\tag{11}
$$
$$
V_2(t) = \frac{1}{2}\int_{t-\theta}^{t} e^T(s)Qe(s)\,ds,
\tag{12}
$$
$$
V_3(t) = \frac{1}{2}\int_{t-\eta}^{t}\int_{z}^{t} e^T(s)Me(s)\,ds\,dz,
\tag{13}
$$
$$
V_4(t) = \frac{1}{2}\sum_{i=1}^{n}\frac{1}{\varepsilon_i}(l_i - k_i)^2 + \frac{1}{2\varepsilon}\sum_{i=1}^{n}\frac{1}{p_i}(M_i - \rho_i)^2,
\tag{14}
$$

Q and M are positive definite matrices, K = diag(k_1, k_2, ..., k_n), and Q, M, and K are to be determined. In view of properties 1) and 2) of the Wiener process, differentiating both sides of (11) along trajectories of the error system (6) (the dz_i dz_j terms vanish since dz(t) contains no diffusion), one obtains

$$
\begin{aligned}
dV_1(t) &= e^T(t)\,de(t) + \frac{1}{2}\sum_{i,j=1}^{n}\frac{\partial^2 V_1}{\partial e_i\,\partial e_j}\,de_i\,de_j + z^T(t)\,dz(t)\\
&= \Big\{e^T(t)\Big[-\Big(\frac{1}{\varepsilon}C + L\Big)e(t) + \frac{1}{\varepsilon}Ag(e(t)) + \frac{1}{\varepsilon}Bg(e(t-\theta)) + \frac{1}{\varepsilon}D\int_{t-\eta}^{t}g(e(s))\,ds\\
&\qquad + \frac{1}{\varepsilon}Ez(t) + \frac{1}{\varepsilon}\sigma^{y}(t) - \frac{1}{\varepsilon}\sigma^{x}(t-\tau) - \omega\rho\,\mathrm{sign}(e(t))\Big] + \frac{1}{2}\mathrm{trace}\big(h^T(t)h(t)\big)\\
&\qquad + z^T(t)\big[-\alpha z(t) + \beta g(e(t))\big]\Big\}\,dt + e^T(t)h(t)\,d\omega(t)
\end{aligned}
\tag{15}
$$

where L = diag(l_1, l_2, ..., l_n), ρ = diag(ρ_1, ρ_2, ..., ρ_n), and sign(e(t)) = (sign(e_1(t)), sign(e_2(t)), ..., sign(e_n(t)))^T.
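The controller (8) and the adaptive law (9) are straightforward to discretize. A minimal Euler-step sketch, where the gain values ε_i, p_i, ω and the error sample are illustrative only:

```python
import numpy as np

# One Euler step of the adaptive controller (8)-(9); parameter values illustrative.
def adaptive_step(e, l, rho, eps_i, p_i, omega, dt):
    """Return control u(t) and updated gains (l, rho) after one Euler step."""
    u = -l * e - omega * rho * np.sign(e)   # (8): u_i = -l_i e_i - omega rho_i sign(e_i)
    l = l + dt * eps_i * e**2               # (9): l_i' = eps_i e_i^2
    rho = rho + dt * p_i * np.abs(e)        # (9): rho_i' = p_i |e_i|
    return u, l, rho

e = np.array([0.5, -0.2])                   # current error sample
l, rho = np.zeros(2), np.zeros(2)           # gains start at zero
u, l, rho = adaptive_step(e, l, rho, eps_i=1.0, p_i=1.0, omega=2.0, dt=0.01)
print(u, l, rho)
```

Both gains are nondecreasing along trajectories, consistent with the observation in the proof that l_i and ρ_i approach constants.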

From (H1), one knows that

$$
g^T(e(t))g(e(t)) \le e^T(t)\Delta e(t),
\tag{16}
$$
$$
g^T(e(t-\theta))g(e(t-\theta)) \le e^T(t-\theta)\Delta e(t-\theta)
\tag{17}
$$

where Δ = diag(δ_1², δ_2², ..., δ_n²). It follows from Lemma 1, (16), and (17) that

$$
\frac{1}{\varepsilon}e^T(t)Ag(e(t)) \le \frac{1}{2\varepsilon}e^T(t)AA^Te(t) + \frac{1}{2\varepsilon}g^T(e(t))g(e(t)) \le e^T(t)\left(\frac{1}{2\varepsilon}AA^T + \frac{1}{2\varepsilon}\Delta\right)e(t)
\tag{18}
$$

and

$$
\frac{1}{\varepsilon}e^T(t)Bg(e(t-\theta)) \le \frac{1}{2\varepsilon}e^T(t)BB^Te(t) + \frac{1}{2\varepsilon}g^T(e(t-\theta))g(e(t-\theta)) \le \frac{1}{2\varepsilon}e^T(t)BB^Te(t) + \frac{1}{2\varepsilon}e^T(t-\theta)\Delta e(t-\theta).
\tag{19}
$$

Furthermore, it can be derived from Lemma 2 that

$$
\frac{1}{2\varepsilon}\left(\int_{t-\eta}^{t}g(e(s))\,ds\right)^{T}\left(\int_{t-\eta}^{t}g(e(s))\,ds\right) \le \frac{\eta}{2\varepsilon}\int_{t-\eta}^{t}g^T(e(s))g(e(s))\,ds \le \frac{\eta}{2\varepsilon}\int_{t-\eta}^{t}e^T(s)\Delta e(s)\,ds.
\tag{20}
$$

Hence

$$
\frac{1}{\varepsilon}e^T(t)D\int_{t-\eta}^{t}g(e(s))\,ds \le \frac{1}{2\varepsilon}e^T(t)DD^Te(t) + \frac{1}{2\varepsilon}\left(\int_{t-\eta}^{t}g(e(s))\,ds\right)^{T}\left(\int_{t-\eta}^{t}g(e(s))\,ds\right) \le \frac{1}{2\varepsilon}e^T(t)DD^Te(t) + \frac{\eta}{2\varepsilon}\int_{t-\eta}^{t}e^T(s)\Delta e(s)\,ds.
\tag{21}
$$

For positive scalars r_1 > 0 and r_2 > 0, it follows from Lemma 1 that

$$
\frac{1}{\varepsilon}e^T(t)Ez(t) \le \frac{r_1}{2\varepsilon}e^T(t)E^2e(t) + \frac{1}{2\varepsilon r_1}z^T(t)z(t),
\tag{22}
$$
$$
z^T(t)\beta g(e(t)) \le \frac{1}{2\varepsilon r_2}z^T(t)z(t) + \frac{r_2}{2\varepsilon}g^T(e(t))\beta^2 g(e(t)) \le \frac{1}{2\varepsilon r_2}z^T(t)z(t) + e^T(t)\frac{r_2}{2\varepsilon}\beta^2\Delta e(t).
\tag{23}
$$

By (7), (18), (19), and (21)–(23), one obtains from (15)

$$
dV_1(t) \le \left[e^T(t)(\Pi_1 - L)e(t) + z^T(t)\Pi_2 z(t) + e^T(t-\theta)\Pi_3 e(t-\theta) + \int_{t-\eta}^{t}e^T(s)\Pi_4 e(s)\,ds + e^T(t)\Theta\right]dt + e^T(t)h(t)\,d\omega(t)
\tag{24}
$$

where Π_1 = −(1/ε)C + (1/2ε)AA^T + (1/2ε)Δ + (1/2ε)BB^T + (1/2ε)DD^T + (r_1/2ε)E² + (r_2/2ε)β²Δ + (1/2)µ_1 I_n, Π_2 = ((1/2εr_1) + (1/2εr_2) + (1/2)µ_4)I_n − α, Π_3 = (1/2ε)Δ + (1/2)µ_2 I_n, Π_4 = (1/2ε)ηΔ + (1/2)µ_3 I_n, and Θ = (1/ε)σ^y(t) − (1/ε)σ^x(t − τ) − ωρ sign(e(t)). Differentiating both sides of (12)–(14) along trajectories of the error system (6), one has

$$
dV_2(t) = \frac{1}{2}\left[e^T(t)Qe(t) - e^T(t-\theta)Qe(t-\theta)\right]dt,
\tag{25}
$$
$$
dV_3(t) = \frac{1}{2}\left[\eta\, e^T(t)Me(t) - \int_{t-\eta}^{t}e^T(s)Me(s)\,ds\right]dt,
\tag{26}
$$
$$
dV_4(t) = \left[\sum_{i=1}^{n}(l_i-k_i)e_i^2(t) - \frac{1}{\varepsilon}\sum_{i=1}^{n}(M_i-\rho_i)|e_i(t)|\right]dt = \left[e^T(t)(L-K)e(t) - \frac{1}{\varepsilon}\sum_{i=1}^{n}(M_i-\rho_i)|e_i(t)|\right]dt.
\tag{27}
$$

Noting that M_i = M_{i1} + M_{i2} and ω > 1/ε, one can obtain the following inequality:

$$
e^T(t)\Theta - \frac{1}{\varepsilon}\sum_{i=1}^{n}(M_i-\rho_i)|e_i(t)| \le \frac{1}{\varepsilon}\sum_{i=1}^{n}|e_i(t)|\big(M_{i1}+M_{i2}\big) - \Big(\omega-\frac{1}{\varepsilon}\Big)\sum_{i=1}^{n}|e_i(t)|\rho_i - \frac{1}{\varepsilon}\sum_{i=1}^{n}M_i|e_i(t)| \le -\Big(\omega-\frac{1}{\varepsilon}\Big)\sum_{i=1}^{n}|e_i(t)|\rho_i \le 0.
\tag{28}
$$

Taking Q = 2Π_3 and M = 2Π_4, one gets from (10) and (24)–(28) that

$$
dV(t) = \sum_{i=1}^{4}dV_i(t) \le \left[e^T(t)(\Pi_1+\Pi_3+\eta\Pi_4-K)e(t) + z^T(t)\Pi_2 z(t)\right]dt + e^T(t)h(t)\,d\omega(t).
\tag{29}
$$

Since µ_4 < 2 min_{1≤i≤n}{α_i} by (H3), there exist real numbers r_1 and r_2 such that

$$
\frac{1}{2\varepsilon r_1} + \frac{1}{2\varepsilon r_2} + \frac{1}{2}\mu_4 < \min_{1\le i\le n}\{\alpha_i\}.
\tag{30}
$$

Therefore, Π_2 < 0. Taking χ = min_{1≤i≤n}{α_i} − (1/2εr_1) − (1/2εr_2) − (1/2)µ_4 > 0 and k_i = λ_max(Π_1+Π_3+ηΠ_4) + 1, one derives from (29) and (30) that

$$
dV(t) \le \left[-e^T(t)e(t) - \chi z^T(t)z(t)\right]dt + e^T(t)h(t)\,d\omega(t) \le -\ell\left(\|e(t)\|^2 + \|z(t)\|^2\right)dt + e^T(t)h(t)\,d\omega(t)
\tag{31}
$$

where ℓ = min{1, χ}.
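Lemma 2, used in (20) and (21) above, can be spot-checked numerically. The discrete Riemann-sum analog of the inequality is itself an instance of the Cauchy–Schwarz inequality in the D-inner product, so the check below must pass for any path ω(s) and any positive definite D (both chosen arbitrarily here):

```python
import numpy as np

# Numerical spot-check of the Jensen-type inequality of Lemma 2:
# sigma * int w^T D w ds  >=  (int w ds)^T D (int w ds), for D > 0.
rng = np.random.default_rng(1)
n, sigma, m = 3, 0.7, 2000
ds = sigma / m
M = rng.standard_normal((n, n))
D = M @ M.T + n * np.eye(n)                 # positive definite by construction
s = np.linspace(0.0, sigma, m)
w = np.stack([np.sin(3 * s + k) for k in range(n)], axis=1)   # path w(s), shape (m, n)

lhs = sigma * sum(wi @ D @ wi for wi in w) * ds   # sigma * int w^T D w ds
int_w = w.sum(axis=0) * ds                        # int w ds
rhs = int_w @ D @ int_w                           # (int w)^T D (int w)
print(lhs >= rhs)
```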

Taking mathematical expectations on both sides of (31) and noticing that E(e^T(t)h(t)dω(t)) = 0, one obtains

$$
\frac{d\big(EV(t)\big)}{dt} \le -\ell\, E\left(\|e(t)\|^2 + \|z(t)\|^2\right).
\tag{32}
$$

Moreover, in (32) the equality holds if and only if E(‖e(t)‖² + ‖z(t)‖²) = 0, i.e., E‖e(t)‖² = 0 and E‖z(t)‖² = 0. It can now be concluded from Lyapunov stability theory that

$$
\lim_{t\to+\infty} E\|e(t)\|^2 = 0 \quad\text{and}\quad \lim_{t\to+\infty} E\|z(t)\|^2 = 0.
$$

According to Definition 2, the trivial solution of system (6) is globally asymptotically stable in mean square. At the same time, l̇_i → 0 and ρ̇_i → 0, which implies that l_i and ρ_i approach some constants. This completes the proof.

Fig. 1. Trajectories of x(t) (upper) and S(t) (lower) of (35) with σ_1^x(t) and initial conditions x(t) = (0.4, 0.6)^T, S(t) = (0.1, 0.6)^T ∀t ∈ [−1, 0].

Fig. 2. Trajectories of x(t) (upper) and S(t) (lower) of (35) with σ_2^x(t) and initial conditions x(t) = (−1, −0.5)^T, S(t) = (0.5, 1.5)^T ∀t ∈ [−1, 0].

If h(t) = h̃(t, e(t), e(t − θ), ∫_{t−η}^t e(s)ds) ≜ h̃(t) in (5), then the error system between (4) and (5), i.e., (6), becomes the following system:

$$
\begin{cases}
de(t) = \Big[-\frac{1}{\varepsilon}Ce(t) + \frac{1}{\varepsilon}Ag(e(t)) + \frac{1}{\varepsilon}Bg(e(t-\theta)) + \frac{1}{\varepsilon}D\displaystyle\int_{t-\eta}^{t}g(e(s))\,ds + \frac{1}{\varepsilon}Ez(t)\\
\qquad\qquad + \frac{1}{\varepsilon}\sigma^{y}(t) - \frac{1}{\varepsilon}\sigma^{x}(t-\tau) + U\Big]dt + \tilde{h}(t)\,d\omega(t),\\
dz(t) = \left[-\alpha z(t) + \beta g(e(t))\right]dt.
\end{cases}
\tag{33}
$$

We make the following assumption for model (33).

(H̃3) h̃(t, 0, 0, 0) = 0 and there exist unknown positive constants µ_1, µ_2, and µ_3 such that

$$
\mathrm{trace}\big(\tilde{h}^T(t)\tilde{h}(t)\big) \le \mu_1\|e(t)\|^2 + \mu_2\|e(t-\theta)\|^2 + \mu_3\int_{t-\eta}^{t}\|e(s)\|^2\,ds.
$$

Theorem 2: Suppose that assumptions (H1), (H2), and (H̃3) hold. Then, under the controller (8) and the adaptive law (9), system (5) with h(t) = h̃(t) can be lag-synchronized in mean square with (4).

Proof: Let the Lyapunov–Krasovskii functional candidate be the same V(t) as in the proof of Theorem 1. One can take r_1 and r_2 such that (1/2εr_1) + (1/2εr_2) < min_{1≤i≤n}{α_i}. Differentiating V(t) along solutions of (33) and following the same procedure as in the proof of Theorem 1, we can finish the proof.

If σ^x(t) ≡ 0 and σ^y(t) ≢ 0, or σ^x(t) ≢ 0 and σ^y(t) ≡ 0, it is obvious that the controller (8) and the update law (9) can also synchronize (4) and (5). Especially, if σ^x(t) = σ^y(t) ≡ 0,

Fig. 3. Time response of x(t) and y(t) of the drive (solid) and response (dot) networks.

Fig. 4. Time response of S(t) and R(t) of the drive (solid) and response (dot) networks.

then (6) becomes the following system:

$$
\begin{cases}
de(t) = \Big[-\frac{1}{\varepsilon}Ce(t) + \frac{1}{\varepsilon}Ag(e(t)) + \frac{1}{\varepsilon}Bg(e(t-\theta)) + \frac{1}{\varepsilon}D\displaystyle\int_{t-\eta}^{t}g(e(s))\,ds\\
\qquad\qquad + \frac{1}{\varepsilon}Ez(t) + U\Big]dt + h(t)\,d\omega(t),\\
dz(t) = \left[-\alpha z(t) + \beta g(e(t))\right]dt.
\end{cases}
\tag{34}
$$

Theorem 3: Suppose that assumptions (H1) and (H3) hold and σ^x(t) = σ^y(t) ≡ 0. Then, under the controller (8) and the adaptive law (9), the trivial solution of system (34) is globally asymptotically stable in mean square.

Proof: Define the following Lyapunov–Krasovskii functional candidate:

$$
V(t) = \frac{1}{2}\left[e^T(t)e(t) + z^T(t)z(t)\right] + \frac{1}{2}\int_{t-\theta}^{t}e^T(s)Qe(s)\,ds + \frac{1}{2}\int_{t-\eta}^{t}\int_{z}^{t}e^T(s)Me(s)\,ds\,dz + \frac{1}{2}\sum_{i=1}^{n}\frac{1}{\varepsilon_i}(l_i-k_i)^2
$$

where Q and M are positive definite matrices, K = diag(k_1, k_2, ..., k_n), and Q, M, and K are to be determined. Differentiating V(t) along solutions of (34) and noting that −ωe^T(t)ρ sign(e(t)) = −ω Σ_{i=1}^n |e_i(t)|ρ_i ≤ 0, by the same procedure as in the proof of Theorem 1, one can easily finish this proof.

From Theorems 2 and 3, the following corollaries can be easily obtained. We omit their proofs here.

Corollary 1: Suppose that assumptions (H1) and (H̃3) hold, σ^x(t) = σ^y(t) ≡ 0, and h(t) = h̃(t, e(t), e(t − θ), ∫_{t−η}^t e(s)ds) in (4) and (5). Then the response network (5) can be lag-synchronized with the drive network (4) under the controller (8) and the adaptive law (9).

Corollary 2: Suppose that assumption (H1) holds and h(t) = σ^x(t) = σ^y(t) ≡ 0. Then the response network (5) can be lag-synchronized with the drive network (4) under the controller (8) and the adaptive law (9).

Remark 3: Note that the controller (8) is discontinuous, so the phenomenon of chattering will appear [47], [48]. In order to eliminate the chattering, the controller (8) can be modified as

$$
u_i = -l_i e_i(t) - \omega\rho_i\,\frac{e_i(t)}{|e_i(t)| + \varsigma_i}, \quad i = 1, 2, \ldots, n
$$

where ς_i, i = 1, 2, ..., n, are sufficiently small positive constants.

Remark 4: From the above results, one can see that the controller (8) and the adaptive law (9) can lag-synchronize (4) and (5) whether or not σ^x(t) or σ^y(t) is zero. Hence, the designed controller has good robustness. Particularly, when σ^x(t) = σ^y(t) ≡ 0 in (4) and (5), the control parameter ω can be relaxed to ω ≥ 0. If ω = 0, then the controller (8) and the adaptive law (9) turn into the usual adaptive controllers in [21]–[23], [36], [37], and [41]. It can be observed from

Fig. 5. Time response of the error states e(t) (upper) and z(t) (lower).

numerical simulations in the next section that the designed controller has better anti-interference capability than those in [21]–[23], [36], [37], and [41]. Therefore, the designed controller is more practical than the usual adaptive controllers.

Remark 5: Our models (4) and (5) include the models in [37] as special cases when σ^x(t) = σ^y(t) ≡ 0, h(t) = h̃(t, e(t), e(t − θ)), and D = 0. Assumption (H2) in [37] is equivalent to (H̃3) of this paper. From Corollary 1 and Remark 2, one can see that the models in [37] can be easily synchronized by the controller (8) and the adaptive law (9) even with ω = 0. Note that, in Corollary 1, no LMI has to be solved, whereas solving an LMI is necessary in [37]. Therefore, our results improve those of [37].

IV. NUMERICAL EXAMPLE

In this section, one example with numerical simulations is provided to illustrate the effectiveness of the theoretical results obtained above. Numerical simulations demonstrating the better anti-interference capability of the new controller compared with the usual adaptive controller are also given. Consider the following CNN with mixed delays and uncertain nonlinear external perturbations:

$$
\begin{cases}
\dot{x}(t) = -\frac{1}{\varepsilon}Cx(t) + \frac{1}{\varepsilon}Af(x(t)) + \frac{1}{\varepsilon}Bf(x(t-\theta)) + \frac{1}{\varepsilon}D\displaystyle\int_{t-\eta}^{t}f(x(s))\,ds + \frac{1}{\varepsilon}ES(t) + \frac{1}{\varepsilon}\sigma^{x}(t),\\
\dot{S}(t) = -\alpha S(t) + \beta f(x(t))
\end{cases}
\tag{35}
$$
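Once example parameter values are fixed, the gain level k_i = λ_max(Π_1 + Π_3 + ηΠ_4) + 1 appearing in the proof of Theorem 1 can be evaluated numerically. In this sketch the matrices follow the example's values as read from the text, the free scalars r_1, r_2 and the noise constants µ_1–µ_4 are illustrative assumptions (the theorem only requires (30) to hold), and the Lipschitz matrix is Δ = I because tanh is 1-Lipschitz:

```python
import numpy as np

# Evaluate k = lambda_max(Pi1 + Pi3 + eta*Pi4) + 1 from the proof of Theorem 1.
# r1, r2 and mu_1..mu_4 are illustrative; Delta = I for tanh activations.
eps, theta, eta = 2.5, 1.0, 0.3
C = np.diag([1.2, 1.0])
A = np.array([[3.0, -0.3], [8.0, 5.0]])
B = np.array([[-1.4, 0.1], [0.3, -8.0]])
D = np.array([[-1.2, -0.1], [-2.0, -2.0]])
E = np.diag([0.5, 1.5])
beta = np.diag([0.5, 0.3])
Delta = np.eye(2)
r1, r2 = 1.0, 1.0
mu1 = mu2 = mu3 = mu4 = 0.1
I = np.eye(2)

Pi1 = (-C / eps + (A @ A.T) / (2 * eps) + Delta / (2 * eps)
       + (B @ B.T) / (2 * eps) + (D @ D.T) / (2 * eps)
       + r1 * (E @ E) / (2 * eps) + r2 * (beta @ beta @ Delta) / (2 * eps)
       + 0.5 * mu1 * I)
Pi3 = Delta / (2 * eps) + 0.5 * mu2 * I
Pi4 = eta * Delta / (2 * eps) + 0.5 * mu3 * I
k = np.max(np.linalg.eigvalsh(Pi1 + Pi3 + eta * Pi4)) + 1.0
print(k)
```

In simulation, the adaptive gains l_i produced by (9) settle near this level; the exact value depends on the chosen r_1, r_2 and µ constants.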

Fig. 6. Evolutions of control gains l and ρ.

where x(t) = (x_1(t), x_2(t))^T, f(x(t)) = (tanh(x_1(t)), tanh(x_2(t)))^T, θ = 1, η = 0.3, ε = 2.5, and

C = [1.2, 0; 0, 1],   A = [3, −0.3; 8, 5],
B = [−1.4, 0.1; 0.3, −8],   D = [−1.2, −0.1; −2, −2],
E = [0.5, 0; 0, 1.5],   α = [2, 0; 0, 1.5],   β = [0.5, 0; 0, 0.3].

In the case that σ^x(t) = σ_1^x(t) = (0.2∫_{t−0.3}^{t} x_1(s)ds + 0.18∫_{t−0.3}^{t} x_2(s)ds − 0.15S_1(t), ∫_{t−0.3}^{t} x_2(s)ds − 0.15S_2(t))^T and the initial conditions are chosen as x(t) = (0.4, 0.6)^T, S(t) = (0.1, 0.6)^T, ∀t ∈ [−1, 0], the chaotic-like trajectories of (35) can be seen in Fig. 1.

In the case that σ^x(t) = σ_2^x(t) = (0.15∫_{t−0.3}^{t} x_1(s)ds + 0.17∫_{t−0.3}^{t} x_2(s)ds + rS_1(t), ∫_{t−0.3}^{t} x_2(s)ds + rS_2(t))^T with r = 0.01 and the initial conditions are chosen as x(t) = (−1, −0.5)^T, S(t) = (0.5, 1.5)^T, ∀t ∈ [−1, 0], the trajectories of (35) can be seen in Fig. 2.
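For readers who wish to reproduce trajectories such as those in Figs. 1 and 2, the drive system (35) with σ_1^x(t) can be integrated with a simple forward-Euler scheme. The sketch below is illustrative only: the 2×2 matrix groupings follow the parameter list above (recovered from the extracted text, so they should be checked against the original paper), the function name `simulate_drive` is ours, and the delayed terms are handled with a constant-history buffer and Riemann-sum approximations of the integrals.

```python
import numpy as np

# Parameters of the drive network (35); the 2x2 groupings below are an
# assumption reconstructed from the extracted text, not a verified copy.
eps, theta, eta = 2.5, 1.0, 0.3
C = np.diag([1.2, 1.0])
A = np.array([[3.0, -0.3], [8.0, 5.0]])
B = np.array([[-1.4, 0.1], [0.3, -8.0]])
D = np.array([[-1.2, -0.1], [-2.0, -2.0]])
E = np.diag([0.5, 1.5])
alpha = np.diag([2.0, 1.5])
beta = np.diag([0.5, 0.3])
f = np.tanh  # activation function used in (35)

def simulate_drive(T=5.0, dt=0.001, x0=(0.4, 0.6), S0=(0.1, 0.6)):
    """Forward-Euler integration of (35) with sigma^x = sigma_1^x.

    The history x(t) = x0 on [-theta, 0] matches the stated initial
    conditions; integrals over [t-eta, t] are Riemann sums on the buffer.
    """
    n_hist = round(theta / dt)   # buffer length for the discrete delay
    n_eta = round(eta / dt)      # window for the distributed delay
    steps = round(T / dt)
    x = np.tile(np.array(x0, float), (n_hist + steps + 1, 1))
    S = np.array(S0, float)
    for k in range(n_hist, n_hist + steps):
        xk = x[k]
        dist = f(x[k - n_eta:k]).sum(axis=0) * dt   # ~ int f(x(s)) ds
        ix = x[k - n_eta:k].sum(axis=0) * dt        # ~ int x(s) ds
        # sigma_1^x(t) as defined in the text
        sigma = np.array([0.2 * ix[0] + 0.18 * ix[1] - 0.15 * S[0],
                          ix[1] - 0.15 * S[1]])
        dx = (-C @ xk + A @ f(xk) + B @ f(x[k - n_hist])
              + D @ dist + E @ S + sigma) / eps
        x[k + 1] = xk + dt * dx
        S = S + dt * (-alpha @ S + beta @ f(xk))    # LTM dynamics
    return x[n_hist:], S
```

Plotting the two components of the returned trajectory against each other should give a phase portrait comparable to Fig. 1, up to discretization error.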


IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 21, NO. 10, OCTOBER 2010

Fig. 7. Time response of the error states e(t) (upper) and z(t) (lower).

Fig. 8. Time response of the error states e(t) (upper) and z(t) (lower).

Let (35) with σ_1^x(t) be the drive network. We design the response network as

dy(t) = [−(1/ε)Cy(t) + (1/ε)Af(y(t)) + (1/ε)Bf(y(t − θ)) + (1/ε)D ∫_{t−η}^{t} f(y(s))ds + (1/ε)ER(t) + (1/ε)σ^y(t) + U]dt + h(t)dω(t),
dR(t) = [−αR(t) + βf(y(t))]dt     (36)

where σ^y(t) = σ_2^x(t)|_{x=y, S(t)=R(t)} with r = 0.01, the transmission delay is τ = 0.8, z(t) = R(t) − S(t − 0.8), e(t) = y(t) − x(t − 0.8), and the noise intensity function matrix is

h(t) = [z_1(t), ∫_{t−0.3}^{t} e_1(s)ds; e_2(t − 1), e_1(t)].     (37)

By Lemma 2, one has

(∫_{t−0.3}^{t} e_1(s)ds)^2 ≤ 0.3 ∫_{t−0.3}^{t} (e_1(s))^2 ds ≤ 0.3 ∫_{t−0.3}^{t} ‖e(s)‖^2 ds.

From (37), one gets

trace(h^T(t)h(t)) = z_1^2(t) + e_2^2(t − 1) + e_1^2(t) + (∫_{t−0.3}^{t} e_1(s)ds)^2 ≤ ‖z(t)‖^2 + ‖e(t − 1)‖^2 + ‖e(t)‖^2 + 0.3 ∫_{t−0.3}^{t} ‖e(s)‖^2 ds.     (38)

Obviously, (H1) is satisfied with δ_1 = δ_2 = 1. By the formulations of σ_1^x(t) and σ_2^x(t), one can see that σ_1^x(t) and σ_2^x(t) are bounded as long as x(t) and S(t) are bounded. Hence, (H2′) is satisfied. Furthermore, (H3) is satisfied with μ_1 = μ_2 = μ_4 = 1, μ_3 = 0.3, and μ_4 = 1 < 3 = 2 min_{1≤i≤n}{α_i}. Take ω = 1 such that ω > 1/2.5 = 0.4. According to Theorem 1 and Remark 3, network (36) can lag-synchronize with (35) under the controller

u_i = −l_i e_i(t) − ωρ_i e_i(t)/(|e_i(t)| + 0.01),   i = 1, 2     (39)

and the following adaptive law:

l̇_i = ε_i e_i^2(t),   ρ̇_i = p_i|e_i(t)|,   i = 1, 2.     (40)
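A minimal sketch of one discrete update of the controller (39) and adaptive law (40) is given below, with the gain ODEs discretized by the same time step used in the Euler–Maruyama simulation; the function name and its signature are ours, not from the paper.

```python
import numpy as np

def adaptive_control_step(e, l, rho, dt, omega=1.0, eps_gain=0.1, p_gain=0.2):
    """One Euler step of the controller (39) and adaptive law (40).

    e   : error vector e(t) = y(t) - x(t - tau)
    l   : current feedback gains l_i(t)
    rho : current robust gains rho_i(t)
    The 0.01 in the denominator is the smoothed sign(e_i) from (39),
    which avoids chattering near e_i = 0.
    """
    u = -l * e - omega * rho * e / (np.abs(e) + 0.01)  # controller (39)
    l_next = l + dt * eps_gain * e**2                  # (40): l_i' = eps_i e_i^2
    rho_next = rho + dt * p_gain * np.abs(e)           # (40): rho_i' = p_i |e_i|
    return u, l_next, rho_next
```

Note that both gain updates are non-negative, so l_i(t) and ρ_i(t) are monotonically non-decreasing and, as the simulations show, settle to constants once the error vanishes.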

In the simulations, the Euler–Maruyama numerical technique given in [49] is adopted to simulate the drive–response

systems (35) and (36). The initial conditions are taken as follows: x(t) = (0.4, 0.6)^T, S(t) = (0.1, 0.6)^T, y(t) = (−1, −0.5)^T, R(t) = (0.5, 1.5)^T, l_1(t) = l_2(t) = ρ_1(t) = ρ_2(t) = 0.1, ∀t ∈ [−1, 0]. Some initial parameters are given as follows: T = 40, the time step size is δt = 0.0005, ω = 1, ε_i = 0.1, and p_i = 0.2. The simulation results are shown in Figs. 3–6. Figs. 3 and 4 represent the time response of the states of the drive and response networks. We can see that the states of the response system lag those of the drive system. Fig. 5 shows the time response of the error states. It can be seen that z(t) and e(t) approach zero quickly; hence R(t) → S(t − 0.8) and y(t) → x(t − 0.8) quickly. The simulations in Figs. 3–5 show that (36) lag-synchronizes with (35) with the transmission delay τ = 0.8 quickly, which verifies the theoretical results. At the same time, the control gains approach some constants. Fig. 6 describes the evolutions of the control gains l and ρ.

In order to show the advantage of our new controller, we replace r = 0.01 in (36) with r = 10, which means that the disturbance to the response system is large, and replace the noise intensity function matrix h(t) with h(t) = diag(z_1(t), e_2(t)), the other parameters remaining the same as above. It is easy to see that all the conditions in Theorem 1 are still satisfied. Hence, according to Theorem 1 and Remark 3, networks (36) [with the new r and h(t)] and (35) with the new hybrid perturbations can realize LS under the controller (39) and (40). Fig. 7 represents the time response of the error states; z(t) and e(t) approach zero quickly, which implies that LS is achieved under the controller (39) and (40). However, when we take ω = 0 in (39), so that (39) and (40) become the usual adaptive controller used in [22], [23], [37], [41], [21], and [36], the two systems with the new hybrid perturbations cannot realize LS; see Fig. 8 for the time response of the error states. Figs. 7 and 8 confirm the better anti-interference capability of the new controller compared with the usual adaptive controller.

Remark 6: The initial conditions of (35) have a key effect on the trajectory. For example, when we take x(t) = (−1, −0.5)^T, S(t) = (0.5, 1.5)^T, ∀t ∈ [−1, 0], the trajectory of (35) with σ_1^x(t) is as shown in Fig. 9, which is different from that in Fig. 1.

Fig. 9. Trajectory of (35) with σ_1^x(t) and initial conditions x(t) = (−1, −0.5)^T, S(t) = (0.5, 1.5)^T, ∀t ∈ [−1, 0].

V. CONCLUSION

Hybrid perturbations and mixed delays are unavoidable in practice. In this paper, we proposed a new kind of competitive NNs with mixed delays and hybrid perturbations, which are more practical than those in [37] and [38]. Sufficient conditions for LS of the new model are obtained by designing a simple and robust adaptive controller. The designed controller has better anti-interference capability and is more practical than the usual adaptive controller. The synchronization criteria are very simple, and we do not need to solve any LMI. Therefore, results of this paper improved and extended those of [37] and [38]. Numerical simulations verified the effectiveness of the theoretical results. Since CNNs are extensions of usual NNs [9], [11], [5], [10], our results can be easily extended to synchronization of the usual NNs in [23], [32], [34], [35], and [45] with hybrid perturbations and mixed delays. In conclusion, the results of this paper are new and important in real applications. ACKNOWLEDGMENT The authors are deeply grateful to the editor and the anonymous reviewers of this paper for helpful suggestions, which greatly improved it. R EFERENCES [1] Y.-Y. Hou, T.-L. Liao, C.-H. Lien, and J.-J. Yan, “Stability analysis of neural networks with interval time-varying delays,” Chaos, vol. 17, no. 3, pp. 033120-1–033120-9, 2007. [2] R. Samidurai, S. M. Anthoni, and K. Balachandran, “Global exponential stability of neutral-type impulsive neural networks with discrete and distributed delays,” Nonlinear Anal.: Hybrid Syst., vol. 4, no. 1, pp. 103–112, Feb. 2010. [3] L. Sheng and H. Yang, “Exponential synchronization of a class of neural networks with mixed time-varying delays and impulsive effects,” Neurocomput., vol. 71, nos. 16–18, pp. 3666–3674, Oct. 2008. [4] X. Yang and J. Cao, “Stochastic synchronization of coupled neural networks with intermittent control,” Phys. Lett. A, vol. 373, no. 36, pp. 3259–3272, Aug. 2009. [5] M. A. Cohen and S. 
Grossberg, “Absolute stability of global pattern formation and parallel memory storage by competitive neural networks,” IEEE Trans. Syst. Man, Cybern., B, vol. 13, no. 5, pp. 815–826, Sep. 1983. [6] A. Meyer-Bäse, S. Pilyugin, A. Wismüler, and S. Foo, “Local exponential stability of competitive neural networks with different time scales,” Eng. Appl. Artificial Intell., vol. 17, no. 3, pp. 227–232, Apr. 2004.


[7] A. Meyer-Bäse, S. S. Pilyugin, and Y. Chen, “Global exponential stability of competitive neural networks with different time scales,” IEEE Trans. Neural Netw., vol. 14, no. 3, pp. 716–719, May 2003. [8] A. Meyer-Bäse, F. Ohl, and H. Scheich, “Singular perturbation analysis of competitive neural networks with different time scales,” Neural Comput., vol. 8, no. 8, pp. 1731–1742, Nov. 1996. [9] H. Alonso, T. Mendonça, and P. Rocha, “Hopfield neural networks for on-line parameter estimation,” Neural Netw., vol. 22, no. 4, pp. 450–462, May 2009. [10] W. Ding, “Synchronization of delayed fuzzy cellular neural networks with impulsive effects,” Commun. Nonlinear Sci. Numer. Simul., vol. 14, no. 11, pp. 3945–3952, Nov. 2009. [11] S. Amari, “Field theory of self-organizing neural net,” IEEE Trans. Syst. Man, Cybern., B, vol. 13, no. 5, pp. 741–748, Sep. 1983. [12] A. Meyer-Bäse, R. Roberts, and V. Thümmler, “Local uniform stability of competitive neural networks with different time-scales under vanishing perturbations,” Neurocomput., vol. 73, nos. 4–6, pp. 770–775, Jan. 2010. [13] H. Lu and G. Chen, “Global exponential convergence of multitime-scale competitive neural networks,” IEEE Trans. Circuits Syst. II, vol. 52, no. 11, pp. 761–765, Nov. 2005. [14] H. Lu and S. Amari, “Global exponential stability of multitime scale competitive neural networks with nonsmooth functions,” IEEE Trans. Neural Netw., vol. 17, no. 5, pp. 1152–1164, Sep. 2006. [15] H. Lu and Z. He, “Global exponential stability of delayed competitive neural networks with different time scales,” Neural Netw., vol. 18, no. 3, pp. 243–250, Apr. 2005. [16] X. Nie and J. Cao, “Exponential stability of competitive neural networks with time-varying and distributed delays,” Proc. IMechE, Part 1: J. Syst. Control Eng., vol. 222, no. 6, pp. 583–594, 2008. [17] X. Nie and J. Cao, “Multistability of competitive neural networks with time-varying and distributed delays,” Nonlinear Anal.: Real World Appl., vol. 10, no. 
2, pp. 928–942, Apr. 2009. [18] L. M. Pecora and T. L. Carroll, “Synchronization in chaotic systems,” Phys. Rev. Lett., vol. 64, no. 8, pp. 821–824, Feb. 1990. [19] A. A. Budini and M. O. Cáceres, “Adiabatic small noise fluctuations around anticipated synchronization: A perspective from scalar masterslave dynamics,” Phys. A, vol. 387, no. 18, pp. 4483–4496, Jul. 2008. [20] N. Chopra and M. W. Spong, “On exponential synchronization of Kuramoto oscillators,” IEEE Trans. Autom. Control, vol. 54, no. 2, pp. 353–357, Feb. 2009. [21] F. Sorrentio and E. Ott, “Adaptive synchronization of dynamics on evolving complex networks,” Phys. Rev. Lett., vol. 100, no. 11, pp. 114101–114104, Mar. 2008. [22] S. Bowong, F. M. M. Kakmeni, and H. Fotsin, “A new adaptive observerbased synchronization scheme for private communication,” Phys. Lett. A, vol. 355, no. 3, pp. 193–201, Jul. 2006. [23] J. Cao and J. Lu, “Adaptive synchronization of neural networks with or without time-varying delay,” Chaos, vol. 16, no. 1, pp. 013133-1– 013133-6, Mar. 2006. [24] W. He and J. Cao, “Generalized synchronization of chaotic systems: An auxiliary system approach via matrix measure,” Chaos, vol. 19, no. 1, pp. 013118-1–013118-10, Mar. 2009. [25] S. Sundar and A. A. Minai, “Synchronization of randomly multiplexed chaotic systems with application to communication,” Phys. Rev. Lett., vol. 85, no. 25, pp. 5456–5459, Dec. 2000. [26] E. M. Shahverdiev, S. Sivaprakasam, and K. A. Shore, “Lagsynchronization in time-delayed systems,” Phys. Lett. A, vol. 292, no. 6, pp. 320–324, Jan. 2002. [27] Y. C. Hung, C. C. Hwang, T. L. Liao, and J. J. Yan, “Generalized projective synchronization of chaotic systems with unknown dead-zone input: Observer-based approach,” Chaos, vol. 16, no. 3, pp. 033125-1– 033125-9, Sep. 2006. [28] A. B. Fabricio, L. Zhao, G. Q. Marcos, and E. N. M. Elbert, “Chaotic phase synchronization and desynchronization in an oscillator network for object selection,” Neural Netw., vol. 22, nos. 
5–6, pp. 728–737, Jul.– Aug. 2009. [29] N. F. Rulkov, M. M. Sushchik, L. S. Tsimring, and H. D. Abarbanel, “Generalized synchronization of chaos in directionally coupled chaotic systems,” Phys. Rev. E, vol. 51, no. 2, pp. 980–994, Feb. 1995. [30] Y. Huang, Y.-W. Wang, and J.-W. Xiao, “Generalized lagsynchronization of continuous chaotic system,” Chaos Sol. Fract., vol. 40, no. 2, pp. 766–770, Apr. 2009. [31] C. Li, X. Liao, and K. Wong, “Chaotic lag-synchronization of coupled time-delayed systems and its applications in secure communication,” Physica D, vol. 194, nos. 3–4, pp. 187–202, Jul. 2004.


[32] G. Chen, J. Zhou, and Z. Liu, “Global synchronization of coupled delayed neural networks and applications to chaotic CNN models,” Int. J. Bifur. Chaos, vol. 14, no. 7, pp. 2229–2240, 2004. [33] J. Lu and D. W. C. Ho, “Globally exponential synchronization and synchronizability for general dynamical networks,” IEEE Trans. Syst., Man, Cybern., B, vol. 40, no. 2, pp. 350–361, Apr. 2010. [34] H. Lu and C. van Leeuwen, “Synchronization of chaotic neural networks via output or state coupling,” Chaos Sol. Fract., vol. 30, no. 1, pp. 166– 176, Oct. 2006. [35] Y. Yang and J. Cao, “Exponential lag synchronization of a class of chaotic delayed neural networks with impulsive effects,” Physica A, vol. 386, no. 1, pp. 492–502, Dec. 2007. [36] W. Yu and J. Cao, “Adaptive synchronization and lag synchronization of uncertain dynamical system with time delay based on parameter identification,” Physica A, vol. 375, no. 2, pp. 467–482, Mar. 2007. [37] H. Gu, “Adaptive synchronization for competitive neural networks with different time scales and stochastic perturbation,” Neurocomput., vol. 73, nos. 1–3, pp. 350–356, Dec. 2009. [38] X. Lou and B. Cui, “Synchronization of competitive neural networks with different time scales,” Physica A, vol. 380, pp. 563–576, Jul. 2007. [39] T. Li, S.-M. Fei, and K.-J. Zhang, “Synchronization control of recurrent neural networks with distributed delays,” Physica A, vol. 387, no. 4, pp. 982–996, Feb. 2008. [40] W. Yu, J. Cao, G. Chen, J. Lu, J. Han, and W. Wei, “Local synchronization of a complex network model,” IEEE Trans. Syst. Man, Cybern., B, vol. 39, no. 1, pp. 230–241, Feb. 2009. [41] S. Hassan and A. Aria, “Adaptive synchronization of two chaotic systems with stochastic unknown parameters,” Commun. Nonlinear Sci. Numer. Simul., vol. 14, no. 2, pp. 508–519, Feb. 2009. [42] J. Lu, D. W. C. Ho, and Z. Wang, “Pinning stabilization of linearly coupled stochastic neural networks via minimum number of controllers,” IEEE Trans. Neural Netw., vol. 
20, no. 10, pp. 1617–1629, Oct. 2009. [43] Y. Sun, J. Cao, and Z. Wang, “Exponential synchronization of stochastic perturbed chaotic delayed neural networks,” Neurocomput., vol. 70, nos. 13–15, pp. 2477–2485, Aug. 2007. [44] Z. Wang, Y. Liu, M. Li, and X. Liu, “Stability analysis for stochastic Cohen–Grossberg neural networks with mixed time delays,” IEEE Trans. Neural Netw., vol. 17, no. 3, pp. 814–820, May 2006. [45] W. Yu and J. Cao, “Synchronization control of stochastic delayed neural networks,” Physica A, vol. 373, pp. 252–260, Jan. 2007. [46] K. Gu, V. L. Kharitonov, and J. Chen, Stability of Time-Delay System. Boston, MA: Birkhäuser, 2003. [47] C. Edwards, S. K. Spurgeon, and R. J. Patton, “Sliding mode observers for fault detection and isolation,” Automatica, vol. 36, no. 4, pp. 541– 553, Apr. 2000. [48] J.-S. Lin and J.-J. Yan, “Adaptive synchronization for two identical generalized Lorenz chaotic systems via a single controller,” Nonlinear Anal.: Real World Appl., vol. 10, no. 2, pp. 1151–1159, Apr. 2009. [49] D. J. Higham, “An algorithmic introduction to numerical simulation of stochastic differential equations,” SIAM Rev., vol. 43, no. 3, pp. 525–546, 2001.

Xinsong Yang received the B.S. degree in mathematics from Huaihua Normal University, Hunan, China, in 1992, and the M.S. degree in mathematics from Yunnan University, Kunming, China, in 2006. He was a Visiting Scholar in the Department of Mathematics, Southeast University, Nanjing, China, from 2008 to 2009. He is currently an Associate Professor with the Department of Mathematics, Honghe University, Yunnan, China. He is the author or coauthor of more than 10 papers in refereed international journals. His current research interests include collective behavior in complex dynamical networks, multi-agent systems, chaos synchronization, control theory, discontinuous dynamical systems, and neural networks. Prof. Yang serves as a reviewer for Neurocomputing, Physics Letters A, and Communications in Nonlinear Science and Numerical Simulation, etc.


Jinde Cao (M’07–SM’07) received the B.S. degree from Anhui Normal University, Wuhu, China, the M.S. degree from Yunnan University, Kunming, China, and the Ph.D. degree from Sichuan University, Chengdu, China, all in mathematics/applied mathematics, in 1986, 1989, and 1998, respectively. He was with Yunnan University from 1989 to 2000. Since 2000, he has been with the Department of Mathematics, Southeast University, Nanjing, China. From 2001 to 2002, he was a Post-Doctoral Research Fellow with the Department of Automation and Computer-Aided Engineering, Chinese University of Hong Kong, Shatin, Hong Kong. He was a Visiting Research Fellow and a Visiting Professor with the School of Information Systems, Computing and Mathematics, Brunel University, Middlesex, U.K., from 2006 to 2008. He is the author or co-author of more than 160 research papers and five edited books. His current research interests include nonlinear systems, neural networks, complex systems, complex networks, stability theory, and applied mathematics. Dr. Cao was an Associate Editor of the IEEE T RANSACTIONS ON N EURAL N ETWORKS from 2006 to 2009. He is an Associate Editor of the Journal of the Franklin Institute, Mathematics and Computers in Simulation, Neurocomputing, International Journal of Differential Equations, Discrete Dynamics in Nature and Society, and Differential Equations and Dynamical Systems. He is a reviewer of Mathematical Reviews and Zentralblatt-Math.


Yao Long was born in Yunnan, China, in 1957. She is currently a Professor in the Department of Mathematics, Honghe University, Yunnan. She is the author or co-author of over 10 papers in refereed international journals. Her current research interests include differential equations and dynamical systems. Prof. Long serves as a reviewer for many international journals.

Weiguo Rui received the B.S. degree in mathematics from Yunnan University, Kunming, China, in 1995. He was a Visiting Scholar with the Department of Mathematics, Fudan University, Shanghai, China, from 2008 to 2009. He is currently with the Department of Mathematics, Honghe University, Yunnan, China. He has published over 10 papers in refereed international journals. His current research interests include differential equations and dynamical systems. Prof. Rui serves as a reviewer for many international journals.
