Neural Comput & Applic DOI 10.1007/s00521-016-2697-6
ORIGINAL ARTICLE
Global asymptotic and exponential synchronization of ring neural network with reaction–diffusion term and unbounded delay
Swati Tyagi¹ · Syed Abbas¹ · Mokhtar Kirane²
Received: 10 June 2016 / Accepted: 3 November 2016 © The Natural Computing Applications Forum 2016
Abstract In this paper, we consider a ring neural network of coupled neurons with distributed and discrete time-varying delays along with reaction–diffusion terms. We derive sufficient conditions that ensure the existence and uniqueness of the equilibrium point, synchronized asymptotic stability and exponential synchronization by using the theory of topological degree, properties of M-matrices, Lyapunov functionals and analytic methods. The obtained results remove the assumption of boundedness of the activation functions. At the end, we give two examples to show the validity of our analysis.

Keywords Reaction–diffusion term · Ring neural network · Time-varying delay · Lyapunov functional · Asymptotic stability · Exponential synchronization

Mathematics Subject Classification 35K57 · 92B20 · 37B25 · 93D20 · 34D06
1 Introduction

In recent years, the dynamics of recurrent neural networks such as cellular neural networks (CNNs) with and without delay has been widely investigated due to its wide applications in various fields, such as image processing, signal
✉ Syed Abbas ([email protected])
1 School of Basic Sciences, Indian Institute of Technology Mandi, Mandi, H.P. 175001, India
2 LaSIE, Faculté des Sciences et Technologies, Université de La Rochelle, Avenue Michel Crépeau, 17000 La Rochelle, France
processing, pattern recognition, optimization and detection of the speed of moving objects. Evidently, all such applications depend massively on the dynamical behaviour of the networks, for instance stability, bifurcation, chaos and periodic oscillatory behaviour, and especially asymptotic stability. Thus, studying dynamical behaviour is a prerequisite for the practical design of neural networks [1–3, 15, 20, 33]. As we know, due to the finite switching speed of amplifiers in the electronic implementation of analog neural networks, time delays occur inevitably. Moreover, due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths, neural networks possess a spatial nature; conduction velocities and propagation delays are therefore distributed along these pathways. Thus, it becomes desirable to also introduce distributed delays. Meanwhile, due to the motion of electrons in asymmetric electromagnetic fields, diffusion effects cannot be avoided either. Consequently, both discrete and distributed delays, together with a diffusion term accounting for activations varying in space as well as in time, should be considered when realistic neural networks are modelled. Several authors have studied various dynamical behaviours such as stability, periodic oscillation and synchronization of neural networks with diffusion terms, which are described by partial differential equations; see, for instance, [13, 14, 16, 18, 28, 30, 32, 36]. Among classical neural networks, the ring neural network is one of the most important. In 1994, Baldi and Atiya [4] considered a ring of neurons, connected in a cyclic manner with delayed interaction, with internal membrane potential $u_i$ described by the following differential equation:

$$\frac{du_i}{dt} = -\frac{u_i(t)}{r_i} + w_{i,i-1} f_{i-1}\bigl(u_{i-1}(t - s_{i,i-1})\bigr), \quad i \bmod n, \tag{1.1}$$
where $w_{i,i-1}$ are the synaptic connection weights, $r_i$ denote the time constants, $s_{i,i-1}$ are the time delays, and $f_{i-1}$ are the input–output transfer functions. They derived some results on oscillatory behaviour and bifurcation analysis. Later, in [6], Campbell generalized system (1.1) to the following system:

$$C_i\frac{du_i}{dt} = -\frac{1}{R_i}u_i(t) + F_i\bigl(u_i(t - s_k)\bigr) + G_i\bigl(u_{i-1}(t - s_{i-1})\bigr), \quad i \bmod n, \tag{1.2}$$

where $C_i>0$ and $R_i>0$ denote the capacitance and resistance of the respective individual neurons, and $F_i$ and $G_i$ are nonlinear neuron activation functions. The network consists of a ring of neurons, where the $i$th neuron receives two delayed inputs: one from itself with delay $s_k$ and the other from the preceding neuron with time delay $s_{i-1}$. In 2005, Campbell et al. [7] investigated the following ring neural network with two different delays:

$$\frac{dx_i}{dt} = -x_i(t) + a f\bigl(x_i(t-\tau_s)\bigr) + b\bigl[g\bigl(x_{i-1}(t-\tau)\bigr) + g\bigl(x_{i+1}(t-\tau)\bigr)\bigr], \quad i \bmod n. \tag{1.3}$$
Since then, there has been a lot of investigation of various ring neural network models (for instance, refer to [5, 9, 25, 26, 34]). On the other hand, we know that synchronization is an important and special property, according to which $x = (x_1, x_2, \ldots, x_n)^T$ is a solution of (1.3) with $x_1(t) = x_2(t) = \cdots = x_n(t)$ for $t \ge h$, $h = \max\{\tau_s, \tau\}$. There has been a lot of research on the bifurcation and stability of nontrivial synchronous solutions using centre manifold constructions, synchronous bifurcation, analysis of Hopf bifurcation properties and synchronized periodic solutions. In [29], Wang et al. investigated a ring neural network of coupled neurons with mixed delays and diffusion effects. They studied the synchronized stability and the synchronized Hopf bifurcation of the trivial equilibrium point. Recently, it has been reported that even if the network parameters and time-varying delays are chosen appropriately, neural networks may exhibit complicated behaviour with chaotic attractors. Furthermore, during signal transmission, diffusion effects reduce the strength of the signal and it becomes weak. As a result, an external control needs to be added to raise the strength of the signal to an acceptable level. Because of this and its potential applications in various fields, the problem of synchronization of chaotic neural networks with mixed time-varying delays has attracted the attention of many researchers in recent years, from both theoretical and practical perspectives. In 1990, Pecora and Carroll discussed the synchronization of chaotic systems using the concept of a drive–response system [19]. The main idea
behind this concept is to use the output of the drive system to control the response system so that it oscillates in a synchronized manner. For more details on the synchronization of neural networks, we refer to [8, 10, 21, 23, 24, 31, 35]. Neural activity is a harmonious process in which the synchronization of neuronal firing plays a vital role in information processing by various brain systems. It has been shown experimentally that synchronization of oscillations in the visual cortex plays an important role when global features are extracted from local information. In many systems, owing to various types of interactions, the subsystems adjust their individual behaviour so that they behave identically. This adjustment of behaviour, called synchronization, has been a major research focus over the past few years (refer to [17, 22, 27]). In the literature, the existence and uniqueness of the equilibrium point and asymptotic as well as exponential synchronization have mainly been investigated for classes of delayed neural networks with time-varying and distributed delays and reaction–diffusion terms, but these questions have not been investigated so far for ring neural networks. Motivated by the above discussion, in this paper we study the problem of stability and synchronization for a ring of diffusively coupled neurons with both time-varying and distributed delays, which is quite new. The results obtained are in the form of algebraic inequalities, which are easy to verify. We consider the following ring neural network with reaction–diffusion term and mixed delays:

$$\frac{\partial y_i(t,x)}{\partial t} = d_i\frac{\partial^2 y_i(t,x)}{\partial x^2} - y_i(t,x) + c f\bigl(y_i(t-\tau(t),x)\bigr) + a\int_{-\infty}^{t} k(t-s) f\bigl(y_i(s,x)\bigr)\,ds + b\bigl[f\bigl(y_{i-1}(t-\tau(t),x)\bigr) + f\bigl(y_{i+1}(t-\tau(t),x)\bigr)\bigr], \quad i \bmod n,\ 0<x<\pi,\ t>0, \tag{1.4}$$

$$\frac{\partial y_i(t,0)}{\partial x} = \frac{\partial y_i(t,\pi)}{\partial x} = 0, \quad t>0, \tag{1.5}$$

with initial condition

$$y_i(s,x) = \phi_i(s,x), \quad i \bmod n,\ (s,x)\in(-\infty,0]\times[0,\pi], \tag{1.6}$$

where the $\phi_i(s,x)$ are bounded and continuous on $(-\infty,0]\times\Omega$, $\Omega = [0,\pi]$ is a bounded compact set with smooth boundary $\partial\Omega$ and $\operatorname{mes}\Omega = \pi > 0$; $y_i(t,x)$ denotes the state of the $i$th neuron at time $t$ and position $x$; $f$ represents the signal function and is assumed to be sufficiently smooth; $\tau(t)$ denotes the transmission delay of the networks, satisfying $0<\tau(t)<\bar{\tau}$ for $t>0$ and $0<\dot{\tau}(t)<\sigma<1$.
The parameters $a$ and $c$ measure the strength of self-connection between the neurons; $b$ denotes the strength of nearest-neighbour connection, and $d_i>0$ corresponds to the transmission diffusion operator along the $i$th neuron. The connections are said to be excitatory (inhibitory) if the connection strengths are positive (negative).

At the point of synchronization, the model (1.4) is transformed into the following scalar delay differential equation:

$$\frac{\partial y(t,x)}{\partial t} = d\frac{\partial^2 y(t,x)}{\partial x^2} - y(t,x) + c f\bigl(y(t-\tau(t),x)\bigr) + a\int_{-\infty}^{t} k(t-s) f\bigl(y(s,x)\bigr)\,ds + 2b f\bigl(y(t-\tau(t),x)\bigr), \quad 0<x<\pi,\ t>0, \tag{1.7}$$

$$\frac{\partial y(t,0)}{\partial x} = \frac{\partial y(t,\pi)}{\partial x} = 0, \quad t>0, \tag{1.8}$$

$$y(s,x) = \phi(s,x), \quad (s,x)\in(-\infty,0]\times[0,\pi]. \tag{1.9}$$
The paper is organized as follows. In Sect. 2, we discuss some preliminary results and definitions. We also discuss the synchronization of the delayed neural ring network with reaction–diffusion term. Later, in Sect. 3, we derive sufficient conditions for the existence and uniqueness of the equilibrium point. In Sect. 4, we investigate the global asymptotic synchronization and exponential synchronization by formulating an appropriate Lyapunov functional, controller gain matrix and utilizing some inequality techniques. In Sect. 5, we give two suitable examples to show the effectiveness of the proposed results. Finally, we conclude our work and discuss some future aspects in Sect. 6.
2 Preliminaries

Throughout this paper, we denote $y(t,x) = (y_1(t,x), y_2(t,x), \ldots, y_n(t,x))^T$, $x\in\Omega$. For any $x = (x_1, x_2, \ldots, x_n)^T$, $\|x\|_1$ denotes the 1-norm of $x$, that is, $\|x\|_1 = \sum_{i=1}^{n}|x_i|$. For $y(t,x) = (y_1(t,x), \ldots, y_n(t,x))^T$, we define $\|y(t,x)\|$ by

$$\|y(t,x)\|^2 = \int_{\Omega} y(t,x)^T y(t,x)\,dx. \tag{2.1}$$

We assume that the activation functions and the delay kernels satisfy the following properties:

A1: The activation functions are Lipschitz continuous, that is, there exists a constant $L_f>0$ such that
$$|f(\xi_1)-f(\xi_2)| \le L_f|\xi_1-\xi_2|, \quad \forall\,\xi_1,\xi_2\in\mathbb{R}. \tag{2.2}$$

A2: The delay kernels $k:[0,\infty)\to[0,\infty)$ are bounded, real-valued, continuous functions and satisfy
$$\text{(i)}\ \int_0^{\infty} k(s)\,ds = 1, \qquad \text{(ii)}\ \int_0^{\infty} s\,k(s)\,ds < \infty. \tag{2.3}$$

Definition 2.1 Let $f:\mathbb{R}\to\mathbb{R}$ be a continuous function. The upper right Dini derivative $D^{+} f$ is defined as
$$D^{+} f(t) = \limsup_{h\to 0^{+}}\frac{f(t+h)-f(t)}{h}.$$

Definition 2.2 A real matrix $A = (a_{ij})_{n\times n}$ is said to be an M-matrix if $a_{ij}\le 0$ ($i,j = 1,2,\ldots,n$, $i\ne j$) and all successive principal minors of $A$ are positive.

Chaotic dynamics depends sensitively on the initial conditions; thus, even an infinitesimal change in the initial condition can result in asymptotic divergence of orbits. It is well known that if the network parameters and time delays are suitably chosen, then neural networks may exhibit bifurcation, oscillation, divergence or instability; the networks become unstable in such cases. In order to control the dynamical behaviour of systems (1.4) and (1.7), we first introduce the control model of system (1.4), with identical dynamical equations and state variables $\tilde{y}_i(t,x)$ but different initial conditions, described by the following equation:

$$\frac{\partial \tilde{y}_i(t,x)}{\partial t} = d_i\frac{\partial^2 \tilde{y}_i(t,x)}{\partial x^2} - \tilde{y}_i(t,x) + c f\bigl(\tilde{y}_i(t-\tau(t),x)\bigr) + a\int_{-\infty}^{t} k(t-s) f\bigl(\tilde{y}_i(s,x)\bigr)\,ds + b f\bigl(\tilde{y}_{i-1}(t-\tau(t),x)\bigr) + b f\bigl(\tilde{y}_{i+1}(t-\tau(t),x)\bigr) + v_i(t), \tag{2.4}$$

with initial condition

$$\tilde{y}_i(s,x) = \psi_i(s,x), \quad (s,x)\in(-\infty,0]\times[0,\pi]. \tag{2.5}$$

Similarly, at the point of synchronization, the control model of system (1.7) is described by the following equation:

$$\frac{\partial \tilde{y}(t,x)}{\partial t} = d\frac{\partial^2 \tilde{y}(t,x)}{\partial x^2} - \tilde{y}(t,x) + c f\bigl(\tilde{y}(t-\tau(t),x)\bigr) + a\int_{-\infty}^{t} k(t-s) f\bigl(\tilde{y}(s,x)\bigr)\,ds + 2b f\bigl(\tilde{y}(t-\tau(t),x)\bigr) + v(t), \tag{2.6}$$

where $v(t) = (v_1(t), v_2(t), \ldots, v_n(t))^T$ denotes the external control input, which is chosen appropriately to achieve a certain control objective or a desired output. The initial condition associated with (2.6) is given as follows:
$$\tilde{y}(s,x) = \psi(s,x), \quad (s,x)\in(-\infty,0]\times[0,\pi]. \tag{2.7}$$

Definition 2.3 System (1.4) (or (1.7)) and the uncontrolled system (2.4) (or (2.6)) (i.e. $v_i(t)\equiv 0$ in (2.4) (or in (2.6))) are said to be asymptotically synchronized if

$$\lim_{t\to\infty}\|y(t,x)-\tilde{y}(t,x)\| = 0, \tag{2.8}$$

where $y = (y_1, y_2, \ldots, y_n)^T$, $\tilde{y} = (\tilde{y}_1, \tilde{y}_2, \ldots, \tilde{y}_n)^T\in\mathbb{R}^n$.

Definition 2.4 The drive–response systems (1.4) and (2.4) (or (1.7) and (2.6)) are said to be globally exponentially asymptotically synchronous if there exist a control input $v(t,x)$ and constants $M\ge 1$ and $g>0$ such that, for any initial conditions,
$$\|y(t)-\tilde{y}(t)\| \le M\|\phi-\psi\|\exp(-gt), \quad \forall\, t\ge 0,$$
where $\|y(t)-\tilde{y}(t)\| = \sum_{i=1}^{n}\int_{\Omega}|y_i(t,x)-\tilde{y}_i(t,x)|\,dx$, and the constant $g$ is called the degree of synchronization.
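The norms in (2.1) and Definition 2.4 are the quantities that are monitored in the simulations of Sect. 5. The following is a minimal sketch of how they can be approximated on a uniform spatial grid over Ω = [0, π]; the grid size and the trapezoidal quadrature are illustrative assumptions, not part of the original formulation.

```python
import numpy as np

def l2_norm_sq(y, x):
    """Approximate ||y(t,.)||^2 = int_Omega y^T y dx, Eq. (2.1).
    y: array of shape (n, m) holding y_i(t, x_j); x: grid on [0, pi]."""
    return np.trapz(np.sum(y ** 2, axis=0), x)

def sync_error_norm(y, y_tilde, x):
    """Approximate ||y(t) - y_tilde(t)|| = sum_i int_Omega |y_i - y_tilde_i| dx
    from Definition 2.4."""
    return np.trapz(np.sum(np.abs(y - y_tilde), axis=0), x)

# toy usage on a 3-neuron state sampled at 101 spatial points
x = np.linspace(0.0, np.pi, 101)
y = np.vstack([np.cos(x), np.sin(x), 0.5 * np.ones_like(x)])
y_tilde = y + 0.01 * np.cos(2 * x)      # a small perturbation of the drive state
print(l2_norm_sq(y, x), sync_error_norm(y, y_tilde, x))
```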
Lemma 2.5 (Young's inequality) Assume that $a>0$, $b>0$, $p>1$ and $\frac{1}{p}+\frac{1}{q}=1$. Then the following inequality holds:
$$ab \le \frac{1}{p}a^{p} + \frac{1}{q}b^{q}.$$

Definition 2.6 [11] Assume that $\Omega\subset\mathbb{R}^n$ is an open and bounded set and let $F(x):\Omega\to\mathbb{R}^n$ be a continuously differentiable function. If $p\notin F(\partial\Omega)$ and $J_F(x)\ne 0$ for any $x\in F^{-1}(p)$, then the topological degree relative to $\Omega$ and $p$ is defined as
$$\deg(F,\Omega,p) = \begin{cases}\displaystyle\sum_{x\in F^{-1}(p)}\operatorname{sgn}J_F(x), & F^{-1}(p)\ne\varnothing,\\[2mm] 0, & F^{-1}(p)=\varnothing,\end{cases}$$
where $J_F$ denotes the determinant of the Jacobian of $F$.

Remark 2.7 In general, the topological degree of $F(x)$ relative to $\Omega$ and $p$ can be calculated as the algebraic number of solutions of $F(x)=p$ in $\Omega$, provided $p\notin F(\partial\Omega)$. If $\deg(F,\Omega,0)=1$, then $F(x)=0$ has at least one solution in $\Omega$.

3 Existence and uniqueness of an equilibrium point

Theorem 3.1 Under Assumptions (A1) and (A2), the ring neural network (1.4) has a unique equilibrium point if
$$1-(|c|+|a|+2|b|)L_f > 0 \tag{3.1}$$
holds.

Proof We denote $h(y_1,y_2,\ldots,y_n) = (h_1,h_2,\ldots,h_n)^T$, where
$$h_i = y_i - c f(y_i) - a f(y_i) - b f(y_{i-1}) - b f(y_{i+1}), \quad i = 1,2,\ldots,n. \tag{3.2}$$
Using Assumption (A2), the equilibrium solutions of system (1.4) are given by the solutions of the equations
$$h_i = 0, \quad i = 1,2,\ldots,n. \tag{3.3}$$
Now we define the following homotopic mapping:
$$H(y_1,y_2,\ldots,y_n) = \lambda h(y_1,y_2,\ldots,y_n) + (1-\lambda)(y_1,y_2,\ldots,y_n)^T, \quad \lambda\in[0,1]. \tag{3.4}$$
Then
$$\|H(y_1,\ldots,y_n)\|_1 = \sum_{i=1}^{n}|\lambda h_i + (1-\lambda)y_i| \ge \sum_{i=1}^{n}\Bigl\{(1-\lambda)|y_i| + \lambda\bigl(|y_i| - |c|L_f|y_i| - |a|L_f|y_i| - |b|L_f|y_{i-1}| - |b|L_f|y_{i+1}|\bigr)\Bigr\} - \lambda\sum_{i=1}^{n}(|c|+|a|+2|b|)|f(0)|.$$
Let
$$\mu = \frac{1 + n(|a|+|c|+2|b|)|f(0)|}{1-(|a|+|c|+2|b|)L_f}.$$
Further, define the set $\Omega_0 = \{y : |y_i|\le\mu,\ i=1,2,\ldots,n\}$. Since, by (3.1), $\mu>0$, the set $\Omega_0$ is bounded and non-empty. Thus, for any $(y_1,\ldots,y_n)^T\in\partial\Omega_0$, we have $\|H(y_1,\ldots,y_n)\|_1>0$ and hence $H(y_1,\ldots,y_n)\ne 0$ for $\lambda\in[0,1]$. From topological degree theory, we obtain
$$\deg(h,\Omega_0,0) = \deg(H,\Omega_0,0) = 1. \tag{3.5}$$
Thus, there exists at least one equilibrium point of the delayed ring neural network (1.4). To prove uniqueness, assume that there exist two equilibrium solutions $y_i$ and $\tilde{y}_i$ of system (1.4). Then
$$(y_i-\tilde{y}_i) - c\bigl[f(y_i)-f(\tilde{y}_i)\bigr] - a\int_{-\infty}^{t}k(t-s)\,ds\,\bigl[f(y_i)-f(\tilde{y}_i)\bigr] - b\bigl[f(y_{i-1})-f(\tilde{y}_{i-1})\bigr] - b\bigl[f(y_{i+1})-f(\tilde{y}_{i+1})\bigr] = 0.$$
From Assumptions (A1) and (A2), we obtain
$$\sum_{i=1}^{n}\Bigl\{\bigl(1-(|c|+|a|)L_f\bigr)|y_i-\tilde{y}_i| - |b|L_f|y_{i-1}-\tilde{y}_{i-1}| - |b|L_f|y_{i+1}-\tilde{y}_{i+1}|\Bigr\} \le 0. \tag{3.6}$$
Writing (3.6) in matrix form, we have
$$\begin{pmatrix} 1-(|c|+|a|)L_f & -|b|L_f & & -|b|L_f\\ -|b|L_f & 1-(|c|+|a|)L_f & \ddots & \\ & \ddots & \ddots & -|b|L_f\\ -|b|L_f & & -|b|L_f & 1-(|c|+|a|)L_f \end{pmatrix}\begin{pmatrix}|y_1-\tilde{y}_1|\\|y_2-\tilde{y}_2|\\\vdots\\|y_n-\tilde{y}_n|\end{pmatrix} \le 0,$$
or, in compact form, $Az\le 0$ with $z = (|y_1-\tilde{y}_1|,\ldots,|y_n-\tilde{y}_n|)^T$. Using (3.1), we deduce that the matrix $A$ is an M-matrix, and thus $A^{-1}\ge 0$, which implies that $z\le 0$. Therefore $y_i = \tilde{y}_i$ for $i = 1,2,\ldots,n$, so the delayed neural ring network model (1.4) has a unique equilibrium point. $\square$
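The uniqueness step above reduces to checking that the circulant matrix $A$ assembled from (3.6) is an M-matrix whenever (3.1) holds. Below is a small numerical sketch of that check (non-positive off-diagonal entries and positive leading principal minors, as in Definition 2.2); the parameter values are those later used in Example 5.1 and serve only as an illustration.

```python
import numpy as np

def ring_matrix(n, a, b, c, Lf):
    """Circulant matrix from (3.6): diagonal 1-(|c|+|a|)Lf,
    -|b|Lf on the two ring neighbours, zeros elsewhere."""
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0 - (abs(c) + abs(a)) * Lf
        A[i, (i - 1) % n] -= abs(b) * Lf
        A[i, (i + 1) % n] -= abs(b) * Lf
    return A

def is_m_matrix(A, tol=1e-12):
    """Definition 2.2: off-diagonal entries <= 0 and all successive
    (leading) principal minors strictly positive."""
    n = len(A)
    off_ok = all(A[i, j] <= tol for i in range(n) for j in range(n) if i != j)
    minors_ok = all(np.linalg.det(A[:k, :k]) > tol for k in range(1, n + 1))
    return off_ok and minors_ok

# parameters of Example 5.1: a = 0.5, b = 0.01, c = 0.07, Lf = 1
A = ring_matrix(3, 0.5, 0.01, 0.07, 1.0)
print(is_m_matrix(A))   # True, consistent with condition (3.1)
```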
Theorem 3.2 Under Assumptions (A1) and (A2), there exists a unique equilibrium point of the ring neural network (1.7), provided the following condition holds:
$$1-(|c|+|a|+2|b|)L_f > 0. \tag{3.7}$$

Proof In this case, we denote $h(y_1,y_2,\ldots,y_n) = (h_1,h_2,\ldots,h_n)^T$, where
$$h_i = y_i - c f(y_i) - a f(y_i) - 2b f(y_i), \quad i = 1,2,\ldots,n. \tag{3.8}$$
Clearly, the equilibrium points of system (1.7) are the solutions of the equations
$$h_i = 0, \quad i = 1,2,\ldots,n. \tag{3.9}$$
Following similar steps as in Theorem 3.1, we define the homotopic mapping
$$H(y_1,\ldots,y_n) = \lambda h(y_1,\ldots,y_n) + (1-\lambda)(y_1,\ldots,y_n)^T, \quad \lambda\in[0,1]. \tag{3.10}$$
From Assumption (A1), we obtain
$$\|H(y_1,\ldots,y_n)\|_1 = \sum_{i=1}^{n}|\lambda h_i + (1-\lambda)y_i| \ge (1-\lambda)\sum_{i=1}^{n}|y_i| + \lambda\sum_{i=1}^{n}\bigl[1-(|c|+|a|+2|b|)L_f\bigr]|y_i| - \lambda\sum_{i=1}^{n}(|c|+|a|+2|b|)|f(0)|.$$
Define
$$\mu = \frac{1+n(|c|+|a|+2|b|)|f(0)|}{1-(|c|+|a|+2|b|)L_f}. \tag{3.11}$$
From (3.7), we have $\mu>0$. Let $\Omega_0 = \{(y_1,\ldots,y_n)^T : |y_i|\le\mu,\ i=1,\ldots,n\}$. Clearly, the set $\Omega_0$ is non-empty and bounded. Furthermore, for any $(y_1,\ldots,y_n)^T\in\partial\Omega_0$, we have $\|H(y_1,\ldots,y_n)\|_1>0$ for $\lambda\in[0,1]$, which implies that $H(y_1,\ldots,y_n)\ne 0$ for all $(y_1,\ldots,y_n)^T\in\partial\Omega_0$ and $\lambda\in[0,1]$. Thus, by the homotopy invariance of the topological degree, we obtain
$$\deg(h,\Omega_0,0) = \deg(H,\Omega_0,0) = 1. \tag{3.12}$$
Hence system (3.9) has at least one solution in $\Omega_0$, that is, model (1.7) possesses at least one equilibrium point. We now show that the solution of system (3.9) is unique. Assume that there exist two solutions $(y_1,\ldots,y_n)^T$ and $(\tilde{y}_1,\ldots,\tilde{y}_n)^T$. Then, for $i = 1,2,\ldots,n$, we have
$$(y_i-\tilde{y}_i) - c\bigl[f(y_i)-f(\tilde{y}_i)\bigr] - a\bigl[f(y_i)-f(\tilde{y}_i)\bigr] - 2b\bigl[f(y_i)-f(\tilde{y}_i)\bigr] = 0.$$
From Assumption (A1), we obtain
$$\bigl[1-(|a|+2|b|+|c|)L_f\bigr]|y_i-\tilde{y}_i| \le 0, \quad i = 1,2,\ldots,n. \tag{3.13}$$
Using (3.7) in (3.13), we deduce that $y_i = \tilde{y}_i$, $i=1,2,\ldots,n$. Thus, the equilibrium point of model (1.7) is unique. $\square$

Theorem 3.3 If Assumptions (A1) and (A2) hold, then there exists a unique equilibrium point of the ring neural network (1.7), provided
$$(|a|+2|b|+|c|)L_f < 1 \tag{3.14}$$
holds.

Proof Following similar steps as in Theorem 3.2, we define the homotopic mapping
$$H(y_1,\ldots,y_n) = \lambda h(y_1,\ldots,y_n) + (1-\lambda)(y_1,\ldots,y_n)^T, \quad \lambda\in[0,1].$$
Let $H_i$ ($i=1,2,\ldots,n$) denote the $i$th component of $H(y_1,\ldots,y_n)$. Then, from Assumption (A1), we obtain
$$|H_i| \ge \bigl[1-\lambda(|c|+|a|+2|b|)L_f\bigr]|y_i| - \lambda(|c|+|a|+2|b|)|f(0)|, \quad i = 1,2,\ldots,n. \tag{3.15}$$
Denote
$$H^{+} = (|H_1|,|H_2|,\ldots,|H_n|)^T, \quad y = (y_1,\ldots,y_n)^T, \quad C = \operatorname{diag}(1,1,\ldots,1)_{n\times n},$$
$$D = \operatorname{diag}(|c|+|a|+2|b|,\ldots,|c|+|a|+2|b|), \quad Q = \operatorname{diag}(L_f,\ldots,L_f), \quad P = (|f(0)|,\ldots,|f(0)|)^T.$$
Writing (3.15) in matrix form, we obtain
$$H^{+} \ge [I-\lambda DQ]y - \lambda DP \ge (1-\lambda)y + \lambda\bigl[(I-DQ)y - DP\bigr] = (1-\lambda)y + \lambda(I-DQ)\bigl[y-(I-DQ)^{-1}DP\bigr].$$
Using (3.14), the matrix $I-DQ$ is an M-matrix. This implies that $(I-DQ)^{-1}\ge 0$ and that there exists a vector $z = (z_1,\ldots,z_n)^T>0$ such that $(I-DQ)z>0$. Let $\Omega_0 = \{(y_1,\ldots,y_n)^T : y\le z+(I-DQ)^{-1}DP\}$. Then the set $\Omega_0$ is non-empty and, from (3.15), for any $y\in\partial\Omega_0$ we have
$$H^{+} \ge (1-\lambda)y + \lambda(I-DQ)\bigl[y-(I-DQ)^{-1}DP\bigr] = (1-\lambda)y + \lambda(I-DQ)z > 0, \quad \lambda\in[0,1].$$
This implies that $H\ne 0$ for any $(y_1,\ldots,y_n)^T\in\partial\Omega_0$ and $\lambda\in[0,1]$. Thus, by the homotopy invariance of the topological degree, we have
$$\deg(h,\Omega_0,0) = \deg(H,\Omega_0,0) = 1,$$
and hence model (1.7) has at least one equilibrium point. Now we prove that the solution of the system of Eq. (3.9) is unique. Assume that there exist two solutions $(y_1,\ldots,y_n)^T$ and $(\tilde{y}_1,\ldots,\tilde{y}_n)^T$. Then, for $i = 1,2,\ldots,n$, we have
$$(y_i-\tilde{y}_i) - c\bigl[f(y_i)-f(\tilde{y}_i)\bigr] - a\bigl[f(y_i)-f(\tilde{y}_i)\bigr] - 2b\bigl[f(y_i)-f(\tilde{y}_i)\bigr] = 0.$$
From Assumption (A1), we obtain
$$\bigl[1-(|a|+2|b|+|c|)L_f\bigr]|y_i-\tilde{y}_i| \le 0, \quad i = 1,2,\ldots,n. \tag{3.16}$$
Let $M = \operatorname{diag}\bigl(|y_1-\tilde{y}_1|,|y_2-\tilde{y}_2|,\ldots,|y_n-\tilde{y}_n|\bigr)$. Writing (3.16) in matrix form, we obtain
$$(I-DQ)M \le 0.$$
Since $I-DQ$ is an M-matrix, $(I-DQ)^{-1}\ge 0$, and thus $M\le 0$. This implies that $y_i = \tilde{y}_i$ for $i=1,2,\ldots,n$. Thus, the ring neural network model (1.7) has a unique equilibrium point. $\square$

Note 3.4 The condition obtained for the existence of a solution is the same in Theorems 3.1–3.3. Thus, we can interpret that the solution exists for the ring neural networks (1.4) and (1.7) independently of whether the point of synchronization is considered.

4 Global asymptotic synchronization

4.1 Controller design

Let $y_i(t,x)$ and $\tilde{y}_i(t,x)$ denote the $i$th state variables of the drive and response neural network systems, respectively. We define the synchronization error signal as $e_i(t,x) = y_i(t,x)-\tilde{y}_i(t,x)$. Then the dynamics of the systems (1.4) and (2.4) can be described by the following equation:

$$\begin{aligned} \frac{\partial e_i(t,x)}{\partial t} ={}& d_i\frac{\partial^2 e_i(t,x)}{\partial x^2} - e_i(t,x) + c\bigl[f\bigl(e_i(t-\tau(t),x)+\tilde{y}_i(t-\tau(t),x)\bigr)-f\bigl(\tilde{y}_i(t-\tau(t),x)\bigr)\bigr]\\ &+ a\int_{-\infty}^{t}k(t-s)\bigl[f\bigl(e_i(s,x)+\tilde{y}_i(s,x)\bigr)-f\bigl(\tilde{y}_i(s,x)\bigr)\bigr]\,ds\\ &+ b\bigl[f\bigl(e_{i-1}(t-\tau(t),x)+\tilde{y}_{i-1}(t-\tau(t),x)\bigr)-f\bigl(\tilde{y}_{i-1}(t-\tau(t),x)\bigr)\bigr]\\ &+ b\bigl[f\bigl(e_{i+1}(t-\tau(t),x)+\tilde{y}_{i+1}(t-\tau(t),x)\bigr)-f\bigl(\tilde{y}_{i+1}(t-\tau(t),x)\bigr)\bigr] - v_i(t). \end{aligned} \tag{4.1}$$

At the point of synchronization, the dynamics of the systems (1.7) and (2.6) is described by the following equation:

$$\begin{aligned} \frac{\partial e(t,x)}{\partial t} ={}& d\frac{\partial^2 e(t,x)}{\partial x^2} - e(t,x) + c\bigl[f\bigl(e(t-\tau(t),x)+\tilde{y}(t-\tau(t),x)\bigr)-f\bigl(\tilde{y}(t-\tau(t),x)\bigr)\bigr]\\ &+ a\int_{-\infty}^{t}k(t-s)\bigl[f\bigl(e(s,x)+\tilde{y}(s,x)\bigr)-f\bigl(\tilde{y}(s,x)\bigr)\bigr]\,ds\\ &+ 2b\bigl[f\bigl(e(t-\tau(t),x)+\tilde{y}(t-\tau(t),x)\bigr)-f\bigl(\tilde{y}(t-\tau(t),x)\bigr)\bigr] - v(t). \end{aligned} \tag{4.2}$$
We can design the control input vector with state feedback as follows:
$$v(t) = \begin{bmatrix}v_1(t)\\ v_2(t)\\ \vdots\\ v_n(t)\end{bmatrix} = \begin{bmatrix}\sum_{i=1}^{n}W_{1i}(y_i-\tilde{y}_i)\\ \sum_{i=1}^{n}W_{2i}(y_i-\tilde{y}_i)\\ \vdots\\ \sum_{i=1}^{n}W_{ni}(y_i-\tilde{y}_i)\end{bmatrix} = \begin{bmatrix}W_{11} & W_{12} & \cdots & W_{1n}\\ W_{21} & W_{22} & \cdots & W_{2n}\\ \vdots & \vdots & \ddots & \vdots\\ W_{n1} & W_{n2} & \cdots & W_{nn}\end{bmatrix}\begin{bmatrix}e_1(t)\\ e_2(t)\\ \vdots\\ e_n(t)\end{bmatrix} = We(t), \tag{4.3}$$

where $W = (W_{ij})_{n\times n}\in\mathbb{R}^{n\times n}$ is the controller gain matrix, which is chosen appropriately to achieve asymptotic synchronization between the drive and response systems.
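To make the role of the feedback law (4.3) concrete, the sketch below advances a spatially discretised drive and response pair by one explicit Euler step and injects v = W e into the response update. The discretisation (method of lines with a reflecting Neumann Laplacian), the step sizes and the treatment of the delayed and distributed-delay terms as quantities read from a stored history are assumptions made only for illustration; they are not part of the paper's analysis.

```python
import numpy as np

def laplacian_neumann(u, dx):
    """Second spatial difference with zero-flux (Neumann) boundaries,
    approximating the boundary condition (1.5)."""
    up = np.pad(u, ((0, 0), (1, 1)), mode="edge")
    return (up[:, :-2] - 2.0 * u + up[:, 2:]) / dx ** 2

def euler_step(y, y_tld, y_del, y_del_tld, dist, dist_tld,
               W, d, a, b, c, f, dt, dx):
    """One explicit Euler step of the drive (1.4) and the controlled response
    (2.4).  y_del and y_del_tld hold the states at time t - tau(t), while
    dist and dist_tld hold the distributed-delay integrals; all four are
    assumed to be read from a stored history kept by the caller."""
    fd, fd_tld = f(y_del), f(y_del_tld)
    ring = b * (np.roll(fd, 1, axis=0) + np.roll(fd, -1, axis=0))
    ring_tld = b * (np.roll(fd_tld, 1, axis=0) + np.roll(fd_tld, -1, axis=0))
    v = W @ (y - y_tld)                    # state feedback v = W e, Eq. (4.3)
    dy = d * laplacian_neumann(y, dx) - y + c * fd + a * dist + ring
    dy_tld = (d * laplacian_neumann(y_tld, dx) - y_tld + c * fd_tld
              + a * dist_tld + ring_tld + v)
    return y + dt * dy, y_tld + dt * dy_tld
```

In a full simulation the delayed states would be interpolated from a history buffer and the distributed-delay integral approximated by quadrature over that buffer.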
4.2 Asymptotic synchronization

For the sake of simplicity, throughout this section we write $e(t,x) = e(t)$ in the proofs.

Theorem 4.1 Under Assumptions (A1) and (A2), the error dynamical system (4.1) is asymptotically synchronized if there exist constants $h_i>0$, $i = 1,2,\ldots,n$, such that the controller gain matrix $W$ in (4.3) satisfies
$$\sum_{i=1}^{n}(h_{i+1}+h_{i-1})\frac{b}{1-\sigma}L_f^2 < \sum_{i=1}^{n}h_i\Bigl(2-c-a-2b-\frac{c}{1-\sigma}L_f^2-aL_f^2-\sum_{k=1}^{n}|w_{ik}|-\sum_{k=1}^{n}\frac{h_k}{h_i}|w_{ki}|\Bigr). \tag{4.5}$$

Proof We analyse the following Lyapunov functional:
$$V(t) = \int_{\Omega}\sum_{i=1}^{n}h_i\Bigl[e_i^2(t) + \frac{b}{1-\sigma}L_f^2\int_{t-\tau(t)}^{t}\bigl(e_{i-1}^2(s)+e_{i+1}^2(s)\bigr)ds + \frac{c}{1-\sigma}L_f^2\int_{t-\tau(t)}^{t}e_i^2(s)\,ds + a\int_{0}^{\infty}k(s)\int_{t-s}^{t}\bigl|f\bigl(e_i(z)+\tilde{y}_i(z)\bigr)-f\bigl(\tilde{y}_i(z)\bigr)\bigr|^2dz\,ds\Bigr]dx. \tag{4.6}$$
Computing the time derivative of $V$ along the solutions of (4.1), substituting the error dynamics, using Assumption (A1) to bound the differences of the activation function and estimating the control term by
$$2\Bigl|e_i(t)\sum_{k=1}^{n}w_{ik}e_k(t)\Bigr| \le \sum_{k=1}^{n}|w_{ik}|\bigl(e_i^2(t)+e_k^2(t)\bigr),$$
we arrive at an upper bound for $\dot V(t)$ in which the delayed and distributed terms are compensated by the integral terms of (4.6). Using Lemma 2.5, we obtain
$$2c\,e_i(t)L_f e_i(t-\tau(t)) \le c\,e_i^2(t) + cL_f^2 e_i^2(t-\tau(t)), \tag{4.8}$$
$$2b\,e_i(t)L_f e_{i-1}(t-\tau(t)) \le b\,e_i^2(t) + bL_f^2 e_{i-1}^2(t-\tau(t)), \tag{4.9}$$
$$2b\,e_i(t)L_f e_{i+1}(t-\tau(t)) \le b\,e_i^2(t) + bL_f^2 e_{i+1}^2(t-\tau(t)), \tag{4.10}$$
$$2a\,e_i(t)L_f\int_{0}^{\infty}k(s)e_i(t-s)\,ds \le a\,e_i^2(t) + aL_f^2\int_{0}^{\infty}k(s)e_i^2(t-s)\,ds. \tag{4.11}$$
On the other hand, using Green's formula and the Neumann boundary condition (1.5), we have
$$\int_{\Omega}\sum_{i=1}^{n}h_i\,e_i(t,x)\,d_i\frac{\partial^2 e_i(t,x)}{\partial x^2}\,dx = -\int_{\Omega}\sum_{i=1}^{n}h_i\,d_i\Bigl(\frac{\partial e_i(t,x)}{\partial x}\Bigr)^2 dx \le 0. \tag{4.12}$$
Since $d_i>0$, using (4.8)–(4.12) we obtain
$$\dot V(t) \le \int_{\Omega}\sum_{i=1}^{n}\Bigl\{h_i\Bigl(-2+c+a+2b+\frac{c}{1-\sigma}L_f^2+aL_f^2+\sum_{k=1}^{n}|w_{ik}|+\sum_{k=1}^{n}\frac{h_k}{h_i}|w_{ki}|\Bigr)e_i^2(t) + h_i\frac{b}{1-\sigma}L_f^2 e_{i-1}^2(t) + h_i\frac{b}{1-\sigma}L_f^2 e_{i+1}^2(t)\Bigr\}dx. \tag{4.13}$$
We can rewrite (4.13) as
$$\dot V(t) \le \int_{\Omega}\sum_{i=1}^{n}\Bigl[-2+c+a+2b+\frac{c}{1-\sigma}L_f^2+aL_f^2+\sum_{k=1}^{n}|w_{ik}|+\sum_{k=1}^{n}\frac{h_k}{h_i}|w_{ki}|+\frac{h_{i+1}+h_{i-1}}{h_i}\,\frac{b}{1-\sigma}L_f^2\Bigr]h_i\,e_i^2(t)\,dx. \tag{4.14}$$
Using (4.5), we obtain $\dot V(t)\le 0$. Hence, using the Lyapunov theorem for functional differential equations, we conclude that the origin is asymptotically stable. Thus, the error dynamical system (4.1) is asymptotically synchronized. $\square$

Note 4.2 In Theorem 4.1, the indices of the constants $h_i$ are taken cyclically, that is, $h_0 = h_n$, $h_{n+1} = h_1$, $h_{n+2} = h_2$, and similarly for the other values of the index.

Theorem 4.3 If Assumptions (A1)–(A2) hold, then the drive–response neural networks (1.7) and (2.6), together with the initial conditions (1.9) and (2.7) and the boundary condition (1.8), are asymptotically synchronized if there exist constants $k_i>0$, $i = 1,2,\ldots,n$, such that the controller gain matrix $W$ in (4.3) satisfies
$$2-\sum_{k=1}^{n}|w_{ik}|-\sum_{k=1}^{n}\frac{k_k}{k_i}|w_{ki}| > (1+L_f^2)\,a + 2\Bigl(1+\frac{L_f^2}{1-\sigma}\Bigr)b + \Bigl(1+\frac{L_f^2}{1-\sigma}\Bigr)c, \quad i = 1,2,\ldots,n. \tag{4.15}$$

Proof We now examine the Lyapunov functional
$$V(t) = \int_{\Omega}\sum_{i=1}^{n}k_i\Bigl[e_i^2(t) + \frac{2b}{1-\sigma}L_f^2\int_{t-\tau(t)}^{t}e_i^2(s)\,ds + \frac{c}{1-\sigma}L_f^2\int_{t-\tau(t)}^{t}e_i^2(s)\,ds + a\int_{0}^{\infty}k(s)\int_{t-s}^{t}\bigl|f\bigl(e_i(z)+\tilde{y}_i(z)\bigr)-f\bigl(\tilde{y}_i(z)\bigr)\bigr|^2dz\,ds\Bigr]dx. \tag{4.16}$$
Computing the time derivative of $V$ along the solutions of (4.2) and using Assumption (A1), we obtain an upper bound for $\dot V(t)$ of the same form as in Theorem 4.1. Using Lemma 2.5, we obtain
$$4b\,e_i(t)L_f e_i(t-\tau(t)) \le 2b\,e_i^2(t) + 2bL_f^2 e_i^2(t-\tau(t)), \tag{4.18}$$
$$2c\,e_i(t)L_f e_i(t-\tau(t)) \le c\,e_i^2(t) + cL_f^2 e_i^2(t-\tau(t)), \tag{4.19}$$
$$2a\,e_i(t)L_f\int_{-\infty}^{t}k(t-s)e_i(s)\,ds \le a\,e_i^2(t) + aL_f^2\int_{0}^{\infty}k(s)e_i^2(t-s)\,ds. \tag{4.20}$$
Using (4.18)–(4.20) together with the boundary estimate (4.12), and following similar steps as in Theorem 4.1, we obtain
$$\dot V(t) \le \int_{\Omega}\sum_{i=1}^{n}k_i\Bigl[-2+a+2b+c+\frac{2b}{1-\sigma}L_f^2+\frac{c}{1-\sigma}L_f^2+aL_f^2+\sum_{k=1}^{n}|w_{ik}|+\sum_{k=1}^{n}\frac{k_k}{k_i}|w_{ki}|\Bigr]e_i^2(t)\,dx \le 0,$$
where the last inequality follows from (4.15). Thus, by the Lyapunov theorem for functional differential equations, the origin of the error system (4.2) is asymptotically stable. Hence, the two systems (1.7) and (2.6) are asymptotically synchronized. $\square$

4.3 Exponential synchronization

The response system (2.6) corresponding to the drive system (1.7) can be described by the following equation:
f ð~ y2 ðt; xÞÞ;
; f ðen ðt; xÞ þ y~n ðt; xÞÞ
Kðt
ð4:23Þ
f ð~ yn ðt; xÞÞ
T
;
D ¼ diagðd; d; . . .; d Þ;
B ¼ diagð2b; 2b; . . .; 2bÞ;
C ¼ diagðc; c; . . .; cÞ; sÞ ¼ diagðkðt sÞ; kðt
sÞ; . . .; kðt
sÞÞ:
Theorem 4.4 Assume that Assumption (A1) holds, then the drive–response neural network model (1.7) and (4.21) with the given initial condition (1.9) and (4.22) and boundary condition (1.8) are exponentially synchronized, if there exists a constant r [ 1, such that the controller gain matrix in (4.3) satisfies the following ð2
aÞI
BBT
ð2r þ aÞL
CC T þ 2W [ 0;
ð4:24Þ
where L ¼ diagðL2f ; L2f ; . . .; L2f Þ. Furthermore, the exponential synchronization index can also be estimated. Proof Consider the following Lyapunov functional V defined as Z t Z Z a 1 T e ðtÞeðtÞ þ KðsÞ jf ðeðnÞ þ yðnÞÞ VðtÞ ¼ 2 X t s i 0 f ðyðnÞÞj2 dnds dx:
ð4:25Þ
Clearly V(t) is nonnegative function defined over ½ s; þ1Þ. Moreover, it is radially unbounded, that is
123
Neural Comput & Applic
VðtÞ ! 1 as keðtÞk ! 1. Computing the time derivative of V(t) along the solutions of (4.23), we obtain _ 2 VðtÞ
Z h X
gkeðtÞk2 :
_ VðtÞ
ð4:29Þ
2
o eðtÞ e ðtÞD ox2 T
þ eT ðtÞCf ðeðt
T
T
e ðtÞeðtÞ þ e ðtÞBf ðeðt Z t sðtÞÞÞ þ aeT ðtÞ Kðt sÞ
sðtÞÞÞ
f ðeðs; xÞÞds eT ðtÞWeðtÞ Z a 1 KðsÞjf ðeðtÞ þ yðtÞÞ f ðyðtÞÞj2 ds þ 2 0 Z a 1 KðsÞjf ðeðt sÞ þ yðt sÞÞ f ðyðt 2 0 Z h 2 eT ðtÞeðtÞ þ eT ðtÞBf ðeðt sðtÞÞÞ
sÞÞj2 ds dx;
þ e ðtÞCf ðeðt
sðtÞÞÞ Z 1 þ aeT ðtÞeðtÞ þ aL2f KðsÞeT ðt sÞeðt sÞds eT ðtÞWeðtÞ 0 Z a 1 þ KðsÞjf ðeðtÞ þ yðtÞÞ f ðyðtÞÞj2 ds 2 0 Z i a 1 KðsÞjf ðeðt sÞ þ yðt sÞÞ f ðyðt sÞÞj2 ds dx; 2 0 Z h 2eT ðtÞeðtÞ þ eT ðtÞBBT eðtÞ X
sðtÞÞÞT f ðeðt
sðtÞÞÞ
sðtÞÞÞT f ðeðt i 2eT ðtÞWeðtÞ dx:
þ eT ðtÞCC T eðtÞ þ f ðeðt þ aL2f eT ðtÞeðtÞ
sðtÞÞ
sðtÞÞT eðt
sðtÞÞ:
X
ð4:28Þ
Substituting (4.27) and (4.28) in Eq. (4.26), we obtain Z h _ VðtÞ ð2 aÞeT ðtÞeðtÞ eT ðtÞBBT eðtÞ X
2rLe ðtÞeðtÞ
e ðtÞCC eðtÞ i aLe ðtÞeðtÞ þ 2e ðtÞWeðtÞ dx Z eT ðtÞ ð2 aÞI BBT ð2r þ aÞL X CC T þ 2W eðtÞdx: T
T
et VðeðtÞÞ Vðeð0ÞÞ: Thus, VðeðtÞÞ e Hence, keðtÞk e
2t
t
Vðeð0ÞÞ
or
keðtÞk2 e
t
keð0Þk2 :
keð0Þk:
Thus the error system (4.23) is exponentially synchronized h with exponential synchronization rate . 2
Example 5.1 model
Now, using Lemma 4 of [12], we assume that there exists a scalar r [ 1; such that Z eðt sðtÞÞT eðt sðtÞÞdx ¼ Vðeðt sðtÞÞÞ X Z rVðeðtÞÞ ¼ r eT ðtÞeðtÞdx:
T
For t [ 0; integrating both sides of (4.31) from 0 to t, we obtain
5 Numerical examples
ð4:27Þ
T
ð4:31Þ
ð4:26Þ
i¼1
T
d t ðe VðeðtÞÞÞ 0: dt
sðtÞÞÞ þ aeT ðtÞeðtÞ
From Assumption (A1), we have n X f ðeðt sðtÞÞÞT f ðeðt sðtÞÞÞ L2f e2i ðt Leðt
ð4:30Þ
For ¼ g, the inequality (4.30) becomes i
X T
þ f ðeðt
For any scalar [ 0; we have d t _ ðe VðeðtÞÞÞ ¼ et VðeðtÞÞ þ VðtÞ dt et ð gÞkeðtÞk2 :
1
123
Using (4.24), there exists g [ 0; such that
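Definition 2.4 and the proof above bound the error by ‖e(t)‖ ≤ M e^{−gt}, so the synchronization degree g can also be estimated a posteriori from simulation output by a linear fit of log ‖e(t)‖. A minimal sketch under that assumption, using an error-norm history such as the one produced by the quadrature helpers given after Definition 2.4:

```python
import numpy as np

def estimate_sync_degree(t, err_norm, t_min=0.0):
    """Least-squares slope of log ||e(t)|| over t >= t_min; the negated
    slope estimates the exponential synchronization degree g."""
    mask = (t >= t_min) & (err_norm > 0)
    slope, _ = np.polyfit(t[mask], np.log(err_norm[mask]), 1)
    return -slope

# toy usage: a synthetic exponentially decaying error history
t = np.linspace(0.0, 1.0, 200)
err = 0.8 * np.exp(-2.5 * t)
print(estimate_sync_degree(t, err))   # approximately 2.5
```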
5 Numerical examples

Example 5.1 Consider the following three-neuron ring network model:
$$\frac{\partial y_i(t,x)}{\partial t} = d_i\frac{\partial^2 y_i(t,x)}{\partial x^2} - y_i(t,x) + c f\bigl(y_i(t-\tau(t),x)\bigr) + a\int_{-\infty}^{t}k(t-s)f\bigl(y_i(s,x)\bigr)\,ds + b f\bigl(y_{i-1}(t-\tau(t),x)\bigr) + b f\bigl(y_{i+1}(t-\tau(t),x)\bigr), \tag{5.1}$$
with the corresponding control system given by
$$\frac{\partial\tilde{y}_i(t,x)}{\partial t} = d_i\frac{\partial^2\tilde{y}_i(t,x)}{\partial x^2} - \tilde{y}_i(t,x) + c f\bigl(\tilde{y}_i(t-\tau(t),x)\bigr) + a\int_{-\infty}^{t}k(t-s)f\bigl(\tilde{y}_i(s,x)\bigr)\,ds + b f\bigl(\tilde{y}_{i-1}(t-\tau(t),x)\bigr) + b f\bigl(\tilde{y}_{i+1}(t-\tau(t),x)\bigr) + v_i(t), \quad (i = 1,2,3), \tag{5.2}$$
with parameter values $a = 0.5$, $c = 0.07$, $b = 0.01$ and $d_i = 1$ ($i = 1,2,3$). We choose the kernel function $k(s) = s e^{-s}$ and the nonlinear activation function $f(y) = \frac{|y+1|+|y-1|}{2}$. Clearly $L_f = 1$. Then $1-(|a|+|c|+2|b|)L_f = 0.41 > 0$. Using Theorem 3.1, there exists a unique equilibrium point for the system (5.1).
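Both the kernel assumption (A2) and the equilibrium condition (3.1) can be verified directly for these parameter values. A small numerical check follows; the quadrature routine is an illustrative choice.

```python
import numpy as np
from scipy.integrate import quad

# kernel of Example 5.1
k = lambda s: s * np.exp(-s)

total, _ = quad(k, 0, np.inf)                            # (A2)(i): should equal 1
first_moment, _ = quad(lambda s: s * k(s), 0, np.inf)    # (A2)(ii): must be finite

a, b, c, Lf = 0.5, 0.01, 0.07, 1.0
cond_31 = 1.0 - (abs(a) + abs(c) + 2.0 * abs(b)) * Lf    # condition (3.1)

print(total, first_moment, cond_31)                      # 1.0, 2.0, 0.41 > 0
```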
Further, let $\sigma = 0.5$, $k_1 = 1$, $k_2 = 0.5$, $k_3 = 0.5$, and let the control matrix have entries $w_{11} = 0.2$, $w_{12} = 0.06$, $w_{13} = 0.01$, $w_{21} = 0.06$, $w_{22} = 0.5$, $w_{23} = 0.02$, $w_{31} = 0.1$, $w_{32} = 0.2$ and $w_{33} = 0$. Then, for the expression in Theorem 4.1 (taking $h_i = k_i$), the value of the left-hand side is $0.08$, whereas the value of the right-hand side is $4.2$, so that L.H.S. $<$ R.H.S. Thus condition (4.5) of Theorem 4.1 holds, and the systems (5.1) and (5.2) are asymptotically synchronized. Let $e_i(t,x) = y_i(t,x)-\tilde{y}_i(t,x)$ for $i = 1,2,3$. The simulation results for the synchronized behaviour of the solutions of the error system are depicted in Figs. 1 and 2. From the figures, we can observe that the systems (5.1) and (5.2) are asymptotically synchronized, converging to the equilibrium.

Now we check the validity of the results at the point of synchronization. We have $1-(|a|+|c|+2|b|)L_f = 0.41 > 0$, so by Theorem 3.2 there exists a unique equilibrium point for the system (5.1). Furthermore,
$$2-\sum_{k=1}^{3}|w_{1k}|-\sum_{k=1}^{3}\frac{k_k}{k_1}|w_{k1}| = 1.45, \qquad 2-\sum_{k=1}^{3}|w_{2k}|-\sum_{k=1}^{3}\frac{k_k}{k_2}|w_{k2}| = 0.60, \qquad 2-\sum_{k=1}^{3}|w_{3k}|-\sum_{k=1}^{3}\frac{k_k}{k_3}|w_{k3}| = 1.66,$$
while
$$(1+L_f^2)\,a + 2\Bigl(1+\frac{L_f^2}{1-\sigma}\Bigr)b + \Bigl(1+\frac{L_f^2}{1-\sigma}\Bigr)c = 0.73.$$
Clearly, for $i = 1,2,3$, $2-\sum_{k=1}^{3}|w_{ik}|-\sum_{k=1}^{3}\frac{k_k}{k_i}|w_{ki}| > (1+L_f^2)a + 2\bigl(1+\frac{L_f^2}{1-\sigma}\bigr)b + \bigl(1+\frac{L_f^2}{1-\sigma}\bigr)c$. Thus, by Theorem 4.3, the drive–response systems (5.1) and (5.2) are globally asymptotically synchronized.
Fig. 1 Asymptotic behaviour of the error e1(t), e2(t), e3(t). a Surface plot for y1(t). b Surface plot for y2(t). c Surface plot for y3(t)
Fig. 2 Asymptotic behaviour for the synchronized state solution converging to equilibrium. a Time portrait for e1(t). b Time portrait for e2(t). c Time portrait for e3(t)
Example 5.2 Consider the following ring neural network model:
$$\frac{\partial y_i(t,x)}{\partial t} = d_i\frac{\partial^2 y_i(t,x)}{\partial x^2} - y_i(t,x) + c f\bigl(y_i(t-\tau(t),x)\bigr) + 2b f\bigl(y_i(t-\tau(t),x)\bigr), \tag{5.3}$$
with the corresponding control system given as
$$\frac{\partial\tilde{y}_i(t,x)}{\partial t} = d_i\frac{\partial^2\tilde{y}_i(t,x)}{\partial x^2} - \tilde{y}_i(t,x) + c f\bigl(\tilde{y}_i(t-\tau(t),x)\bigr) + 2b f\bigl(\tilde{y}_i(t-\tau(t),x)\bigr) + v_i(t), \tag{5.4}$$
where $i = 1,2,3$ and the activation function is $f(y) = \frac{|y+1|+|y-1|}{2}$, so that $L_f = 1$. The other parameters are chosen as $b = 0.2$, $a = 0$, $c = 0.3$ and $r = 1.1$. Then $1-(|c|+2|b|)L_f = 0.66>0$, and thus there exists a unique equilibrium point for the model (5.3). We choose the controller gain matrix as
$$W = \begin{pmatrix}3.2 & 0 & 0\\ 0 & 3.2 & 0\\ 0 & 0 & 3.2\end{pmatrix}.$$
Then we have
$$2I - BB^T - 2rL - CC^T + 2W = \begin{pmatrix}0.75 & 0 & 0\\ 0 & 0.75 & 0\\ 0 & 0 & 0.75\end{pmatrix},$$
which is positive definite. According to Theorem 4.4, the drive–response systems (5.3) and (5.4) are globally exponentially synchronized. Let $e_i(t,x) = y_i(t,x)-\tilde{y}_i(t,x)$ for $i = 1,2,3$. The simulation results for the behaviour of the synchronized state solution of the error system are depicted in Figs. 3 and 4. It can be observed from the figures that the systems (5.3) and (5.4) are asymptotically synchronized, converging to the equilibrium.
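Condition (4.24) of Theorem 4.4 is a positive-definiteness test that can be checked numerically once the matrices of (4.23) are assembled. The sketch below builds the test matrix for a diagonal gain W = wI; the helper names and the use of an eigenvalue test (rather than the explicit diagonal computation quoted above) are illustrative assumptions, and the parameter values are taken as printed in Example 5.2, where the signs of the gain entries were lost in the source.

```python
import numpy as np

def theorem44_matrix(n, a, b, c, Lf, r, W):
    """(2 - a) I - B B^T - (2r + a) L - C C^T + 2 W with
    B = diag(2b), C = diag(c), L = diag(Lf^2), as in (4.23)-(4.24)."""
    I = np.eye(n)
    B = 2.0 * b * I
    C = c * I
    L = Lf ** 2 * I
    return (2.0 - a) * I - B @ B.T - (2.0 * r + a) * L - C @ C.T + 2.0 * W

def is_positive_definite(M):
    return np.all(np.linalg.eigvalsh(0.5 * (M + M.T)) > 0)

# Example 5.2 data as printed: a = 0, b = 0.2, c = 0.3, Lf = 1, r = 1.1, W = 3.2 I
M = theorem44_matrix(3, a=0.0, b=0.2, c=0.3, Lf=1.0, r=1.1, W=3.2 * np.eye(3))
print(is_positive_definite(M))   # True, so condition (4.24) holds
```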
Fig. 3 Asymptotic behaviour of the error solutions e1(t), e2(t), e3(t) of (5.3). a Surface plot for y1(t). b Surface plot for y2(t). c Surface plot for y3(t)
6 Conclusion

In this work, we have investigated a ring neural network of coupled neurons with reaction–diffusion terms and multiple time delays. We have proved the existence and uniqueness of the equilibrium by using topological degree theory and properties of M-matrices. Based on linear feedback control techniques, we have obtained some useful criteria for the global asymptotic synchronization and exponential synchronization of the diffusive neural ring network model by constructing suitable Lyapunov functionals and employing inequality techniques. From the obtained results, we can interpret that if the reaction–diffusion term satisfies weaker conditions, then the conditions for asymptotic synchronization can be derived from the network parameters alone. Moreover, the obtained results can be easily verified by simple algebraic methods. We have provided illustrative simulation results to support the effectiveness of the obtained results. The obtained results are of significant importance in various applications; for instance, they can be applied in designing globally exponentially stable and periodic oscillatory neural circuits. Among various complex networks, the examination of food webs has been a focus of research in the past few years.
Fig. 4 Asymptotic behaviour for the synchronized state solution converging to equilibrium. a Time portrait for e1(t). b Time portrait for e2(t). c Time portrait for e3(t)
However, considering diffusion effects in food webs becomes quite significant and interesting to investigate. Our results can possibly be applied to food webs with diffusion effects and to ecosystems with reaction–diffusion terms.

Acknowledgements We are thankful to the editor, associate editor and anonymous reviewers for their insightful comments and suggestions, which helped in improving the manuscript considerably.

Conflict of interest We would like to declare that there is no conflict of interest.
References
1. Abbas S (2012) Existence and attractivity of k-pseudo almost automorphic sequence solution of a model of bidirectional neural networks. Acta Appl Math 119(1):57–74
2. Abbas S (2009) Pseudo almost periodic sequence solutions of discrete time cellular neural networks. Nonlinear Anal Model Control 14(3):283–301
3. Balasubramaniam P, Vembarasan V, Rakkiyappan R (2011) Delay-dependent robust exponential state estimation of Markovian jumping fuzzy Hopfield neural networks with mixed random time-varying delays. Commun Nonlinear Sci Numer Simul 16(4):2109–2129
4. Baldi P, Atiya AF (1994) How delays affect neural dynamics and learning. IEEE Trans Neural Netw 5(4):612–621
5. Bungay SD, Campbell SA (2007) Patterns of oscillation in a ring of identical cells with delayed coupling. Int J Bifurc Chaos 17(09):3109–3125
6. Campbell SA, Ruan S, Wolkowicz G, Wu J (1999) Stability and bifurcation of a simple neural network with multiple time delays. Fields Inst Commun 21(4):65–79
7. Campbell SA, Yuan Y, Bungay SD (2005) Equivariant Hopf bifurcation in a ring of identical cells with delayed coupling. Nonlinearity 18(6):2827
8. Chen S, Cao J (2012) Projective synchronization of neural networks with mixed time-varying delays and parameter mismatch. Nonlinear Dyn 67(2):1397–1406
9. Feng C, Plamondon R (2012) An oscillatory criterion for a time delayed neural ring network model. Neural Netw 29:70–79
10. Gana Q, Liub T, Chang Liua TL (2016) Synchronization for a class of generalized neural networks with interval time-varying delays and reaction–diffusion terms. Nonlinear Anal Model Control 21(3):379–399
11. Guo D (1985) Nonlinear functional analysis. Shundong Sci. Tech. Press, Jinan
12. Hale JK, Lunel SMV (1993) Introduction to functional differential equations. Applied Mathematical Sciences, vol 99. Springer-Verlag, New York, x+447 pp. ISBN: 0-387-94076-6
13. Hu C, Jiang H, Teng Z (2010) Impulsive control and synchronization for delayed neural networks with reaction–diffusion terms. IEEE Trans Neural Netw 21(1):67–81
14. Li R, Cao J (2016) Stability analysis of reaction–diffusion uncertain memristive neural networks with time-varying delays and leakage term. Appl Math Comput 278:54–69
15. Li X, Shen J (2010) LMI approach for stationary oscillation of interval neural networks with discrete and distributed time-varying delays under impulsive perturbations. IEEE Trans Neural Netw 21(10):1555–1563
16. Liao X, Fu Y, Gao J, Zhao X (2000) Stability of Hopfield neural networks with reaction–diffusion terms. Acta Electron Sin 28(1):78–80
17. Lou XY, Cui BT (2006) Asymptotic synchronization of a class of neural networks with reaction–diffusion terms and time-varying delays. Comput Math Appl 52(6):897–904
18. Lu JG, Lu LJ (2009) Global exponential stability and periodicity of reaction–diffusion recurrent neural networks with distributed delays and Dirichlet boundary conditions. Chaos Solitons Fractals 39(4):1538–1549
19. Pecora LM, Carroll TL (1990) Synchronization in chaotic systems. Phys Rev Lett 64(8):821
20. Phat VN, Trinh H (2010) Exponential stabilization of neural networks with various activation functions and mixed time-varying delays. IEEE Trans Neural Netw 21(7):1180–1184
21. Sheng L, Yang H (2008) Exponential synchronization of a class of neural networks with mixed time-varying delays and impulsive effects. Neurocomputing 71(16):3666–3674
22. Sheng L, Yang H, Lou X (2009) Adaptive exponential synchronization of delayed neural networks with reaction–diffusion terms. Chaos Solitons Fractals 40(2):930–939
23. Song Q (2009) Design of controller on synchronization of chaotic neural networks with mixed time-varying delays. Neurocomputing 72(13):3288–3295
24. Song Q, Cao J (2011) Synchronization of nonidentical chaotic neural networks with leakage delay and mixed time-varying delays. Adv Differ Equ 2011(1):1–17
25. Song Y, Han Y, Peng Y (2013) Stability and Hopf bifurcation in an unidirectional ring of n neurons with distributed delays. Neurocomputing 121:442–452
26. Tyagi S, Abbas S, Ray RK (2015) Stability analysis of an integro differential equation model of ring neural network with delay. Springer Proc Math Stat 143:37–49
27. Wang Y, Cao J (2007) Synchronization of a class of delayed neural networks with reaction–diffusion terms. Phys Lett A 369(3):201–211
28. Wang L, Zhang R, Wang Y (2009) Global exponential stability of reaction–diffusion cellular neural networks with S-type distributed time delays. Nonlinear Anal Real World Appl 10(2):1101–1113
29. Wang L, Zhao H, Cao J (2016) Synchronized bifurcation and stability in a ring of diffusively coupled neurons with time delay. Neural Netw 75:32–46
30. Wang Z, Zhang H (2010) Global asymptotic stability of reaction–diffusion Cohen–Grossberg neural networks with continuously distributed delays. IEEE Trans Neural Netw 21(1):39–49
31. Wei PC, Wang JL, Huang YL, Xu BB, Ren SY (2016) Impulsive control for the synchronization of coupled neural networks with reaction–diffusion terms. Neurocomputing 207:539–547
32. Wei-Yuan Z, Jun-Min L (2011) Global exponential stability of reaction–diffusion neural networks with discrete and distributed time-varying delays. Chin Phys B 20(3):030701
33. Yuan K, Cao J, Li HX (2006) Robust stability of switched Cohen–Grossberg neural networks with mixed time-varying delays. IEEE Trans Syst Man Cybern Part B Cybern 36(6):1356–1363
34. Yuan Y, Campbell SA (2004) Stability and synchronization of a ring of identical cells with delayed coupling. J Dyn Differ Equ 16(3):709–744
35. Zhang CK, He Y, Wu M (2010) Exponential synchronization of neural networks with time-varying mixed delays and sampled-data. Neurocomputing 74(1):265–273
36. Zhu Q, Cao J (2011) Exponential stability analysis of stochastic reaction–diffusion Cohen–Grossberg neural networks with mixed delays. Neurocomputing 74(17):3084–3091