Neural Process Lett (2015) 41:1–27 DOI 10.1007/s11063-013-9331-8
Robust Stability of Markovian Jump Stochastic Neural Networks with Time Delays in the Leakage Terms

Quanxin Zhu · Jinde Cao · Tasawar Hayat · Fuad Alsaadi
Published online: 26 November 2013 © Springer Science+Business Media New York 2013
Abstract This paper deals with the problem of exponential stability for a class of Markovian jump stochastic neural networks with time delays in the leakage terms and mixed time delays. The jumping parameters are modeled as a continuous-time, finite-state Markov chain, and the mixed time delays consist of time-varying delays and distributed delays. By using the method of model transformation, Lyapunov stability theory, stochastic analysis and linear matrix inequality techniques, several novel sufficient conditions are derived to guarantee the exponential stability in the mean square of the equilibrium point of the suggested system in two cases: with known or unknown parameters. Moreover, some remarks and discussions are given to illustrate that the obtained results are significant and that they comprise and generalize those obtained in the previous literature. In particular, the obtained stability conditions are delay-dependent, depending on all the delay constants, and thus the presented results are less conservative. Finally, two numerical examples are provided to show the effectiveness of the theoretical results.
Q. Zhu (B): School of Mathematical Sciences and Institute of Finance and Statistics, Nanjing Normal University, Nanjing 210023, Jiangsu, China. e-mail: [email protected]
J. Cao: Department of Mathematics and Research Center for Complex Systems and Network Sciences, Southeast University, Nanjing 210096, Jiangsu, China. e-mail: [email protected]
T. Hayat: Department of Mathematics, Quaid-I-Azam University, Islamabad 44000, Pakistan
J. Cao · T. Hayat: Department of Mathematics, Faculty of Science, King Abdulaziz University, Jeddah, Saudi Arabia
F. Alsaadi: Department of Electrical and Computer Engineering, Faculty of Engineering, King Abdulaziz University, Jeddah, Saudi Arabia
Keywords Exponential stability · Stochastic neural network · Lyapunov functional · Linear matrix inequality · Markovian jump parameter · Leakage time delay
1 Introduction

Over the past decades, neural networks have been widely studied in the literature because of their important applications in many areas such as signal processing, associative memory, pattern recognition, parallel computation and optimization. Stability is the most important problem in the field of neural networks since it is the primary requirement in modern control theories. Therefore, there has been an increasing interest in the stability analysis of neural networks, and many results on this topic have been reported in the literature (see e.g. [1–46] and the references therein).

As is well known, time delays are unavoidably encountered in the implementation of neural networks due to the finite switching speed of neurons and amplifiers. They describe a ubiquitous phenomenon in real systems: the rate of change of the state depends not only on the current state of the system but also on its state at some earlier time. Moreover, it has been shown that the existence of time delays may lead to oscillation and instability of neural networks, which is harmful to their applications. As a consequence, it is very important to study the delay effects on the dynamical behavior of neural networks. The existing works on the stability of neural networks with time delays can be grouped into four cases: constant delays [4,5,9], time-varying delays [6,8,10], distributed delays [12–16] and mixed time delays [17,18,32–36]. Different from the above time delays, a new class of delays, called time delays in the leakage (or "forgetting") term, was introduced by Gopalsamy [19] in the study of neural networks. Like the traditional time delays, the leakage delays also have a great impact on the dynamics of neural networks. As pointed out by Gopalsamy [19], the leakage delays have a tendency to destabilize the neural networks.
Therefore, it is very significant to investigate the stability of neural networks with time delays in the leakage term, and a large number of works on this topic have appeared (see e.g. [20–25] and the references therein). By constructing an augmented Lyapunov–Krasovskii functional and employing the delay decomposition approach and some analysis techniques, Balasubramaniam et al. [20] derived some sufficient conditions for the existence, uniqueness, and stability of a class of T–S fuzzy cellular neural networks with time delays in the leakage term. Moreover, they also proved the existence and global asymptotic stability of the equilibrium point of recurrent neural networks with time delays in the leakage term and unbounded distributed delays, based on a Lyapunov–Krasovskii functional with free-weighting matrices, the homeomorphism mapping principle and linear matrix inequalities [21]. By using topological degree theory, a Lyapunov–Krasovskii functional and some analysis techniques, Li et al. [23] investigated the existence, uniqueness and global asymptotic stability of recurrent neural networks with time delays in the leakage term. In [24], Liu studied the existence, uniqueness and global exponential stability of the equilibrium of general bidirectional associative memory neural networks with time-varying delays in the leakage terms by constructing a Lyapunov functional and using a fixed point theorem. By using the properties of M-matrices, the properties of the fuzzy logic operator and a delay differential inequality, Long et al. [25] derived some sufficient conditions for the exponential stability of fuzzy cellular networks with time delay in the leakage term and impulsive perturbations. For more results on time delays in the leakage term, we refer the reader to [26–31].
From the foregoing discussion we know that time delays in the leakage terms do have a great impact on the dynamics of neural networks, which implies that their effects cannot be ignored. However, all the above-mentioned works on time delays in the leakage terms neglected the effects of stochastic disturbances, which also have an important effect on the stability of neural networks. In fact, the synaptic transmission in real neural networks can be viewed as a noisy process introduced by random fluctuations from the release of neurotransmitters and other probabilistic causes, so a neural network can be stabilized or destabilized by certain stochastic inputs. Therefore, stochastic disturbance is a major source of instability and poor performance in neural networks, and this important factor should be taken into account when investigating the stability of neural networks. Usually, neural networks with stochastic disturbances are called stochastic neural networks. Recently, a class of important stochastic neural networks known as Markovian jump stochastic neural networks has received a great deal of attention, since such networks can model the phenomenon of information latching as well as abrupt phenomena such as random failures or repairs of components, sudden environmental changes, changing subsystem interconnections, etc. As we know, a Markovian jump stochastic neural network is a hybrid system whose state vector has two components x(t) and r(t). The first, x(t), is in general referred to as the state, and is described by a stochastic differential equation; the second, r(t), is regarded as the mode, and is governed by a continuous-time Markov chain with a finite state space. In its operation, the jump system switches from one mode to another in a random way governed by the continuous-time Markov chain.
Hence, Markovian jump stochastic neural networks are very complex and have many important applications, which in turn attract many researchers' interest [32–39]. For example, Liu et al. investigated the robust exponential stability of stochastic neural networks with mixed time delays and Markovian switching in [32]; Balasubramaniam and Rakkiyappan [35] discussed the global asymptotic stability problem for a class of Markovian jumping stochastic Cohen–Grossberg neural networks with discrete interval and distributed delays; in [36], Zhang and Wang dealt with the asymptotic stability analysis of a class of Markovian jumping stochastic Cohen–Grossberg neural networks based on the linear matrix inequality technique; very recently, Zhu and Cao introduced some new classes of Markovian jump stochastic neural networks with mixed time delays with/without impulse control in [37–39], and studied the exponential stability in the mean square of the equilibrium point. However, none of the works in [32–39] considered the effects of time delays in the leakage terms, owing to some theoretical and technical difficulties. This situation encourages our present research.

Motivated by the above discussion, in this paper we investigate the stability problem for a class of Markovian jump stochastic neural networks with time delays in the leakage terms and mixed time delays. By applying the method of model transformation, Lyapunov stability theory, stochastic analysis and linear matrix inequality techniques, we derive several novel sufficient conditions to ensure the exponential stability in the mean square of the equilibrium point of the suggested system under two cases: with known or unknown parameters. Our stability conditions are delay-dependent, depending on all the delay constants, and thus the presented results are less conservative. Moreover, two numerical examples are given to demonstrate the effectiveness of the obtained results.
The remainder of this paper is organized as follows. In Sect. 2, we introduce the model of a class of Markovian jump stochastic neural networks with mixed time delays and leakage time-varying delays, and give some assumptions and lemmas needed in this paper. By constructing some novel Lyapunov–Krasovskii functionals, we prove the exponential stability in the mean square of the equilibrium point for the suggested system in Sect. 3. In Sect. 4, two numerical
examples are provided to illustrate the effectiveness of the theoretical results. Finally, in Sect. 5, we conclude the paper with some general remarks.

1.1 Notation

Throughout this paper, the following notations will be used. R^n and R^{n×m} denote the n-dimensional Euclidean space and the set of all n × m real matrices, respectively. The superscript "T" denotes the transpose of a matrix or vector, and the symbol "∗" denotes the symmetric term of a matrix. trace(·) denotes the trace of the corresponding matrix and I denotes the identity matrix with compatible dimensions. For any matrix A, λ_max(A) (respectively, λ_min(A)) denotes the largest (respectively, smallest) eigenvalue of A. For square matrices M_1 and M_2, the notation M_1 > (≥, <, ≤) M_2 means that M_1 − M_2 is a positive-definite (positive-semidefinite, negative-definite, negative-semidefinite) matrix. Let τ > 0 and let C([−τ, 0]; R^n) denote the family of continuous functions φ from [−τ, 0] to R^n with the uniform norm ‖φ‖ = sup_{−τ≤θ≤0} |φ(θ)|. Denote by L²_{F₀}([−τ, 0]; R^n) the family of all F₀-measurable, C([−τ, 0]; R^n)-valued stochastic variables ξ = {ξ(θ) : −τ ≤ θ ≤ 0} such that ∫_{−τ}^{0} E|ξ(s)|² ds < ∞, where E[·] stands for the expectation operator with respect to the given probability measure P.
2 Model Description and Problem Formulation

Let {r(t), t ≥ 0} be a right-continuous Markov chain on a complete probability space (Ω, F, P) taking values in a finite state space S = {1, 2, ..., N} with generator Q = (q_{ij})_{N×N} given by
\[
P\{r(t+\Delta t)=j \mid r(t)=i\}=\begin{cases} q_{ij}\,\Delta t+o(\Delta t), & i\neq j,\\[2pt] 1+q_{ii}\,\Delta t+o(\Delta t), & i=j,\end{cases}
\]
where \(\Delta t>0\) and \(\lim_{\Delta t\to 0} o(\Delta t)/\Delta t=0\). Here \(q_{ij}\ge 0\) is the transition rate from i to j if \(i\neq j\), while \(q_{ii}=-\sum_{j\neq i}q_{ij}\). In this paper, we introduce the following new class of Markovian jump stochastic neural networks with mixed time delays and time delays in the leakage terms:
\[
dx(t)=\Big[-D(r(t))x(t-\beta)+A(r(t))f(x(t))+B(r(t))g(x(t-\tau_1(t)))+C(r(t))\int_{t-\tau_2(t)}^{t}h(x(s))\,ds\Big]dt+\sigma\big(x(t),x(t-\beta),x(t-\tau_1(t)),x(t-\tau_2(t)),t,r(t)\big)\,dw(t),
\tag{1}
\]
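The generator convention above (off-diagonal rates q_ij ≥ 0, rows summing to zero) can be made concrete by simulating the mode process r(t): hold state i for an exponential time with rate −q_ii, then jump to j ≠ i with probability q_ij/(−q_ii). The following sketch (a minimal illustration with a hypothetical 2-mode generator, not taken from the paper's examples) shows this:

```python
import numpy as np

def simulate_ctmc(Q, i0, T, rng):
    """Simulate a right-continuous Markov chain r(t) on S = {0, ..., N-1}
    with generator Q: hold state i for an Exp(-Q[i,i]) time, then jump
    to j != i with probability Q[i,j] / (-Q[i,i])."""
    t, i = 0.0, i0
    path = [(0.0, i0)]                  # (jump time, new mode) pairs
    while t < T:
        rate = -Q[i, i]
        if rate <= 0:                   # absorbing state: no more jumps
            break
        t += rng.exponential(1.0 / rate)   # exponential holding time
        probs = Q[i].copy()
        probs[i] = 0.0
        probs /= rate                   # jump distribution over j != i
        i = rng.choice(len(Q), p=probs)
        path.append((t, i))
    return path

# Hypothetical 2-mode generator: off-diagonals >= 0, rows sum to zero.
Q = np.array([[-3.0, 3.0],
              [2.0, -2.0]])
rng = np.random.default_rng(0)
path = simulate_ctmc(Q, 0, 10.0, rng)
```

In a full simulation of system (1), this mode path would select which set of weight matrices (D_i, A_i, B_i, C_i) is active on each holding interval.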
where x(t) = [x_1(t), x_2(t), ..., x_n(t)]^T is the state vector associated with the n neurons, and the diagonal matrix D(r(t)) = diag(d_1(r(t)), d_2(r(t)), ..., d_n(r(t))) has positive entries d_i(r(t)) > 0 (i = 1, 2, ..., n). The matrices A(r(t)) = (a_{ij}(r(t)))_{n×n}, B(r(t)) = (b_{ij}(r(t)))_{n×n} and C(r(t)) = (c_{ij}(r(t)))_{n×n} are the connection weight matrix, the time-varying delay connection weight matrix, and the distributed delay connection weight matrix, respectively. f(x(t)) = [f_1(x_1(t)), ..., f_n(x_n(t))]^T, g(x(t)) = [g_1(x_1(t)), ..., g_n(x_n(t))]^T and h(x(t)) = [h_1(x_1(t)), ..., h_n(x_n(t))]^T are the neuron activation functions. The noise perturbation σ : R^n × R^n × R^n × R^n × R_+ × S → R^{n×m} is a Borel measurable function, and β denotes the leakage delay. τ_1(t) and τ_2(t) are the time-varying delays. Throughout this paper, we make the following assumptions.

Assumption 1 There exist diagonal matrices \(U_i^-=\mathrm{diag}(u_{i1}^-,u_{i2}^-,\dots,u_{in}^-)\) and \(U_i^+=\mathrm{diag}(u_{i1}^+,u_{i2}^+,\dots,u_{in}^+)\), i = 1, 2, 3, satisfying
\[
u_{1j}^-\le\frac{f_j(\alpha)-f_j(\beta)}{\alpha-\beta}\le u_{1j}^+,\qquad
u_{2j}^-\le\frac{g_j(\alpha)-g_j(\beta)}{\alpha-\beta}\le u_{2j}^+,\qquad
u_{3j}^-\le\frac{h_j(\alpha)-h_j(\beta)}{\alpha-\beta}\le u_{3j}^+
\]
for all α, β ∈ R, α ≠ β, j = 1, 2, ..., n.

Assumption 2 There exist positive constants τ_1, τ_2, ρ_1, ρ_2 such that 0 ≤ τ_1(t) ≤ τ_1, 0 ≤ τ_2(t) ≤ τ_2, τ̇_1(t) ≤ ρ_1, τ̇_2(t) ≤ ρ_2.

Assumption 3 There exist positive definite matrices T_{1i}, T_{2i}, T_{3i} and T_{4i} (i ∈ S) such that
\[
\mathrm{trace}\big[\sigma^T(x_1,x_2,x_3,x_4,t,i)\,\sigma(x_1,x_2,x_3,x_4,t,i)\big]\le x_1^TT_{1i}x_1+x_2^TT_{2i}x_2+x_3^TT_{3i}x_3+x_4^TT_{4i}x_4
\]
for all x_1, x_2, x_3, x_4 ∈ R^n and r(t) = i, i ∈ S.

Assumption 4 σ(0, 0, 0, 0, t, r(t)) ≡ 0.

Noting the facts that f(0) = g(0) = h(0) = 0 and σ(0, 0, 0, 0, t, r(t)) = 0, the trivial solution of system (1) exists. Let x(t; ξ) denote the state trajectory from the initial data x(θ) = ξ(θ) on −τ ≤ θ ≤ 0 in L²_{F₀}([−τ, 0]; R^n), where τ = max{β, τ_1, τ_2, ρ_1, ρ_2}. Clearly, system (1) admits a trivial solution x(t; 0) ≡ 0 corresponding to the initial data ξ = 0. For simplicity, we write x(t; ξ) = x(t).

Let C_1^2(R_+ × R^n × S; R_+) denote the family of all nonnegative functions V(t, x, i) on R_+ × R^n × S which are continuously twice differentiable in x and once differentiable in t. If V ∈ C_1^2(R_+ × R^n × S; R_+), then along the trajectory of system (1) we define an operator \(\mathcal{L}V\) from R_+ × R^n × S to R by
\[
\mathcal{L}V(t,x(t),i)=V_t(t,x(t),i)+V_x(t,x(t),i)\Big[-D_ix(t-\beta)+A_if(x(t))+B_ig(x(t-\tau_1(t)))+C_i\int_{t-\tau_2(t)}^{t}h(x(s))\,ds\Big]+\sum_{j=1}^{N}q_{ij}V(t,x(t),j)+\frac12\,\mathrm{trace}\big[\sigma^T(x(t),x(t-\beta),x(t-\tau_1(t)),x(t-\tau_2(t)),t,i)\,V_{xx}(t,x(t),i)\,\sigma(x(t),x(t-\beta),x(t-\tau_1(t)),x(t-\tau_2(t)),t,i)\big],
\tag{2}
\]
where
\[
V_t(t,x(t),i)=\frac{\partial V(t,x(t),i)}{\partial t},\qquad
V_x(t,x(t),i)=\Big(\frac{\partial V(t,x(t),i)}{\partial x_1},\dots,\frac{\partial V(t,x(t),i)}{\partial x_n}\Big),\qquad
V_{xx}(t,x(t),i)=\Big(\frac{\partial^2 V(t,x(t),i)}{\partial x_j\,\partial x_k}\Big)_{n\times n}.
\]
Now we give the concept of exponential stability for system (1).

Definition 1 The equilibrium point of system (1) is said to be exponentially stable in the mean square if for every ξ ∈ L²_{F₀}([−τ, 0]; R^n) there exist scalars α > 0 and γ > 0 such that
\[
\mathbb{E}|x(t;\xi)|^2\le\alpha e^{-\gamma t}\sup_{-\tau\le\theta\le 0}\mathbb{E}|\xi(\theta)|^2.
\]
The following lemmas are needed to prove our main results.

Lemma 1 ([40]) For any positive definite matrix G > 0 of appropriate dimensions, any scalars a and b with a < b, and a vector function ω : [a, b] → R^n such that the integrations concerned are well defined,
\[
\Big(\int_a^b\omega(t)\,dt\Big)^T G\Big(\int_a^b\omega(t)\,dt\Big)\le(b-a)\int_a^b\omega^T(t)\,G\,\omega(t)\,dt.
\]
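Lemma 1 is the integral Jensen (Cauchy–Schwarz) inequality used later to bound the integral terms of the Lyapunov functional. A quick numerical sanity check on a discretized integral, with an arbitrarily chosen positive definite G and vector function ω (all names here are hypothetical test data, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 3, 2000                         # state dimension, grid points
a, b = 0.0, 2.0
ts = np.linspace(a, b, m)
dt = ts[1] - ts[0]

M = rng.standard_normal((n, n))
G = M @ M.T + n * np.eye(n)            # a positive definite G

# omega : [a, b] -> R^3, a smooth non-constant test function
omega = np.stack([np.sin(ts), np.cos(2 * ts), ts], axis=1)

v = omega.sum(axis=0) * dt                                  # Riemann sum of ∫ omega
lhs = v @ G @ v                                             # (∫omega)^T G (∫omega)
rhs = (b - a) * np.einsum('ti,ij,tj->', omega, G, omega) * dt  # (b-a) ∫ omega^T G omega
```

Equality holds only when ω is (essentially) constant, so for a non-constant ω the gap lhs < rhs is strict.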
Lemma 2 (Schur complement) Given a positive definite matrix G_2 > 0 and constant matrices G_1, G_3 with G_1 = G_1^T, then \(G_1+G_3^TG_2^{-1}G_3<0\) if and only if
\[
\begin{bmatrix} G_1 & G_3^T\\ G_3 & -G_2\end{bmatrix}<0
\qquad\text{or}\qquad
\begin{bmatrix} -G_2 & G_3\\ G_3^T & G_1\end{bmatrix}<0.
\]

Lemma 3 ([37]) For any real scalars c_1 > 0, c_2 > 0, c_3 > 0 and γ > 0, the equation \(c_1x+c_2xe^{c_3x}=\gamma\) has a unique positive solution x.

Lemma 4 ([47]) For any real matrices X, Y of appropriate dimensions, \(X^TY+Y^TX\le X^TX+Y^TY.\)

In the sequel, for simplicity, when r(t) = i the matrices D(r(t)), A(r(t)), B(r(t)) and C(r(t)) will be written as D_i, A_i, B_i and C_i, respectively.
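Lemmas 2 and 3 can both be exercised numerically. For Lemma 2, construct G_1 so that the Schur-complement condition holds by design and confirm the equivalent block matrix is negative definite; for Lemma 3, note that c_1x + c_2xe^{c_3x} is strictly increasing from 0, so a simple bisection finds the unique positive root. This is a sketch with randomly generated test matrices and hypothetical scalar values, not the paper's data:

```python
import numpy as np

rng = np.random.default_rng(2)

# --- Lemma 2 (Schur complement) on random matrices ---
n = 4
A = rng.standard_normal((n, n))
G2 = A @ A.T + n * np.eye(n)                         # G2 > 0
G3 = rng.standard_normal((n, n))
# Force G1 + G3^T G2^{-1} G3 = -I < 0, with G1 symmetric:
G1 = -(G3.T @ np.linalg.inv(G2) @ G3) - np.eye(n)
G1 = (G1 + G1.T) / 2

def is_neg_def(M):
    return np.linalg.eigvalsh((M + M.T) / 2).max() < 0

cond = is_neg_def(G1 + G3.T @ np.linalg.inv(G2) @ G3)
block = np.block([[G1, G3.T], [G3, -G2]])            # first equivalent block form

# --- Lemma 3: unique positive root of c1*x + c2*x*exp(c3*x) = gamma ---
def unique_root(c1, c2, c3, gamma, lo=0.0, hi=1.0):
    f = lambda x: c1 * x + c2 * x * np.exp(c3 * x) - gamma
    while f(hi) < 0:        # f is increasing and unbounded: bracket the root
        hi *= 2
    for _ in range(200):    # bisection
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
    return (lo + hi) / 2

x_root = unique_root(1.0, 2.0, 0.5, 3.0)
```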
3 Main Results and Proofs

In this section, the exponential stability in the mean square of the equilibrium point for system (1) is investigated under Assumptions 1–4.

Theorem 1 Under Assumptions 1–4, the equilibrium point of system (1) is exponentially stable in the mean square if there exist positive scalars λ_i (i ∈ S), positive diagonal matrices Q_1, Q_2, Q_3, positive definite matrices E, F, G, H, K, L, P_i (i ∈ S), and any matrices M_i, N_i (i = 1, 2, 3) with appropriate dimensions such that the following linear matrix inequalities (LMIs) hold:
\[
P_i\le\lambda_iI,
\tag{3}
\]
\[
\Xi_i=\begin{bmatrix}
\Xi_{11} & 0 & \Xi_{13} & \Xi_{14} & P_iA_i & P_iB_i & 0 & \Xi_{18} & 0 & P_iC_i & \Xi_{1,11} & \Xi_{1,12}\\
\ast & -E+\lambda_iT_{2i} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
\ast & \ast & \Xi_{33} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \Xi_{3,11} & 0\\
\ast & \ast & \ast & \Xi_{44} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \Xi_{4,12}\\
\ast & \ast & \ast & \ast & -Q_1 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & -Q_2 & 0 & 0 & 0 & 0 & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & \Xi_{77} & 0 & 0 & 0 & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & -K & 0 & 0 & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & -L & 0 & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & -H & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \Xi_{11,11} & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \Xi_{12,12}
\end{bmatrix}<0,
\tag{4}
\]
where
\[
\begin{aligned}
\Xi_{11}&=-2P_iD_i+\sum_{j=1}^{N}q_{ij}P_j+\lambda_iT_{1i}+E+F+G+U_1Q_1U_1+U_3Q_3U_3+\beta^2K+M_1^T+M_1+N_1^T+N_1+\tau_1^2L,\\
\Xi_{13}&=-M_1+M_2^T,\quad \Xi_{14}=-N_1+N_2^T,\quad \Xi_{18}=D_i^TP_iD_i-\sum_{j=1}^{N}q_{ij}P_jD_j,\quad \Xi_{1,11}=-M_1+M_3^T,\quad \Xi_{1,12}=-N_1+N_3^T,\\
\Xi_{33}&=-(1-\rho_1)F+\lambda_iT_{3i}+U_2Q_2U_2-M_2^T-M_2,\quad \Xi_{3,11}=-M_2-M_3^T,\\
\Xi_{44}&=-(1-\rho_2)G+\lambda_iT_{4i}-N_2^T-N_2,\quad \Xi_{4,12}=-N_2-N_3^T,\\
\Xi_{77}&=-Q_3+\tau_2^2H,\quad \Xi_{11,11}=-M_3^T-M_3,\quad \Xi_{12,12}=-N_3^T-N_3.
\end{aligned}
\]

Proof Fixing ξ ∈ L²_{F₀}([−τ, 0]; R^n) arbitrarily and writing x(t; ξ) = x(t), consider the following Lyapunov–Krasovskii functional:
\[
\begin{aligned}
V(t,x(t),i)={}&\Big[x(t)-D_i\int_{t-\beta}^{t}x(s)ds\Big]^TP_i\Big[x(t)-D_i\int_{t-\beta}^{t}x(s)ds\Big]
+\int_{t-\beta}^{t}x^T(s)Ex(s)ds+\int_{t-\tau_1(t)}^{t}x^T(s)Fx(s)ds\\
&+\int_{t-\tau_2(t)}^{t}x^T(s)Gx(s)ds
+\tau_2\int_{-\tau_2}^{0}\!\!\int_{t+\theta}^{t}h^T(x(s))Hh(x(s))\,ds\,d\theta
+\beta\int_{-\beta}^{0}\!\!\int_{t+\theta}^{t}x^T(s)Kx(s)\,ds\,d\theta
+\tau_1\int_{-\tau_1}^{0}\!\!\int_{t+\theta}^{t}x^T(s)Lx(s)\,ds\,d\theta.
\end{aligned}
\]
Define the infinitesimal generator \(\mathcal{L}\) of the Markov process acting on V(t, x(t), i) as follows:
\[
\mathcal{L}V(t,x(t),i):=\lim_{\Delta\to 0^+}\frac{1}{\Delta}\Big[\mathbb{E}\{V(t+\Delta,x(t+\Delta),r(t+\Delta))\mid x(t),\,r(t)=i\}-V(t,x(t),i)\Big].
\]
It is easy to prove that system (1) is equivalent to the following form:
\[
d\Big[x(t)-D_i\int_{t-\beta}^{t}x(s)ds\Big]=\Big[-D_ix(t)+A_if(x(t))+B_ig(x(t-\tau_1(t)))+C_i\int_{t-\tau_2(t)}^{t}h(x(s))ds\Big]dt+\sigma\big(x(t),x(t-\beta),x(t-\tau_1(t)),x(t-\tau_2(t)),t,i\big)\,dw(t).
\tag{5}
\]
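The drift side of the model transformation behind (5) rests on the deterministic identity d/dt [x(t) − D ∫_{t−β}^{t} x(s) ds] = ẋ(t) − D x(t) + D x(t−β), which moves the delayed leakage term −D x(t−β) into a delay-free leakage −D x(t). A finite-difference sanity check with a hypothetical smooth scalar trajectory (x(t) = sin t and arbitrary D, β; these values are illustrative only):

```python
import numpy as np

D, beta = 0.7, 0.3           # hypothetical scalar leakage gain and leakage delay
x = np.sin                    # smooth stand-in trajectory x(t)

def y(t, m=20001):
    """y(t) = x(t) - D * integral_{t-beta}^{t} x(s) ds (trapezoidal rule)."""
    s = np.linspace(t - beta, t, m)
    ds = s[1] - s[0]
    integral = (x(s)[:-1] + x(s)[1:]).sum() * ds / 2
    return x(t) - D * integral

t0, h = 1.2, 1e-5
lhs = (y(t0 + h) - y(t0 - h)) / (2 * h)           # numerical d/dt of transformed state
rhs = np.cos(t0) - D * x(t0) + D * x(t0 - beta)   # xdot(t) - D x(t) + D x(t - beta)
```

The two sides agree up to discretization error, which is the identity that turns (1) into (5) before the stochastic term is carried along unchanged.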
For the sake of simplicity, we denote σ(x(t), x(t−β), x(t−τ_1(t)), x(t−τ_2(t)), t, i) by σ(t, i). Note that the function V belongs to C_1^2(R_+ × R^n × S; R_+). Then it follows from (2) and (5), as well as Assumptions 2 and 3, that
\[
\begin{aligned}
\mathcal{L}V(t,x(t),i)={}&2\Big[x(t)-D_i\int_{t-\beta}^{t}x(s)ds\Big]^TP_i\Big[-D_ix(t)+A_if(x(t))+B_ig(x(t-\tau_1(t)))+C_i\int_{t-\tau_2(t)}^{t}h(x(s))ds\Big]\\
&+\mathrm{trace}\big[\sigma^T(t,i)P_i\sigma(t,i)\big]+\sum_{j=1}^{N}q_{ij}\Big[x(t)-D_j\int_{t-\beta}^{t}x(s)ds\Big]^TP_j\Big[x(t)-D_j\int_{t-\beta}^{t}x(s)ds\Big]\\
&+x^T(t)Ex(t)-x^T(t-\beta)Ex(t-\beta)
+x^T(t)Fx(t)-(1-\dot\tau_1(t))x^T(t-\tau_1(t))Fx(t-\tau_1(t))\\
&+x^T(t)Gx(t)-(1-\dot\tau_2(t))x^T(t-\tau_2(t))Gx(t-\tau_2(t))
+\tau_2^2h^T(x(t))Hh(x(t))-\tau_2\int_{t-\tau_2}^{t}h^T(x(s))Hh(x(s))ds\\
&+\beta^2x^T(t)Kx(t)-\beta\int_{t-\beta}^{t}x^T(s)Kx(s)ds
+\tau_1^2x^T(t)Lx(t)-\tau_1\int_{t-\tau_1}^{t}x^T(s)Lx(s)ds\\[4pt]
\le{}&x^T(t)(-2P_iD_i)x(t)+2x^T(t)P_iA_if(x(t))+2x^T(t)P_iB_ig(x(t-\tau_1(t)))+2x^T(t)P_iC_i\int_{t-\tau_2(t)}^{t}h(x(s))ds\\
&+2\Big(\int_{t-\beta}^{t}x(s)ds\Big)^TD_i^TP_iD_ix(t)
-2\Big(\int_{t-\beta}^{t}x(s)ds\Big)^TD_i^TP_iA_if(x(t))
-2\Big(\int_{t-\beta}^{t}x(s)ds\Big)^TD_i^TP_iB_ig(x(t-\tau_1(t)))\\
&-2\Big(\int_{t-\beta}^{t}x(s)ds\Big)^TD_i^TP_iC_i\int_{t-\tau_2(t)}^{t}h(x(s))ds
+x^T(t)\sum_{j=1}^{N}q_{ij}P_jx(t)
-2x^T(t)\sum_{j=1}^{N}q_{ij}P_jD_j\int_{t-\beta}^{t}x(s)ds\\
&+\Big(\int_{t-\beta}^{t}x(s)ds\Big)^T\sum_{j=1}^{N}q_{ij}D_j^TP_jD_j\int_{t-\beta}^{t}x(s)ds
+\lambda_ix^T(t)T_{1i}x(t)+\lambda_ix^T(t-\beta)T_{2i}x(t-\beta)\\
&+\lambda_ix^T(t-\tau_1(t))T_{3i}x(t-\tau_1(t))+\lambda_ix^T(t-\tau_2(t))T_{4i}x(t-\tau_2(t))
+x^T(t)Ex(t)-x^T(t-\beta)Ex(t-\beta)\\
&+x^T(t)Fx(t)-(1-\rho_1)x^T(t-\tau_1(t))Fx(t-\tau_1(t))
+x^T(t)Gx(t)-(1-\rho_2)x^T(t-\tau_2(t))Gx(t-\tau_2(t))\\
&+\tau_2^2h^T(x(t))Hh(x(t))-\tau_2\int_{t-\tau_2}^{t}h^T(x(s))Hh(x(s))ds
+\beta^2x^T(t)Kx(t)-\beta\int_{t-\beta}^{t}x^T(s)Kx(s)ds\\
&+\tau_1^2x^T(t)Lx(t)-\tau_1\int_{t-\tau_1}^{t}x^T(s)Lx(s)ds.
\end{aligned}
\tag{6}
\]
By using Lemma 1, we obtain
\[
-\tau_2\int_{t-\tau_2}^{t}h^T(x(s))Hh(x(s))ds
\le-\Big(\int_{t-\tau_2}^{t}h(x(s))ds\Big)^TH\Big(\int_{t-\tau_2}^{t}h(x(s))ds\Big)
\le-\Big(\int_{t-\tau_2(t)}^{t}h(x(s))ds\Big)^TH\Big(\int_{t-\tau_2(t)}^{t}h(x(s))ds\Big);
\tag{7}
\]
\[
-\beta\int_{t-\beta}^{t}x^T(s)Kx(s)ds\le-\Big(\int_{t-\beta}^{t}x(s)ds\Big)^TK\Big(\int_{t-\beta}^{t}x(s)ds\Big);
\tag{8}
\]
\[
-\tau_1\int_{t-\tau_1}^{t}x^T(s)Lx(s)ds\le-\Big(\int_{t-\tau_1}^{t}x(s)ds\Big)^TL\Big(\int_{t-\tau_1}^{t}x(s)ds\Big).
\tag{9}
\]
Moreover, by Assumption 1 it can be derived that
\[
f^T(x(t))Q_1f(x(t))\le x^T(t)U_1Q_1U_1x(t),
\tag{10}
\]
\[
g^T(x(t-\tau_1(t)))Q_2g(x(t-\tau_1(t)))\le x^T(t-\tau_1(t))U_2Q_2U_2x(t-\tau_1(t)),
\tag{11}
\]
\[
h^T(x(t))Q_3h(x(t))\le x^T(t)U_3Q_3U_3x(t).
\tag{12}
\]
Take \(Z(t)=-D_ix(t-\beta)+A_if(x(t))+B_ig(x(t-\tau_1(t)))+C_i\int_{t-\tau_2(t)}^{t}h(x(s))ds\). Then it follows from (1) that
\[
x(t)-x(t-\tau_1(t))-\int_{t-\tau_1(t)}^{t}Z(s)ds-\int_{t-\tau_1(t)}^{t}\sigma(s,r(s))dw(s)=0
\tag{13}
\]
and
\[
x(t)-x(t-\tau_2(t))-\int_{t-\tau_2(t)}^{t}Z(s)ds-\int_{t-\tau_2(t)}^{t}\sigma(s,r(s))dw(s)=0.
\tag{14}
\]
Combining (13) and (14), we have that for any matrices M_i, N_i (i = 1, 2, 3) with appropriate dimensions,
\[
\Big[2x^T(t)M_1+2x^T(t-\tau_1(t))M_2+2\Big(\int_{t-\tau_1(t)}^{t}Z(s)ds\Big)^TM_3\Big]\Big[x(t)-x(t-\tau_1(t))-\int_{t-\tau_1(t)}^{t}Z(s)ds-\int_{t-\tau_1(t)}^{t}\sigma(s,r(s))dw(s)\Big]=0,
\tag{15}
\]
\[
\Big[2x^T(t)N_1+2x^T(t-\tau_2(t))N_2+2\Big(\int_{t-\tau_2(t)}^{t}Z(s)ds\Big)^TN_3\Big]\Big[x(t)-x(t-\tau_2(t))-\int_{t-\tau_2(t)}^{t}Z(s)ds-\int_{t-\tau_2(t)}^{t}\sigma(s,r(s))dw(s)\Big]=0.
\tag{16}
\]
Hence, by (6)–(12) and (15)–(16) we get
\[
\mathbb{E}\mathcal{L}V(t,x(t),i)\le\mathbb{E}\,\zeta^T(t)\,\Xi_i\,\zeta(t),
\tag{17}
\]
where
\[
\zeta^T(t)=\Big[x^T(t)\;\; x^T(t-\beta)\;\; x^T(t-\tau_1(t))\;\; x^T(t-\tau_2(t))\;\; f^T(x(t))\;\; g^T(x(t-\tau_1(t)))\;\; h^T(x(t))\;\; \Big(\int_{t-\beta}^{t}x(s)ds\Big)^T\;\; \Big(\int_{t-\tau_1}^{t}x(s)ds\Big)^T\;\; \Big(\int_{t-\tau_2(t)}^{t}h(x(s))ds\Big)^T\;\; \Big(\int_{t-\tau_1(t)}^{t}Z(s)ds\Big)^T\;\; \Big(\int_{t-\tau_2(t)}^{t}Z(s)ds\Big)^T\Big]
\]
and \(\Xi_i\) is from (4). By condition (4), \(\Xi_i<0\) for all i ∈ S. Let \(\alpha_i=\lambda_{\min}(-\Xi_i)\) for all i ∈ S; then \(\alpha_i>0\) for all i ∈ S. This fact together with (17) yields
\[
\mathbb{E}\mathcal{L}V(t,x(t),i)\le-\alpha_i\mathbb{E}|x(t)|^2\quad\forall\,i\in S.
\tag{18}
\]
On the other hand, it follows from the definition of V(t, x(t), i) and Lemma 4 that
\[
\begin{aligned}
\lambda_{\min}(P_i)\mathbb{E}|x(t)|^2&\le\mathbb{E}V(t,x(t),i)\\
&\le 2\lambda_{\max}(P_i)\mathbb{E}|x(t)|^2+\lambda_{\max}(F)\int_{t-\tau_1}^{t}\mathbb{E}|x(s)|^2ds
+\big(\lambda_{\max}(E)+\beta^2\lambda_{\max}(K)+2\beta\lambda_{\max}(D_iP_iD_i)\big)\int_{t-\beta}^{t}\mathbb{E}|x(s)|^2ds\\
&\quad+\big(\lambda_{\max}(G)+\tau_2^2\lambda_{\max}(U_3HU_3)\big)\int_{t-\tau_2}^{t}\mathbb{E}|x(s)|^2ds\\
&\le 2\lambda_{\max}(P_i)\mathbb{E}|x(t)|^2+\big(\lambda_{\max}(F)+2\beta\lambda_{\max}(D_iP_iD_i)+\lambda_{\max}(E)+\beta^2\lambda_{\max}(K)+\lambda_{\max}(G)+\tau_2^2\lambda_{\max}(U_3HU_3)\big)\int_{t-\tau}^{t}\mathbb{E}|x(s)|^2ds.
\end{aligned}
\tag{19}
\]
Also, by Lemma 3, there exists a unique constant \(\gamma_i>0\) (i ∈ S) such that
\[
2\gamma_i\lambda_{\max}(P_i)+\Theta_i\gamma_i\tau e^{\gamma_i\tau}=\alpha_i\quad\forall\,i\in S,
\tag{20}
\]
where \(\Theta_i:=\lambda_{\max}(F)+2\beta\lambda_{\max}(D_iP_iD_i)+\lambda_{\max}(E)+\beta^2\lambda_{\max}(K)+\lambda_{\max}(G)+\tau_2^2\lambda_{\max}(U_3HU_3)\). Then, applying the generalized Itô formula, and by (18) and (19), we have
\[
\begin{aligned}
\mathbb{E}e^{\gamma_it}V(t,x(t),i)-\mathbb{E}V(0,x(0),r(0))&=\mathbb{E}\int_0^te^{\gamma_is}\big[\gamma_iV(s,x(s),r(s))+\mathcal{L}V(s,x(s),r(s))\big]ds\\
&\le\int_0^te^{\gamma_is}\Big[2\gamma_i\lambda_{\max}(P_i)\mathbb{E}|x(s)|^2+\gamma_i\Theta_i\int_{s-\tau}^{s}\mathbb{E}|x(\theta)|^2d\theta-\alpha_i\mathbb{E}|x(s)|^2\Big]ds.
\end{aligned}
\tag{21}
\]
Notice that
\[
\int_0^te^{\gamma_is}\Big(\int_{s-\tau}^{s}\mathbb{E}|x(\theta)|^2d\theta\Big)ds
\le\int_{-\tau}^{t}\Big(\int_{\theta}^{\theta+\tau}e^{\gamma_is}ds\Big)\mathbb{E}|x(\theta)|^2d\theta
\le\tau e^{\gamma_i\tau}\int_{-\tau}^{t}e^{\gamma_is}\mathbb{E}|x(s)|^2ds.
\]
Then it follows from (20) and (21) that
\[
\mathbb{E}e^{\gamma_it}V(t,x(t),i)-\mathbb{E}V(0,x(0),r(0))
\le\int_0^te^{\gamma_is}\big[2\gamma_i\lambda_{\max}(P_i)+\Theta_i\gamma_i\tau e^{\gamma_i\tau}-\alpha_i\big]\mathbb{E}|x(s)|^2ds+\Theta_i\gamma_i\tau e^{\gamma_i\tau}\int_{-\tau}^{0}e^{\gamma_is}\mathbb{E}|x(s)|^2ds
=\Theta_i\gamma_i\tau e^{\gamma_i\tau}\int_{-\tau}^{0}e^{\gamma_is}\mathbb{E}|x(s)|^2ds.
\tag{22}
\]
Thus, by (19) and (22) we have
\[
\begin{aligned}
\lambda_{\min}(P_i)e^{\gamma_it}\mathbb{E}|x(t)|^2&\le\mathbb{E}e^{\gamma_it}V(t,x(t),i)\le\mathbb{E}V(0,x(0),r(0))+\Theta_i\gamma_i\tau e^{\gamma_i\tau}\int_{-\tau}^{0}e^{\gamma_is}\mathbb{E}|x(s)|^2ds\\
&\le\Theta_i\int_{-\tau}^{0}\mathbb{E}|x(s)|^2ds+\Theta_i\gamma_i\tau e^{\gamma_i\tau}\int_{-\tau}^{0}e^{\gamma_is}\mathbb{E}|x(s)|^2ds
\le\tau\Theta_i(1+\tau\gamma_ie^{\gamma_i\tau})\sup_{-\tau\le\theta\le0}\mathbb{E}|\xi(\theta)|^2,
\end{aligned}
\]
which yields
\[
\mathbb{E}|x(t)|^2\le\frac{\tau\Theta_i(1+\tau\gamma_ie^{\gamma_i\tau})}{\lambda_{\min}(P_i)}e^{-\gamma_it}\sup_{-\tau\le\theta\le0}\mathbb{E}|\xi(\theta)|^2
\le\frac{\tau\max_{i\in S}\{\Theta_i(1+\tau\gamma_ie^{\gamma_i\tau})\}}{\min_{i\in S}\{\lambda_{\min}(P_i)\}}\,e^{-\min_{i\in S}\{\gamma_i\}\,t}\sup_{-\tau\le\theta\le0}\mathbb{E}|\xi(\theta)|^2.
\tag{23}
\]
Therefore, by Definition 1 and (23), the equilibrium point of Eq. (1) is exponentially stable in the mean square. This completes the proof of Theorem 1. □

Theorem 1 is our first main result; it gives a novel exponential stability condition for system (1) by constructing a new Lyapunov–Krasovskii functional. Obviously, the parameters of system (1) are assumed to be known. However, in many applications some system parameters cannot be known exactly in advance. Therefore, it is interesting to study the stability problem of system (1) with unknown parameters.
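In practice, conditions (3)–(4) would be handed to a semidefinite-programming solver. Independently of any solver, a candidate solution returned for one mode can be certified numerically through symmetric eigenvalue tests. This is a solver-free sketch with hypothetical 2×2 toy matrices standing in for P_i and Ξ_i (they are not the matrices of Theorem 1's LMIs, which are 12n×12n):

```python
import numpy as np

def check_mode_lmis(Xi, P, lam):
    """Numerically verify, for one mode i, the two Theorem-1-style conditions:
    P <= lam * I   and   Xi < 0, via eigenvalues of symmetrized matrices."""
    ok_P = np.linalg.eigvalsh(P - lam * np.eye(len(P))).max() <= 0
    ok_Xi = np.linalg.eigvalsh((Xi + Xi.T) / 2).max() < 0
    return ok_P and ok_Xi

# Hypothetical toy data for a single mode.
P = np.array([[2.0, 0.3],
              [0.3, 1.5]])
lam = 3.0
Xi = np.array([[-4.0, 0.5],
               [0.5, -3.0]])
feasible = check_mode_lmis(Xi, P, lam)
```

For the full theorem one would repeat this check for every mode i ∈ S, with Ξ_i assembled from the block entries defined after (4).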
In what follows, we investigate the exponential stability in the mean square of the following delayed Markovian jump stochastic neural network with leakage time-varying delays and unknown parameters:
\[
dx(t)=\Big\{-[D_i+\Delta D(t)]x(t-\beta)+[A_i+\Delta A(t)]f(x(t))+[B_i+\Delta B(t)]g(x(t-\tau_1(t)))+[C_i+\Delta C(t)]\int_{t-\tau_2(t)}^{t}h(x(s))ds\Big\}dt+\sigma\big(x(t),x(t-\beta),x(t-\tau_1(t)),x(t-\tau_2(t)),t,i\big)\,dw(t),
\tag{24}
\]
where ΔA(t), ΔB(t), ΔC(t) and ΔD(t) are unknown matrices denoting time-varying parameter uncertainties that satisfy
\[
[\Delta A(t)\;\;\Delta B(t)\;\;\Delta C(t)\;\;\Delta D(t)]=UF(t)[V_1,V_2,V_3,V_4],
\tag{25}
\]
where U and V_k (k = 1, 2, 3, 4) are known real constant matrices and F(t) is an unknown time-varying matrix-valued function satisfying
\[
F^T(t)F(t)\le I\quad\forall\,t\ge0.
\tag{26}
\]
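The norm-bounded structure (25)–(26) is exactly what lets Lemma 4 eliminate F(t) in the proof below: for any vectors a, b and any contraction F (i.e., FᵀF ≤ I), one has ±2 aᵀPUF V b ≤ aᵀPUUᵀPa + bᵀVᵀVb. A randomized numerical check of this bound, with hypothetical dimensions and random test matrices:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, q = 4, 3, 5
P0 = rng.standard_normal((n, n))
P = P0 @ P0.T + n * np.eye(n)          # P > 0
U = rng.standard_normal((n, p))
V = rng.standard_normal((p, q))

def contraction(shape, rng):
    """Random F with F^T F <= I (normalize by the largest singular value)."""
    F = rng.standard_normal(shape)
    return F / (np.linalg.svd(F, compute_uv=False)[0] + 1e-12)

ok = True
for _ in range(200):
    F = contraction((p, p), rng)
    a = rng.standard_normal(n)
    b = rng.standard_normal(q)
    lhs = -2 * a @ P @ U @ F @ V @ b                      # uncertain cross term
    rhs = a @ P @ U @ U.T @ P @ a + b @ V.T @ V @ b      # F-free upper bound
    ok = ok and (lhs <= rhs + 1e-9)
```

This is why the LMIs of Theorem 2 below contain the F(t)-independent blocks P_iU, V_kᵀV_k and the scalar weights on the identity.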
Definition 2 The trivial solution of system (24) is said to be robustly exponentially stable in the mean square if it is exponentially stable in the mean square for all admissible unknown parameters.

Theorem 2 Under Assumptions 1–4, the equilibrium point of system (24) is robustly exponentially stable in the mean square if there exist positive scalars λ_i (i ∈ S), positive diagonal matrices Q_1, Q_2, Q_3, positive definite matrices E, F, G, H, K, L, P_i (i ∈ S), and any matrices M_i, N_i (i = 1, 2, 3) with appropriate dimensions such that the following LMIs hold:
\[
P_i\le\lambda_iI,
\tag{27}
\]
\[
\Xi_i=\begin{bmatrix}
\Xi_{11} & 0 & \Xi_{13} & \Xi_{14} & P_iA_i & P_iB_i & 0 & \Xi_{18} & 0 & P_iC_i & \Xi_{1,11} & \Xi_{1,12} & P_iU & 0\\
\ast & \Xi_{22} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
\ast & \ast & \Xi_{33} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \Xi_{3,11} & 0 & 0 & 0\\
\ast & \ast & \ast & \Xi_{44} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & \Xi_{4,12} & 0 & 0\\
\ast & \ast & \ast & \ast & \Xi_{55} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \Xi_{66} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & \Xi_{77} & 0 & 0 & 0 & 0 & 0 & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & -K & 0 & 0 & 0 & 0 & 0 & \Xi_{8,14}\\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & -L & 0 & 0 & 0 & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \Xi_{10,10} & 0 & 0 & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \Xi_{11,11} & 0 & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \Xi_{12,12} & 0 & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & -\tfrac14I & 0\\
\ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & \ast & -\tfrac14I
\end{bmatrix}<0,
\tag{28}
\]
where
\[
\begin{aligned}
\Xi_{11}&=-2P_iD_i+\sum_{j=1}^{N}q_{ij}P_j+\lambda_iT_{1i}+E+F+G+U_1Q_1U_1+U_3Q_3U_3+\beta^2K+M_1^T+M_1+N_1^T+N_1+\tau_1^2L,\\
\Xi_{13}&=-M_1+M_2^T,\quad \Xi_{14}=-N_1+N_2^T,\quad \Xi_{18}=D_i^TP_iD_i-\sum_{j=1}^{N}q_{ij}P_jD_j,\quad \Xi_{1,11}=-M_1+M_3^T,\quad \Xi_{1,12}=-N_1+N_3^T,\\
\Xi_{22}&=-E+\lambda_iT_{2i}+2V_4^TV_4,\quad \Xi_{33}=-(1-\rho_1)F+\lambda_iT_{3i}+U_2Q_2U_2-M_2^T-M_2,\quad \Xi_{3,11}=-M_2-M_3^T,\\
\Xi_{44}&=-(1-\rho_2)G+\lambda_iT_{4i}-N_2^T-N_2,\quad \Xi_{4,12}=-N_2-N_3^T,\\
\Xi_{55}&=-Q_1+2V_1^TV_1,\quad \Xi_{66}=-Q_2+2V_2^TV_2,\quad \Xi_{77}=-Q_3+\tau_2^2H,\\
\Xi_{8,14}&=D_i^TP_iU,\quad \Xi_{10,10}=-H+2V_3^TV_3,\quad \Xi_{11,11}=-M_3^T-M_3,\quad \Xi_{12,12}=-N_3^T-N_3.
\end{aligned}
\]

Proof It is not difficult to check that system (24) is equivalent to the following form:
\[
d\Big[x(t)-D_i\int_{t-\beta}^{t}x(s)ds\Big]=\Big\{-D_ix(t)-\Delta D(t)x(t-\beta)+[A_i+\Delta A(t)]f(x(t))+[B_i+\Delta B(t)]g(x(t-\tau_1(t)))+[C_i+\Delta C(t)]\int_{t-\tau_2(t)}^{t}h(x(s))ds\Big\}dt+\sigma\big(x(t),x(t-\beta),x(t-\tau_1(t)),x(t-\tau_2(t)),t,i\big)\,dw(t).
\tag{29}
\]
Then, let us consider the same Lyapunov–Krasovskii functional as in Theorem 1. It follows from (2) and (29) that
\[
\begin{aligned}
\mathcal{L}V(t,x(t),i)={}&2\Big[x(t)-D_i\int_{t-\beta}^{t}x(s)ds\Big]^TP_i\Big\{-D_ix(t)-\Delta D(t)x(t-\beta)+[A_i+\Delta A(t)]f(x(t))\\
&\quad+[B_i+\Delta B(t)]g(x(t-\tau_1(t)))+[C_i+\Delta C(t)]\int_{t-\tau_2(t)}^{t}h(x(s))ds\Big\}+\mathrm{trace}\big[\sigma^T(t,i)P_i\sigma(t,i)\big]\\
&+\sum_{j=1}^{N}q_{ij}\Big[x(t)-D_j\int_{t-\beta}^{t}x(s)ds\Big]^TP_j\Big[x(t)-D_j\int_{t-\beta}^{t}x(s)ds\Big]\\
&+x^T(t)Ex(t)-x^T(t-\beta)Ex(t-\beta)
+x^T(t)Fx(t)-(1-\dot\tau_1(t))x^T(t-\tau_1(t))Fx(t-\tau_1(t))\\
&+x^T(t)Gx(t)-(1-\dot\tau_2(t))x^T(t-\tau_2(t))Gx(t-\tau_2(t))
+\tau_2^2h^T(x(t))Hh(x(t))-\tau_2\int_{t-\tau_2}^{t}h^T(x(s))Hh(x(s))ds\\
&+\beta^2x^T(t)Kx(t)-\beta\int_{t-\beta}^{t}x^T(s)Kx(s)ds
+\tau_1^2x^T(t)Lx(t)-\tau_1\int_{t-\tau_1}^{t}x^T(s)Lx(s)ds.
\end{aligned}
\tag{30}
\]
To this end, relative to Theorem 1 we only need to estimate the following additional terms by using Lemma 4 and (25)–(26):
\[
-2x^T(t)P_i\Delta D(t)x(t-\beta)=-2x^T(t)P_iUF(t)V_4x(t-\beta)
\le x^T(t)P_iUF(t)F^T(t)U^TP_ix(t)+x^T(t-\beta)V_4^TV_4x(t-\beta)
\le x^T(t)P_iUU^TP_ix(t)+x^T(t-\beta)V_4^TV_4x(t-\beta);
\tag{31}
\]
\[
2x^T(t)P_i\Delta A(t)f(x(t))=2x^T(t)P_iUF(t)V_1f(x(t))
\le x^T(t)P_iUU^TP_ix(t)+f^T(x(t))V_1^TV_1f(x(t));
\tag{32}
\]
\[
2x^T(t)P_i\Delta B(t)g(x(t-\tau_1(t)))=2x^T(t)P_iUF(t)V_2g(x(t-\tau_1(t)))
\le x^T(t)P_iUU^TP_ix(t)+g^T(x(t-\tau_1(t)))V_2^TV_2g(x(t-\tau_1(t)));
\tag{33}
\]
\[
2x^T(t)P_i\Delta C(t)\int_{t-\tau_2(t)}^{t}h(x(s))ds=2x^T(t)P_iUF(t)V_3\int_{t-\tau_2(t)}^{t}h(x(s))ds
\le x^T(t)P_iUU^TP_ix(t)+\Big(\int_{t-\tau_2(t)}^{t}h(x(s))ds\Big)^TV_3^TV_3\int_{t-\tau_2(t)}^{t}h(x(s))ds;
\tag{34}
\]
\[
2\Big(\int_{t-\beta}^{t}x(s)ds\Big)^TD_i^TP_i\Delta D(t)x(t-\beta)=2\Big(\int_{t-\beta}^{t}x(s)ds\Big)^TD_i^TP_iUF(t)V_4x(t-\beta)
\le\Big(\int_{t-\beta}^{t}x(s)ds\Big)^TD_i^TP_iUU^TP_iD_i\int_{t-\beta}^{t}x(s)ds+x^T(t-\beta)V_4^TV_4x(t-\beta);
\tag{35}
\]
\[
-2\Big(\int_{t-\beta}^{t}x(s)ds\Big)^TD_i^TP_i\Delta A(t)f(x(t))=-2\Big(\int_{t-\beta}^{t}x(s)ds\Big)^TD_i^TP_iUF(t)V_1f(x(t))
\le\Big(\int_{t-\beta}^{t}x(s)ds\Big)^TD_i^TP_iUU^TP_iD_i\int_{t-\beta}^{t}x(s)ds+f^T(x(t))V_1^TV_1f(x(t));
\tag{36}
\]
\[
-2\Big(\int_{t-\beta}^{t}x(s)ds\Big)^TD_i^TP_i\Delta B(t)g(x(t-\tau_1(t)))=-2\Big(\int_{t-\beta}^{t}x(s)ds\Big)^TD_i^TP_iUF(t)V_2g(x(t-\tau_1(t)))
\le\Big(\int_{t-\beta}^{t}x(s)ds\Big)^TD_i^TP_iUU^TP_iD_i\int_{t-\beta}^{t}x(s)ds+g^T(x(t-\tau_1(t)))V_2^TV_2g(x(t-\tau_1(t)));
\tag{37}
\]
\[
-2\Big(\int_{t-\beta}^{t}x(s)ds\Big)^TD_i^TP_i\Delta C(t)\int_{t-\tau_2(t)}^{t}h(x(s))ds=-2\Big(\int_{t-\beta}^{t}x(s)ds\Big)^TD_i^TP_iUF(t)V_3\int_{t-\tau_2(t)}^{t}h(x(s))ds
\le\Big(\int_{t-\beta}^{t}x(s)ds\Big)^TD_i^TP_iUU^TP_iD_i\int_{t-\beta}^{t}x(s)ds+\Big(\int_{t-\tau_2(t)}^{t}h(x(s))ds\Big)^TV_3^TV_3\int_{t-\tau_2(t)}^{t}h(x(s))ds.
\tag{38}
\]
Then, along the same lines as for Theorem 1, we can obtain the desired result by applying Lemmas 2–3 and (31)–(38). This completes the proof of Theorem 2. □

Remark 1 Theorem 1 gives sufficient conditions under which the equilibrium point of the Markovian jump stochastic neural network with time delays in the leakage terms (1) is exponentially stable in the mean square, whereas Theorem 2 further presents sufficient conditions for the corresponding system with unknown parameters (24) to be robustly exponentially stable in the mean square. It is worth pointing out that some useful techniques, such as the method of model transformation, Lyapunov stability theory, stochastic analysis and linear matrix inequalities, have been applied in the proofs of Theorems 1 and 2 because time delays in the leakage terms are considered in this paper. In particular, both Theorems 1 and 2 depend on all the delay constants τ_1, τ_2, ρ_1, ρ_2, β, and thus the obtained results are less conservative.

Remark 2 In [19–25], the authors considered only deterministic neural networks with time delays in the leakage terms. To the best of our knowledge, the stability problem of Markovian jump stochastic neural networks with time delays in the leakage terms (1) has not been studied in the previous literature; hence, the LMI criteria from that literature do not apply to our setting.

Remark 3 We now consider some special cases of our models. If we ignore the distributed delays, then we can rewrite systems (1) and (24), respectively, as follows:
\[
dx(t)=\big[-D_ix(t-\beta)+A_if(x(t))+B_ig(x(t-\tau_1(t)))\big]dt+\sigma\big(x(t),x(t-\beta),x(t-\tau_1(t)),t,i\big)\,dw(t),
\tag{39}
\]
d x(t) = {−[Di + D(t)]x(t − β) + [Ai + A(t)] f (x(t)) + [Bi + B(t)]g(x(t − τ1 (t)))}dt + σ (x(t), x(t − β), x(t − τ1 (t)), t, i)dw(t).
(40)
By Theorems 1 and 2, we obtain the following results.

Corollary 1 Under Assumptions 1–4, the equilibrium point of system (39) is exponentially stable in the mean square if there exist positive scalars λi (i ∈ S), positive diagonal matrices Q1, Q2, Q3, positive definite matrices E, F, K, L, Pi (i ∈ S), and any matrices Mi, Ni (i = 1, 2, 3) with appropriate dimensions such that the following LMIs hold:

$$
P_i \le \lambda_i I, \quad (41)
$$

$$
\begin{bmatrix}
\Xi_{11} & 0 & \Xi_{13} & P_i A_i & P_i B_i & 0 & \Xi_{18} & 0 & \Xi_{1,11} \\
* & -E+\lambda_i T_{2i} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & \Xi_{33} & 0 & 0 & 0 & 0 & 0 & \Xi_{3,11} \\
* & * & * & -Q_1 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & -Q_2 & 0 & 0 & 0 & 0 \\
* & * & * & * & * & -Q_3 & 0 & 0 & 0 \\
* & * & * & * & * & * & -K & 0 & 0 \\
* & * & * & * & * & * & * & -L & 0 \\
* & * & * & * & * & * & * & * & \Xi_{11,11}
\end{bmatrix} < 0, \quad (42)
$$
where

$$
\begin{aligned}
\Xi_{11} &= -2P_i D_i + \sum_{j=1}^{N} q_{ij} P_j + \lambda_i T_{1i} + E + F + U_1 Q_1 U_1 + U_3 Q_3 U_3 + \beta^2 K \\
&\quad + M_1^T + M_1 + N_1^T + N_1 + \tau_1^2 L, \qquad \Xi_{13} = -M_1 + M_2^T, \\
\Xi_{18} &= D_i^T P_i D_i - \sum_{j=1}^{N} q_{ij} P_j D_j, \qquad \Xi_{1,11} = -M_1 + M_3^T, \\
\Xi_{33} &= -(1-\rho_1)F + \lambda_i T_{3i} + U_2 Q_2 U_2 - M_2^T - M_2, \qquad \Xi_{3,11} = -M_2 - M_3^T, \\
\Xi_{11,11} &= -M_3^T - M_3.
\end{aligned}
$$

Corollary 2 Under Assumptions 1–4, the equilibrium point of system (40) is robustly exponentially stable in the mean square if there exist positive scalars λi (i ∈ S), positive diagonal matrices Q1, Q2, Q3, positive definite matrices E, F, K, L, Pi (i ∈ S), and any matrices Mi, Ni (i = 1, 2, 3) with appropriate dimensions such that the following LMIs hold:
$$
P_i \le \lambda_i I, \quad (43)
$$

$$
\begin{bmatrix}
\Xi_{11} & 0 & \Xi_{13} & P_i A_i & P_i B_i & 0 & \Xi_{18} & 0 & \Xi_{1,11} & P_i U & 0 \\
* & \Xi_{22} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & \Xi_{33} & 0 & 0 & 0 & 0 & 0 & \Xi_{3,11} & 0 & 0 \\
* & * & * & \Xi_{55} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & \Xi_{66} & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & * & -Q_3 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & * & * & -K & 0 & 0 & 0 & 0 \\
* & * & * & * & * & * & * & -L & 0 & 0 & \Xi_{8,14} \\
* & * & * & * & * & * & * & * & \Xi_{11,11} & 0 & 0 \\
* & * & * & * & * & * & * & * & * & -\tfrac{1}{3}I & 0 \\
* & * & * & * & * & * & * & * & * & * & -\tfrac{1}{4}I
\end{bmatrix} < 0, \quad (44)
$$
where

$$
\begin{aligned}
\Xi_{11} &= -2P_i D_i + \sum_{j=1}^{N} q_{ij} P_j + \lambda_i T_{1i} + E + F + U_1 Q_1 U_1 + U_3 Q_3 U_3 + \beta^2 K \\
&\quad + M_1^T + M_1 + N_1^T + N_1 + \tau_1^2 L, \qquad \Xi_{13} = -M_1 + M_2^T, \\
\Xi_{18} &= D_i^T P_i D_i - \sum_{j=1}^{N} q_{ij} P_j D_j, \qquad \Xi_{1,11} = -M_1 + M_3^T, \\
\Xi_{22} &= -E + \lambda_i T_{2i} + 2V_4^T V_4, \qquad \Xi_{33} = -(1-\rho_1)F + \lambda_i T_{3i} + U_2 Q_2 U_2 - M_2^T - M_2, \\
\Xi_{3,11} &= -M_2 - M_3^T, \qquad \Xi_{55} = -Q_1 + 2V_1^T V_1, \qquad \Xi_{66} = -Q_2 + 2V_2^T V_2, \\
\Xi_{8,14} &= D_i^T P_i U, \qquad \Xi_{11,11} = -M_3^T - M_3.
\end{aligned}
$$

Letting τ1(t) = τ1 in (39) and (40), we get the following Markovian jump stochastic neural networks with constant delays:

d x(t) = [−Di x(t − β) + Ai f (x(t)) + Bi g(x(t − τ1))]dt + σ (x(t), x(t − β), x(t − τ1), t, i)dw(t),  (45)
d x(t) = {−[Di + D(t)]x(t − β) + [Ai + A(t)] f (x(t)) + [Bi + B(t)]g(x(t − τ1 ))}dt + σ (x(t), x(t − β), x(t − τ1 ), t, i)dw(t).
(46)
It is easy to get the following results by using similar methods as in Theorems 1 and 2.

Corollary 3 Under Assumptions 1, 3, 4, the equilibrium point of system (45) is exponentially stable in the mean square if there exist positive scalars λi (i ∈ S), positive diagonal matrices Q1, Q2, Q3, and positive definite matrices E, F, K, L, Pi (i ∈ S) such that the following LMIs hold:

$$
P_i \le \lambda_i I, \quad (47)
$$

$$
\begin{bmatrix}
\Xi_{11} & 0 & 0 & P_i A_i & P_i B_i & 0 & \Xi_{18} & 0 \\
* & -E+\lambda_i T_{2i} & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & \Xi_{33} & 0 & 0 & 0 & 0 & 0 \\
* & * & * & -Q_1 & 0 & 0 & 0 & 0 \\
* & * & * & * & -Q_2 & 0 & 0 & 0 \\
* & * & * & * & * & -Q_3 & 0 & 0 \\
* & * & * & * & * & * & -K & 0 \\
* & * & * & * & * & * & * & -L
\end{bmatrix} < 0, \quad (48)
$$
where

$$
\Xi_{11} = -2P_i D_i + \sum_{j=1}^{N} q_{ij} P_j + \lambda_i T_{1i} + E + F + U_1 Q_1 U_1 + U_3 Q_3 U_3 + \beta^2 K + \tau_1^2 L,
$$

$$
\Xi_{18} = D_i^T P_i D_i - \sum_{j=1}^{N} q_{ij} P_j D_j, \qquad \Xi_{33} = -F + \lambda_i T_{3i} + U_2 Q_2 U_2.
$$

Corollary 4 Under Assumptions 1, 3, 4, the equilibrium point of system (46) is robustly exponentially stable in the mean square, if there exist positive scalars λi (i ∈ S), positive
diagonal matrices Q 1 , Q 2 , Q 3 , positive definite matrices E, F, K , L , Pi (i ∈ S) such that the following LMIs hold:
$$
P_i \le \lambda_i I, \quad (49)
$$

$$
\begin{bmatrix}
\Xi_{11} & 0 & 0 & P_i A_i & P_i B_i & 0 & \Xi_{18} & 0 & P_i U & 0 \\
* & \Xi_{22} & 0 & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & \Xi_{33} & 0 & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & \Xi_{55} & 0 & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & \Xi_{66} & 0 & 0 & 0 & 0 & 0 \\
* & * & * & * & * & -Q_3 & 0 & 0 & 0 & 0 \\
* & * & * & * & * & * & -K & 0 & 0 & 0 \\
* & * & * & * & * & * & * & -L & 0 & \Xi_{8,14} \\
* & * & * & * & * & * & * & * & -\tfrac{1}{3}I & 0 \\
* & * & * & * & * & * & * & * & * & -\tfrac{1}{4}I
\end{bmatrix} < 0, \quad (50)
$$
where

$$
\begin{aligned}
\Xi_{11} &= -2P_i D_i + \sum_{j=1}^{N} q_{ij} P_j + \lambda_i T_{1i} + E + F + U_1 Q_1 U_1 + U_3 Q_3 U_3 + \beta^2 K + \tau_1^2 L, \\
\Xi_{18} &= D_i^T P_i D_i - \sum_{j=1}^{N} q_{ij} P_j D_j, \qquad \Xi_{22} = -E + \lambda_i T_{2i} + 2V_4^T V_4, \\
\Xi_{33} &= -F + \lambda_i T_{3i} + U_2 Q_2 U_2, \qquad \Xi_{55} = -Q_1 + 2V_1^T V_1, \\
\Xi_{66} &= -Q_2 + 2V_2^T V_2, \qquad \Xi_{8,14} = D_i^T P_i U.
\end{aligned}
$$

Remark 4 To the best of our knowledge, even the simplified systems (39), (40), (45) and (46) have not been investigated in the previous literature, since noise disturbances are considered in these systems. Therefore, the results presented in Corollaries 1–4 are essentially new.

Remark 5 If we do not consider the effect of the Markovian jump parameters, i.e., the Markov chain {r(t), t ≥ 0} takes only the single value 1, then Theorems 1, 2 and Corollaries 1–4 reduce to the case of stochastic neural networks with time delays in the leakage terms. Therefore, our results improve and generalize the corresponding results for stochastic neural networks with time delays in the leakage terms and known or unknown parameters.
4 Illustrative Examples

In this section, two numerical examples are given to illustrate the effectiveness of the obtained results.

Example 1 Consider a two-dimensional Markovian jump stochastic neural network with mixed time delays and time delays in the leakage terms:
$$
\begin{aligned}
d x(t) ={} & \Big[ -D(r(t))x(t-\beta) + A(r(t))f(x(t)) + B(r(t))g(x(t-\tau_1(t))) \\
& + C(r(t)) \int_{t-\tau_2(t)}^{t} h(x(s))\,ds \Big]\,dt \\
& + \sigma(x(t), x(t-\beta), x(t-\tau_1(t)), x(t-\tau_2(t)), t, r(t))\,dw(t), \quad (51)
\end{aligned}
$$
where x(t) = (x1(t), x2(t))^T, β = 0.3, τ1(t) = 0.5 cos t + 1.6, τ2(t) = 0.7 sin t + 1.2, w(t) is a two-dimensional Brownian motion, and r(t) is a right-continuous Markov chain taking values in S = {1, 2} with generator

$$
Q = \begin{bmatrix} -5 & 5 \\ 3 & -3 \end{bmatrix}.
$$

Let
$$
f_i(x_i) = g_i(x_i) = h_i(x_i) =
\begin{cases}
0.01\tan(x_i), & x_i \le 0, \\
0.02\, x_i, & x_i > 0,
\end{cases} \quad (i = 1, 2),
$$

$$
\sigma(x(t), x(t-\beta), x(t-\tau_1(t)), x(t-\tau_2(t)), t, 1) =
\begin{bmatrix}
0.3x_1(t) & 0.2(x_1(t-\beta) + x_1(t-\tau_1(t))) \\
0.1x_1(t-\tau_1(t)) & 0.2(x_1(t) + x_2(t-\tau_2(t)))
\end{bmatrix},
$$

$$
\sigma(x(t), x(t-\beta), x(t-\tau_1(t)), x(t-\tau_2(t)), t, 2) =
\begin{bmatrix}
0.3x_1(t) + 0.2x_2(t-\beta) & 0.4x_1(t-\beta) \\
0.2(x_1(t) + x_2(t-\beta)) & 0.4x_2(t-\tau_2(t))
\end{bmatrix}.
$$
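For simulation, the switching signal r(t) must be sampled from its generator. The following is a minimal sketch, not taken from the paper; `simulate_markov_chain` is a hypothetical helper, with S = {1, 2} encoded as {0, 1}:

```python
import numpy as np

def simulate_markov_chain(Q, r0, T, rng):
    """Sample a path of a continuous-time Markov chain with generator Q.

    Q[i, j] (i != j) is the jump rate from state i to j, and the holding
    time in state i is Exp(-Q[i, i]).  Returns jump times and visited states.
    """
    times, states = [0.0], [r0]
    t, r = 0.0, r0
    while True:
        rate = -Q[r, r]
        t += rng.exponential(1.0 / rate)      # holding time in state r
        if t >= T:
            break
        probs = Q[r].copy()
        probs[r] = 0.0
        probs /= rate                          # embedded-chain jump probabilities
        r = rng.choice(len(Q), p=probs)
        times.append(t)
        states.append(r)
    return np.array(times), np.array(states)

# Generator of Example 1
Q = np.array([[-5.0, 5.0], [3.0, -3.0]])
rng = np.random.default_rng(0)
times, states = simulate_markov_chain(Q, r0=0, T=10.0, rng=rng)
```

With two states, every jump alternates the mode, so the sampled path is a sequence of alternating holding intervals.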
It is easy to check that system (51) satisfies Assumptions 1–4. The other parameters of the network (51) are given as follows:

$$
A_1 = \begin{bmatrix} 0.3 & 0.2 \\ -0.3 & 0.4 \end{bmatrix}, \;
B_1 = \begin{bmatrix} 0.3 & 0.2 \\ -0.1 & 0.4 \end{bmatrix}, \;
C_1 = \begin{bmatrix} 0.3 & -0.2 \\ 0.1 & -0.5 \end{bmatrix}, \;
D_1 = \begin{bmatrix} 1.2 & 0 \\ 0 & 1.5 \end{bmatrix},
$$

$$
A_2 = \begin{bmatrix} -0.2 & 0.5 \\ 0.3 & 0.2 \end{bmatrix}, \;
B_2 = \begin{bmatrix} 0.2 & -0.4 \\ 0.2 & 0.3 \end{bmatrix}, \;
C_2 = \begin{bmatrix} -0.4 & 0.2 \\ 0.4 & 0.5 \end{bmatrix}, \;
D_2 = \begin{bmatrix} 1.4 & 0 \\ 0 & 0.9 \end{bmatrix}.
$$

By using the Matlab LMI toolbox, we obtain the following feasible solution for the LMIs (3) and (4):

$$
E = \begin{bmatrix} 6.7742 & -0.0678 \\ -0.0678 & 5.2842 \end{bmatrix}, \;
F = \begin{bmatrix} 6.6271 & -0.0739 \\ -0.0739 & 5.0128 \end{bmatrix}, \;
G = \begin{bmatrix} 16.3814 & -0.0377 \\ -0.0377 & 15.5809 \end{bmatrix},
$$

$$
H = \begin{bmatrix} 95.3154 & 2.2690 \\ 2.2690 & 98.5631 \end{bmatrix}, \;
K = \begin{bmatrix} 136.0551 & -0.2243 \\ -0.2243 & 124.2562 \end{bmatrix}, \;
L = \begin{bmatrix} 0.6084 & -0.0210 \\ -0.0210 & 0.1461 \end{bmatrix},
$$

$$
P_1 = \begin{bmatrix} 26.6306 & 0.0973 \\ 0.0973 & 23.4158 \end{bmatrix}, \;
P_2 = \begin{bmatrix} 24.8780 & 0.0258 \\ 0.0258 & 28.5689 \end{bmatrix}, \;
Q_1 = \begin{bmatrix} 121.6404 & 0 \\ 0 & 121.6404 \end{bmatrix},
$$

$$
Q_2 = \begin{bmatrix} 112.5268 & 0 \\ 0 & 112.5268 \end{bmatrix}, \;
Q_3 = \begin{bmatrix} 393.9196 & 0 \\ 0 & 393.9196 \end{bmatrix}, \quad
\lambda_1 = 27.5548, \; \lambda_2 = 28.8308,
$$
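One of the reported conditions, Pi ≤ λi I, can be spot-checked numerically against the values above. A sketch (the helper name `satisfies_bound` is ours, not the paper's):

```python
import numpy as np

# Reported feasible values for Example 1 (from the LMI toolbox output above).
P1, lam1 = np.array([[26.6306, 0.0973], [0.0973, 23.4158]]), 27.5548
P2, lam2 = np.array([[24.8780, 0.0258], [0.0258, 28.5689]]), 28.8308

def satisfies_bound(P, lam):
    """Check the LMI P <= lam*I, i.e. lam*I - P is positive semidefinite."""
    return bool(np.all(np.linalg.eigvalsh(lam * np.eye(len(P)) - P) >= 0))

print(satisfies_bound(P1, lam1), satisfies_bound(P2, lam2))  # True True
```

The same eigenvalue check applies to any of the other reported positive definite matrices.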
[Fig. 1 The state response of model 1 in Example 1: x1 and x2 versus t]
$$
M_1 = \begin{bmatrix} -9.6861 & -0.0142 \\ -0.0142 & -10.0404 \end{bmatrix}, \;
M_2 = \begin{bmatrix} 9.9504 & 0.0086 \\ 0.0086 & 10.0581 \end{bmatrix}, \;
M_3 = \begin{bmatrix} 10.2823 & -0.0032 \\ -0.0032 & 10.1245 \end{bmatrix},
$$

$$
N_1 = \begin{bmatrix} -9.6715 & -0.0158 \\ -0.0158 & -10.0501 \end{bmatrix}, \;
N_2 = \begin{bmatrix} 10.0830 & 0.0035 \\ 0.0035 & 10.1021 \end{bmatrix}, \;
N_3 = \begin{bmatrix} 10.2547 & -0.0023 \\ -0.0023 & 10.1505 \end{bmatrix}.
$$
Therefore, it follows from Theorem 1 that the network (51) is exponentially stable in the mean square. The simulations use the Euler–Maruyama numerical scheme with T = 10 and step size δt = 0.02. Figure 1 shows the state response of model 1 [i.e., the network (51) when r(t) = 1] with the initial condition [0.6, −0.5]^T for −2.1 ≤ t ≤ 0, and Fig. 2 shows the state response of model 2 [i.e., the network (51) when r(t) = 2] with the initial condition [−0.8, 0.9]^T for −2.1 ≤ t ≤ 0.

Example 2 Consider a two-dimensional Markovian jump stochastic neural network with leakage time-varying delays and unknown parameters:

$$
\begin{aligned}
d x(t) ={} & \Big\{ -[D_i + D(t)]x(t-\beta) + [A_i + A(t)]f(x(t)) + [B_i + B(t)]g(x(t-\tau_1(t))) \\
& + [C_i + C(t)] \int_{t-\tau_2(t)}^{t} h(x(s))\,ds \Big\}\,dt \\
& + \sigma(x(t), x(t-\beta), x(t-\tau_1(t)), x(t-\tau_2(t)), t, i)\,dw(t), \quad (52)
\end{aligned}
$$
where x(t) = (x1(t), x2(t))^T, β = 0.2, τ1(t) = 0.3 cos t + 1, τ2(t) = 0.5 sin t + 0.7, w(t) is a two-dimensional Brownian motion, and r(t) is a right-continuous Markov chain taking values in S = {1, 2} with generator

$$
Q = \begin{bmatrix} -3 & 3 \\ 2 & -2 \end{bmatrix}.
$$
[Fig. 2 The state response of model 2 in Example 1: x1 and x2 versus t]
Let

$$
f_i(x_i) = g_i(x_i) = h_i(x_i) =
\begin{cases}
0.02\tan(x_i), & x_i \le 0, \\
-0.01\, x_i, & x_i > 0,
\end{cases} \quad (i = 1, 2),
$$

$$
\sigma(x(t), x(t-\beta), x(t-\tau_1(t)), x(t-\tau_2(t)), t, 1) =
\begin{bmatrix}
0.3(x_2(t-\tau_2(t)) + x_1(t-\tau_1(t))) & 0.3x_2(t) \\
0.4x_1(t) + 0.3x_2(t-\beta) & 0.3x_1(t-\beta)
\end{bmatrix},
$$

$$
\sigma(x(t), x(t-\beta), x(t-\tau_1(t)), x(t-\tau_2(t)), t, 2) =
\begin{bmatrix}
0.4x_2(t) & 0.5x_1(t-\beta) + 0.3x_1(t-\tau_1(t)) \\
0.4x_1(t-\tau_2(t)) & 0.5x_1(t) + 0.3x_2(t-\beta)
\end{bmatrix}.
$$
It is easy to check that system (52) satisfies Assumptions 1–4. The other parameters of the network (52) are given as follows:

$$
A_1 = \begin{bmatrix} 0.1 & 0.2 \\ 0.1 & 0.2 \end{bmatrix}, \;
B_1 = \begin{bmatrix} 0.2 & -0.2 \\ 0.1 & 0.2 \end{bmatrix}, \;
C_1 = \begin{bmatrix} -0.1 & 0.3 \\ 0.2 & 0.1 \end{bmatrix}, \;
D_1 = \begin{bmatrix} 1.8 & 0 \\ 0 & 1.5 \end{bmatrix},
$$

$$
A_2 = \begin{bmatrix} 0.1 & -0.2 \\ 0.1 & 0.4 \end{bmatrix}, \;
B_2 = \begin{bmatrix} -0.1 & 0.2 \\ 0.1 & 0.1 \end{bmatrix}, \;
C_2 = \begin{bmatrix} 0.3 & -0.2 \\ 0.1 & 0.1 \end{bmatrix}, \;
D_2 = \begin{bmatrix} 1.9 & 0 \\ 0 & 1.2 \end{bmatrix},
$$

$$
U = \begin{bmatrix} 0.1 & 0 \\ 0.1 & 0.2 \end{bmatrix}, \;
V = \begin{bmatrix} 0.1 & -0.1 \\ 0.1 & 0.2 \end{bmatrix}, \;
F(t) = \begin{bmatrix} \sin t & 0 \\ 0 & \cos t \end{bmatrix}, \quad
A(t) = B(t) = C(t) = U F(t) V.
$$

By using the Matlab LMI toolbox, we obtain the following feasible solution for the LMIs (27) and (29):

$$
E = \begin{bmatrix} 116.6703 & -0.0921 \\ -0.0921 & 140.2164 \end{bmatrix}, \;
F = \begin{bmatrix} 69.7374 & -0.1558 \\ -0.1558 & 57.0764 \end{bmatrix}, \;
G = \begin{bmatrix} 85.4763 & -0.1735 \\ -0.1735 & 71.5262 \end{bmatrix},
$$

$$
H = \begin{bmatrix} 2.2133 & 0.0180 \\ 0.0180 & 1.7543 \end{bmatrix}, \;
K = 10^3 \times \begin{bmatrix} 487.8127 & 23.7849 \\ 23.7849 & 474.7595 \end{bmatrix},
$$
[Fig. 3 The state response of model 1 in Example 2: x1 and x2 versus t]
$$
L = \begin{bmatrix} 12.8758 & -0.1420 \\ -0.1420 & 1.5130 \end{bmatrix}, \;
P_1 = \begin{bmatrix} 188.3577 & 0.4180 \\ 0.4180 & 191.6728 \end{bmatrix}, \;
P_2 = \begin{bmatrix} 187.6404 & -0.0945 \\ -0.0945 & 216.1396 \end{bmatrix},
$$

$$
Q_1 = \begin{bmatrix} 515.4508 & 0 \\ 0 & 515.4508 \end{bmatrix}, \;
Q_2 = \begin{bmatrix} 560.8831 & 0 \\ 0 & 560.8831 \end{bmatrix}, \;
Q_3 = \begin{bmatrix} 971.8241 & 0 \\ 0 & 971.8241 \end{bmatrix},
$$

$$
M_1 = \begin{bmatrix} -58.5844 & -0.0229 \\ -0.0229 & -61.6216 \end{bmatrix}, \;
M_2 = \begin{bmatrix} 59.7396 & 0.0207 \\ 0.0207 & 61.6510 \end{bmatrix}, \;
M_3 = \begin{bmatrix} 63.4583 & -0.0201 \\ -0.0201 & 62.0043 \end{bmatrix},
$$

$$
N_1 = \begin{bmatrix} -58.3946 & -0.0271 \\ -0.0271 & -61.6650 \end{bmatrix}, \;
N_2 = \begin{bmatrix} 60.6690 & 0.0132 \\ 0.0132 & 61.7154 \end{bmatrix}, \;
N_3 = \begin{bmatrix} 63.2711 & -0.0172 \\ -0.0172 & 61.9921 \end{bmatrix},
$$

$$
\lambda_1 = 194.6827, \quad \lambda_2 = 216.8348.
$$
Therefore, it follows from Theorem 2 that the network (52) is robustly exponentially stable in the mean square. The simulations again use the Euler–Maruyama numerical scheme with T = 10 and step size δt = 0.02. Figure 3 shows the state response of model 1 [i.e., the network (52) when r(t) = 1] with the initial condition [−0.8, 0.6]^T for −1.3 ≤ t ≤ 0, and Fig. 4 shows the state response of model 2 [i.e., the network (52) when r(t) = 2] with the initial condition [0.5, −0.7]^T for −1.3 ≤ t ≤ 0.
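The Euler–Maruyama scheme mentioned above can be sketched for a simplified version of the examples: constant delays (τ1(t) frozen at its maximum), no distributed-delay term, and a stand-in diagonal diffusion. None of this is the paper's exact simulation code; `em_jump_delay` and the simplified `sigma` are our illustrative assumptions:

```python
import numpy as np

def em_jump_delay(D, A, B, f, g, sigma, beta, tau, x_hist, T, dt, states, rng):
    """Euler-Maruyama scheme for the delayed jump SDE
    dx = [-D_r x(t-beta) + A_r f(x(t)) + B_r g(x(t-tau))] dt + sigma(...) dw,
    with one Markov mode per step in `states` and constant delays handled
    through a history buffer (distributed delays omitted for brevity)."""
    n_beta, n_tau = int(round(beta / dt)), int(round(tau / dt))
    n_hist = max(n_beta, n_tau)
    n_steps = int(round(T / dt))
    dim = x_hist.shape[1]
    x = np.empty((n_hist + n_steps + 1, dim))
    x[: n_hist + 1] = x_hist                      # initial history segment
    for k in range(n_hist, n_hist + n_steps):
        i = states[k - n_hist]                    # current Markov mode
        drift = -D[i] @ x[k - n_beta] + A[i] @ f(x[k]) + B[i] @ g(x[k - n_tau])
        dw = rng.normal(scale=np.sqrt(dt), size=dim)
        x[k + 1] = x[k] + drift * dt + sigma(x[k], x[k - n_beta], x[k - n_tau], i) @ dw
    return x[n_hist:]

# Mode matrices of Example 1; activation and diffusion are simplified stand-ins.
D = [np.array([[1.2, 0.0], [0.0, 1.5]]), np.array([[1.4, 0.0], [0.0, 0.9]])]
A = [np.array([[0.3, 0.2], [-0.3, 0.4]]), np.array([[-0.2, 0.5], [0.3, 0.2]])]
B = [np.array([[0.3, 0.2], [-0.1, 0.4]]), np.array([[0.2, -0.4], [0.2, 0.3]])]
f = g = lambda x: 0.02 * np.tanh(x)
sigma = lambda x, xb, xt, i: 0.2 * np.diag(x)     # simplified diffusion matrix

dt, T, beta, tau = 0.02, 2.0, 0.3, 1.6            # tau1(t) frozen at 1.6
rng = np.random.default_rng(1)
n_hist = max(int(round(beta / dt)), int(round(tau / dt)))
states = rng.integers(0, 2, size=int(round(T / dt)))   # stand-in mode path
x_hist = np.tile(np.array([0.6, -0.5]), (n_hist + 1, 1))
path = em_jump_delay(D, A, B, f, g, sigma, beta, tau, x_hist, T, dt, states, rng)
```

In a faithful reproduction, `states` would come from a sampled path of the Markov chain with the given generator, and `sigma` would be the mode-dependent diffusion of the example.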
5 Concluding Remarks

In this paper we have studied the exponential stability problem for a new class of Markovian jump stochastic neural networks with time delays in the leakage terms and mixed time delays in two cases: with known or unknown parameters. To prove the exponential stability in the mean square of the equilibrium point of the suggested system, several techniques, such as the method of model transformation, Lyapunov stability theory, stochastic analysis and linear matrix inequalities, have been used in the paper. The obtained stability conditions depend on all the delay constants, which means that the presented results are less
[Fig. 4 The state response of model 2 in Example 2: x1 and x2 versus t]
conservative than delay-independent criteria, especially when the size of the delay is small. Moreover, our results improve and generalize those given in the previous literature. In addition, two numerical examples are given to demonstrate the obtained results. Finally, we point out that it is possible to generalize our results to more complex classes of stochastic neural networks with time delays in the leakage terms, such as fuzzy or reaction–diffusion neural networks. Research on this topic is in progress.

Acknowledgments This work was jointly supported by the National Natural Science Foundation of China (61374080, 61272530, 11072059), the Natural Science Foundation of Zhejiang Province (LY12F03010), the Natural Science Foundation of Ningbo (2012A610032), the Natural Science Foundation of Jiangsu Province (BK2012741), the Specialized Research Fund for the Doctoral Program of Higher Education (20110092110017, 20130092110017), and the Deanship of Scientific Research (DSR), King Abdulaziz University (KAU), under Grant 3-130/1434/HiCi.
References

1. Arik S (2000) Stability analysis of delayed neural networks. IEEE Trans Circuits Syst I 47(7):1089–1092
2. Arik S, Orman Z (2005) Global stability analysis of Cohen–Grossberg neural networks with time varying delays. Phys Lett A 341(5–6):410–421
3. Joy MP (2000) Results concerning the absolute stability of delayed neural networks. Neural Netw 13(6):613–616
4. Blythe S, Mao X, Liao X (2001) Stability of stochastic delay neural networks. J Frankl Inst 338(4):481–495
5. Arik S, Tavsanoglu V (2005) Global asymptotic stability analysis of bidirectional associative memory neural networks with constant time delays. Neurocomputing 68:161–176
6. Huang H, Ho DWC, Lam J (2005) Stochastic stability analysis of fuzzy Hopfield neural networks with time-varying delays. IEEE Trans Circuits Syst II 52(5):251–255
7. Javidmanesh E, Afsharnezhad Z, Effati S (2013) Existence and stability analysis of bifurcating periodic solutions in a delayed five-neuron BAM neural network model. Nonlinear Dyn 72:149–164
8. Rakkiyappan R, Balasubramaniam P (2008) Delay-dependent asymptotic stability for stochastic delayed recurrent neural networks with time varying delays. Appl Math Comput 198(2):526–533
9. Zhou Q, Wan L (2008) Exponential stability of stochastic delayed Hopfield neural networks. Appl Math Comput 199(1):84–89
10. Xu S, Lam J (2006) A new approach to exponential stability analysis of neural networks with time-varying delays. Neural Netw 19(1):76–83
11. Moon YS, Park P, Kwon WH, Lee YS (2001) Delay-dependent robust stabilization of uncertain state-delayed systems. Int J Control 74(14):1447–1455
12. Chen Y (2002) Global stability of neural networks with distributed delays. Neural Netw 15(7):867–871
13. Chen W, Zheng W (2007) Delay-dependent robust stabilization for uncertain neutral systems with distributed delays. Automatica 43(1):95–104
14. Park JH (2008) On global stability criterion of neural networks with continuously distributed delays. Chaos Solitons Fractals 37(2):444–449
15. Zhao H (2004) Global asymptotic stability of Hopfield neural network involving distributed delays. Neural Netw 17(1):47–53
16. Yang H, Chu T (2007) LMI conditions for stability of neural networks with distributed delays. Chaos Solitons Fractals 34(2):557–563
17. Liu X, Wang Z, Liu X (2006) On global exponential stability of generalized stochastic neural networks with mixed time delays. Neurocomputing 70(1–3):314–326
18. Wang Z, Liu Y, Liu X (2005) On global asymptotic stability of neural networks with discrete and distributed delays. Phys Lett A 345(4–5):299–308
19. Gopalsamy K (2007) Leakage delays in BAM. J Math Anal Appl 325(2):1117–1132
20. Balasubramaniam P, Kalpana M, Rakkiyappan R (2011) Global asymptotic stability of BAM fuzzy cellular neural networks with time delay in the leakage term, discrete and unbounded distributed delays. Math Comput Model 53(5–6):839–853
21. Balasubramaniam P, Kalpana M, Rakkiyappan R (2011) Existence and global asymptotic stability of fuzzy cellular neural networks with time delay in the leakage term and unbounded distributed delays. Circuits Syst Signal Process 30(6):1595–1616
22. Li X, Fu X, Balasubramaniam P, Rakkiyappan R (2010) Existence, uniqueness and stability analysis of recurrent neural networks with time delay in the leakage term under impulsive perturbations. Nonlinear Anal RWA 11(5):4092–4108
23. Li X, Rakkiyappan R, Balasubramaniam P (2011) Existence and global stability analysis of equilibrium of fuzzy cellular neural networks with time delay in the leakage term under impulsive perturbations. J Frankl Inst 348(2):135–155
24. Liu B (2013) Global exponential stability for BAM neural networks with time-varying delays in the leakage terms. Nonlinear Anal RWA 14(1):559–566
25. Long S, Song Q, Wang X, Li D (2012) Stability analysis of fuzzy cellular neural networks with time delay in the leakage term and impulsive perturbations. J Frankl Inst 349(7):2461–2479
26. Rakkiyappan R, Chandrasekar A, Lakshmanan S, Park JH, Jung HY (2013) Effects of leakage time-varying delays in Markovian jump neural networks with impulse control. Neurocomputing 121(1):365–378
27. Lakshmanan S, Park JH, Lee TH, Jung HY, Rakkiyappan R (2013) Stability criteria for BAM neural networks with leakage delays and probabilistic time-varying delays. Appl Math Comput 219(17):9408–9423
28. Lakshmanan S, Park JH, Jung HY (2013) Robust delay-dependent stability criteria for dynamic systems with nonlinear perturbations and leakage delay. Circuits Syst Signal Process 32(4):1637–1657
29. Balasubramaniam P, Vembarasan V, Rakkiyappan R (2012) Global robust asymptotic stability analysis of uncertain switched Hopfield neural networks with time delay in the leakage term. Neural Comput Appl 21(7):1593–1616
30. Balasubramaniam P, Vembarasan V, Rakkiyappan R (2011) Leakage delays in T–S fuzzy cellular neural networks. Neural Process Lett 33(2):111–136
31. Balasubramaniam P, Vembarasan V (2011) Asymptotic stability of BAM neural networks of neutral-type with impulsive effects and time delay in the leakage term. Int J Comput Math 88(15):3271–3291
32. Liu Y, Wang Z, Liu X (2008) On delay-dependent robust exponential stability of stochastic neural networks with mixed time delays and Markovian switching. Nonlinear Dyn 54(3):199–212
33. Balasubramaniam P, Vembarasan V (2011) Robust stability of uncertain fuzzy BAM neural networks of neutral-type with Markovian jumping parameters and impulses. Comput Math Appl 62(4):1838–1861
34. Balasubramaniam P, Vembarasan V, Rakkiyappan R (2011) Delay-dependent robust exponential state estimation of Markovian jumping fuzzy Hopfield neural networks with mixed random time-varying delays. Commun Nonlinear Sci Numer Simul 16(4):2109–2129
35. Balasubramaniam P, Rakkiyappan R (2009) Delay-dependent robust stability analysis for Markovian jumping stochastic Cohen–Grossberg neural networks with discrete interval and distributed time-varying delays. Nonlinear Anal Hybrid Syst 3(3):207–214
36. Zhang H, Wang Y (2008) Stability analysis of Markovian jumping stochastic Cohen–Grossberg neural networks with mixed time delays. IEEE Trans Neural Netw 19(2):366–370
37. Zhu Q, Cao J (2011) Exponential stability of stochastic neural networks with both Markovian jump parameters and mixed time delays. IEEE Trans Syst Man Cybern B 41(2):341–353
38. Zhu Q, Cao J (2010) Robust exponential stability of Markovian jump impulsive stochastic Cohen–Grossberg neural networks with mixed time delays. IEEE Trans Neural Netw 21(8):1314–1325
39. Zhu Q, Cao J (2012) Stability analysis of Markovian jump stochastic BAM neural networks with impulse control and mixed time delays. IEEE Trans Neural Netw Learn Syst 23(3):467–479
40. Gu K (2000) An integral inequality in the stability problem of time-delay systems. In: Proceedings of 39th IEEE conference on decision and control, Sydney, pp 2805–2810
41. Wang Z, Liu Y, Yu L, Liu X (2006) Exponential stability of delayed recurrent neural networks with Markovian jumping parameters. Phys Lett A 356(4–5):346–352
42. Zhang Y (2013) Stochastic stability of discrete-time Markovian jump delay neural networks with impulses and incomplete information on transition probability. Neural Netw 46:276–282
43. Zhu S, Shen Y (2013) Robustness analysis for connection weight matrices of global exponential stability of stochastic recurrent neural networks. Neural Netw 38:17–22
44. Faydasicok O, Arik S (2012) Robust stability analysis of a class of neural networks with discrete time delays. Neural Netw 29–30:52–59
45. Zhou T, Wang M, Long M (2012) Existence and exponential stability of multiple periodic solutions for a multidirectional associative memory neural network. Neural Process Lett 35(2):187–202
46. Zheng C, Shan Q, Wang Z (2012) Improved stability results for stochastic Cohen–Grossberg neural networks with discrete and distributed delays. Neural Process Lett 35(2):103–129
47. Boyd S, Ghaoui L, Feron E, Balakrishnan V (1994) Linear matrix inequalities in system and control theory. SIAM, Philadelphia