Nonlinear Dyn (2018) 91:2571–2592 https://doi.org/10.1007/s11071-017-4032-x
ORIGINAL PAPER
Impulsive discrete-time BAM neural networks with random parameter uncertainties and time-varying leakage delays: an asymptotic stability analysis

C. Sowmiya · R. Raja · J. Cao · G. Rajchakit
Received: 1 September 2017 / Accepted: 22 December 2017 / Published online: 8 January 2018 © Springer Science+Business Media B.V., part of Springer Nature 2018
Abstract This paper analyzes asymptotic stability criteria for impulsive discrete-time BAM neural networks with random parameter uncertainties and time-varying leakage delays. The reciprocally convex combination technique, which is derived from Jensen's inequality, is employed to reduce the number of decision variables. The uncertainties are treated as randomly occurring parameter uncertainties obeying mutually uncorrelated Bernoulli-distributed white-noise sequences; a valuable feature of this model is that the occurrence probability of the uncertain parameters is known a priori. Novel sufficient conditions ensuring the asymptotic stability of the addressed neural networks are obtained in terms of linear matrix inequalities (LMIs) with the aid of the Lyapunov–Krasovskii functional approach, and they can be easily checked with the MATLAB LMI Toolbox. Finally, three illustrative examples are given to demonstrate the effectiveness and usefulness of the proposed results.
This work was jointly supported by the Thailand Research Grant Fund (RSA5980019) and Maejo University.
C. Sowmiya
Department of Mathematics, Alagappa University, Karaikudi 630 004, India

R. Raja
Ramanujan Centre for Higher Mathematics, Alagappa University, Karaikudi 630 004, India

J. Cao (B)
School of Mathematics, Southeast University, Nanjing 211189, China
e-mail: [email protected]

J. Cao
School of Mathematics and Statistics, Shandong Normal University, Ji'nan 250014, PR China

G. Rajchakit
Department of Mathematics, Faculty of Science, Maejo University, Chiang Mai, Thailand

Keywords Discrete-time neural networks · Impulse · Stability · Random uncertainties · BAM neural networks · Linear matrix inequality · Leakage delay

1 Introduction
Owing to rapid advances in theoretical studies and in biological experiments on underlying network mechanisms, synthetic biology has been attracting growing attention from the biology community. Over the past few years, neural networks have been broadly studied and applied in areas such as pattern recognition, image processing, hetero- and auto-associative memories, optimization problems, mechanics of structures and materials, and other research fields (see [1–16]). The bidirectional associative memory (BAM) network, first introduced by Kosko [17], is an extension of the auto-associative Hopfield neural network. It is composed of two layers, the X-layer and the Y-layer, which are fully interconnected with each other. In recent times, owing to their promising utilization in associative memories, parallel computation, and optimization problems [18,19], the stability and periodicity of BAM neural networks have attracted much attention among researchers. Recently, instead of the bifurcation method, different mathematical techniques were employed by Wang and Zou [20] to examine the stability and the number of stable periodic solutions of discrete-time BAM neural networks. Mohamad [21] derived conditions for continuous-time BAM neural networks to be exponentially stable by applying Lyapunov–Krasovskii functionals and Halanay-type inequalities; discrete-time analogues of continuous-time BAM neural networks were also developed and their stability studied in [21]. Note that, in practice, various types of neural networks are assumed to act in a continuous-time manner. However, in the implementation and application of neural networks, discrete-time networks become more important than their continuous-time counterparts, because when continuous-time networks are implemented for computer-based simulation, experimentation, or computation, it is usual to discretize them. On the other hand, discrete-time networks can more suitably model digitally transmitted signals in a dynamical way. Moreover, when the axon is short, a delay may occur in the process of storing or sending information (i.e., a discrete time delay). In [22,23], the authors analyzed delays in the discrete setting in depth and obtained several new delay-dependent conditions by using different LKFs. Generally speaking, if the axon is much longer, the information transfer is harder to quantify and a delay occurs throughout the process (i.e., a distributed time delay). Among studies on the stability of neural networks, systems with leakage delays have become one of the important topics.
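The discretization step mentioned above can be made concrete with a minimal sketch. The following Python fragment forward-Euler discretizes a one-neuron delayed Hopfield-type model $x'(t) = -a\,x(t) + w\tanh(x(t-\tau)) + I$; the function name, parameter values, and step size are hypothetical choices for illustration only.

```python
import numpy as np

# Illustrative forward-Euler discretization of a one-neuron delayed
# Hopfield-type model  x'(t) = -a*x(t) + w*tanh(x(t - tau)) + I.
# Step size h and all parameter values are hypothetical.
def simulate(a=1.0, w=0.5, I=0.0, tau=1.0, h=0.1, steps=200):
    d = int(round(tau / h))          # delay measured in steps
    x = np.zeros(steps + d + 1)
    x[:d + 1] = 0.5                  # constant initial history
    for k in range(d, steps + d):
        x[k + 1] = x[k] + h * (-a * x[k] + w * np.tanh(x[k - d]) + I)
    return x[d:]

traj = simulate()
print(abs(traj[-1]))  # the trajectory settles near the origin for these parameters
```

The continuous-time delay τ becomes the integer delay d = τ/h in the discrete-time recursion, which is exactly the structural form studied in the remainder of this paper.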
While the state vectors carry information to the axon, a delay often emerges that is called the leakage delay. The leakage delay, also called the forgetting delay, appears in the negative feedback term of the state equation, and its study can be traced back to 1992. The author in [24] noticed that the leakage delay has a great impact on neural network systems. Since then, some interesting results have been proposed, and much attention has been paid to NNs with leakage delay [25,26]. For instance, Gopalsamy studied a population dynamics model with leakage delay in [27] and found that the leakage delay can destabilize NNs. The author in [28] initially explored BAM neural networks with leakage delay as follows:

$$\frac{dx_i(t)}{dt} = -a_i x_i(t - \tau_i) + \sum_{j=1}^{n} a_{ij} f_j\big(y_j(t - \sigma_j(t))\big) + I_i,$$

$$\frac{dy_i(t)}{dt} = -b_i y_i(t - \tau_i) + \sum_{j=1}^{n} b_{ij} g_j\big(x_j(t - \sigma_j(t))\big) + J_i.$$
An abrupt change at a certain moment of time is called an impulse, and this phenomenon occurs universally in a wide variety of evolutionary processes [7,8,29]. We extend this phenomenon to neural networks. For example, when a stimulus from the external environment or from the body is received by receptors during the implementation of an electronic network, electrical impulses are conveyed to the neural net, and the impulsive phenomenon, called an impulsive perturbation, arises naturally [30]. Moreover, impulsive perturbations as well as time delays can affect the dynamical behavior of neural network systems; see [31–34]. Furthermore, uncertain parameters in practical neural networks have received considerable attention. In [35], the authors explored the state performance of uncertain time-delayed neural networks. There are two main sources of uncertainty: (1) deviations and (2) perturbations in parameters. Luo et al. [36] analyzed the stability of the equilibrium point for neural networks with uncertain parameters with the help of different LKFs. Recently, in [37], the stability analysis of stochastic neural networks with probability distributions was explained. The problem of state estimation with Markovian jumping parameters was addressed in [38,39]. The authors in [40] delivered a stability analysis of stochastic NNs with leakage delay. Meanwhile, since stability issues of NNs have received extensive attention, some conclusions on discrete-time neural networks with leakage delay in the state vector have been reported. Hou and Zhu [41] investigated stability conditions when leakage delays affect the neural networks. In [40], the authors explored the robust analysis of uncertain Markovian jumping discrete-time NNs with a leakage term. The authors in [42] studied the robust stability of time-delayed neural networks with leakage delays.
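The randomly occurring parameter uncertainties discussed above are typically modeled by switching a norm-bounded perturbation on and off with a Bernoulli-distributed white-noise sequence whose occurrence probability is known a priori. A minimal sketch (all names and values below are hypothetical):

```python
import numpy as np

# Sketch of a "randomly occurring" parameter uncertainty: the nominal
# weight matrix W is perturbed by a norm-bounded Delta only when a
# Bernoulli white-noise variable sigma(k) equals 1, with known
# occurrence probability p = E[sigma(k)].  All values are hypothetical.
rng = np.random.default_rng(0)
p = 0.3
W = np.array([[0.2, -0.1], [0.05, 0.3]])   # nominal weights
Delta = 0.01 * np.eye(2)                   # norm-bounded perturbation

def sample_weight():
    sigma = rng.random() < p               # Bernoulli(p) white noise
    return W + Delta if sigma else W

samples = [sample_weight() for _ in range(1000)]
freq = np.mean([s[0, 0] > 0.2 for s in samples])
print(freq)  # close to p by the law of large numbers
```

Because the occurrence probability p is known in advance, it can be carried explicitly into expectation-based stability conditions, which is the feature exploited in this paper.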
In [23], Gao and Cui discussed whether time-delayed BAM neural networks achieve robust stability. Liu [43] investigated the exponential stability of discrete-time BAM neural networks, and in [31] the authors proposed discrete-time neural networks with stochastic effects, mixed time delays, and impulsive effects. However, to the best of the authors' knowledge, there are up to now no results on the asymptotic stability of impulsive discrete-time BAM neural networks with random parameter uncertainties and time-varying leakage delays.

Inspired by the above motivations, our work establishes robust stability conditions for a class of impulsive discrete-time BAM neural network systems with randomly occurring uncertainties and time-varying leakage delays. An augmented-type LKF is chosen, which exploits more information about the neural network system. Some zero equations are added to the Lyapunov–Krasovskii functionals, and the resulting stability conditions are expressed in terms of linear matrix inequalities, whose feasibility can easily be verified with the LMI Toolbox in MATLAB. Illustrative examples are employed to show the advantages of this research work. The main contributions of the proposed work are as follows:

• A robust stability analysis of impulsive discrete-time BAM neural networks with randomly occurring parameter uncertainties and time-varying leakage delays is proposed using a convex combination approach. Augmented-type Lyapunov–Krasovskii functional terms are introduced, which lead to less conservatism than previous works, as demonstrated in the numerical examples with the help of the MATLAB LMI Toolbox.

• More decision variables are considered in the Lyapunov–Krasovskii functionals to exploit the bounds of the time-varying leakage delays as well as of the transmission delays. In our proposal, BAM NNs with randomly occurring uncertain parameters are engaged, which differs from the existing literature.

• The reciprocally convex combination technique is used in our proposal.
This lemma helps in reducing the number of decision variables without increasing the conservatism; it originates from the derivation of Jensen's inequality for stability conditions.

• By handling the discrete time-varying delay terms in our impulsive BAM neural networks with time-varying leakage delays and uncertain parameters, the allowable upper bounds of the discrete time-varying delays are very large compared with previous results; see Table 1 and Examples 5.1 and 5.2. This shows that the approach developed in this paper is effective and less conservative than some of the existing literature, and it also underlines the novelty of the research work.

• In general, a leakage delay causes prominent changes in the dynamic behavior of neural networks. If the leakage delay is constant or time-varying [42], the system may become unstable. In our proposed work, time-varying leakage delays are considered together with uncertain parameters and impulses. If all of these effects act on the system simultaneously, the system may become unstable; this can be avoided with the help of the augmented-type Lyapunov–Krasovskii functional and some inequality techniques, which lead the system to be stable, and this is quite challenging.

Table 1 Comparisons of upper bounds of time delays

Methods          Lower bounds        Upper bounds
In Ref. [23]     σm = τm = 2         σM = τM = 4
In Ref. [49]     σm = τm = 2         σM = τM = 4
In Ref. [26]     σm = τm = 2         σM = τM = 4
In Ref. [32]     σm = τm = 2         σM = τM = 2
In Ref. [48]     σm = τm = 3         σM = τM = 5
In Ref. [50]     σm = τm = 2         σM = τM > 0
In Ref. [22]     σm = τm = 2         σM = τM = 6
In this paper    αm = βm > 0         αM = βM > 0

The remainder of this manuscript is organized as follows. In Sect. 2, the problem formulation and preliminaries needed for the main results are given. The main results, asymptotic stability conditions for discrete-time BAM NNs with time-varying leakage delays, are derived in Sect. 3. With the help of Theorem 3.1, Sect. 4 treats uncertain discrete-time BAM neural networks with time-varying leakage delays. Section 5 demonstrates the derived results with four illustrative examples. Finally, conclusions are drawn in Sect. 6.

Notations Throughout this manuscript the notations are quite standard. $R^n$ is the n-dimensional Euclidean space and $R^{n \times m}$ is the set of all $n \times m$ real matrices. $I$ refers to the identity matrix with appropriate dimensions. diag(·) denotes a diagonal matrix. $A^T$ denotes the transpose of a matrix $A$ and $A^{-1}$ its inverse. $Z^+$ denotes the set of positive integers. For real symmetric matrices $X$ and $Y$, the notation $X \ge Y$ (resp., $X > Y$) means that the matrix $X - Y$ is positive semidefinite (resp., positive definite). $N = \{1, 2, \ldots, n\}$, and $\|\cdot\|$ stands for the Euclidean norm in $R^n$. $E$ denotes the mathematical expectation. The symbol $*$ within a matrix represents the symmetric term of the matrix.
2 Problem description and preliminaries

In this section, with the help of some zero equalities and a convex combination approach, an asymptotic stability condition is derived for impulsive discrete-time BAM neural networks with time-varying leakage delays. The impulsive discrete-time BAM neural network system with time-varying leakage delays is given by

$$u_i(k+1) = l_i u_i(k - \delta(k)) + \sum_{j=1}^{n} m_{ji} \hat f_j(v_j(k)) + \sum_{j=1}^{n} n_{ji} \hat g_j(v_j(k - \alpha(k))) + I_i,$$
$$u_i(k_r) = u_i(k_r^-) + \hat e_{ir}(u_i(k_r^-)), \quad r = 1, 2, 3, \ldots,$$
$$v_j(k+1) = a_j v_j(k - \rho(k)) + \sum_{i=1}^{n} b_{ij} \bar{\hat f}_i(u_i(k)) + \sum_{i=1}^{n} c_{ij} \bar{\hat g}_i(u_i(k - \beta(k))) + J_j,$$
$$v_j(k_r) = v_j(k_r^-) + \bar{\hat e}_{jr}(v_j(k_r^-)), \quad r = 1, 2, 3, \ldots \tag{1}$$

The activation functions are assumed to satisfy the following conditions throughout this paper.

Condition 1 $\hat f_j(\cdot)$, $\hat g_j(\cdot)$, $\bar{\hat f}_i(\cdot)$, $\bar{\hat g}_i(\cdot)$, $j, i = 1, 2, 3, \ldots, n$, are continuous and bounded neuron activation functions.

Condition 2 For $j, i = 1, 2, 3, \ldots, n$ and any $s_1, s_2, t_1, t_2 \in R$ with $s_1 \ne s_2$ and $t_1 \ne t_2$, the neuron activation functions in Condition 1 satisfy

$$F_j^- \le \frac{\hat f_j(s_1) - \hat f_j(s_2)}{s_1 - s_2} \le F_j^+, \qquad G_j^- \le \frac{\hat g_j(s_1) - \hat g_j(s_2)}{s_1 - s_2} \le G_j^+,$$
$$D_i^- \le \frac{\bar{\hat f}_i(t_1) - \bar{\hat f}_i(t_2)}{t_1 - t_2} \le D_i^+, \qquad E_i^- \le \frac{\bar{\hat g}_i(t_1) - \bar{\hat g}_i(t_2)}{t_1 - t_2} \le E_i^+,$$

where $G_j^-, G_j^+, F_j^-, F_j^+, D_i^-, D_i^+, E_i^-, E_i^+$ are known constants.

In vector form, system (1) reads

$$u(k+1) = L u(k - \delta(k)) + M \hat f(v(k)) + N \hat g(v(k - \alpha(k))) + I,$$
$$u(k_r) = u(k_r^-) + \hat e_r(u(k_r^-)), \quad r = 1, 2, 3, \ldots,$$
$$v(k+1) = A v(k - \rho(k)) + B \bar{\hat f}(u(k)) + C \bar{\hat g}(u(k - \beta(k))) + J,$$
$$v(k_r) = v(k_r^-) + \bar{\hat e}_r(v(k_r^-)), \quad r = 1, 2, 3, \ldots \tag{2}$$

where $u(\cdot) = (u_1(\cdot), \ldots, u_n(\cdot))^T \in R^n$ and $v(\cdot) = (v_1(\cdot), \ldots, v_n(\cdot))^T \in R^n$ are the state vectors; the activation functions are $\hat f(\cdot) = (\hat f_1(\cdot), \ldots, \hat f_n(\cdot))^T \in R^n$, $\hat g(\cdot) = (\hat g_1(\cdot), \ldots, \hat g_n(\cdot))^T \in R^n$, $\bar{\hat f}(\cdot) = (\bar{\hat f}_1(\cdot), \ldots, \bar{\hat f}_n(\cdot))^T \in R^n$, $\bar{\hat g}(\cdot) = (\bar{\hat g}_1(\cdot), \ldots, \bar{\hat g}_n(\cdot))^T \in R^n$; and $I = (I_1, \ldots, I_n)^T$, $J = (J_1, \ldots, J_n)^T$ are the input vectors from the external source. $\delta(k)$ and $\rho(k)$ denote the leakage delays, satisfying $0 < \delta_m \le \delta(k) \le \delta_M$ and $0 < \rho_m \le \rho(k) \le \rho_M$, where $\delta_m, \delta_M$ and $\rho_m, \rho_M$ denote the lower and upper bounds of $\delta(k)$ and $\rho(k)$, respectively. $\alpha(k)$ and $\beta(k)$ represent the transmission delays, satisfying $0 < \alpha_m \le \alpha(k) \le \alpha_M$ and $0 < \beta_m \le \beta(k) \le \beta_M$, where $\alpha_m, \beta_m > 0$ and $\alpha_M, \beta_M > 0$ denote the lower and upper bounds of $\alpha(k)$ and $\beta(k)$, respectively. The conditions $u(k_r) = u(k_r^-) + \hat e_r(u(k_r^-))$ and $v(k_r) = v(k_r^-) + \bar{\hat e}_r(v(k_r^-))$, $r = 1, 2, 3, \ldots$, describe the impulsive dynamical activity caused by abrupt jumps at certain instants during the evolutionary process, where $u(k_r^-) = \lim_{k \to k_r^-} u(k)$, $v(k_r^-) = \lim_{k \to k_r^-} v(k)$, and $\hat e_r: R^n \to R^n$, $\bar{\hat e}_r: R^n \to R^n$ are continuous on $R^n$. The impulse instants $k_r$ are assumed to satisfy $0 = k_0 < k_1 < k_2 < \cdots$ with $k_{r+1} - k_r \ge 1$ for $r = 1, 2, \ldots$

Remark 2.1 The authors in [44,45] proposed the aforementioned Condition 2. Here, the constants are allowed to be positive, negative, or zero. We use generalized activation functions, which are more general than sigmoid and Lipschitz activation functions; this is very helpful for reducing the possible conservatism of the LMI-based technique.

In order to simplify the proof, we shift the equilibrium point $u^* = (u_1^*, \ldots, u_n^*)^T$, $v^* = (v_1^*, \ldots, v_n^*)^T$ of system (2) to the origin. Let $x_i(k) = u_i(k) - u_i^*$, $y_j(k) = v_j(k) - v_j^*$, $f_j(y_j(k)) = \hat f_j(v_j(k)) - \hat f_j(v_j^*)$, $g_j(y_j(k)) = \hat g_j(v_j(k)) - \hat g_j(v_j^*)$, $\hat f_i(x_i(k)) = \bar{\hat f}_i(u_i(k)) - \bar{\hat f}_i(u_i^*)$, and $\hat g_i(x_i(k)) = \bar{\hat g}_i(u_i(k)) - \bar{\hat g}_i(u_i^*)$. After this transformation, system (2) can be rewritten as

$$x(k+1) = L x(k - \delta(k)) + M f(y(k)) + N g(y(k - \alpha(k))),$$
$$x(k_r) = x(k_r^-) + e_r(x(k_r^-)),$$
$$y(k+1) = A y(k - \rho(k)) + B \hat f(x(k)) + C \hat g(x(k - \beta(k))),$$
$$y(k_r) = y(k_r^-) + \hat e_r(y(k_r^-)), \tag{3}$$

where $x(k) = (x_1(k), \ldots, x_n(k))^T \in R^n$ and $y(k) = (y_1(k), \ldots, y_n(k))^T \in R^n$ are the state vectors and $f(y(k))$, $g(y(k - \alpha(k)))$, $\hat f(x(k))$, $\hat g(x(k - \beta(k)))$ are the corresponding stacked activation vectors. From Condition 2, the shifted activation functions $f_j(\cdot)$, $g_j(\cdot)$, $\hat f_i(\cdot)$, $\hat g_i(\cdot)$, $j, i = 1, 2, 3, \ldots, n$, satisfy, for any $s_1 \ne s_2$ and $t_1 \ne t_2$,

$$F_j^- \le \frac{f_j(s_1) - f_j(s_2)}{s_1 - s_2} \le F_j^+, \qquad G_j^- \le \frac{g_j(s_1) - g_j(s_2)}{s_1 - s_2} \le G_j^+,$$
$$D_i^- \le \frac{\hat f_i(t_1) - \hat f_i(t_2)}{t_1 - t_2} \le D_i^+, \qquad E_i^- \le \frac{\hat g_i(t_1) - \hat g_i(t_2)}{t_1 - t_2} \le E_i^+,$$

with $f_j(0) = g_j(0) = 0$ and $\hat f_i(0) = \hat g_i(0) = 0$.

The initial conditions are

$$x(s) = \phi(s), \quad s = -\omega, -\omega + 1, \ldots, 0, \qquad y(t) = \tau(t), \quad t = -\varpi, -\varpi + 1, \ldots, 0, \tag{4}$$

where $\omega = \max\{\delta_M, \beta_M\}$ and $\varpi = \max\{\rho_M, \alpha_M\}$.

To organize our theorems, the following lemmas will be needed.

Lemma 2.2 [46] Let the functions $h_1, h_2, \ldots, h_N: R^m \to R$ have positive values in an open subset $D$ of $R^m$. Then the reciprocally convex combination of the $h_i$ over $D$ satisfies

$$\min_{\{\gamma_i \,\mid\, \gamma_i > 0,\ \sum_i \gamma_i = 1\}} \sum_i \frac{1}{\gamma_i} h_i(k) = \sum_i h_i(k) + \max_{z_{ij}(k)} \sum_{i \ne j} z_{ij}(k) \tag{5}$$

subject to

$$z_{ij}: R^m \to R, \quad z_{ji}(k) = z_{ij}(k), \quad \begin{pmatrix} h_i(k) & z_{ij}(k) \\ z_{ij}(k) & h_j(k) \end{pmatrix} \ge 0. \tag{6}$$

Proof The constraint in (6) implies that

$$\begin{pmatrix} \sqrt{\gamma_j/\gamma_i} \\ -\sqrt{\gamma_i/\gamma_j} \end{pmatrix}^T \begin{pmatrix} h_i(k) & z_{ij}(k) \\ z_{ij}(k) & h_j(k) \end{pmatrix} \begin{pmatrix} \sqrt{\gamma_j/\gamma_i} \\ -\sqrt{\gamma_i/\gamma_j} \end{pmatrix} \ge 0,$$

that is, $\frac{\gamma_j}{\gamma_i} h_i(k) + \frac{\gamma_i}{\gamma_j} h_j(k) \ge 2 z_{ij}(k)$. Then we have

$$\sum_i \frac{1}{\gamma_i} h_i(k) = \sum_i h_i(k) + \frac{1}{2} \sum_{i \ne j} \Big( \frac{\gamma_j}{\gamma_i} h_i(k) + \frac{\gamma_i}{\gamma_j} h_j(k) \Big) \ge \sum_i h_i(k) + \sum_{i \ne j} z_{ij}(k).$$

Note that the inequality holds with equality for

$$\gamma_i = \frac{\sqrt{h_i(k)}}{\sum_j \sqrt{h_j(k)}}, \qquad z_{ij}(k) = \sqrt{h_i(k) h_j(k)},$$
which completes the proof.
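As a quick numerical sanity check of Lemma 2.2 in the scalar case $N = 2$, the bound $\frac{1}{\gamma} h_1 + \frac{1}{1-\gamma} h_2 \ge h_1 + h_2 + 2\sqrt{h_1 h_2}$ and its minimizer can be verified directly; the values of $h_1$ and $h_2$ below are arbitrary illustrative choices.

```python
import numpy as np

# Numerical check of the scalar reciprocally convex bound of Lemma 2.2
# for N = 2: for positive h1, h2 and gamma in (0, 1),
#   h1/gamma + h2/(1-gamma) >= h1 + h2 + 2*z   with z = sqrt(h1*h2),
# with equality at gamma = sqrt(h1)/(sqrt(h1) + sqrt(h2)).
h1, h2 = 3.0, 5.0
z = np.sqrt(h1 * h2)
gammas = np.linspace(0.01, 0.99, 9999)
lhs = h1 / gammas + h2 / (1.0 - gammas)
assert np.all(lhs >= h1 + h2 + 2.0 * z - 1e-9)   # bound holds on the grid
g_star = np.sqrt(h1) / (np.sqrt(h1) + np.sqrt(h2))
print(h1 / g_star + h2 / (1.0 - g_star) - (h1 + h2 + 2.0 * z))  # ~0 at the minimizer
```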
Lemma 2.3 [47] Given constant matrices $\Omega_1$, $\Omega_2$, and $\Omega_3$ with appropriate dimensions, where $\Omega_1^T = \Omega_1$ and $\Omega_2^T = \Omega_2 > 0$, then $\Omega_1 + \Omega_3^T \Omega_2^{-1} \Omega_3 < 0$ if and only if

$$\begin{pmatrix} \Omega_1 & \Omega_3^T \\ * & -\Omega_2 \end{pmatrix} < 0 \qquad \text{or} \qquad \begin{pmatrix} -\Omega_2 & \Omega_3 \\ * & \Omega_1 \end{pmatrix} < 0.$$
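Lemma 2.3 can likewise be illustrated numerically: for a symmetric $\Omega_1$, a positive definite $\Omega_2$, and an arbitrary $\Omega_3$, the sign condition on $\Omega_1 + \Omega_3^T \Omega_2^{-1} \Omega_3$ agrees with that of the block matrix. The matrices below are arbitrary illustrative choices.

```python
import numpy as np

# Numerical illustration of Lemma 2.3 (a Schur complement): with
# Omega1 symmetric and Omega2 symmetric positive definite,
#   Omega1 + Omega3^T Omega2^{-1} Omega3 < 0
# iff the block matrix [[Omega1, Omega3^T], [Omega3, -Omega2]] < 0.
O1 = np.array([[-2.0, 0.3], [0.3, -1.5]])
O2 = np.array([[1.0, 0.2], [0.2, 2.0]])
O3 = np.array([[0.4, 0.1], [0.0, 0.3]])

cond = O1 + O3.T @ np.linalg.solve(O2, O3)
block = np.block([[O1, O3.T], [O3, -O2]])

neg_def = lambda X: np.all(np.linalg.eigvalsh(X) < 0)
print(neg_def(cond), neg_def(block))  # the two tests agree
```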
3 Asymptotic stability results

In this section, some sufficient conditions are obtained for the asymptotic stability of the discrete-time BAM neural networks (3) with impulses and time-varying leakage delays.

Theorem 3.1 Suppose that Condition 2 holds. The NNs (3) are asymptotically stable if there exist symmetric matrices $Q_i > 0$ ($i = 1, \ldots, 8$), $P_j > 0$ ($j = 1, \ldots, 8$), $O_1 > 0$, $O_2 > 0$, $O_3 > 0$, $O_4 > 0$, $Z_1 > 0$, $Z_2 > 0$, $Z_3 > 0$, $Z_4 > 0$, matrices $H_l$ and $T_l$ ($l = 1, \ldots, 8$), positive diagonal matrices $\zeta_1, \zeta_2, \zeta_3, \zeta_4$, matrices $R_i$ ($i = 1, \ldots, 8$) and $Y_1, Y_2, Y_3, Y_4$ of appropriate dimensions, and a positive constant $\varepsilon \le 1$ such that the following conditions hold:

(i) $\displaystyle\sum_{i=1}^{m} \ln\big[(1 + K_i)^2 (1 + \omega_i)^2\big] + k \ln(1 - \varepsilon) \le -\vartheta(k)$ for any $k \in [k_r, k_{r+1})$, where $\lim_{k \to +\infty} \vartheta(k) = +\infty$; (7)

(ii) $\begin{pmatrix} H_1 & H_2 \\ H_3 & H_4 \end{pmatrix} > 0$; (8)

(iii) $\begin{pmatrix} H_5 & H_6 \\ H_7 & H_8 \end{pmatrix} > 0$; (9)

(iv) $\begin{pmatrix} T_1 & T_2 \\ T_3 & T_4 \end{pmatrix} > 0$; (10)

(v) $\begin{pmatrix} T_5 & T_6 \\ T_7 & T_8 \end{pmatrix} > 0$; (11)

(vi) $\Xi = \displaystyle\sum_{r=1}^{10} \varphi_r < 0$; (12)

(vii) $\chi = \displaystyle\sum_{r=1}^{10} \theta_r < 0$; (13)
where, with $\delta_1 = \delta_M - \delta_m$, $\rho_1 = \rho_M - \rho_m$, $\alpha_1 = \alpha_M - \alpha_m$, $\beta_1 = \beta_M - \beta_m$, and with $e_l$, $e_l^*$ denoting the block-entry selection matrices of the augmented vectors $\Upsilon(k)$, $\Upsilon^*(k)$:

$\varphi_1 = e_8^T Q_1 e_8 + 2 e_8^T Q_1 e_1 + 2 e_8^T Q_2 (e_2 - e_4) + 2 e_8^T Q_2 (e_{11} + e_{12}) + 2 e_8^T Q_3 (e_5 - e_7) + 2 e_8^T Q_3 (e_{13} + e_{14}) + 2 e_1^T Q_2 (e_2 - e_4) + 2 e_1^T Q_3 (e_5 - e_7) + (e_2 - e_4)^T Q_4 (e_2 - e_4) + 2 (e_2 - e_4)^T Q_4 (e_{11} + e_{12}) + 2 (e_2 - e_4)^T Q_5 (e_5 - e_7) + 2 (e_2 - e_4)^T Q_5 (e_{13} + e_{14}) + 2 (e_{11} + e_{12})^T Q_5 (e_{13} + e_{14}) + (e_5 - e_7)^T Q_6 (e_5 - e_7) + 2 (e_5 - e_7)^T Q_6 (e_{13} + e_{14})$,

$\varphi_2 = (\delta_1 + 1)\, e_1^T Q_7 e_1 - e_3^T Q_7 e_3$,

$\varphi_3 = (\alpha_1 + 1)\, e_1^T Q_8 e_1 - e_6^T Q_8 e_6$,

$\varphi_4 = \delta_1^2\, e_1^T O_1 e_1 + \delta_1\, e_2^T R_3 e_2 + \delta_1\, e_3^T (R_1 - R_3) e_3 - \delta_1\, e_4^T R_1 e_4 + \delta_1^2\, e_8^T O_2 e_8$,

$\varphi_5 = -e_{11}^T O_1 e_{11} + 2 e_{11}^T R_1 (e_3 - e_4) + 2 e_{11}^T H_1 e_{12} + 2 e_{11}^T H_2 (e_2 - e_3) + (e_3 - e_4)^T (O_2 + R_1)(e_3 - e_4) + 2 (e_3 - e_4)^T H_3 e_{12} + 2 (e_3 - e_4)^T H_4 (e_2 - e_3) + e_{12}^T O_1 e_{12} + 2 e_{12}^T R_3 (e_2 - e_3) + (e_2 - e_3)^T (O_2 + R_3)(e_2 - e_3)$,

$\varphi_6 = \alpha_1^2\, e_1^T Z_1 e_1 + \alpha_1\, e_5^T R_7 e_5 + \alpha_1\, e_6^T (R_5 - R_7) e_6 - \alpha_1\, e_7^T R_5 e_7 + \alpha_1^2\, e_8^T Z_2 e_8$,

$\varphi_7 = -e_{13}^T Z_1 e_{13} + 2 e_{13}^T R_5 (e_6 - e_7) + 2 e_{13}^T T_1 e_{14} + 2 e_{13}^T T_2 (e_5 - e_6) + (e_6 - e_7)^T (Z_2 + R_5)(e_6 - e_7) + 2 (e_6 - e_7)^T T_3 e_{14} + 2 (e_6 - e_7)^T T_4 (e_5 - e_6) + e_{14}^T Z_1 e_{14} + 2 e_{14}^T R_7 (e_5 - e_6) + (e_5 - e_6)^T (Z_2 + R_7)(e_5 - e_6)$,

$\varphi_8 = -e_1^T F_1 \zeta_1 e_1 + 2 e_1^T F_2 \zeta_1 e_9 - e_9^T \zeta_1 e_9$,

$\varphi_9 = -e_6^T G_1 \zeta_2 e_6 + 2 e_6^T G_2 \zeta_2 e_{10} - e_{10}^T \zeta_2 e_{10}$,

$\varphi_{10} = 2 \big(e_8^T Y_1^T + e_1^T Y_2^T\big)\big(L e_3 + M e_9 + N e_{10} - e_8 - e_1\big)$,

$\theta_1 = e_8^{*T} P_1 e_8^* + 2 e_8^{*T} P_1 e_1^* + 2 e_8^{*T} P_2 (e_2^* - e_4^*) + 2 e_8^{*T} P_2 (e_{11}^* + e_{12}^*) + 2 e_8^{*T} P_3 (e_5^* - e_7^*) + 2 e_8^{*T} P_3 (e_{13}^* + e_{14}^*) + 2 e_1^{*T} P_2 (e_2^* - e_4^*) + 2 e_1^{*T} P_3 (e_5^* - e_7^*) + (e_2^* - e_4^*)^T P_4 (e_2^* - e_4^*) + 2 (e_2^* - e_4^*)^T P_4 (e_{11}^* + e_{12}^*) + 2 (e_2^* - e_4^*)^T P_5 (e_5^* - e_7^*) + 2 (e_2^* - e_4^*)^T P_5 (e_{13}^* + e_{14}^*) + 2 (e_{11}^* + e_{12}^*)^T P_5 (e_{13}^* + e_{14}^*) + (e_5^* - e_7^*)^T P_6 (e_5^* - e_7^*) + 2 (e_5^* - e_7^*)^T P_6 (e_{13}^* + e_{14}^*)$,

$\theta_2 = (\rho_1 + 1)\, e_1^{*T} P_7 e_1^* - e_3^{*T} P_7 e_3^*$,

$\theta_3 = (\beta_1 + 1)\, e_1^{*T} P_8 e_1^* - e_6^{*T} P_8 e_6^*$,

$\theta_4 = \rho_1^2\, e_1^{*T} O_3 e_1^* + \rho_1\, e_2^{*T} R_4 e_2^* + \rho_1\, e_3^{*T} (R_2 - R_4) e_3^* - \rho_1\, e_4^{*T} R_2 e_4^* + \rho_1^2\, e_8^{*T} O_4 e_8^*$,

$\theta_5 = -e_{11}^{*T} O_3 e_{11}^* + 2 e_{11}^{*T} R_2 (e_3^* - e_4^*) + 2 e_{11}^{*T} H_5 e_{12}^* + 2 e_{11}^{*T} H_6 (e_2^* - e_3^*) + (e_3^* - e_4^*)^T (O_4 + R_2)(e_3^* - e_4^*) + 2 (e_3^* - e_4^*)^T H_7 e_{12}^* + 2 (e_3^* - e_4^*)^T H_8 (e_2^* - e_3^*) + e_{12}^{*T} O_3 e_{12}^* + 2 e_{12}^{*T} R_4 (e_2^* - e_3^*) + (e_2^* - e_3^*)^T (O_4 + R_4)(e_2^* - e_3^*)$,

$\theta_6 = \beta_1^2\, e_1^{*T} Z_3 e_1^* + \beta_1\, e_5^{*T} R_8 e_5^* + \beta_1\, e_6^{*T} (R_6 - R_8) e_6^* - \beta_1\, e_7^{*T} R_6 e_7^* + \beta_1^2\, e_8^{*T} Z_4 e_8^*$,

$\theta_7 = -e_{13}^{*T} Z_3 e_{13}^* + 2 e_{13}^{*T} R_6 (e_6^* - e_7^*) + 2 e_{13}^{*T} T_5 e_{14}^* + 2 e_{13}^{*T} T_6 (e_5^* - e_6^*) + (e_6^* - e_7^*)^T (Z_4 + R_6)(e_6^* - e_7^*) + 2 (e_6^* - e_7^*)^T T_7 e_{14}^* + 2 (e_6^* - e_7^*)^T T_8 (e_5^* - e_6^*) + e_{14}^{*T} Z_3 e_{14}^* + 2 e_{14}^{*T} R_8 (e_5^* - e_6^*) + (e_5^* - e_6^*)^T (Z_4 + R_8)(e_5^* - e_6^*)$,

$\theta_8 = -e_1^{*T} D_1 \zeta_3 e_1^* + 2 e_1^{*T} D_2 \zeta_3 e_9^* - e_9^{*T} \zeta_3 e_9^*$,

$\theta_9 = -e_6^{*T} E_1 \zeta_4 e_6^* + 2 e_6^{*T} E_2 \zeta_4 e_{10}^* - e_{10}^{*T} \zeta_4 e_{10}^*$,

$\theta_{10} = 2 \big(e_8^{*T} Y_3^T + e_1^{*T} Y_4^T\big)\big(A e_3^* + B e_9^* + C e_{10}^* - e_8^* - e_1^*\big)$.
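In practice, the feasibility of LMIs such as (12)–(13) is checked numerically (e.g., with the MATLAB LMI Toolbox, as in Section 5). As a minimal delay-free analogue, the sketch below verifies the discrete-time Lyapunov LMI $A^T P A - P < 0$ by building $P$ from the convergent series $P = \sum_{k \ge 0} (A^T)^k Q A^k$; the matrix $A$ is a hypothetical Schur-stable example, not taken from this paper.

```python
import numpy as np

# Delay-free analogue of an LMI feasibility check: for a Schur-stable A,
# P = sum_{k>=0} (A^T)^k Q A^k solves A^T P A - P = -Q, so P > 0 and
# A^T P A - P < 0.  A and Q are hypothetical illustrative choices.
A = np.array([[0.5, 0.1], [0.0, 0.4]])
Q = np.eye(2)

P = np.zeros((2, 2))
Ak = np.eye(2)
for _ in range(200):                 # truncated convergent series
    P += Ak.T @ Q @ Ak
    Ak = A @ Ak

assert np.all(np.linalg.eigvalsh(P) > 0)                # P > 0
assert np.all(np.linalg.eigvalsh(A.T @ P @ A - P) < 0)  # the LMI holds
print(np.max(np.abs(A.T @ P @ A - P + Q)))              # residual is tiny
```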
Proof Consider the following Lyapunov–Krasovskii functionals for the neural networks (3):

$$V_1(k) = \xi_1^T(k) \begin{pmatrix} Q_1 & Q_2 & Q_3 \\ * & Q_4 & Q_5 \\ * & * & Q_6 \end{pmatrix} \xi_1(k) + \xi_2^T(k) \begin{pmatrix} P_1 & P_2 & P_3 \\ * & P_4 & P_5 \\ * & * & P_6 \end{pmatrix} \xi_2(k), \tag{14}$$

with

$$\xi_1(k) = \begin{pmatrix} x(k) \\ \sum_{i=k-\delta_M}^{k-\delta_m-1} x(i) \\ \sum_{i=k-\beta_M}^{k-\beta_m-1} x(i) \end{pmatrix}, \qquad \xi_2(k) = \begin{pmatrix} y(k) \\ \sum_{j=k-\rho_M}^{k-\rho_m-1} y(j) \\ \sum_{j=k-\alpha_M}^{k-\alpha_m-1} y(j) \end{pmatrix},$$

$$V_2(k) = \sum_{i=k-\delta(k)}^{k-1} x^T(i) Q_7 x(i) + \sum_{j=-\delta_M+1}^{-\delta_m} \sum_{i=k+j}^{k-1} x^T(i) Q_7 x(i) + \sum_{j=k-\rho(k)}^{k-1} y^T(j) P_7 y(j) + \sum_{i=-\rho_M+1}^{-\rho_m} \sum_{j=k+i}^{k-1} y^T(j) P_7 y(j), \tag{15}$$

$$V_3(k) = \sum_{j=k-\alpha(k)}^{k-1} y^T(j) Q_8 y(j) + \sum_{i=-\alpha_M+1}^{-\alpha_m} \sum_{j=k+i}^{k-1} y^T(j) Q_8 y(j) + \sum_{i=k-\beta(k)}^{k-1} x^T(i) P_8 x(i) + \sum_{j=-\beta_M+1}^{-\beta_m} \sum_{i=k+j}^{k-1} x^T(i) P_8 x(i), \tag{16}$$

$$V_4(k) = \delta_1 \sum_{j=-\delta_M}^{-\delta_m-1} \sum_{i=k+j}^{k-1} \begin{pmatrix} x(i) \\ \mu(i) \end{pmatrix}^T \begin{pmatrix} O_1 & 0 \\ 0 & O_2 \end{pmatrix} \begin{pmatrix} x(i) \\ \mu(i) \end{pmatrix} + \rho_1 \sum_{i=-\rho_M}^{-\rho_m-1} \sum_{j=k+i}^{k-1} \begin{pmatrix} y(j) \\ \nu(j) \end{pmatrix}^T \begin{pmatrix} O_3 & 0 \\ 0 & O_4 \end{pmatrix} \begin{pmatrix} y(j) \\ \nu(j) \end{pmatrix}, \tag{17}$$

$$V_5(k) = \alpha_1 \sum_{i=-\alpha_M}^{-\alpha_m-1} \sum_{j=k+i}^{k-1} \begin{pmatrix} y(j) \\ \nu(j) \end{pmatrix}^T \begin{pmatrix} Z_1 & 0 \\ 0 & Z_2 \end{pmatrix} \begin{pmatrix} y(j) \\ \nu(j) \end{pmatrix} + \beta_1 \sum_{j=-\beta_M}^{-\beta_m-1} \sum_{i=k+j}^{k-1} \begin{pmatrix} x(i) \\ \mu(i) \end{pmatrix}^T \begin{pmatrix} Z_3 & 0 \\ 0 & Z_4 \end{pmatrix} \begin{pmatrix} x(i) \\ \mu(i) \end{pmatrix}, \tag{18}$$

where $\mu(k) = x(k+1) - x(k)$ and $\nu(k) = y(k+1) - y(k)$, and where $\Upsilon(k)$ and $\Upsilon^*(k)$ below denote the augmented vectors whose block entries correspond to $e_l$ and $e_l^*$, respectively. Calculating the difference of $V(k) = V_1(k) + V_2(k) + V_3(k) + V_4(k) + V_5(k)$
along the trajectories of the neural networks (3), we obtain

$$\Delta V_1(k) = \Upsilon^T(k)\, \varphi_1\, \Upsilon(k) + \Upsilon^{*T}(k)\, \theta_1\, \Upsilon^*(k), \tag{19}$$

$$\Delta V_2(k) \le (\delta_1 + 1)\, x^T(k) Q_7 x(k) - x^T(k - \delta(k)) Q_7 x(k - \delta(k)) + (\rho_1 + 1)\, y^T(k) P_7 y(k) - y^T(k - \rho(k)) P_7 y(k - \rho(k)) = \Upsilon^T(k)\, \varphi_2\, \Upsilon(k) + \Upsilon^{*T}(k)\, \theta_2\, \Upsilon^*(k), \tag{20}$$

$$\Delta V_3(k) \le (\alpha_1 + 1)\, y^T(k) Q_8 y(k) - y^T(k - \alpha(k)) Q_8 y(k - \alpha(k)) + (\beta_1 + 1)\, x^T(k) P_8 x(k) - x^T(k - \beta(k)) P_8 x(k - \beta(k)) = \Upsilon^T(k)\, \varphi_3\, \Upsilon(k) + \Upsilon^{*T}(k)\, \theta_3\, \Upsilon^*(k), \tag{21}$$

$$\Delta V_4(k) = \delta_1^2 \begin{pmatrix} x(k) \\ \mu(k) \end{pmatrix}^T \begin{pmatrix} O_1 & 0 \\ 0 & O_2 \end{pmatrix} \begin{pmatrix} x(k) \\ \mu(k) \end{pmatrix} - \delta_1 \sum_{i=k-\delta_M}^{k-\delta_m-1} \begin{pmatrix} x(i) \\ \mu(i) \end{pmatrix}^T \begin{pmatrix} O_1 & 0 \\ 0 & O_2 \end{pmatrix} \begin{pmatrix} x(i) \\ \mu(i) \end{pmatrix} + \rho_1^2 \begin{pmatrix} y(k) \\ \nu(k) \end{pmatrix}^T \begin{pmatrix} O_3 & 0 \\ 0 & O_4 \end{pmatrix} \begin{pmatrix} y(k) \\ \nu(k) \end{pmatrix} - \rho_1 \sum_{j=k-\rho_M}^{k-\rho_m-1} \begin{pmatrix} y(j) \\ \nu(j) \end{pmatrix}^T \begin{pmatrix} O_3 & 0 \\ 0 & O_4 \end{pmatrix} \begin{pmatrix} y(j) \\ \nu(j) \end{pmatrix}, \tag{22}$$

$$\Delta V_5(k) = \alpha_1^2 \begin{pmatrix} y(k) \\ \nu(k) \end{pmatrix}^T \begin{pmatrix} Z_1 & 0 \\ 0 & Z_2 \end{pmatrix} \begin{pmatrix} y(k) \\ \nu(k) \end{pmatrix} - \alpha_1 \sum_{i=k-\alpha_M}^{k-\alpha_m-1} \begin{pmatrix} y(i) \\ \nu(i) \end{pmatrix}^T \begin{pmatrix} Z_1 & 0 \\ 0 & Z_2 \end{pmatrix} \begin{pmatrix} y(i) \\ \nu(i) \end{pmatrix} + \beta_1^2 \begin{pmatrix} x(k) \\ \mu(k) \end{pmatrix}^T \begin{pmatrix} Z_3 & 0 \\ 0 & Z_4 \end{pmatrix} \begin{pmatrix} x(k) \\ \mu(k) \end{pmatrix} - \beta_1 \sum_{j=k-\beta_M}^{k-\beta_m-1} \begin{pmatrix} x(j) \\ \mu(j) \end{pmatrix}^T \begin{pmatrix} Z_3 & 0 \\ 0 & Z_4 \end{pmatrix} \begin{pmatrix} x(j) \\ \mu(j) \end{pmatrix}, \tag{23}$$

where $\delta_1 = \delta_M - \delta_m$, $\rho_1 = \rho_M - \rho_m$, $\alpha_1 = \alpha_M - \alpha_m$, and $\beta_1 = \beta_M - \beta_m$. It is easy to see that the following zero equalities hold:

$$0 = \delta_1\big[x^T(k-\delta(k)) R_1 x(k-\delta(k)) - x^T(k-\delta_M) R_1 x(k-\delta_M)\big] - \delta_1 \sum_{i=k-\delta_M}^{k-\delta(k)-1} \mu^T(i) R_1 \big(\mu(i) + 2x(i)\big), \tag{24}$$

$$0 = \rho_1\big[y^T(k-\rho(k)) R_2 y(k-\rho(k)) - y^T(k-\rho_M) R_2 y(k-\rho_M)\big] - \rho_1 \sum_{j=k-\rho_M}^{k-\rho(k)-1} \nu^T(j) R_2 \big(\nu(j) + 2y(j)\big), \tag{25}$$

$$0 = \delta_1\big[x^T(k-\delta_m) R_3 x(k-\delta_m) - x^T(k-\delta(k)) R_3 x(k-\delta(k))\big] - \delta_1 \sum_{i=k-\delta(k)}^{k-\delta_m-1} \mu^T(i) R_3 \big(\mu(i) + 2x(i)\big), \tag{26}$$

$$0 = \rho_1\big[y^T(k-\rho_m) R_4 y(k-\rho_m) - y^T(k-\rho(k)) R_4 y(k-\rho(k))\big] - \rho_1 \sum_{j=k-\rho(k)}^{k-\rho_m-1} \nu^T(j) R_4 \big(\nu(j) + 2y(j)\big), \tag{27}$$

$$0 = \alpha_1\big[y^T(k-\alpha(k)) R_5 y(k-\alpha(k)) - y^T(k-\alpha_M) R_5 y(k-\alpha_M)\big] - \alpha_1 \sum_{j=k-\alpha_M}^{k-\alpha(k)-1} \nu^T(j) R_5 \big(\nu(j) + 2y(j)\big), \tag{28}$$

$$0 = \beta_1\big[x^T(k-\beta(k)) R_6 x(k-\beta(k)) - x^T(k-\beta_M) R_6 x(k-\beta_M)\big] - \beta_1 \sum_{i=k-\beta_M}^{k-\beta(k)-1} \mu^T(i) R_6 \big(\mu(i) + 2x(i)\big), \tag{29}$$

$$0 = \alpha_1\big[y^T(k-\alpha_m) R_7 y(k-\alpha_m) - y^T(k-\alpha(k)) R_7 y(k-\alpha(k))\big] - \alpha_1 \sum_{j=k-\alpha(k)}^{k-\alpha_m-1} \nu^T(j) R_7 \big(\nu(j) + 2y(j)\big), \tag{30}$$

$$0 = \beta_1\big[x^T(k-\beta_m) R_8 x(k-\beta_m) - x^T(k-\beta(k)) R_8 x(k-\beta(k))\big] - \beta_1 \sum_{i=k-\beta(k)}^{k-\beta_m-1} \mu^T(i) R_8 \big(\mu(i) + 2x(i)\big). \tag{31}$$

Adding the zero equalities (24)–(27) to (22) splits the sums in $\Delta V_4(k)$ at $k - \delta(k)$ and $k - \rho(k)$. Let $\eta_1 = (\delta_M - \delta(k))/\delta_1$, $\eta_2 = (\delta(k) - \delta_m)/\delta_1$, $\eta_3 = (\rho_M - \rho(k))/\rho_1$, and $\eta_4 = (\rho(k) - \rho_m)/\rho_1$, so that $\eta_1 + \eta_2 = 1$ and $\eta_3 + \eta_4 = 1$. By Lemma 2.2, there exist matrices $H_1, \ldots, H_8$ satisfying the LMIs (8) and (9) such that the split sums can be bounded through the block matrices

$$\begin{pmatrix} O_1 & R_1 & H_1 & H_2 \\ * & O_2 + R_1 & H_3 & H_4 \\ * & * & O_1 & R_3 \\ * & * & * & O_2 + R_3 \end{pmatrix}, \qquad \begin{pmatrix} O_3 & R_2 & H_5 & H_6 \\ * & O_4 + R_2 & H_7 & H_8 \\ * & * & O_3 & R_4 \\ * & * & * & O_4 + R_4 \end{pmatrix},$$

which yields

$$\Delta V_4(k) \le \Upsilon^T(k)\, (\varphi_4 + \varphi_5)\, \Upsilon(k) + \Upsilon^{*T}(k)\, (\theta_4 + \theta_5)\, \Upsilon^*(k). \tag{32}$$

Similarly, adding (28)–(31) to (23) and applying Lemma 2.2 with the LMIs (10) and (11) and the corresponding block matrices built from $(Z_1, Z_2, R_5, R_7, T_1, \ldots, T_4)$ and $(Z_3, Z_4, R_6, R_8, T_5, \ldots, T_8)$ gives

$$\Delta V_5(k) \le \Upsilon^T(k)\, (\varphi_6 + \varphi_7)\, \Upsilon(k) + \Upsilon^{*T}(k)\, (\theta_6 + \theta_7)\, \Upsilon^*(k). \tag{33}$$

From Condition 2, for any positive diagonal matrices $\zeta_1, \zeta_2, \zeta_3, \zeta_4$ we get

$$\begin{pmatrix} y(k) \\ f(y(k)) \end{pmatrix}^T \begin{pmatrix} F_1 \zeta_1 & -F_2 \zeta_1 \\ -F_2 \zeta_1 & \zeta_1 \end{pmatrix} \begin{pmatrix} y(k) \\ f(y(k)) \end{pmatrix} \le 0, \tag{34}$$

$$\begin{pmatrix} y(k-\alpha(k)) \\ g(y(k-\alpha(k))) \end{pmatrix}^T \begin{pmatrix} G_1 \zeta_2 & -G_2 \zeta_2 \\ -G_2 \zeta_2 & \zeta_2 \end{pmatrix} \begin{pmatrix} y(k-\alpha(k)) \\ g(y(k-\alpha(k))) \end{pmatrix} \le 0, \tag{35}$$

$$\begin{pmatrix} x(k) \\ \hat f(x(k)) \end{pmatrix}^T \begin{pmatrix} D_1 \zeta_3 & -D_2 \zeta_3 \\ -D_2 \zeta_3 & \zeta_3 \end{pmatrix} \begin{pmatrix} x(k) \\ \hat f(x(k)) \end{pmatrix} \le 0, \tag{36}$$

$$\begin{pmatrix} x(k-\beta(k)) \\ \hat g(x(k-\beta(k))) \end{pmatrix}^T \begin{pmatrix} E_1 \zeta_4 & -E_2 \zeta_4 \\ -E_2 \zeta_4 & \zeta_4 \end{pmatrix} \begin{pmatrix} x(k-\beta(k)) \\ \hat g(x(k-\beta(k))) \end{pmatrix} \le 0, \tag{37}$$

where $\zeta_l = \operatorname{diag}(\zeta_{l1}, \zeta_{l2}, \ldots, \zeta_{ln})$, $l = 1, \ldots, 4$, and

$F_1 = \operatorname{diag}(F_1^- F_1^+, F_2^- F_2^+, \ldots, F_n^- F_n^+)$, $F_2 = \operatorname{diag}\big(\tfrac{F_1^- + F_1^+}{2}, \tfrac{F_2^- + F_2^+}{2}, \ldots, \tfrac{F_n^- + F_n^+}{2}\big)$,
D1 = diag D1− D1+ , D2− D2+ , . . . , Dn− Dn+ ; D2 = diag D1− D1+ , D2− D2+ , . . . , Dn− Dn+ E 1 = diag E 1− E 1+ , E 2− E 2+ , . . . , E n− E n+ ; E 2 = diag E 1− E 1+ , E 2− E 2+ , . . . , E n− E n+ + − + − + G 1 = diag G − 1 G1 , G2 G2 , . . . , Gn Gn ; + − + − + G 2 = diag G − G , G G , . . . , G G n n 1 1 2 2
Next, we are in position to prove the impulsive effect of NNs (3). From the definition of the operator , one can observe that V k, x(k), y(k) = V k + 1, x(k + 1), y(k + 1) −V k, x(k), y(k) < 0
We know that, μ(k) = x(k + 1) − x(k), and ν(k) = y(k + 1) − y(k), and we attain the following zero equations by introducing relaxation matrices as Y1 , Y2 , Y3 , Y4 . 2 μT (k)Y1T + x T (k)Y2T L x k − (k) + M f y(k) + N g y(k − α(k)) − x(k) − μ(k) = 0,
2 ν T (k)Y3T + y T (k)Y4T
(38)
Ay k − ρ(k) + B fˆ y(k)
+ Cg y(k − β(k)) − y(k) − ν(k) = 0.
(39)
By combining the above equations, we acquired V (k) ≤ ϒ (k) T
10
ϕr ϒ(k)
r =1
+ (k) T
10
θr (k) < 0.
(40)
r =1
Let ξ ∗ = ξmax () < 0 and λ∗ = λmax (χ ) < 0. Then we easily achieve V (k) ≤ ξ ∗ ||x(k)||2 + λ∗ ||y(k)||2 , x(k) = 0, y(k) = 0. Put κ = max(λ∗ , ξ ∗ ) V (k) ≤ κ ||x(k)||2 + ||y(k)||2
(41)
(42)
Therefore, the neural networks (3) are asymptotically stable.
Remark 3.2 Leakage delay or forgetting term sometimes leads worse dynamical behavior of neural networks system. very few results are proposed on neural networks with leakage delay in discrete time. The authors in [40,46] considered the leakage term as constant delay. But in our proposed work the leakage delay is considered as time-varying.
Then there exists a positive constant η ≤ 1 such that
V(k + 1, x(k + 1), y(k + 1)) ≤ (1 − η) V(k, x(k), y(k)),  k ≠ kr.
Then
V(k0 + 1, x(k0 + 1), y(k0 + 1)) ≤ (1 − η) V(k0, x(k0), y(k0)),
V(k0 + 2, x(k0 + 2), y(k0 + 2)) ≤ (1 − η)² V(k0, x(k0), y(k0)),
. . .
V(k1, x(k1), y(k1)) = V(k0 + (k1 − k0), x(k0 + (k1 − k0)), y(k0 + (k1 − k0))) ≤ (1 − η)^{k1 − k0} V(k0, x(k0), y(k0)).
Similarly,
V(k2, x(k2), y(k2)) = V(k1 + (k2 − k1), x(k1 + (k2 − k1)), y(k1 + (k2 − k1)))
≤ (1 − η)^{k2 − k1} V(k1, x(k1), y(k1))
≤ (1 − η)^{k2 − k1} (1 − η)^{k1 − k0} V(k0, x(k0), y(k0))
= (1 − η)^{k2 − k0} V(k0, x(k0), y(k0)),   (43)
and hence
V(k, x(k), y(k)) ≤ (1 − η)^{k − kr} V(kr, x(kr), y(kr)),  k ∈ [kr, kr+1).   (44)
Here, we note that
V(kr⁻, x(kr⁻), y(kr⁻))
= [x(kr⁻); Σ_{i=kr⁻−σM}^{kr⁻−σm−1} x(i); Σ_{i=kr⁻−βM}^{kr⁻−βm−1} x(i)]ᵀ [Q1 Q2 Q3; ∗ Q4 Q5; ∗ ∗ Q6] [x(kr⁻); Σ_{i=kr⁻−σM}^{kr⁻−σm−1} x(i); Σ_{i=kr⁻−βM}^{kr⁻−βm−1} x(i)]
+ [y(kr⁻); Σ_{j=kr⁻−ρM}^{kr⁻−ρm−1} y(j); Σ_{j=kr⁻−αM}^{kr⁻−αm−1} y(j)]ᵀ [P1 P2 P3; ∗ P4 P5; ∗ ∗ P6] [y(kr⁻); Σ_{j=kr⁻−ρM}^{kr⁻−ρm−1} y(j); Σ_{j=kr⁻−αM}^{kr⁻−αm−1} y(j)]
+ Σ_{i=kr⁻−σ(kr⁻)}^{kr⁻−1} xᵀ(i) Q7 x(i) + Σ_{j=−σM+1}^{−σm} Σ_{i=kr⁻+j}^{kr⁻−1} xᵀ(i) Q7 x(i)
+ Σ_{j=kr⁻−ρ(kr⁻)}^{kr⁻−1} yᵀ(j) P7 y(j) + Σ_{i=−ρM+1}^{−ρm} Σ_{j=kr⁻+i}^{kr⁻−1} yᵀ(j) P7 y(j)
+ Σ_{j=kr⁻−α(kr⁻)}^{kr⁻−1} yᵀ(j) P8 y(j) + Σ_{i=−αM+1}^{−αm} Σ_{j=kr⁻+i}^{kr⁻−1} yᵀ(j) P8 y(j)
+ Σ_{i=kr⁻−β(kr⁻)}^{kr⁻−1} xᵀ(i) Q8 x(i) + Σ_{j=−βM+1}^{−βm} Σ_{i=kr⁻+j}^{kr⁻−1} xᵀ(i) Q8 x(i)
+ σ1 Σ_{j=−σM}^{−σm−1} Σ_{i=kr⁻+j}^{kr⁻−1} [x(i); μ(i)]ᵀ [O1 0; 0 O2] [x(i); μ(i)]
+ ρ1 Σ_{j=−ρM}^{−ρm−1} Σ_{i=kr⁻+j}^{kr⁻−1} [y(i); ν(i)]ᵀ [O3 0; 0 O4] [y(i); ν(i)]
+ β1 Σ_{j=−βM}^{−βm−1} Σ_{i=kr⁻+j}^{kr⁻−1} [x(i); μ(i)]ᵀ [Z1 0; 0 Z2] [x(i); μ(i)]
+ α1 Σ_{i=−αM}^{−αm−1} Σ_{j=kr⁻+i}^{kr⁻−1} [y(j); ν(j)]ᵀ [Z3 0; 0 Z4] [y(j); ν(j)],
so that, at the impulse instants,
V(kr, x(kr), y(kr)) ≤ (1 + Kr)² (1 + ωr)² V(kr⁻, x(kr⁻), y(kr⁻)).   (45)

By applying Eqs. (44) and (45) successively on each interval [kr, kr+1), the following estimates are obtained.

For k ∈ [k0, k1),
V(k, x(k), y(k)) ≤ (1 − η)^{k − k0} V(k0, x(k0), y(k0)),
and
V(k1, x(k1), y(k1)) ≤ (1 + K1)² (1 + ω1)² V(k1⁻, x(k1⁻), y(k1⁻)).

For k ∈ [k1, k2),
V(k, x(k), y(k)) ≤ (1 − η)^{k − k1} V(k1, x(k1), y(k1)) ≤ (1 + K1)² (1 + ω1)² (1 − η)^{k − k0} V(k0, x(k0), y(k0)),
and
V(k2, x(k2), y(k2)) ≤ (1 + K2)² (1 + ω2)² V(k2⁻, x(k2⁻), y(k2⁻))
≤ (1 + K1)² (1 + K2)² (1 + ω1)² (1 + ω2)² (1 − η)^{k2 − k0} V(k0⁻, x(k0⁻), y(k0⁻)).

For k ∈ [k2, k3),
V(k, x(k), y(k)) ≤ (1 − η)^{k − k2} V(k2, x(k2), y(k2)) ≤ (1 + K1)² (1 + K2)² (1 + ω1)² (1 + ω2)² (1 − η)^{k − k0} V(k0, x(k0), y(k0)),
and
V(k3, x(k3), y(k3)) ≤ (1 + K3)² (1 + ω3)² V(k3⁻, x(k3⁻), y(k3⁻))
≤ (1 + K1)² (1 + K2)² (1 + K3)² (1 + ω1)² (1 + ω2)² (1 + ω3)² (1 − η)^{k3 − k0} V(k0⁻, x(k0⁻), y(k0⁻)).

Therefore, by induction, for any k ∈ [kr, kr+1) (r = 0, 1, 2, . . .) we obtain
V(k, x(k), y(k)) ≤ (1 − η)^{k − kr} V(kr, x(kr), y(kr))
≤ Π_{i=1}^{r} (1 + Ki)² (1 + ωi)² (1 − η)^{k} V(k0, x(k0), y(k0))
= V(k0, x(k0), y(k0)) exp( 2 Σ_{i=1}^{m} ln((1 + Ki)(1 + ωi)) + k ln(1 − η) )
≤ V(k0, x(k0), y(k0)) exp(ϑ(k)).
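The combined decay estimate V(k) ≤ V(k0) Π(1 + Ki)²(1 + ωi)²(1 − η)^k can be checked numerically. The values of η, Ki, ωi below are hypothetical, chosen only to illustrate that the finite impulse amplification is eventually dominated by the geometric decay between impulses:

```python
import math

# Hypothetical decay rate and impulse gains (not taken from the paper's examples).
eta = 0.1              # per-step decay rate, 0 < eta <= 1
K = [0.1, 0.1, 0.1]    # impulse gains K_i
w = [0.1, 0.1, 0.1]    # impulse gains omega_i

# Amplification accumulated over all impulses: prod (1+K_i)^2 (1+omega_i)^2
amp = 1.0
for Ki, wi in zip(K, w):
    amp *= (1 + Ki) ** 2 * (1 + wi) ** 2

def bound(k):
    """Upper bound on V(k)/V(k0): amp * (1 - eta)^k."""
    return amp * (1 - eta) ** k

# The exponent theta(k) = sum ln(1+K_i)^2(1+omega_i)^2 + k*ln(1-eta)
# tends to -infinity as k grows, so the bound decays to zero.
print(bound(0), bound(50), bound(100))
```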
Remark 3.3 In this paper, some zero equalities and the reciprocally convex combination technique are employed. The convex combination lemma, which is closely related to Jensen's inequality, helps to reduce the number of decision variables in the LMIs, while the zero equalities help to reduce the conservatism of the stability criterion compared with the existing literature.

Remark 3.4 In [22], Guo et al. delivered an exponential stability analysis of discrete-time BAM neural networks with uncertain parameters, where the uncertainties are taken in linear fractional form. In [48], by the aid of a novel Lyapunov–Krasovskii functional, exponential stability is investigated for BAM neural networks in the usual robust setting. Different from the above literature, the uncertain parameters considered in this work are assumed to be of randomly occurring type. With this formulation of the addressed BAM neural networks, the allowable upper bounds of the discrete time delays are larger than those obtained in the above-mentioned literature [22,48].
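The reciprocally convex combination bound referred to above states, in its simplest scalar form, that if [R S; S R] ⪰ 0 then x²R/a + y²R/(1 − a) ≥ x²R + 2Sxy + y²R for all a ∈ (0, 1). A brute-force numeric check (the grid values below are arbitrary):

```python
# Scalar instance of the reciprocally convex combination lemma:
# for R = 1 and |S| <= 1 (so that [[R, S], [S, R]] is positive semidefinite),
# x^2/a + y^2/(1-a) >= x^2 + 2*S*x*y + y^2 for every a in (0, 1).
R, S = 1.0, 0.5

ok = True
for ia in range(1, 20):            # a = 0.05, 0.10, ..., 0.95
    a = ia / 20
    for x in [-2.0, -0.5, 0.3, 1.7]:
        for y in [-1.1, 0.0, 0.8, 2.4]:
            lhs = x * x / a + y * y / (1 - a)
            rhs = x * x + 2 * S * x * y + y * y
            ok = ok and (lhs >= rhs - 1e-12)
print(ok)  # True: the pooled quadratic form never exceeds the weighted sum
```

The minimum of x²/a + y²/(1 − a) over a is (|x| + |y|)², so the inequality reduces to 2|x||y| ≥ 2Sxy, which holds whenever |S| ≤ 1.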
4 Randomly occurring uncertain neural networks

The following neural networks (46) with randomly occurring uncertain parameters extend NNs (3). The remodified neural networks of (3) are as follows:
x(k + 1) = (L + ι(k)ΔL(k)) x(k − σ(k)) + (M + π(k)ΔM(k)) f(y(k)) + (N + υ(k)ΔN(k)) g(y(k − α(k))),
x(kr) = x(kr⁻) + er x(kr⁻),
y(k + 1) = (A + ῐ(k)ΔA(k)) y(k − ρ(k)) + (B + π̆(k)ΔB(k)) f̂(x(k)) + (C + ῠ(k)ΔC(k)) ĝ(x(k − β(k))),
y(kr) = y(kr⁻) + êr y(kr⁻),   (46)
where ΔL(k), ΔM(k), ΔN(k), ΔA(k), ΔB(k), ΔC(k) are the uncertain parameters, defined as
[ΔL(k), ΔM(k), ΔN(k), ΔA(k), ΔB(k), ΔC(k)] = X J(k) [Wl, Wm, Wn, Wa, Wb, Wc].   (47)
Here X, Wl, Wm, Wn, Wa, Wb, Wc are known constant matrices and J(k) is an unknown time-varying matrix with
Jᵀ(k) J(k) ≤ I,   (48)
where I is the identity matrix. The quantities ι(k), π(k), υ(k), ῐ(k), π̆(k), ῠ(k) are stochastic variables modeling the random nature of the uncertain parameters. They are mutually independent Bernoulli-distributed white noise sequences taking the values 0 and 1, with probabilities
Prob{ι(k) = 1} = ι, Prob{ι(k) = 0} = 1 − ι,
Prob{π(k) = 1} = π, Prob{π(k) = 0} = 1 − π,
Prob{υ(k) = 1} = υ, Prob{υ(k) = 0} = 1 − υ,
Prob{ῐ(k) = 1} = ῐ, Prob{ῐ(k) = 0} = 1 − ῐ,
Prob{π̆(k) = 1} = π̆, Prob{π̆(k) = 0} = 1 − π̆,
Prob{ῠ(k) = 1} = ῠ, Prob{ῠ(k) = 0} = 1 − ῠ.
With the help of (47) and (48), system (46) can be written as
x(k + 1) = L x(k − σ(k)) + M f(y(k)) + N g(y(k − α(k))) + X ς(k),
ς(k) = J(k) ℘(k),
℘(k) = ι(k) Wl x(k − σ(k)) + π(k) Wm f(y(k)) + υ(k) Wn g(y(k − α(k))),
x(s) = φ(s), s = −ω, −ω + 1, . . . , 0,
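The randomly occurring uncertainty can be sketched in a few lines of Python: each Bernoulli switch ι(k), π(k), υ(k) is drawn independently with its given success probability, and the realized uncertainty enters only when the switch equals 1. The probabilities below are those used later in Example 5.2; the code is an illustration, not the paper's implementation:

```python
import random

random.seed(0)

# Success probabilities of the Bernoulli switches (values from Example 5.2).
p_iota, p_pi, p_upsilon = 0.7, 0.5, 0.6

def bernoulli(p):
    """One Bernoulli(p) sample: 1 with probability p, else 0."""
    return 1 if random.random() < p else 0

# Empirical frequencies over many steps approach the probabilities,
# reflecting E[iota(k)] = iota, E[pi(k)] = pi, and so on.
n = 10000
freq_iota = sum(bernoulli(p_iota) for _ in range(n)) / n
freq_pi = sum(bernoulli(p_pi) for _ in range(n)) / n
print(freq_iota, freq_pi)  # close to 0.7 and 0.5
```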
x(kr) = x(kr⁻) + er x(kr⁻),
y(k + 1) = A y(k − ρ(k)) + B f̂(x(k)) + C ĝ(x(k − β(k))) + X ς*(k),
ς*(k) = J(k) ℘*(k),
℘*(k) = ῐ(k) Wa y(k − ρ(k)) + π̆(k) Wb f̂(x(k)) + ῠ(k) Wc ĝ(x(k − β(k))),
y(t) = τ(t), t = −δ, −δ + 1, . . . , 0,
y(kr) = y(kr⁻) + êr y(kr⁻).   (49)

Theorem 4.1 Suppose that Condition 2 holds. NNs (49) are asymptotically stable, if ∃ symmetric matrices Qi > 0, Pi > 0, Hi > 0, Ti > 0 (i = 1 to 8), O1 > 0, O2 > 0, O3 > 0, O4 > 0, Z1 > 0, Z2 > 0, Z3 > 0, Z4 > 0, diagonal matrices ζ1 > 0, ζ2 > 0, ζ3 > 0, ζ4 > 0, scalars ε, ε*, matrices Ri (i = 1 to 4), Rj (j = 5 to 8) and Y1, Y2, Y3, Y4 of appropriate dimensions, and a positive constant η (η ≤ 1), such that the following LMIs hold:
(i) Σ_{i=1}^{m} ln(1 + Ki)²(1 + ωi)² + k ln(1 − η) ≤ ϑ(k) for any k ∈ [kr, kr+1), with lim_{k→+∞} ϑ(k) = −∞,   (50)
(ii) [H1 H2; H3 H4] > 0,   (51)
(iii) [H5 H6; H7 H8] > 0,   (52)
(iv) [T1 T2; T3 T4] > 0,   (53)
(v) [T5 T6; T7 T8] > 0,   (54)
(vi) [Σ_{r=1}^{12} φr   ε #ᵀ; ∗   −εI] < 0,   (55)
(vii) [Σ_{r=1}^{12} θr   ε* #1ᵀ; ∗   −ε*I] < 0,   (56)
where φ12 = −ε e15 e15ᵀ, θ12 = −ε* e15* e15*ᵀ,
# = [0 0 ιWl 0 0 0 0 0 πWm υWn 0 0 0 0 0],
#1 = [0 0 ῐWa 0 0 0 0 0 π̆Wb ῠWc 0 0 0 0 0].
Proof Replacing (3) with (49) and taking the mathematical expectation in the forward difference of the Lyapunov–Krasovskii functional, we acquire
E{ΔV(k)} ≤ E{ϒ̃ᵀ(k) (Σ_{r=1}^{11} φr) ϒ̃(k)} + E{Φ̃ᵀ(k) (Σ_{r=1}^{11} θr) Φ̃(k)},   (57)
where ϒ̃ᵀ(k) = [ϒᵀ(k)  ςᵀ(k)] and Φ̃ᵀ(k) = [Φᵀ(k)  ς*ᵀ(k)]. For any scalars ε > 0 and ε* > 0, condition (48) implies
ε [ϒ̃ᵀ(k) #ᵀ # ϒ̃(k) − ςᵀ(k) ς(k)] ≥ 0,   (58)
ε* [Φ̃ᵀ(k) #1ᵀ #1 Φ̃(k) − ς*ᵀ(k) ς*(k)] ≥ 0.   (59)
Adding (58) and (59) to (57) and applying the S-procedure, we have
E{ΔV(k)} ≤ E{ϒ̃ᵀ(k) (Σ_{r=1}^{12} φr + ε #ᵀ #) ϒ̃(k)} + E{Φ̃ᵀ(k) (Σ_{r=1}^{12} θr + ε* #1ᵀ #1) Φ̃(k)} < 0.
By the utilization of Lemma 2.2, LMIs (vi) and (vii) guarantee the above inequality, which completes the proof of Theorem 4.1.
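Lemma 2.2 is the Schur complement lemma: a symmetric block matrix [A B; Bᵀ C] is negative definite iff A < 0 and C − Bᵀ A⁻¹ B < 0. A small numeric illustration for symmetric 2 × 2 matrices with scalar blocks (the sample numbers are arbitrary):

```python
# Schur complement check for a symmetric 2x2 matrix [[a, b], [b, c]]:
# the matrix is negative definite iff a < 0 and c - b*b/a < 0.

def neg_def_2x2(a, b, c):
    """Negative definiteness via leading principal minors: a < 0, a*c - b*b > 0."""
    return a < 0 and a * c - b * b > 0

def neg_def_by_schur(a, b, c):
    """Same test phrased through the Schur complement of the (1,1) block."""
    return a < 0 and c - b * b / a < 0

# Both characterizations agree on a few sample matrices.
samples = [(-2.0, 1.0, -3.0), (-1.0, 2.0, -1.0), (1.0, 0.0, -1.0), (-5.0, 0.5, -0.1)]
for a, b, c in samples:
    print(neg_def_2x2(a, b, c), neg_def_by_schur(a, b, c))
```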
Remark 4.2 If the leakage terms in NNs (3) disappear (i.e., σ(k) = ρ(k) = 0), then we get
x(k + 1) = L x(k) + M f (y(k)) + N g(y(k − α(k))) x(kr ) = x(kr− ) + er x(kr− ) y(k + 1) = Ay(k) + B fˆ(x(k)) + C g(x(k ˆ − β(k))) y(kr ) = y(kr− ) + eˆr y(kr− ).
(60)
The following Corollary helps to check the asymptotic stability of the concerned NNs (60).
Corollary 4.3 Suppose that Condition 2 holds. NNs (60) are asymptotically stable, if ∃ symmetric matrices Qi > 0 (i = 1 to 8), Pj > 0 (j = 1 to 8), Z1 > 0, Z2 > 0, Z3 > 0, Z4 > 0, Ti > 0 (i = 1 to 8), diagonal matrices ζ1 > 0, ζ2 > 0, ζ3 > 0, ζ4 > 0, matrices Ri (i = 1 to 4), Rj (j = 5 to 8) and Y1, Y2, Y3, Y4 of appropriate dimensions, and a positive constant η (η ≤ 1), such that the following hold:
(i) Σ_{i=1}^{m} ln(1 + Ki)²(1 + ωi)² + k ln(1 − η) ≤ ϑ(k) for any k ∈ [kr, kr+1), with lim_{k→+∞} ϑ(k) = −∞,   (61)
(ii) [T1 T2; T3 T4] > 0,   (62)
(iii) [T5 T6; T7 T8] > 0,   (63)
(iv) Ξ = Σ_{r=1}^{5} ℵr < 0,   (64)
(v) χ = Σ_{r=1}^{5} h̄r < 0,   (65)
where
ℵ1 = e5ᵀQ1e5 + 2e5ᵀQ1e1 + 2e5ᵀQ2(e3 − e4) + 2e5ᵀQ2(e8 + e9) + 2e1ᵀQ2(e3 − e4) + (e3 − e4)ᵀQ3(e3 − e4) + 2(e3 − e4)ᵀQ3(e8 + e9),
ℵ2 = (β1 + 1) e1ᵀQ4e1 − e3ᵀQ4e3,
ℵ3 = β1² e1ᵀZ1e1 + β1² e5ᵀZ2e5 + β1 e3ᵀ(R5 − R7)e3 − β1 e4ᵀR5e4 + β1 e2ᵀR7e2 − e8ᵀZ1e8 − 2e8ᵀR5(e3 − e4) − 2e8ᵀT1e9 − 2e8ᵀT2(e2 − e3) − (e3 − e4)ᵀ(Z2 + R5)(e3 − e4) − 2(e3 − e4)ᵀT3e9 − 2(e3 − e4)ᵀT3(e2 − e3) − e9ᵀZ1e9 − 2e9ᵀR7(e2 − e3) − (e2 − e3)ᵀ(Z2 + R7)(e2 − e3),
ℵ4 = 2(e5ᵀY1ᵀ + e1ᵀY2ᵀ)(Le1 + Me6 + Ne7 − e1 − e5),
ℵ5 = −e1ᵀF1ζ1e1 + 2e1ᵀF2ζ1e6 − e6ᵀζ1e6 − e3ᵀG1ζ2e3 + 2e3ᵀG2ζ2e7 − e7ᵀζ2e7,
h̄1 = e5*ᵀP1e5* + 2e5*ᵀP1e1* + 2e5*ᵀP2(e3* − e4*) + 2e5*ᵀP2(e8* + e9*) + 2e1*ᵀP2(e3* − e4*) + (e3* − e4*)ᵀP3(e3* − e4*) + 2(e3* − e4*)ᵀP3(e8* + e9*),
h̄2 = (α1 + 1) e1*ᵀP4e1* − e3*ᵀP4e3*,
h̄3 = α1² e1*ᵀZ3e1* + α1² e5*ᵀZ4e5* + α1 e3*ᵀ(R6 − R8)e3* − α1 e4*ᵀR6e4* + α1 e2*ᵀR8e2* − e8*ᵀZ3e8* − 2e8*ᵀR6(e3* − e4*) − 2e8*ᵀT5e9* − 2e8*ᵀT6(e2* − e3*) − (e3* − e4*)ᵀ(Z3 + R6)(e3* − e4*) − 2(e3* − e4*)ᵀT7e9* − 2(e3* − e4*)ᵀT7(e2* − e3*) − e9*ᵀZ3e9* − 2e9*ᵀR8(e2* − e3*) − (e2* − e3*)ᵀ(Z4 + R8)(e2* − e3*),
h̄4 = 2(e5*ᵀY3ᵀ + e1*ᵀY4ᵀ)(Ae1* + Be6* + Ce7* − e1* − e5*),
h̄5 = −e1*ᵀD1ζ3e1* + 2e1*ᵀD2ζ3e6* − e6*ᵀζ3e6* − e3*ᵀE1ζ4e3* + 2e3*ᵀE2ζ4e7* − e7*ᵀζ4e7*.

Proof The proof of this corollary follows from Theorem 3.1.
Remark 4.4 If both the leakage terms and the impulses in NNs (3) disappear (i.e., σ(k) = ρ(k) = 0 and the impulsive effects are removed), then we get
x(k + 1) = L x(k) + M f(y(k)) + N g(y(k − α(k))),
y(k + 1) = A y(k) + B f̂(x(k)) + C ĝ(x(k − β(k))).   (66)
The following corollary helps to check the asymptotic stability of the neural networks system (66).

Corollary 4.5 Suppose that Condition 2 holds. NNs (66) are asymptotically stable, if ∃ symmetric matrices Qi > 0 (i = 1 to 8), Pj > 0 (j = 1 to 8), Z1 > 0, Z2 > 0, Z3 > 0, Z4 > 0, Ti > 0 (i = 1 to 8), diagonal matrices ζ1 > 0, ζ2 > 0, ζ3 > 0, ζ4 > 0, and matrices Ri (i = 1 to 4), Rj (j = 5 to 8) and Y1, Y2, Y3, Y4 of appropriate dimensions such that the following LMIs hold:
(i) [T1 T2; T3 T4] > 0,   (67)
(ii) [T5 T6; T7 T8] > 0,   (68)
(iii) Ξ = Σ_{r=1}^{5} ℵr < 0,   (69)
(iv) χ = Σ_{r=1}^{5} h̄r < 0,   (70)
where ℵr and h̄r (r = 1 to 5) are as defined in Corollary 4.3.

Proof The proof of Corollary 4.5 follows from Theorem 3.1 along the same lines as Corollary 4.3 and is therefore omitted.
5 Illustrative examples

In this section, we present numerical examples with simulations to demonstrate the validity and effectiveness of our theoretical results.

Example 5.1 Consider a two-dimensional impulsive discrete-time BAM neural network with time-varying leakage delays. The parameters are as follows:
A = [0.09 0; 0 0.08], B = [0.5 0.6; 0.9 0.5], C = [0.01 0.6; 0.5 −0.55],
L = [0.05 0; 0 0.07], M = [0.4 0.8; 0.9 1.05], N = [1.05 2.86; 0.59 0.5],
Im = −1, Jm = −2, σ(k) = σ = 2, ρ(k) = ρ = 3.

The activation functions are taken as
f(y(k)) = g(y(k)) = [tanh(−0.2 y1); tanh(−0.5 y2)],
f̂(x(k)) = ĝ(x(k)) = [tanh(−0.9 x1); tanh(−0.9 x2)].

Our main aim in this example is to estimate the maximum allowable upper bounds αM, βM for given lower bounds αm, βm. Using the MATLAB LMI Toolbox, one can easily obtain a feasible solution for any time delays satisfying 0 < α(k) ≤ αM and 0 < β(k) ≤ βM, where αM and βM may be any large finite values. For example, take αm = βm = 13, αM = βM = 17, f1⁺ = 2, f1⁻ = −4, f2⁺ = 1, f2⁻ = −1, g1⁺ = 0.2, g1⁻ = 0.2, g2⁺ = 0.2, g2⁻ = 0.2, u1⁺ = 4, u1⁻ = −0.2, u2⁺ = 4, u2⁻ = 0.2, d1⁺ = 3, d1⁻ = 0.2, d2⁺ = 3, d2⁻ = −0.2, with
F1 = [−8 0; 0 −1], F2 = [−1 0; 0 0], G1 = [0.04 0; 0 0.04], G2 = [0.2 0; 0 0.2],
D1 = [0.6 0; 0 −0.6], D2 = [1.6 0; 0 1.4], E1 = [−0.8 0; 0 0.8], E2 = [1.9 0; 0 2.1].

Now we obtain the feasible solution as follows:
P1 = [292.8888 −1.6270; −1.6270 291.3657], P2 = [172.8637 −1.4644; −1.4644 170.3510],
P3 = [272.0117 −1.6260; −1.6260 270.1301], P4 = [374.8426 −3.5484; −3.5484 367.0428],
P5 = [271.1228 −1.8430; −1.8430 269.4223], P6 = [911.9019 −6.1157; −6.1157 903.5726],
P7 = [537.5761 5.3396; 5.3396 368.9104], P8 = [681.5702 −28.7999; −28.7999 365.0425],
Q1 = [288.9321 6.4160; 6.4160 332.0618], Q2 = [268.5954 9.4023; 9.4023 319.1253],
Q3 = [339.6602 −38.3858; −38.3858 359.9275], Q4 = [265.0284 −0.2672; −0.2672 303.7453],
Q5 = [159.4793 0.4181; 0.4181 230.7959], Q6 = [962.0885 −149.2273; −149.2273 481.8919],
Q7 = [425.3142 −35.9647; −35.9647 474.0118], Q8 = [110.9821 −54.9147; −54.9147 197.9181],
O1 = [144.9491 −4.2889; −4.2889 139.9814], O2 = [50.7973 0.5655; 0.5655 54.6328],
O3 = [269.0453 −4.0024; −4.0024 289.3900], O4 = [51.5348 1.7405; 1.7405 50.1377],
H1 = [145.1633 0.1545; 0.1545 150.1215], H2 = [702.6359 80.9577; 80.9577 406.3829],
H3 = [380.3071 0.0000; 0.0000 380.3071], H4 = [374.7986 −14.0906; −14.0906 360.5113],
H5 = [150.5293 −2.0946; −2.0946 151.8571], H6 = [380.5555 −14.4297; −14.4297 394.1536],
H7 = [298.7321 −24.2027; −24.2027 331.7226], H8 = [298.7321 −24.2027; −24.2027 331.7226],
Z1 = [112.3577 20.6775; 20.6775 70.0531], Z2 = [146.5759 38.9148; 38.9148 74.3640],
Z3 = [1.4291 −0.5006; −0.5006 0.8734], Z4 = [350.1442 −47.7362; −47.7362 215.5953],
T1 = [269.6468 −32.0684; −32.0684 245.0229], T2 = [264.3967 −152.9522; −152.9522 370.2717],
T3 = [380.5093 0.1144; 0.1144 380.8204], T4 = [220.5596 −59.3386; −59.3386 170.2364],
T5 = [320.0325 −64.8638; −64.8638 270.5913], T6 = [375.1138 −6.9526; −6.9526 374.0131],
T7 = [379.6538 −0.2340; −0.2340 379.6389], T8 = [402.5238 −9.3036; −9.3036 306.2595],
R1 = [89.5992 −0.0391; −0.0391 93.3572], R2 = [87.6563 −1.9292; −1.9292 90.0645],
R3 = [266.0215 −152.5991; −152.5991 380.5480], R4 = [290.5417 58.3600; 58.3600 275.6008],
R5 = [784.4856 153.3889; 153.3889 659.7415], R6 = [715.2169 69.9024; 69.9024 518.2362],
R7 = [271.3480 107.7069; 107.7069 121.4036], R8 = [275.6604 144.2375; 144.2375 169.5516],
Fig. 1 State response of (3) with impulsive and non-impulsive effects
Y1 = [26.1885 −40.4006; −40.4006 62.3269], Y2 = [27.6678 −42.2200; −42.2200 64.4275],
Y3 = [239.2988 122.3856; 122.3856 174.2126], Y4 = [239.2988 122.3856; 122.3856 174.2126],
ζ1 = [79.9344 0; 0 205.9890], ζ2 = [63.4866 0; 0 254.7480],
ζ3 = [170.4501 0; 0 216.2830], ζ4 = [231.7904 0; 0 165.9446].
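System (3) with the Example 5.1 parameters can also be simulated directly. The sketch below is ours, not the paper's code: the impulses are omitted, the time-varying delays are fixed at representative values inside the allowed bounds (σ = 2, ρ = 3, α = β = 15), the constant initial history is our choice, and g = f and ĝ = f̂ as in the example. The check only confirms that the trajectories stay bounded, consistent with the stable behavior shown in Fig. 1:

```python
import math

# Parameters of Example 5.1 (2-dimensional case).
L = [[0.05, 0.0], [0.0, 0.07]]
M = [[0.4, 0.8], [0.9, 1.05]]
N = [[1.05, 2.86], [0.59, 0.5]]
A = [[0.09, 0.0], [0.0, 0.08]]
B = [[0.5, 0.6], [0.9, 0.5]]
C = [[0.01, 0.6], [0.5, -0.55]]

def mv(Mat, v):
    """2x2 matrix-vector product."""
    return [Mat[0][0] * v[0] + Mat[0][1] * v[1],
            Mat[1][0] * v[0] + Mat[1][1] * v[1]]

f = lambda y: [math.tanh(-0.2 * y[0]), math.tanh(-0.5 * y[1])]   # f = g
fh = lambda x: [math.tanh(-0.9 * x[0]), math.tanh(-0.9 * x[1])]  # f-hat = g-hat

sigma, rho, alpha, beta = 2, 3, 15, 15   # leakage and discrete delays (fixed here)
hist = max(sigma, rho, alpha, beta) + 1
xs = [[0.8, -0.6]] * hist                # constant initial history (our choice)
ys = [[0.5, -0.9]] * hist

for k in range(200):
    x_new = [a + b + c for a, b, c in zip(
        mv(L, xs[-1 - sigma]), mv(M, f(ys[-1])), mv(N, f(ys[-1 - alpha])))]
    y_new = [a + b + c for a, b, c in zip(
        mv(A, ys[-1 - rho]), mv(B, fh(xs[-1])), mv(C, fh(xs[-1 - beta])))]
    xs.append(x_new)
    ys.append(y_new)

peak = max(abs(v) for st in xs + ys for v in st)
print(peak)  # the trajectories remain bounded
```

Because tanh saturates at 1 and the self-feedback matrices L and A are small, a crude fixed-point bound already guarantees that no component can exceed single digits here.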
Based on Theorem 3.1, we can conclude that the neural network system (3) with the above parameters is asymptotically stable, and the state trajectories x1(k), x2(k), y1(k), y2(k) of the above impulsive discrete-time BAM neural network with time-varying leakage delays are depicted in Fig. 1.

Example 5.2 Consider a two-dimensional impulsive discrete-time BAM neural network with randomly occurring uncertainties and time-varying leakage delays. The following parameters are used to check the feasibility of the neural network system (46):
A = [0.09 0; 0 0.08], B = [0.5 0.6; 0.9 0.5], C = [0.01 0.6; 0.5 −0.55],
L = [0.05 0; 0 0.07], M = [0.4 0.8; 0.9 1.05], N = [1.05 2.86; 0.59 0.5],
Wl = [0.5 0.6; 0.6 0.8], Wm = [0.6 0.7; 0.8 0.9], Wn = [0.1 0.2; 0.3 0.4],
Wa = [0.9 0.4; 0.6 0.3], Wb = [0.8 0.7; 0.6 0.5], Wc = [0.8 0.9; 0.6 0.7],
X = [0.8 0; 0 0.8], I = [1 0; 0 1],
Im = −2, Jm = −2, σ(k) = σ = 5, ρ(k) = ρ = 5,
π = 0.5, π̆ = 0.3, υ = 0.6, ι = 0.7, ῠ = 0.8, ῐ = 0.9.

The activation functions are taken as
f(y(k)) = g(y(k)) = [exp(−0.99 y1); exp(−0.99 y2)],
f̂(x(k)) = ĝ(x(k)) = [exp(−0.99 x1); exp(−0.99 x2)].

Our main aim in this example is to estimate the maximum allowable upper bounds αM, βM for given lower bounds αm, βm. Using the MATLAB LMI Toolbox, one can easily obtain a feasible solution for any time delays satisfying 0 < α(k) ≤ αM and 0 < β(k) ≤ βM, where αM and βM may be any large finite values. For example, take αm = βm = 7, αM = βM = 13, f1⁺ = 2, f1⁻ = −4, f2⁺ = 1, f2⁻ = −1, g1⁺ = 0.2, g1⁻ = 0.2, g2⁺ = 0.2, g2⁻ = 0.2, u1⁺ = 4, u1⁻ = −0.2, u2⁺ = 4, u2⁻ = 0.2, d1⁺ = 3, d1⁻ = 0.2, d2⁺ = 3, d2⁻ = −0.2, with
F1 = [−8 0; 0 −1], F2 = [−1 0; 0 0], G1 = [0.04 0; 0 0.04], G2 = [0.2 0; 0 0.2],
D1 = [0.6 0; 0 −0.6], D2 = [1.6 0; 0 1.4], E1 = [−0.8 0; 0 0.8], E2 = [1.9 0; 0 2.1].
Now we obtain the feasible solution as follows:
P1 = [204.5727 −1.1637; −1.1637 203.4828], P2 = [189.1772 −1.3159; −1.3159 187.9568],
P3 = [189.8024 −1.1654; −1.1654 188.4585], P4 = [262.5994 −2.4950; −2.4950 257.1018],
P5 = [119.8775 −1.0533; −1.0533 118.0974], P6 = [635.0379 −4.3866; −4.3866 629.1128],
P7 = [376.5259 3.7361; 3.7361 258.4162], P8 = [476.0092 −21.5589; −21.5589 252.0999],
Q1 = [202.0052 4.6153; 4.6153 232.2483], Q2 = [187.6723 6.7504; 6.7504 223.0553],
Q3 = [185.2238 0.0350; 0.0350 212.2623], Q4 = [237.7200 −27.1031; −27.1031 252.0790],
Q5 = [110.9435 0.5117; 0.5117 160.3469], Q6 = [671.6334 −102.7249; −102.7249 336.8742],
Q7 = [297.9458 −25.1869; −25.1869 332.1454], Q8 = [76.5089 −38.4080; −38.4080 137.3497],
O1 = [102.4601 −2.5851; −2.5851 97.4345], O2 = [36.1348 0.4562; 0.4562 38.0656],
O3 = [180.3395 −2.5069; −2.5069 193.8166], O4 = [36.0224 1.2426; 1.2426 35.1141],
H1 = [102.8293 0.2484; 0.2484 104.7745], H2 = [489.9939 56.2302; 56.2302 284.6589],
H3 = [266.4602 0.0000; 0.0000 266.4602], H4 = [262.6814 −9.8082; −9.8082 252.5437],
H5 = [105.3490 −1.4314; −1.4314 106.3900], H6 = [266.6557 −10.0629; −10.0629 276.2196],
H7 = [209.1056 −16.6238; −16.6238 233.6975], H8 = [209.1056 −16.6238; −16.6238 233.6975],
Z1 = [77.7712 15.0609; 15.0609 49.0337], Z2 = [101.7746 27.7696; 27.7696 52.0144],
Z3 = [993.6567 −341.6832; −341.6832 598.1055], Z4 = [242.1107 −29.0098; −29.0098 144.8830],
T1 = [189.0556 −22.3030; −22.3030 171.2480], T2 = [185.1430 −107.2418; −107.2418 259.5594],
T3 = [266.6014 0.0799; 0.0799 266.8213], T4 = [154.5714 −41.3974; −41.3974 118.9310],
T5 = [224.3986 −46.1427; −46.1427 190.3638], T6 = [262.8330 −4.8681; −4.8681 262.0440],
T7 = [266.0039 −0.1635; −0.1635 265.9902], T8 = [277.9033 −3.6029; −3.6029 210.9432],
R1 = [63.6434 0.0822; 0.0822 65.0703], R2 = [61.3013 −1.3008; −1.3008 63.2007],
R3 = [186.1789 −106.5294; −106.5294 266.4642], R4 = [203.3079 41.6707; 41.6707 192.2140],
R5 = [548.9885 108.0646; 108.0646 460.9957], R6 = [492.9769 55.8313; 55.8313 354.9638],
R7 = [187.7680 75.9434; 75.9434 85.2374], R8 = [193.2514 101.1284; 101.1284 117.8250],
Y1 = [26.8546 −41.4341; −41.4341 63.9299], Y2 = [27.9808 −42.6842; −42.6842 65.1151],
Y3 = [81.4225 24.2249; 24.2249 67.4518], Y4 = [81.4225 24.2249; 24.2249 67.4518],
ζ1 = [53.9465 0; 0 142.5026], ζ2 = [42.6444 0; 0 173.4502],
ζ3 = [119.3567 0; 0 151.3998], ζ4 = [162.0177 0; 0 115.4791],
ε = 222.6251, ε* = 215.3682.
According to Theorem 4.1, we can conclude that the neural network system (46) with the above parameters is asymptotically stable, and the state trajectories x1(k), x2(k), y1(k), y2(k) of the above impulsive discrete-time BAM neural network with time-varying leakage delays and randomly occurring parameter uncertainties are depicted in Fig. 2.

Remark 5.3 If the impulses disappear, then the neural network system (3) reduces to
x(k + 1) = L x(k − σ(k)) + M f(y(k)) + N g(y(k − α(k))),
y(k + 1) = A y(k − ρ(k)) + B f̂(x(k)) + C ĝ(x(k − β(k))).   (71)
Fig. 2 State trajectories of (46) with impulsive and non-impulsive effects

Fig. 3 State responses of (71)

The three diagrams in Fig. 3 depict the state trajectories of (71) for different leakage delays. The first diagram represents NNs (71) without leakage delay. The second diagram shows NNs (71) with leakage delay bounds σ(k) = σ = 5 and ρ(k) = ρ = 5; the leakage introduces some disturbances, but the system eventually settles and remains stable. The third diagram depicts that NNs (71) become unstable with σ(k) = σ = 8 and ρ(k) = ρ = 8. From the three state trajectories, we conclude that as the leakage delay grows, the dynamical behavior deteriorates and the system eventually becomes unstable.

6 Conclusions
In this research work, we have explored the robust stability problem for impulsive discrete-time BAM neural networks with randomly occurring uncertainties and time-varying leakage delays. With the help of the Lyapunov–Krasovskii functional technique, novel conditions have been established in terms of LMIs to guarantee the robust stability of the designed discrete-time neural networks; these conditions can be checked efficiently with the MATLAB LMI Toolbox. In addition, the convex combination lemma is used, which helps to reduce the number of decision variables without increasing conservatism. Finally, numerical simulations have been delivered to illustrate the effectiveness of the developed theoretical work. It would be interesting to extend the results proposed in this paper to H∞ control for neural networks with stochastic noise and Markovian jumping parameters; the corresponding results will appear in the near future.
References 1. Chen, P., Tang, X.: Existence and multiplicity of solutions for second-order impulsive differential equations with Dirichlet problems. Appl. Math. Comput. 218, 11775– 11789 (2012) 2. Lu, J., Ho, D., Cao, J., et al.: Exponential synchronization of linearly coupled neural networks with impulsive disturbances. IEEE Trans. Neural Netw. 22(2), 329–336 (2011) 3. Liu, X., Wu, M., Martin, R., Tang, M.: Delay-dependent stability analysis for uncertain neutral systems with timevarying delays. Math. Comput. Simul. 75, 15–27 (2007) 4. Liu, X., Wu, M., Martin, R., Tang, M.: Stability analysis for neutral systems with mixed delays. J. Comput. Appl. Math. 202, 478–497 (2007) 5. Liu, Z., Chen, A., Cao, J., Huang, L.: Existence and global exponential stability of periodic solution for BAM neural networks with periodic coefficients and time-varying delays. IEEE Trans. Circuits Syst. I(50), 1162–1173 (2003) 6. Zhang, H., Ye, R., Cao, J. et al.: Lyapunov functional approach to stability analysis of Riemann–Liouville fractional neural networks with time-varying delays. Asian J. Control (2017). https://doi.org/10.1002/asjc.1675 7. Li, X., Zhang, X., Song, S.: Effect of delayed impulses on input-to-state stability of nonlinear systems. Automatica 76, 378–382 (2017) 8. Li, X., Wu, J.: Stability of nonlinear differential systems with state-dependent delayed impulses. Automatica 64, 63– 69 (2016) 9. Pan, L., Cao, J.: Robust stability for uncertain stochastic neural networks with delays and impulses. Neurocomputing 94, 102–110 (2012) 10. Li, X., Song, S.: Impulsive control for existence, uniqueness andglobal stability of periodic solutions of recurrent neural networks with discrete and continuously distributed delays. IEEE Trans. Neural Netw. Learn. Syst. 24, 868–877 (2013) 11. Li, X., Rakkiyappan, R., Sakthivel, N.: Non-fragile synchronization control for Markovian jumping complex dynamical networks with probabilistic time-varying coupling delay. Asian J. 
Control 17, 1678–1695 (2015) 12. Cao, J., Li, R.: Fixed-time synchronization of delayed memristor-based recurrent neural networks. Sci. China Inf. Sci. 60(3), 032201 (2017)
2591 13. Li, X., Cao, J.: Delay-dependent stability of neural networks of neutral type with time delay in the leakage term. Nonlinearity 23(7), 1709 (2010) 14. Zhao, H., Cao, J.: New conditions for global exponential stability of cellular network with delays. Neural Netw. 18, 1332–1340 (2005) 15. Li, X., Rakkiyappan, R.: Delay-dependent global asymptotic stability criteria for stochastic genetic regulatory networks with Markovian jumping parameters. Appl. Math. Model. 36, 1718–1730 (2012) 16. Zhang, B., Xu, S., Zou, Y.: Improved delay-dependent exponential stability criteria for discrete-time recurrent neural networks with time-varying delays. Neurocomputing 72, 321–330 (2008) 17. Kosko, B.: Bidirectional associative memories. IEEE Trans. Syst. Man Cybern. 18(1), 49–60 (1988) 18. Sree Hari Rao, V., Phaneendra, B.R.M.: Global dynamics of bidirectional associative memory neural networks involving transmission delays and dead zones. Neural Netw. 12(3), 455–465 (1999) 19. Cao, J., Wan, Y.: Matrix measure strategies for stability and synchronization of inertial BAM neural network with time delays. Neural Netw. 53, 165–172 (2014) 20. Wang, L., Zou, X.: Capacity of stable periodic solutions in discrete-time bidirectional associative memory neural networks. IEEE Trans. Circuits Syt. II(51), 315–319 (2004) 21. Mohamad, S.: Global exponential stability in continuoustime and discrete-time delayed bidirectional neural networks. Physica D 159(3–4), 233–251 (2001) 22. Guo, L., Nie, J., Zhang, Y.: Robust exponential stability of stochastic discrete-time BAM neural networks with Markovian jumping parameters and delays. Adv. Mater. Res. 989– 994, 1877–1882 (2014) 23. Gao, M., Cui, B.: Global robust exponential stability of discrete-time interval BAM neural networks with timevarying delays. Appl. Math. Model. 33(3), 1270–1284 (2009) 24. Kosko, B.: Neural Networks and Fuzzy Systems. Prentice Hall, New Delhi (1992) 25. 
Li, X., Fu, X.: Effect of leakage time-varying delay on stability of nonlinear differential systems. J. Frankl. Inst. 350, 1335–1344 (2013) 26. Li, X., Rakkiyappan, R.: Stability results for Takagi–Sugeno fuzzy uncertain BAM neural networks with time delays in the leakage term. Neural Comput. Appl. 22, 203–219 (2013) 27. Gopalsamy, K.: Stability and Oscillations in Delay Differential Equations of Population Dynamics. Kluwer, Dordrecht (1992) 28. Gopalsamy, K.: Leakage delays in BAM. J. Math. Anal. Appl. 325, 1117–1132 (2007) 29. Li, X., Song, S.: Stabilization of delay systems: delaydependent impulsive control. IEEE Trans. Autom. Control 62(1), 406–411 (2017) 30. Li, Y.: Global exponential stability of BAM neural networks with delays and impulses. Chaos Solitons Fractals 24, 279– 285 (2005) 31. Raja, R., Sakthivel, R., Anthoni, S.M.: Stability analysis for discrete-time stochastic neural networks with mixed time delays and impulsive effects. Can. J. Phys. 88(12), 885–898 (2010)
32. Raja, R., Karthick Raja, U., Samidurai, R., Leelamani, A.: Dynamic analysis for discrete-time BAM neural networks with stochastic perturbations and impulses. Neurocomputing 5(1), 39–50 (2014) 33. Li, X., Bohner, M., Wang, C.: Impulsive differential equations: periodic solutions and applications. Automatica 52, 173–178 (2015) 34. Li, X., Cao, J.: An impulsive delay inequality involving unbounded time-varying delay and applications. IEEE Trans. Autom. Control 62(7), 3618–3625 (2017) 35. Lakshmanan, S., Park, Ju H., Jung, H.Y., Balasubramaniam, P.: Design of state estimator for neural networks with leakage, discrete and distributed delays. Appl. Math. Comput. 218, 297–310 (2012) 36. Luo, M., Zhong, S., Wang, R., Kang, W.: Robust stability analysis for discrete-time stochastic neural networks systems with time-varying delays. Appl. Math. Comput. 209, 305–313 (2009) 37. Hou, L., Zhu, H., Zhong, S., Zhang, Y., Zeng, Y.: Less conservative stability criteria for stochastic discrete-time recurrent neural networks with the time-varying delay. Neurocomputing 115, 72–80 (2013) 38. Shi, P., Zhang, Y., Agarwal, R.K.: Stochastic finite-time state estimation for discrete time-delay neural networks with Markovian jumps. Neurocomputing 151, 168–174 (2015) 39. Cao, J., Rakkiyappan, R., Maheswari, K., et al.: Exponential H∞ filtering analysis for discrete-time switched neural networks with random delays using sojourn probabilities. Sci. China Technol. Sci. 59(3), 387–402 (2016) 40. Mathiyalagan, K., Su, H., Sakthivel, R.: Robust stochastic stability of discrete-time Markovian jump neural networks with leakage delay. Z. Naturforsch. 69, 70–80 (2014) 41. Hou, L., Zhu, H.: Stability of stochastic discrete-time neural networks with discrete delays and the leakage delay. Math. Probl. Eng. 2015, 306806 (2015)
42. Jarina Banu, L., Balasubramaniam, P., Ratnavelu, K.: Robust stability analysis for discrete-time uncertain neural networks with leakage time-varying delay. Neurocomputing 151, 808–816 (2015) 43. Liu, M.: Stability analysis of discrete-time recurrent neural networks based on standard neural network models. Neural Comput. Appl. 18(8), 861–874 (2009) 44. Liu, Y., Wang, Z., Liu, X.: Global exponential stability of generalized recurrent neural networks with discrete and distributed delays. Neural Netw. 19, 667–675 (2006) 45. Wang, Z., Shu, H., Liu, Y., Ho, D.W.C., Liu, X.: Robust stability analysis of generalized neural networks with discrete and distributed time delays. Chaos Solitons Fractals 30, 886–896 (2006) 46. Park, M.J., Kwon, O.M., Park, Ju H., Lee, S.M., Cha, E.J.: Robust synchronization criterion for coupled stochastic discrete-time neural networks with interval time-varying delays, leakage delay and parameter uncertainties. Abstr. Appl. Anal. 2013, 814692 (2013) 47. Boyd, S., El Ghaoui, L., Feron, E., Balakrishnan, V.: Linear Matrix Inequalities in System and Control Theory. SIAM, Philadelphia (1994) 48. Li, Y., Lu, Q., Song, Q.: Robust stability of discrete-time uncertain stochastic BAM neural networks with time-varying delays. Int. J. Comput. Sci. Netw. Secur. 8(8), 255–263 (2008) 49. Liu, X., Tang, M., Martin, R., Liu, X.: Discrete-time BAM neural networks with variable delays. Phys. Lett. A 367, 322–330 (2007) 50. Raja, R., Karthick Raja, U., Samidurai, R., Leelamani, A.: Passivity analysis for uncertain discrete-time stochastic BAM neural networks with time varying delays. Neurocomputing 25(3–4), 751–766 (2014)