Hierarchical Type Stability Criteria for Delayed Neural Networks via Canonical Bessel–Legendre Inequalities

Xian-Ming Zhang, Member, IEEE, Qing-Long Han, Senior Member, IEEE, and Zhigang Zeng, Senior Member, IEEE

IEEE TRANSACTIONS ON CYBERNETICS

Abstract—This paper is concerned with the global asymptotic stability of delayed neural networks. A Bessel–Legendre inequality plays a key role in deriving less conservative stability criteria for delayed neural networks. However, this inequality is formulated in terms of Legendre polynomials and its integral interval is fixed at [−h, 0]. As a result, the application scope of the Bessel–Legendre inequality is limited. This paper aims to develop the Bessel–Legendre inequality method so that less conservative stability criteria can be expected. First, by introducing a canonical orthogonal polynomial sequence, a canonical Bessel–Legendre inequality and its affine version are established, which are not explicitly in the form of Legendre polynomials. Moreover, the integral interval is shifted to a general one, [a, b]. Second, by introducing a proper augmented Lyapunov–Krasovskii functional, which is tailored for the canonical Bessel–Legendre inequality, sufficient conditions for global asymptotic stability are formulated for neural networks with constant delays and neural networks with time-varying delays, respectively. These conditions are proven to have a hierarchical feature: the higher the level of the hierarchy, the less conservative the stability criterion. Finally, three numerical examples are given to illustrate the efficiency of the proposed stability criteria.

Index Terms—Bessel–Legendre inequality, delayed neural networks, global asymptotic stability, hierarchy, Lyapunov–Krasovskii functional (LKF).

Manuscript received August 24, 2017; revised November 9, 2017; accepted November 18, 2017. This work was supported by the Australian Research Council Discovery Project under Grant DP160103567. This paper was recommended by Associate Editor Y. Xia. (Corresponding author: Qing-Long Han.) X.-M. Zhang and Q.-L. Han are with the School of Software and Electrical Engineering, Swinburne University of Technology, Melbourne, VIC 3122, Australia (e-mail: [email protected]; [email protected]). Z. Zeng is with the School of Automation, Huazhong University of Science and Technology, Wuhan 430074, China, and also with the Key Laboratory of Image Processing and Intelligent Control of Education Ministry of China, Huazhong University of Science and Technology, Wuhan 430074, China (e-mail: [email protected]). Digital Object Identifier 10.1109/TCYB.2017.2776283

I. INTRODUCTION

From the system engineering perspective, a neural network behaves as "a nonlinear black box," which can model and describe nonlinear dynamics effectively. Due to such a conspicuous feature, neural networks have found a wide range of applications in several areas [1], e.g., image processing, associative memory, optimization, intelligent control, and so on [2]–[10]. It should be mentioned that most of

those applications of neural networks heavily depend on their global asymptotic stability. Hence, during the past decades, the global asymptotic stability of neural networks has received considerable attention (see [11]–[16]).

Usually, a neural network is implemented in hardware circuits built from electronic components, such as amplifiers. Since the switching speed of amplifiers is limited and communication takes time, time delays are inevitable in a neural network [17]–[20], leading to a delayed neural network. The effects of time delays on the stability of a neural network are twofold. If the delay-free neural network is stable, the delayed neural network may be unstable unless the delay size is less than a certain upper bound. If the delay-free neural network is unstable, the delayed neural network may become stable for some delays within a certain range. Such an upper bound or range reflects how well the neural network tolerates time delays. Thus, it is significant to determine the upper bound or range of the delay such that the delayed neural network is globally asymptotically stable. In the last decade, much effort has been devoted to this topic, and a great number of results have been reported in [21]–[28].

The Lyapunov–Krasovskii functional (LKF) method is a powerful tool for deriving the admissible delay upper bound for a neural network to maintain its stability [29], [30]. The basic idea is to construct a positive definite LKF such that its time derivative along the trajectory of the neural network is negative definite [31]–[33]. Clearly, the construction of a proper LKF and the estimation of its time derivative are two fundamental issues for the LKF method. It is common for a double-integral term such as $\int_{-h}^{0}\int_{t+\theta}^{t}\dot{x}^T(s)R\dot{x}(s)\,ds\,d\theta$ to be included in an LKF, where $h$ is a positive scalar and $R$ is a positive definite matrix [34]. As a result, the integral term $\mathcal{I}(t) := -\int_{t-h}^{t}\dot{x}^T(s)R\dot{x}(s)\,ds$ appears in the derivative of the LKF. Recently, a Bessel–Legendre inequality was proposed in [35], which reads as

$$\int_{-h}^{0} \dot{x}^T(s)R\dot{x}(s)\,ds \;\ge\; \frac{1}{h}\sum_{k=0}^{N}(2k+1)\,\tilde{\Omega}_k^T R\,\tilde{\Omega}_k \tag{1}$$

where $\tilde{\Omega}_k := \int_{-h}^{0} L_k(s)\dot{x}(s)\,ds$ and $L_k(s)$ $(k = 0, 1, \ldots, N)$ are Legendre polynomials. This inequality can provide an upper bound as tight as possible for $\mathcal{I}(t)$ if $N$ approaches infinity.
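For readers who want a quick numerical feel for (1), the following sketch (not from the paper) spot-checks the inequality on a random polynomial trajectory; it assumes the Legendre polynomials are made orthogonal on [−h, 0] through the change of variable u = 2s/h + 1:

```python
# Sketch: numerical spot-check of the Bessel-Legendre inequality (1).
# Assumption (not from the paper): L_k is the Legendre polynomial P_k
# mapped onto [-h, 0] via u = 2s/h + 1, so that {L_k} is orthogonal there.
import numpy as np
from numpy.polynomial.legendre import Legendre
from scipy.integrate import quad

n, N, h = 2, 3, 1.5
rng = np.random.default_rng(0)
R = rng.standard_normal((n, n))
R = R @ R.T + n * np.eye(n)                     # random R > 0
C = rng.standard_normal((n, 4))                 # coefficients of a test x(s)

def xdot(s):
    # derivative of the polynomial trajectory x(s) = C @ [s, s^2, s^3, s^4]
    return C @ np.array([1.0, 2 * s, 3 * s**2, 4 * s**3])

lhs = quad(lambda s: xdot(s) @ R @ xdot(s), -h, 0)[0]
rhs = 0.0
for k in range(N + 1):
    Pk = Legendre.basis(k)
    Lk = lambda s, Pk=Pk: Pk(2 * s / h + 1)     # P_k shifted to [-h, 0]
    Om = np.array([quad(lambda s: Lk(s) * xdot(s)[i], -h, 0)[0]
                   for i in range(n)])          # Omega~_k
    rhs += (2 * k + 1) * (Om @ R @ Om)
rhs /= h
print(f"lhs = {lhs:.6f} >= rhs = {rhs:.6f}: {lhs >= rhs - 1e-9}")
```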


However, it is shown in [35] that, in order to derive less conservative stability criteria using the Bessel–Legendre inequality, the chosen LKF should depend on the Legendre polynomials. If h is a time-varying function, it is complicated to calculate the time derivative of the Legendre-polynomial-based LKF: the larger N is, the higher the complexity. This explains why most existing results on the stability of delayed neural networks are based on a special Bessel–Legendre inequality with N = 1 or N = 2 [36]–[38]. Another observation about the Bessel–Legendre inequality is that its integral interval is fixed at [−h, 0], which makes the inequality inconvenient to use. Thus, how to exploit the Bessel–Legendre inequality to derive an N-dependent and less conservative stability criterion for delayed neural networks is a challenging problem, which is the main motivation of this paper.

This paper provides a solution to the challenging problem mentioned above. First, by introducing a canonical orthogonal polynomial sequence, a canonical Bessel–Legendre inequality and its affine version are established. Compared with the Bessel–Legendre inequality, the canonical Bessel–Legendre inequality has two significant merits: 1) the integral interval [−h, 0] is shifted to a general interval [a, b] with b > a; and 2) $\tilde{\Omega}_k$ in inequality (1) is replaced with a simpler form, which enables us to construct an LKF independent of Legendre polynomials. Second, by constructing an augmented LKF, which is tailored for the use of the canonical Bessel–Legendre inequality, some N-dependent stability criteria are derived for neural networks with constant delays or time-varying delays. It is proven that these criteria form a hierarchy of LMI conditions: the larger N is, the larger the delay upper bound, which is demonstrated through three numerical examples.

Notations: $\mathrm{He}\{G\} = G + G^T$; $\mathrm{Co}\{q_1, q_2\}$ is the polytope generated by the two vertices $q_1$ and $q_2$; $\binom{i}{j} = i!/[(i-j)!\,j!]$; and the symmetric term in a symmetric matrix is denoted by "$*$."

II. PROBLEM FORMULATIONS AND PRELIMINARIES

According to the choice of basic variables, dynamic neural networks can be classified into two categories: 1) local field neural networks and 2) static neural networks. After its equilibrium point is shifted to the origin, either a local field neural network or a static neural network can be expressed in the following generalized form [13]:

$$\dot x(t) = -Ax(t) + W_0 f(W_2 x(t)) + W_1 f(W_2 x(t - h(t))) \tag{2}$$

where $x(t) = \mathrm{col}\{x_1(t), x_2(t), \ldots, x_n(t)\} \in \mathbb{R}^n$ is the neuron state vector with $n$ neurons; $f(x(t)) = \mathrm{col}\{f_1(x_1(t)), f_2(x_2(t)), \ldots, f_n(x_n(t))\} \in \mathbb{R}^n$ is the neuron activation function; and $A = \mathrm{diag}\{a_1, a_2, \ldots, a_n\}$, $W_0$, $W_1$, and $W_2$ are known constant real matrices with $a_i > 0$ $(i = 1, 2, \ldots, n)$. The time delay $h(t)$ is either a constant, $h(t) \equiv h$, or a time-varying function satisfying

$$0 \le h(t) \le \bar h, \qquad d_m \le \dot h(t) \le d_M < \infty \tag{3}$$

with $\bar h$, $d_m$, and $d_M$ being real constants. The neuron activation functions $f_i(x_i(t))$ $(i = 1, 2, \ldots, n)$ satisfy $f_i(0) = 0$ and

$$l_i^- \le \frac{f_i(s_1) - f_i(s_2)}{s_1 - s_2} \le l_i^+, \qquad s_1, s_2 \in \mathbb{R},\ s_1 \ne s_2 \tag{4}$$

where $l_i^-$ and $l_i^+$ are known constants that may be positive, negative, or zero. For convenience, we denote $L^- := \mathrm{diag}\{l_1^-, l_2^-, \ldots, l_n^-\}$ and $L^+ := \mathrm{diag}\{l_1^+, l_2^+, \ldots, l_n^+\}$. It follows from (4) that, for $s, s_1, s_2 \in \mathbb{R}$,

$$F_{1i}^-(s)F_{1i}^+(s) \ge 0 \tag{5}$$
$$F_{2i}^-(s_1, s_2)F_{2i}^+(s_1, s_2) \ge 0 \tag{6}$$

where

$$F_{1i}^+(s) := l_i^+ s - f_i(s), \qquad F_{1i}^-(s) := f_i(s) - l_i^- s \tag{7}$$
$$F_{2i}^+(s_1, s_2) := l_i^+(s_1 - s_2) - [f_i(s_1) - f_i(s_2)] \tag{8}$$
$$F_{2i}^-(s_1, s_2) := f_i(s_1) - f_i(s_2) - l_i^-(s_1 - s_2). \tag{9}$$

Denote $\eta(t) := \mathrm{col}\{x(t), f(W_2 x(t))\}$, $E_1 = [I \;\; 0]$, and $E_2 = [0 \;\; I]$. Then the neural network (2) can be written as

$$\dot x(t) = (-AE_1 + W_0E_2)\,\eta(t) + W_1E_2\,\eta(t - h(t)). \tag{10}$$
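To make the model concrete, the following minimal simulation sketch (not from the paper) integrates (2) with tanh activations, which satisfy the sector condition (4) with $l_i^- = 0$ and $l_i^+ = 1$; the matrices are illustrative placeholders:

```python
# Sketch: simulate the delayed neural network (2) by forward Euler with a
# history buffer. The matrices are illustrative placeholders, not taken
# from the paper; tanh satisfies the sector condition (4) with
# l_i^- = 0 and l_i^+ = 1.
import numpy as np

A  = np.diag([1.0, 1.2])
W0 = np.array([[0.2, -0.1], [0.1, 0.3]])
W1 = np.array([[-0.3, 0.2], [0.1, -0.2]])
W2 = np.eye(2)
h, dt, T = 0.8, 1e-3, 10.0

steps, delay = int(T / dt), int(h / dt)
x = np.zeros((steps + 1, 2))
x[0] = [0.5, -0.3]                        # constant initial history x(s) = x(0)
for k in range(steps):
    x_del = x[max(k - delay, 0)]          # x(t - h), clamped to the pre-history
    dxdt = -A @ x[k] + W0 @ np.tanh(W2 @ x[k]) + W1 @ np.tanh(W2 @ x_del)
    x[k + 1] = x[k] + dt * dxdt           # forward Euler step
print("terminal state:", x[-1])           # decays toward the origin if stable
```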

In this paper, we focus on analyzing the global asymptotic stability of the neural network (2), or equivalently (10), subject to (3) and (4) by using a canonical Bessel–Legendre inequality. To do so, we first introduce the following lemma.

Lemma 1 [40]: Let $R_1, R_2 \in \mathbb{R}^{m \times m}$ be real symmetric positive definite matrices, let $\varpi_1, \varpi_2 \in \mathbb{R}^m$, and let $\alpha \in (0, 1)$ be a scalar. Then for any $Y_1, Y_2 \in \mathbb{R}^{m \times m}$, the following inequality holds:

$$F(\alpha) := \frac{1}{\alpha}\varpi_1^T R_1 \varpi_1 + \frac{1}{1-\alpha}\varpi_2^T R_2 \varpi_2 \ \ge\ \varpi_1^T\Big[R_1 + (1-\alpha)\big(R_1 - Y_1 R_2^{-1} Y_1^T\big)\Big]\varpi_1 + \varpi_2^T\Big[R_2 + \alpha\big(R_2 - Y_2^T R_1^{-1} Y_2\big)\Big]\varpi_2 + 2\varpi_1^T\big[\alpha Y_1 + (1-\alpha)Y_2\big]\varpi_2. \tag{11}$$
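Lemma 1 holds for arbitrary free matrices $Y_1$ and $Y_2$; a quick numerical spot-check (a sketch, not part of the paper) over a grid of $\alpha$ values:

```python
# Sketch: numerical spot-check of the reciprocally convex bound (11) in
# Lemma 1, with random R1, R2 > 0, random free matrices Y1, Y2, and a
# grid of alpha values. The inequality should hold for all of them.
import numpy as np

rng = np.random.default_rng(1)
m = 3
spd = lambda: (lambda G: G @ G.T + m * np.eye(m))(rng.standard_normal((m, m)))
R1, R2 = spd(), spd()
Y1, Y2 = rng.standard_normal((m, m)), rng.standard_normal((m, m))
w1, w2 = rng.standard_normal(m), rng.standard_normal(m)

for alpha in np.linspace(0.05, 0.95, 19):
    lhs = w1 @ R1 @ w1 / alpha + w2 @ R2 @ w2 / (1 - alpha)
    rhs = (w1 @ (R1 + (1 - alpha) * (R1 - Y1 @ np.linalg.solve(R2, Y1.T))) @ w1
           + w2 @ (R2 + alpha * (R2 - Y2.T @ np.linalg.solve(R1, Y2))) @ w2
           + 2 * w1 @ (alpha * Y1 + (1 - alpha) * Y2) @ w2)
    assert lhs >= rhs - 1e-9
print("Lemma 1 bound verified on the sampled grid")
```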

III. CANONICAL BESSEL–LEGENDRE INEQUALITY AND ITS AFFINE VERSION

In this section, we develop a canonical Bessel–Legendre inequality, which is convenient to use in the stability analysis of delayed neural networks.

Lemma 2: For any constant matrix $R > 0$, two scalars $a$ and $b$ with $b > a$, and a vector function $\omega : [a, b] \to \mathbb{R}^n$ such that the integrations below are well defined, the following inequality holds:

$$\int_a^b \omega^T(s)R\,\omega(s)\,ds \ \ge\ \frac{1}{b-a}\sum_{i=0}^{N}(2i+1)\,\Omega_i^T R\,\Omega_i \tag{12}$$
$$\phantom{\int_a^b \omega^T(s)R\,\omega(s)\,ds} \ =\ \frac{1}{b-a}\sum_{i=0}^{N}(2i+1)\,\hat\Omega_i^T R\,\hat\Omega_i \tag{13}$$

where

$$\Omega_i := \int_a^b \tilde L_i(s)\,\omega(s)\,ds, \qquad \hat\Omega_i := \int_a^b \hat L_i(s)\,\omega(s)\,ds \tag{14}$$

with

$$\tilde L_i(s) := \sum_{k=0}^{i}(-1)^k\binom{i}{k}\binom{k+i}{k}\left(\frac{b-s}{b-a}\right)^{k}, \qquad \hat L_i(s) := \sum_{k=0}^{i}(-1)^k\binom{i}{k}\binom{k+i}{k}\left(\frac{s-a}{b-a}\right)^{k}. \tag{15}$$
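Numerically, the polynomials in (15) inherit orthogonality from the shifted Legendre polynomials, namely $\int_a^b \tilde L_i(s)\tilde L_j(s)\,ds = \frac{b-a}{2i+1}\delta_{ij}$ and $\hat L_i(s) = (-1)^i\tilde L_i(s)$, as the proof below makes precise. A small verification sketch (not from the paper):

```python
# Sketch: check that the canonical polynomials (15) are orthogonal on
# [a, b] with int_a^b Lt_i Lt_j ds = (b - a) / (2i + 1) for i = j (zero
# otherwise), and that Lh_i(s) = (-1)^i Lt_i(s), consistent with the
# proof below.
import numpy as np
from math import comb
from scipy.integrate import quad

a, b, N = -2.0, 1.5, 4

def Lt(i, s):                               # L~_i(s) from (15)
    v = (b - s) / (b - a)
    return sum((-1)**k * comb(i, k) * comb(k + i, k) * v**k
               for k in range(i + 1))

def Lh(i, s):                               # L^_i(s) from (15)
    u = (s - a) / (b - a)
    return sum((-1)**k * comb(i, k) * comb(k + i, k) * u**k
               for k in range(i + 1))

for i in range(N + 1):
    for j in range(N + 1):
        val = quad(lambda s: Lt(i, s) * Lt(j, s), a, b)[0]
        ref = (b - a) / (2 * i + 1) if i == j else 0.0
        assert abs(val - ref) < 1e-7
    s0 = 0.3 * (a + b)
    assert abs(Lh(i, s0) - (-1)**i * Lt(i, s0)) < 1e-10
print("orthogonality of the canonical polynomial sequence verified")
```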


Proof: Let

$$g_i(s) = \sqrt{\frac{2i+1}{b-a}}\,P_i(f(s)), \qquad f(s) = \frac{2(b-s)}{b-a} - 1 \tag{16}$$

where $P_i(\cdot)$ $(i = 0, 1, 2, \ldots, N)$ are Legendre polynomials satisfying

$$\int_{-1}^{1} P_i(u)P_j(u)\,du = \begin{cases} 0, & i \ne j\\[2pt] \dfrac{2}{2i+1}, & i = j. \end{cases}$$

Then, for $i, j = 0, 1, 2, \ldots, N$ with $i \ne j$,

$$\int_a^b g_i(s)g_j(s)\,ds = 0, \qquad \int_a^b g_i^2(s)\,ds = 1. \tag{17}$$

Notice that

$$\int_a^b \big(\omega(s) - \Xi(s)\big)^T R\,\big(\omega(s) - \Xi(s)\big)\,ds \ \ge\ 0$$

where

$$\Xi(s) := \sum_{i=0}^{N} g_i(s)\int_a^b g_i(u)\,\omega(u)\,du.$$

Hence

$$\int_a^b \omega^T(s)R\,\omega(s)\,ds \ \ge\ \sum_{i=0}^{N}\left(\int_a^b g_i(u)\,\omega(u)\,du\right)^{\!T} R \left(\int_a^b g_i(u)\,\omega(u)\,du\right). \tag{18}$$

On the other hand, using the shifted Legendre polynomials, one obtains

$$P_i(f(s)) = P_i\!\left(\frac{2(b-s)}{b-a} - 1\right) = (-1)^i\,\tilde L_i(s)$$
$$P_i(f(s)) = (-1)^i P_i(-f(s)) = (-1)^i P_i\!\left(\frac{2(s-a)}{b-a} - 1\right) = \hat L_i(s)$$

which lead to

$$\int_a^b g_i(s)\,\omega(s)\,ds = \sqrt{\frac{2i+1}{b-a}}\int_a^b P_i(f(s))\,\omega(s)\,ds = \sqrt{\frac{2i+1}{b-a}}\,(-1)^i\int_a^b \tilde L_i(s)\,\omega(s)\,ds = \sqrt{\frac{2i+1}{b-a}}\int_a^b \hat L_i(s)\,\omega(s)\,ds. \tag{19}$$

Substituting (19) into (18) yields (12) and (13).

Remark 1: It is clear that Lemma 2 is obtained by introducing the canonical orthogonal polynomial sequence $\{g_0(s), g_1(s), g_2(s), \ldots, g_N(s)\}$. Thus, inequality (12) or (13) is called a canonical Bessel–Legendre inequality, in which the integral interval $[-h, 0]$ is replaced by the general interval $[a, b]$. The difference between (12) and (13) is that $\tilde L_i(s)$ is a function of $(b-s)/(b-a)$, while $\hat L_i(s)$ is a function of $(s-a)/(b-a)$.

Applying Lemma 2, we have the following results, in which $\Omega_i$ and $\hat\Omega_i$ in (14) do not depend on the Legendre polynomials.

Corollary 1: For an integer $N \ge 0$, a real symmetric matrix $R > 0$, two scalars $a$ and $b$ with $b > a$, and a vector-valued differentiable function $\omega : [a, b] \to \mathbb{R}^n$ such that the integrations below are well defined,

$$-\int_a^b \dot\omega^T(s)R\,\dot\omega(s)\,ds \ \le\ -\frac{1}{b-a}\,\varpi_N^T\,\Gamma_N^T\Lambda_N^T\,\mathcal{R}_N\,\Lambda_N\Gamma_N\,\varpi_N \tag{20}$$

where

$$\mathcal{R}_N := \mathrm{diag}\{R,\ 3R,\ \ldots,\ (2N+1)R\} \tag{21}$$

$$\Lambda_N := \begin{bmatrix} I & 0 & \cdots & 0\\ I & (-1)^1\binom{1}{1}\binom{2}{1} I & \cdots & 0\\ \vdots & \vdots & \ddots & \vdots\\ I & (-1)^1\binom{N}{1}\binom{N+1}{1} I & \cdots & (-1)^N\binom{N}{N}\binom{2N}{N} I \end{bmatrix} \tag{22}$$

$$\Gamma_N := \begin{bmatrix} I & -I & 0 & 0 & \cdots & 0\\ 0 & -I & I & 0 & \cdots & 0\\ 0 & -I & 0 & 2I & \cdots & 0\\ \vdots & \vdots & \vdots & \vdots & \ddots & \vdots\\ 0 & -I & 0 & 0 & \cdots & NI \end{bmatrix} \tag{23}$$

$$\varpi_N := \mathrm{col}\{\omega(b),\ \omega(a),\ \gamma_1,\ \ldots,\ \gamma_N\} \tag{24}$$

$$\gamma_k := \frac{1}{(b-a)^k}\int_a^b (b-s)^{k-1}\,\omega(s)\,ds, \qquad k = 1, 2, \ldots, N. \tag{25}$$

Proof: By the canonical Bessel–Legendre inequality (12), one has

$$-\int_a^b \dot\omega^T(s)R\,\dot\omega(s)\,ds \ \le\ -\frac{1}{b-a}\,X_N^T\,\Lambda_N^T\,\mathcal{R}_N\,\Lambda_N\,X_N \tag{26}$$

where $X_N := \mathrm{col}\{\rho_0(\dot\omega), \rho_1(\dot\omega), \rho_2(\dot\omega), \ldots, \rho_N(\dot\omega)\}$ with

$$\rho_k(\dot\omega) := \int_a^b \frac{(b-s)^k}{(b-a)^k}\,\dot\omega(s)\,ds, \qquad k = 0, 1, 2, \ldots, N.$$

Notice that

$$\rho_k(\dot\omega) = \begin{cases} \omega(b) - \omega(a), & k = 0\\ -\omega(a) + k\gamma_k, & k \ge 1 \end{cases} \tag{27}$$

that is, $X_N = \Gamma_N\varpi_N$. Substituting (27) into (26) yields (20).

Remark 2: Corollary 1 offers an integral inequality for the integral term $-\int_a^b \dot\omega^T(s)R\,\dot\omega(s)\,ds$. This inequality is not explicitly in the form of Legendre polynomials due to the introduction of $\Lambda_N$ and $\Gamma_N$. This feature enables us to choose an LKF that is not necessarily dependent on the Legendre polynomials, as is done in [35]. Since the integral inequality (20) is derived from Lemma 2, we also call it a canonical Bessel–Legendre inequality.

An affine version of the integral inequality (20) can be readily obtained based on the fact that the following inequality holds for any real matrix $M$ with compatible dimensions [41]:

$$-\frac{1}{b-a}\,(\Lambda_N\Gamma_N)^T\mathcal{R}_N(\Lambda_N\Gamma_N) \ \le\ (\Lambda_N\Gamma_N)^T M + M^T(\Lambda_N\Gamma_N) + (b-a)\,M^T\mathcal{R}_N^{-1}M. \tag{28}$$

Corollary 2: For two scalars $a$ and $b$ with $b > a$, an integer $N \ge 0$, an $n \times n$ constant real matrix $R > 0$, an $(N+1)n \times (N+2)n$ matrix $M$, and a vector-valued differentiable function $\omega : [a, b] \to \mathbb{R}^n$ such that the integrations below are well defined,

$$-\int_a^b \dot\omega^T(s)R\,\dot\omega(s)\,ds \ \le\ \varpi_N^T\Big[\Gamma_N^T\Lambda_N^T M + M^T\Lambda_N\Gamma_N + (b-a)\,M^T\mathcal{R}_N^{-1}M\Big]\varpi_N \tag{29}$$

where $\varpi_N$, $\mathcal{R}_N$, $\Lambda_N$, and $\Gamma_N$ are defined in Corollary 1.
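As a sanity check of Corollary 1 (a sketch, not from the paper), the matrices (21)–(23) and the vector (24)–(25) can be assembled for scalar $\omega$ ($n = 1$) and inequality (20) tested on a random polynomial:

```python
# Sketch: assemble the matrices of Corollary 1 for scalar omega (n = 1)
# and test inequality (20) on a random polynomial; the bound is
# near-tight here because the test function has low degree.
import numpy as np
from math import comb
from scipy.integrate import quad

a, b, N = 0.0, 2.0, 3
r = 1.7                                            # scalar R > 0

Lam = np.zeros((N + 1, N + 1))                     # Lambda_N, eq. (22)
for i in range(N + 1):
    for j in range(i + 1):
        Lam[i, j] = (-1)**j * comb(i, j) * comb(i + j, j)
Gam = np.zeros((N + 1, N + 2))                     # Gamma_N, eq. (23)
Gam[0, 0], Gam[:, 1] = 1.0, -1.0
for k in range(1, N + 1):
    Gam[k, k + 1] = k
RN = np.diag([(2 * i + 1) * r for i in range(N + 1)])  # R_N, eq. (21)

c = np.random.default_rng(2).standard_normal(5)
w = lambda s: np.polyval(c, s)                     # test function omega(s)
wd = lambda s: np.polyval(np.polyder(c), s)        # its derivative

gam = [quad(lambda s: (b - s)**(k - 1) * w(s), a, b)[0] / (b - a)**k
       for k in range(1, N + 1)]                   # gamma_k, eq. (25)
varpi = np.array([w(b), w(a)] + gam)               # varpi_N, eq. (24)
v = Lam @ Gam @ varpi
lhs = -quad(lambda s: wd(s) * r * wd(s), a, b)[0]
rhs = -(v @ RN @ v) / (b - a)
print(f"lhs = {lhs:.6f} <= rhs = {rhs:.6f}: {lhs <= rhs + 1e-9}")
```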


IV. STABILITY CRITERIA

In this section, we establish some stability criteria by using the canonical Bessel–Legendre inequality (20). These stability criteria are of general form because they depend on the positive integer N. In doing so, we consider two cases of the time delay: 1) a constant delay and 2) a time-varying delay satisfying (3).

A. Constant Delay Case

Suppose that $h(t) \equiv h > 0$. Choose an augmented LKF candidate as

$$V_c(t, x_t) := V_{c1}(t, x_t) + V_{c2}(t, x_t) + V_{c3}(t, x_t) \tag{30}$$

where

$$V_{c1}(t, x_t) := \tilde x_c^T(t)\,P_{cN}\,\tilde x_c(t) \tag{31}$$

$$V_{c2}(t, x_t) := \int_{t-h}^{t}\tilde\eta^T(s)\,Q_c\,\tilde\eta(s)\,ds + h\int_{-h}^{0}\!\int_{t+\theta}^{t}\dot x^T(s)\,R_c\,\dot x(s)\,ds\,d\theta \tag{32}$$

$$V_{c3}(t, x_t) := 2\sum_{i=1}^{n}\int_{0}^{W_{2i}x(t)}\big[\hat\lambda_{1i}F_{1i}^-(s) + \hat\delta_{1i}F_{1i}^+(s)\big]\,ds + 2\sum_{i=1}^{n}\int_{0}^{W_{2i}x(t-h)}\big[\hat\lambda_{2i}F_{1i}^-(s) + \hat\delta_{2i}F_{1i}^+(s)\big]\,ds \tag{33}$$

with $F_{1i}^-$ and $F_{1i}^+$ being defined in (7), $W_{2i}$ denoting the $i$th row of $W_2$, and

$$\tilde x_c(t) := \mathrm{col}\{x(t),\ x(t-h),\ \phi_1(t),\ \phi_2(t),\ \ldots,\ \phi_N(t)\},\qquad \phi_i(t) := \frac{i}{h^i}\int_{t-h}^{t}(t-s)^{i-1}x(s)\,ds,\quad i = 1, \ldots, N \tag{34}$$

$$\tilde\eta(s) := \mathrm{col}\{x(s),\ f(W_2x(s)),\ \dot x(s)\}. \tag{35}$$

It is clear that the LKF (30) is tailored to the use of the canonical Bessel–Legendre inequality (20). This functional is not in the form of Legendre polynomials, unlike that in [35]. We now establish a stability criterion.

Proposition 1: For a given constant $h > 0$, the neural network (2) with $h(t) \equiv h$ subject to (4) is globally asymptotically stable if there exist real symmetric matrices $P_{cN} > 0$, $Q_c > 0$, $R_c > 0$ with appropriate dimensions and diagonal matrices $\hat\Lambda_{1i} = \mathrm{diag}\{\hat\lambda_{i1}, \hat\lambda_{i2}, \ldots, \hat\lambda_{in}\} \ge 0$, $\hat\Lambda_{2i} = \mathrm{diag}\{\hat\delta_{i1}, \hat\delta_{i2}, \ldots, \hat\delta_{in}\} \ge 0$ $(i = 1, 2)$, and $\hat T_k = \mathrm{diag}\{\hat t_{k1}, \hat t_{k2}, \ldots, \hat t_{kn}\} \ge 0$ $(k = 1, 2, 3)$ such that

$$\hat\Phi_{1N} + \hat\Phi_{2N} + \hat\Phi_3 < 0.$$
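Computationally, the hierarchy in N suggests a simple search pattern: for each level N, bisect for the largest admissible constant delay. The skeleton below is a hedged sketch; prop1_feasible is a hypothetical placeholder that would assemble and solve the LMI of Proposition 1 with an SDP solver (it is not given in this excerpt):

```python
# Sketch of the hierarchical use of Proposition 1. prop1_feasible(h, N) is
# a hypothetical placeholder: it would assemble the LMI of Proposition 1
# for level N and delay h and return True if an SDP solver finds it
# feasible. It is not provided in this excerpt.
def max_admissible_delay(prop1_feasible, N, h_hi=10.0, tol=1e-4):
    lo, hi = 0.0, h_hi
    if not prop1_feasible(lo + tol, N):   # infeasible even for tiny delay
        return 0.0
    while hi - lo > tol:                  # bisection on the delay bound
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if prop1_feasible(mid, N) else (lo, mid)
    return lo

# The hierarchy predicts a nondecreasing bound as N grows:
# for N in range(4):
#     print(N, max_admissible_delay(prop1_feasible, N))
```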

$$C_{1N} := \mathrm{col}\{E_1e_1,\ E_1e_2,\ E_1e_3,\ \rho(t)e_5,\ \ldots,\ \rho(t)e_{N+4}\} \tag{66}$$
$$C_{21N} := \mathrm{col}\{C_0,\ (1-\dot h(t))E_1e_4,\ E_2e_4,\ \zeta_{1N}\} \tag{67}$$
$$C_{22N} := \mathrm{col}\{h(t)C_0,\ h(t)(1-\dot h(t))E_1e_4,\ h(t)E_2e_4,\ \zeta_{2N}\} \tag{68}$$
$$C_{23N} := \mathrm{col}\{(\bar h - h(t))C_0,\ (\bar h - h(t))(1-\dot h(t))E_1e_4,\ (\bar h - h(t))E_2e_4,\ \zeta_{3N}\} \tag{69}$$
$$C_{31N} := \mathrm{col}\{E_1e_1,\ E_1e_2,\ E_1e_3,\ E_1e_5,\ \ldots,\ E_1e_{N+4}\} \tag{70}$$
$$C_{32N} := \mathrm{col}\{E_1e_1,\ E_1e_2,\ E_1e_3,\ E_2e_5,\ \ldots,\ E_2e_{N+4}\} \tag{71}$$
$$\zeta_{1N} := \mathrm{col}\{C_1, C_2, \ldots, C_N\},\quad C_j = \mathrm{col}\{C_{1j}, C_{2j}\},\quad \zeta_{2N} := \mathrm{col}\{D_{11}, D_{12}, \ldots, D_{1N}\},\quad \zeta_{3N} := \mathrm{col}\{D_{21}, D_{22}, \ldots, D_{2N}\} \tag{72}$$
$$\Psi_i := \begin{bmatrix} W_2^T\big(\Lambda_{2i}L^+ - \Lambda_{1i}L^-\big)W_2 & *\\ (\Lambda_{1i} - \Lambda_{2i})W_2 & 0 \end{bmatrix},\ i \in \{1, 2, 3\}, \qquad F_1 := E_2 - L^-W_2E_1,\quad F_2 := L^+W_2E_1 - E_2 \tag{73}$$

with

$$C_{1j} := \begin{cases} -(1-\dot h(t))E_1e_2 + E_1e_1, & j = 1\\ -(1-\dot h(t))E_1e_2 + (j-1)E_1e_{j+3} - \dot h(t)e_{j+4}, & j > 1 \end{cases}$$
$$C_{2j} := \begin{cases} -E_1e_3 + (1-\dot h(t))E_1e_2, & j = 1\\ -E_1e_3 + (j-1)(1-\dot h(t))E_2e_{j+3} + \dot h(t)e_{j+4}, & j > 1 \end{cases}$$
$$D_{1j} := C_{1j} - \dot h(t)E_1e_{j+4}, \qquad D_{2j} := C_{2j} + \dot h(t)E_2e_{j+4}$$

where $e_1, e_2, \ldots, e_{N+4}$ are block-row matrices such that $\mathrm{col}\{e_1, e_2, \ldots, e_{N+4}\} = I$.

Proof: Taking the time derivative of $V(t, x_t)$ along the trajectory of (2) yields

$$\dot V(t, x_t) = \sum_{i=1}^{5}\dot V_i(t, x_t)$$

where

$$\dot V_1(t, x_t) = 2\tilde x_1^T(t)P_{1N}\dot{\tilde x}_1(t) \tag{74}$$
$$\dot V_2(t, x_t) = \dot h(t)\tilde x_2^T(t)P_{2N}\tilde x_2(t) + 2\tilde x_2^T(t)P_{2N}\,h(t)\dot{\tilde x}_2(t)$$
$$\dot V_3(t, x_t) = -\dot h(t)\tilde x_3^T(t)P_{3N}\tilde x_3(t) + 2\tilde x_3^T(t)P_{3N}\,(\bar h - h(t))\dot{\tilde x}_3(t). \tag{75}$$

In computing these terms, note that

$$\frac{d}{dt}\big[h(t)\tau_i(t)\big] = h(t)\dot\tau_i(t) + \dot h(t)\tau_i(t), \qquad \frac{d}{dt}\big[(\bar h - h(t))\varsigma_i(t)\big] = (\bar h - h(t))\dot\varsigma_i(t) - \dot h(t)\varsigma_i(t)$$

and

$$(\bar h - h(t))\dot\varsigma_i(t) = \begin{cases} -x(t-\bar h) + (1-\dot h(t))x(t-h(t)) + \dot h(t)\varsigma_1(t), & i = 1\\ -x(t-\bar h) + (i-1)(1-\dot h(t))\varsigma_{i-1}(t) + i\dot h(t)\varsigma_i(t), & i > 1. \end{cases}$$

Denote

$$\xi(t) := \mathrm{col}\{\eta(t),\ \eta(t-h(t)),\ \eta(t-\bar h),\ \hat\eta(t),\ \varphi_1(t),\ \varphi_2(t),\ \ldots,\ \varphi_N(t)\} \tag{79}$$

where $\hat\eta(t) := \mathrm{col}\{\dot x(t-h(t)),\ \dot x(t-\bar h)\}$ and $\varphi_i(t)$ $(i = 1, 2, \ldots, N)$ are defined in (55). Then, we have

$$\tilde x_1(t) = C_{1N}\xi(t),\quad \tilde x_2(t) = C_{31N}\xi(t),\quad \tilde x_3(t) = C_{32N}\xi(t)$$
$$\dot{\tilde x}_1(t) = C_{21N}\xi(t),\quad h(t)\dot{\tilde x}_2(t) = C_{22N}\xi(t),\quad (\bar h - h(t))\dot{\tilde x}_3(t) = C_{23N}\xi(t)$$
$$\tilde\eta(t) = \begin{bmatrix} e_1\\ C_0 \end{bmatrix}\xi(t),\quad \tilde\eta(t-h(t)) = \begin{bmatrix} e_2\\ E_1e_4 \end{bmatrix}\xi(t),\quad \tilde\eta(t-\bar h) = \begin{bmatrix} e_3\\ E_2e_4 \end{bmatrix}\xi(t)$$

where $C_{1N}$, $C_{2iN}$, $C_{3jN}$ $(i = 1, 2, 3;\ j = 1, 2)$ are defined in (66)–(71). Hence, one has

$$\dot V(t, x_t) = \xi^T(t)\big(\Phi_{1N} + \Phi_{2N}\big)\xi(t) - \bar h\int_{t-h(t)}^{t}\dot x^T(\theta)R\dot x(\theta)\,d\theta - \bar h\int_{t-\bar h}^{t-h(t)}\dot x^T(\theta)R\dot x(\theta)\,d\theta \tag{80}$$

where $\Phi_{1N}$ and $\Phi_{2N}$ are defined in (60) and (61), respectively. Applying Corollary 1 gives

$$-\bar h\int_{t-h(t)}^{t}\dot x^T(\theta)R\dot x(\theta)\,d\theta \ \le\ -\frac{\bar h}{h(t)}\,\xi^T(t)\,\Pi_{1N}^T\mathcal{R}_N\Pi_{1N}\,\xi(t)$$
$$-\bar h\int_{t-\bar h}^{t-h(t)}\dot x^T(\theta)R\dot x(\theta)\,d\theta \ \le\ -\frac{\bar h}{\bar h - h(t)}\,\xi^T(t)\,\Pi_{2N}^T\mathcal{R}_N\Pi_{2N}\,\xi(t)$$

where $\Pi_{1N}$, $\Pi_{2N}$, and $\mathcal{R}_N$ are defined in (64), (65), and (21), respectively. It then follows from Lemma 1 that

$$-\bar h\int_{t-h(t)}^{t}\dot x^T(\theta)R\dot x(\theta)\,d\theta - \bar h\int_{t-\bar h}^{t-h(t)}\dot x^T(\theta)R\dot x(\theta)\,d\theta \ \le\ \xi^T(t)\Big[\Phi_{3N} + (1-\alpha)\Pi_{1N}^TY_{1N}\mathcal{R}_N^{-1}Y_{1N}^T\Pi_{1N} + \alpha\,\Pi_{2N}^TY_{2N}\mathcal{R}_N^{-1}Y_{2N}^T\Pi_{2N}\Big]\xi(t) \tag{81}$$

where $\alpha := h(t)/\bar h$ and $\Phi_{3N}$ is defined in (62).


Notice from (5) and (6) that the following inequalities hold for $T_k := \mathrm{diag}\{t_{k1}, t_{k2}, \ldots, t_{kn}\} \ge 0$ $(k = 1, 2, \ldots, 6)$:

$$-2\big[f(y_m) - L^-y_m\big]^T T_m \big[f(y_m) - L^+y_m\big] \ge 0$$
$$-2\big[f(y_i) - f(y_j) - L^-(y_i - y_j)\big]^T T_{2j-i+1} \big[f(y_i) - f(y_j) - L^+(y_i - y_j)\big] \ge 0$$

where $m = 1, 2, 3$; $i = 1, 2$; $j = 2, 3$ with $j > i$; and $y_1 = W_2x(t)$, $y_2 = W_2x(t-h(t))$, $y_3 = W_2x(t-\bar h)$. Then, we have

$$0 \le 2\xi^T(t)\Bigg[\sum_{m=1}^{3} e_m^TF_1^TT_mF_2e_m + \sum_{i=1}^{2}\sum_{j=2,\,j>i}^{3} (e_i - e_j)^TF_1^TT_{2j-i+1}F_2(e_i - e_j)\Bigg]\xi(t)$$

where $F_1$ and $F_2$ are defined in (73), which leads to

$$\xi^T(t)\,\Phi_4\,\xi(t) \ge 0 \tag{82}$$

where $\Phi_4$ is defined in (63). Finally, substituting (81) into (80), together with (82) and after some algebraic manipulations, one has $\dot V(t, x_t) \le \xi^T(t)\,\Theta_N(h(t), \dot h(t))\,\xi(t)$, where

$$\Theta_N(h(t), \dot h(t)) := \Phi_N(h(t), \dot h(t)) + \alpha\,\Pi_{2N}^TY_{2N}\mathcal{R}_N^{-1}Y_{2N}^T\Pi_{2N} + (1-\alpha)\,\Pi_{1N}^TY_{1N}\mathcal{R}_N^{-1}Y_{1N}^T\Pi_{1N}.$$

Since $\Theta_N(h(t), \dot h(t))$ is affine in $h(t) \in [0, \bar h]$ and in $\dot h(t) \in [d_m, d_M]$, by the Schur complement, if the LMIs in (57) and (58) are satisfied, then $\Theta_N(h(t), \dot h(t)) < 0$ for $(h(t), \dot h(t)) \in [0, \bar h] \times [d_m, d_M]$. Thus, there exists a scalar $\kappa > 0$ such that $\dot V(t, x_t) \le -\kappa\|x(t)\|^2 < 0$ for $x(t) \ne 0$, from which one concludes that the neural network (2) subject to (3) and (4) is globally asymptotically stable.

If the affine canonical Bessel–Legendre inequality (29) is used instead to bound the integral terms in (80), we have the following result.

Proposition 3: For given scalars $\bar h$, $d_m$, and $d_M$, the neural network (2) subject to (3) and (4) is globally asymptotically stable if there exist $P_{1N} > 0$, $P_{2N} > 0$, $P_{3N} > 0$, $Q_1 > 0$, $Q_2 > 0$, $R > 0$, $\Lambda_{1i} = \mathrm{diag}\{\lambda_{i1}, \lambda_{i2}, \ldots, \lambda_{in}\} \ge 0$, $\Lambda_{2i} = \mathrm{diag}\{\delta_{i1}, \delta_{i2}, \ldots, \delta_{in}\} \ge 0$ $(i = 1, 2, 3)$, $T_k = \mathrm{diag}\{t_{k1}, t_{k2}, \ldots, t_{kn}\} \ge 0$ $(k = 1, 2, \ldots, 6)$, and matrices $Y_{1N}$ and $Y_{2N}$ such that, for $d \in \{d_m, d_M\}$,

$$\tilde\Xi_{1N}(0, d) := \begin{bmatrix} \tilde\Phi_N(0, d) & \Upsilon^TY_{2N}\\ * & \cdots \end{bmatrix}$$
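The vertex argument used in the proof above ($\Theta_N$ is affine in $h(t)$ and $\dot h(t)$, so negativity on the box follows from negativity at its four vertices) can be mirrored numerically. The sketch below assumes a hypothetical helper build_theta(h, d) that returns $\Theta_N(h, d)$ for fixed decision variables:

```python
# Sketch of the vertex-checking argument: Theta_N(h, d) is affine in
# (h, d), so negative definiteness on [0, hbar] x [dm, dM] follows from
# checking the four vertices. build_theta(h, d) is a hypothetical helper
# returning the symmetric matrix Theta_N(h, d) for fixed decision
# variables; it is not given in this excerpt.
import itertools
import numpy as np

def negative_on_box(build_theta, hbar, dm, dM, margin=1e-9):
    for h, d in itertools.product((0.0, hbar), (dm, dM)):
        if np.linalg.eigvalsh(build_theta(h, d)).max() >= -margin:
            return False                  # Theta fails to be negative definite
    return True
```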
