Exponential Stability of Stochastic Neural Networks with Mixed Time-Delays

Xuejing Meng¹, Maosheng Tian², Peng Hu¹, and Shigeng Hu¹

¹ School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074, P.R. China
² State Engineering Research Center of Numerical Control System, Huazhong University of Science and Technology, Wuhan 430074, P.R. China
[email protected]
http://www.springer.com/lncs

Abstract. This paper investigates the exponential stability of stochastic neural networks with unbounded discrete delays and infinitely distributed delays. By using Lyapunov functions, the semimartingale convergence theorem, and some inequality techniques, exponential stability in mean square and almost sure exponential stability are obtained. To overcome the difficulties arising from the unbounded delays, some new techniques are introduced. Some earlier results are improved and generalized. An example is given to illustrate the results.

Keywords: Stochastic neural networks; Unbounded delay; Distributed delay; Stability.

1 Introduction

In recent years, there has been increasing research interest in stochastic neural networks; see, for example, [3,6,7,11] and the references therein. Roska et al. [12] introduced a cellular neural network with discrete delays to deal with motion-related signal processing problems. Real neural networks usually have a spatial extent due to the presence of a multitude of parallel pathways with a variety of axon sizes and lengths [2]; therefore, there will be a distribution of conduction velocities along these pathways. Tank and Hopfield [14] proposed a neural circuit with distributed delays, described by integro-differential equations, for solving a general problem of recognizing patterns in a time-dependent signal. To date, many researchers have studied stochastic neural networks with discrete delays and distributed delays (cf. [1,5,8,9,15]). However, the discrete time delays discussed in the papers mentioned above are all bounded. In delayed neural networks, delays are in most situations variable, and in fact unbounded [16].

This research was supported by the Fundamental Research Funds for the Central Universities under Grant No. 2010MS130. Corresponding author.



In this paper, we study a generalized stochastic neural network of the form

\[ dx(t) = \Big[ -Bx(t) + AG(t, y(t)) + \int_{-\infty}^{0} D x(t+\theta)\, d\mu(\theta) \Big] dt + \sigma(t, x(t), y(t), x_t)\, dw(t), \tag{1} \]

where x(t) ∈ R^n; y(t) = (y_1(t), …, y_n(t))^T with y_i(t) = x_i(t − δ_i(t)), where δ_i(t) is a delay function which may be unbounded; x_t = {x(t + θ) : −∞ < θ ≤ 0} is regarded as a C_b((−∞, 0], R^n)-valued stochastic process; B = diag(b_i) with b_i > 0; and A = (a_{ij}) ∈ R^{n×n}, D = (d_{ij}) ∈ R^{n×n}. Both functions G(t, y) : R_+ × R^n → R^n and σ(t, x, y, ϕ) : R_+ × R^n × R^n × C_b → R^{n×m} are Borel measurable and locally Lipschitz continuous. If 0 ≤ δ_i(t) ≤ τ < ∞ (1 ≤ i ≤ n), Eq. (1) covers the system studied in [1] as a special case. If 0 ≤ δ_i(t) ≤ τ < ∞ and the distributed delays are removed, Eq. (1) becomes

\[ dx(t) = [-Bx(t) + AG(t, y(t))]\, dt + \sigma(t, x(t), y(t))\, dw(t), \tag{2} \]

which was studied in [4,13]. Stability is a fundamental property of dynamical systems and is important in applications to real-life systems. The main aim of this paper is to establish exponential stability in mean square and almost sure exponential stability for Eq. (1). The main results are given in Section 3, and an example illustrating them is provided in Section 4.
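Although the analysis below is entirely analytic, it may help to see how the three ingredients of (1) — the instantaneous term, the unbounded discrete delay y(t) = x(t − δ(t)), and the distributed delay — enter a time-stepping scheme. The following Euler–Maruyama sketch is illustrative only: the concrete choices of G, σ, μ, δ and the truncation of the distributed delay at −T are our assumptions, not the paper's.

```python
# A schematic Euler-Maruyama step for Eq. (1) (a sketch, not the paper's
# method); all model ingredients below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n, dt, T = 2, 0.01, 10.0
B = np.diag([6.0, 8.0])
A = np.array([[1.0, -1.0], [1.0, 1.0]])
D = np.array([[0.0, 1.0], [0.0, 0.0]])

eps = 0.5
delta = lambda t: 0.5 * t                        # unbounded discrete delay
G = lambda t, y: y * np.exp(-eps * delta(t))     # satisfies (11) with beta_i = 1
sigma = lambda t, x, y: 0.1 * y                  # illustrative diagonal diffusion

hist, N = int(T / dt), int(4.0 / dt)
theta = -dt * np.arange(hist)[::-1]              # grid on [-T, 0]
w_mu = dt / (1.0 - theta) ** 2                   # weights for dmu = (1-theta)^{-2} dtheta

x = np.ones((hist + N, n))                       # initial segment xi(s) = 1, s <= 0
for k in range(N):
    t = k * dt
    cur = x[hist + k - 1]
    j = max(hist + k - 1 - int(delta(t) / dt), 0)   # index of x(t - delta(t))
    y = x[j]
    dist = w_mu @ x[k : hist + k]                # ~ int_{-T}^0 x(t+th) dmu(th)
    dw = rng.normal(0.0, np.sqrt(dt), n)
    x[hist + k] = cur + (-B @ cur + A @ G(t, y) + D @ dist) * dt \
                      + sigma(t, cur, y) * dw
```

The truncation horizon T controls how much of the infinite history enters the distributed-delay integral; for the probability measure assumed here, the neglected tail mass is 1/(1 + T).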

2 Preliminaries

Notations. Denote by C_b = C_b((−∞, 0], R^n) the family of all bounded continuous functions ϕ from (−∞, 0] to R^n with the norm ‖ϕ‖ = sup_{−∞<θ≤0} |ϕ(θ)|. Let w(t) be an m-dimensional Brownian motion on a complete probability space (Ω, F, {F_t}_{t≥0}, P) with the filtration satisfying the usual conditions, and let LV denote the diffusion operator associated with Eq. (1) (cf. [10]). Each delay function satisfies δ_i ∈ C¹(R_+; R_+); write Δ_i(t) = t − δ_i(t) and assume throughout that

\[ \eta_i := \inf_{t \ge 0} \Delta_i'(t) > 0 \quad (1 \le i \le n), \tag{4} \]


which clearly shows that Δ_i(t) is a strictly increasing function on [0, ∞) and has an inverse function Δ_i^{−1}(s) defined on [−δ_i(0), ∞) with the property

\[ \big[\Delta_i^{-1}(s)\big]' = \frac{1}{\Delta_i'(t)} \le \eta_i^{-1}, \qquad s = \Delta_i(t). \tag{5} \]

For instance, δ_i(t) = t/2 gives Δ_i(t) = t/2, η_i = 1/2 and Δ_i^{−1}(s) = 2s, as in the example of Section 4.

It is easy to obtain the following lemma.

Lemma 1. Let η_i be defined by (4); then η_i ≤ 1.

For any α, β ≥ 0 and ϕ ∈ C_b, define

\[ T_{\alpha\beta}(\varphi) = \int_{-\infty}^{0} e^{\alpha\theta} |\varphi(\theta)|^{\beta}\, d\theta, \qquad C(\alpha, \beta) = \{ \varphi \in C_b : T_{\alpha\beta}(\varphi) < \infty \}. \]

Denote by M_0 the family of all probability measures on (−∞, 0]. For any μ ∈ M_0 and ε ≥ 0, define

\[ \mu_\varepsilon := \int_{-\infty}^{0} e^{-\varepsilon\theta}\, d\mu(\theta), \qquad M_\varepsilon = \{ \mu \in M_0 : \mu_\varepsilon < \infty \}. \tag{6} \]

This paper often uses the following function Φ_ε:

\[ \Phi_\varepsilon := \Phi_\varepsilon(t, x, y, \varphi) = \sum_{i=1}^{n} K_i \Big( \int_{-\infty}^{0} \varphi_i^2(\theta)\, d\mu(\theta) - \mu_\varepsilon x_i^2 \Big) + \sum_{i=1}^{n} L_i \big( e^{-\varepsilon\delta_i(t)} y_i^2 - \eta_i^{-1} x_i^2 \big) \tag{7} \]

for any t ≥ 0, x, y ∈ R^n and ϕ ∈ C_b, where μ_ε is defined by (6), ε > 0, and K_i, L_i (1 ≤ i ≤ n) are nonnegative constants.

Lemma 2. Let Φ_ε be defined by (7) and let 0 ≤ q ≤ ε. If x(t) = x(t, ξ) (−∞ < t < σ) is a solution to Eq. (1) with initial data ξ ∈ C(q, 2), then

\[ \int_0^t e^{qs}\, \Phi_\varepsilon(s, x(s), y(s), x_s)\, ds \le \mathrm{const.} \quad (0 \le t < \sigma). \]

Proof. By the Fubini theorem and the substitution u = s + θ, for 0 ≤ q ≤ ε,

\[ \int_0^t e^{qs} \int_{-\infty}^{0} x_i^2(s+\theta)\, d\mu(\theta)\, ds \le \mu_q \Big( T_{q2}(\xi) + \int_0^t e^{qu} x_i^2(u)\, du \Big), \]

and since μ_q ≤ μ_ε, the K_i-terms of Φ_ε contribute at most K_i μ_q T_{q2}(ξ) after the −μ_ε x_i² terms are subtracted. The L_i-terms are handled similarly with the substitution u = Δ_i(s), using (5) and e^{qs−εδ_i(s)} ≤ e^{qΔ_i(s)}. □

3 Stability Results

This section establishes the stability results for Eq. (1).

Theorem 1. Assume that there exist positive constants ε and m_i (1 ≤ i ≤ n) such that the function V(x) = |x|² satisfies

\[ LV(t, x, y, \varphi) \le \Phi_\varepsilon - \sum_{i=1}^{n} m_i x_i^2 \tag{8} \]

for any t ≥ 0, x, y ∈ R^n and ϕ ∈ C_b, where Φ_ε is defined by (7). Then there exists q ∈ (0, ε] such that, for any given ξ ∈ C(q, 2), Eq. (1) admits a unique global solution x(t, ξ) which satisfies

\[ \limsup_{t\to\infty} t^{-1} \ln\big(\mathbb{E}|x(t,\xi)|^2\big) \le -q, \tag{9} \]
\[ \limsup_{t\to\infty} t^{-1} \ln|x(t,\xi)| \le -q/2 \quad \text{a.s.} \tag{10} \]


Proof. The proof is divided into two steps.

Step 1: Existence of the global solution. Fix ξ ∈ C(q, 2); then there exists a unique maximal local solution x(t) = x(t, ξ), −∞ < t < σ, to Eq. (1), where σ is the explosion time. Let k_0 be a positive number large enough that ‖ξ‖ ≤ k_0. For each integer k ≥ k_0, define the stopping time σ_k = inf{0 ≤ t < σ : V(x(t)) ≥ k}. Clearly, σ_k is increasing in k, so σ_k → σ_∞ ≤ σ as k → ∞. If we can show σ_∞ = ∞ a.s., then σ = ∞ a.s., which implies the desired result. This is equivalent to proving that, for any t > 0, P(σ_k ≤ t) → 0 as k → ∞. Letting t_k = t ∧ σ_k, condition (8), the Itô formula and Lemma 2 yield

\[ k\,\mathbb{P}(\sigma_k \le t) = \mathbb{E}\big[ I_{\{\sigma_k \le t\}} V(x(t_k)) \big] \le \mathbb{E} V(x(t_k)) \le \mathrm{const.} + \mathbb{E}\int_0^{t_k} LV(x(s))\, ds \]
\[ \le \mathrm{const.} + \mathbb{E}\int_0^{t_k} \Big( \Phi_\varepsilon(s, x(s), y(s), x_s) - \sum_{i=1}^{n} m_i |x_i(s)|^2 \Big) ds \le \mathrm{const.} - \sum_{i=1}^{n} m_i\, \mathbb{E}\int_0^{t_k} |x_i(s)|^2\, ds \le \mathrm{const.} \]

This implies P(σ_k ≤ t) → 0 as k → ∞, as required.

Step 2: Stability. Choose q ∈ (0, ε] sufficiently small and fix ξ ∈ C(q, 2). Let h(t) = e^{qt} V(x(t)). Applying the Itô formula, we have h(t) = h(0) + I + M(t), where

\[ M(t) = \int_0^t e^{qs}\, V_x(x(s))\, \sigma(s, x(s), y(s), x_s)\, dw(s) \]

is a continuous local martingale with M(0) = 0, and

\[ I = \int_0^t e^{qs} \big[ LV(x(s)) + qV(x(s)) \big] ds \le \int_0^t e^{qs} \Big( \Phi_\varepsilon(s, x(s), y(s), x_s) - \sum_{i=1}^{n} m_i |x_i(s)|^2 + qV(x(s)) \Big) ds \le \mathrm{const.} \]

Applying the semimartingale convergence theorem (see [10], p. 44, Theorem 7.4) gives

\[ \limsup_{t\to\infty} \mathbb{E}\, e^{qt} V(x(t)) < \infty, \qquad \limsup_{t\to\infty} e^{qt} V(x(t)) < \infty \quad \text{a.s.}, \]

which directly implies assertions (9) and (10), as required. □

Next, we apply Theorem 1 to derive more readily applicable criteria. Before giving the main results, we first present some conditions and notation.


For any t ≥ 0, x, y ∈ R^n and ϕ ∈ C_b, assume that G_i(t, y) and σ(t, x, y, ϕ) satisfy

\[ |G_i(t, y)| \le \beta_i |y_i|\, e^{-\varepsilon \delta_i(t)}, \tag{11} \]
\[ |\sigma(t, x, y, \varphi)|^2 \le \sum_{i=1}^{n} \Big( \lambda_i x_i^2 + \tilde\lambda_i \int_{-\infty}^{0} \varphi_i^2(\theta)\, d\mu(\theta) + \bar\lambda_i y_i^2 e^{-\varepsilon \delta_i(t)} + \mu_i G_i^2(t, y) \Big), \tag{12} \]

where μ ∈ M_ε and β_i, λ_i, λ̃_i, λ̄_i, μ_i (1 ≤ i ≤ n) are nonnegative constants. For the sake of simplicity, we introduce the notation

\[ \tilde A = (\tilde a_{ij})_{n\times n}, \quad \tilde a_{ij} = |a_{ij}|\, \beta_j, \quad q_i = \tilde q_i + \bar q_i\, \eta_i^{-1}, \quad s_i = \mu_i \beta_i^2\, \eta_i^{-1}, \tag{13} \]
\[ \hat\lambda_i = \lambda_i + \tilde\lambda_i + \bar\lambda_i\, \eta_i^{-1}, \qquad \hat\rho_i = \rho_i + \tilde\rho_i + \bar\rho_i\, \eta_i^{-1}. \tag{14} \]

Theorem 2. Under conditions (11) and (12), if there exist constants q̃_i, q̄_i, ρ_i, ρ̄_i, ρ̃_i (1 ≤ i ≤ n) such that the following conditions are satisfied:

\[ \tilde\rho_i \le \tilde q_i, \qquad \bar\rho_i \le \bar q_i, \tag{15} \]
\[ \hat\lambda_i + s_i < \hat\rho_i, \tag{16} \]
\[ (x^T, z^T, y^T)\, H \begin{pmatrix} x \\ z \\ y \end{pmatrix} \le -\sum_{i=1}^{n} \big( \rho_i x_i^2 + \tilde\rho_i z_i^2 + \bar\rho_i y_i^2 \big), \tag{17} \]

where

\[ H = \begin{pmatrix} \mathrm{diag}(q_i - 2b_i) & D & \tilde A \\ D^T & -\mathrm{diag}(\tilde q_i) & 0 \\ \tilde A^T & 0 & -\mathrm{diag}(\bar q_i) \end{pmatrix}, \tag{18} \]

then there exists q > 0 such that, for any given initial data ξ ∈ C(q, 2), Eq. (1) admits a unique global solution x(t, ξ) which satisfies (9) and (10).

Proof. For t ≥ 0, x, y ∈ R^n and ϕ ∈ C_b, let V(x) = |x|² and z = ∫_{−∞}^{0} ϕ(θ) dμ(θ). Applying the Itô formula to V(x) and using conditions (11), (12) and (15)-(17) gives

\[ LV(t, x, y, \varphi) = 2x^T(-Bx + AG(t, y) + Dz) + |\sigma(t, x, y, \varphi)|^2 \]
\[ = (x^T, z^T, y^T)\, H \begin{pmatrix} x \\ z \\ y \end{pmatrix} + \sum_{i=1}^{n} (\tilde q_i z_i^2 - q_i x_i^2 + \bar q_i y_i^2) + 2x^T A G(t, y) - 2x^T \tilde A y + |\sigma(t, x, y, \varphi)|^2 \]
\[ \le -\sum_{i=1}^{n} (\rho_i x_i^2 + \tilde\rho_i z_i^2 + \bar\rho_i y_i^2) + \sum_{i=1}^{n} (\tilde q_i z_i^2 - q_i x_i^2 + \bar q_i y_i^2) + \sum_{i=1}^{n} \Big( \lambda_i x_i^2 + \tilde\lambda_i \int_{-\infty}^{0} \varphi_i^2(\theta)\, d\mu(\theta) + \bar\lambda_i y_i^2 e^{-\varepsilon\delta_i(t)} + \mu_i G_i^2(t, y) \Big) \]
\[ \le \sum_{i=1}^{n} (\lambda_i - \rho_i - q_i) x_i^2 + \sum_{i=1}^{n} (\tilde\lambda_i - \tilde\rho_i + \tilde q_i) \int_{-\infty}^{0} \varphi_i^2(\theta)\, d\mu(\theta) + \sum_{i=1}^{n} (\bar\lambda_i - \bar\rho_i + \bar q_i + \mu_i \beta_i^2)\, y_i^2 e^{-\varepsilon\delta_i(t)} \]
\[ := \Phi_\varepsilon - \sum_{i=1}^{n} m_i x_i^2, \]

where

\[ \Phi_\varepsilon = \sum_{i=1}^{n} (\tilde\lambda_i - \tilde\rho_i + \tilde q_i) \Big( \int_{-\infty}^{0} \varphi_i^2(\theta)\, d\mu(\theta) - \mu_\varepsilon x_i^2 \Big) + \sum_{i=1}^{n} (\bar\lambda_i - \bar\rho_i + \bar q_i + \mu_i \beta_i^2) \big( y_i^2 e^{-\varepsilon\delta_i(t)} - \eta_i^{-1} x_i^2 \big) \]

is a function of the form (7), and

\[ m_i\big|_{\varepsilon=0} = \rho_i + q_i - \lambda_i - \tilde\lambda_i + \tilde\rho_i - \tilde q_i - (\bar\lambda_i - \bar\rho_i + \bar q_i + \mu_i \beta_i^2)\, \eta_i^{-1} \ge \hat\rho_i - \hat\lambda_i - s_i > 0. \]

This shows that condition (8) is satisfied; applying Theorem 1 gives the desired results. □

Choosing ρ_i = ρ̄_i = ρ̃_i ≡ ρ, we can derive the following result.

Corollary 1. Under conditions (11) and (12), if there exist constants q̃_i, q̄_i (1 ≤ i ≤ n) and ρ such that λ_M(H) ≤ −ρ, and moreover,

\[ \rho \le \tilde q_i \wedge \bar q_i, \tag{19} \]
\[ \hat\lambda_i + s_i < \rho\, (2 + \eta_i^{-1}) \quad (1 \le i \le n), \tag{20} \]

where H is defined by (18), then the conclusions of Theorem 2 hold.

Corollary 2. Under conditions (11) and (12), if

\[ 2b_i > \sum_{j=1}^{n} |d_{ij}| + \sum_{j=1}^{n} \tilde a_{ij} + \sum_{j=1}^{n} |d_{ji}| + \sum_{j=1}^{n} \tilde a_{ji}\, \eta_i^{-1} + s_i + \hat\lambda_i, \tag{21} \]

then the conclusions of Theorem 2 hold.

Proof. Choosing q̃_i = q̄_i = 0 and using (21), one can verify conditions (15)-(17) of Theorem 2; applying Theorem 2 then gives the desired results. The details are omitted. □
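Both criteria lend themselves to a direct numerical check. The sketch below (assuming NumPy; the function names and interfaces are ours, not from the paper) assembles H from (18) and tests the hypotheses of Corollaries 1 and 2:

```python
# A sketch of numerical checks for Corollaries 1 and 2 (NumPy assumed);
# all function names/interfaces are illustrative, not from the paper.
import numpy as np

def build_H(b, D, A_tilde, q_tilde, q_bar, eta):
    """Assemble the 3n x 3n matrix H of (18), with q_i = q~_i + q-_i / eta_i."""
    n = len(b)
    b, q_tilde, q_bar, eta = map(np.asarray, (b, q_tilde, q_bar, eta))
    H = np.zeros((3 * n, 3 * n))
    H[:n, :n] = np.diag(q_tilde + q_bar / eta - 2 * b)
    H[:n, n:2*n] = D;      H[n:2*n, :n] = D.T
    H[:n, 2*n:] = A_tilde; H[2*n:, :n] = A_tilde.T
    H[n:2*n, n:2*n] = -np.diag(q_tilde)
    H[2*n:, 2*n:] = -np.diag(q_bar)
    return H

def check_corollary1(H, q_tilde, q_bar, lam_hat, s, eta):
    """Take rho = -lambda_M(H) and test (19) and (20)."""
    rho = -np.linalg.eigvalsh(H).max()                 # H is symmetric
    ok = (rho > 0
          and np.all(rho <= np.minimum(q_tilde, q_bar))                # (19)
          and np.all(np.asarray(lam_hat) + np.asarray(s)
                     < rho * (2 + 1 / np.asarray(eta))))               # (20)
    return rho, ok

def check_corollary2(b, D, A_tilde, s, lam_hat, eta):
    """Componentwise test (21)."""
    rhs = (np.abs(D).sum(axis=1) + A_tilde.sum(axis=1)
           + np.abs(D).sum(axis=0) + A_tilde.sum(axis=0) / np.asarray(eta)
           + np.asarray(s) + np.asarray(lam_hat))
    return np.all(2 * np.asarray(b) > rhs)
```

Here A_tilde is the matrix Ã of (13); Section 4 applies these checks to a concrete two-neuron system.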


Remark 1. Applying Theorem 1 and Corollary 1 requires choosing parameters such as q̃_i, q̄_i, etc. Corollary 2 is stated only in terms of the system parameters and hence can be verified easily.

In the following, we consider a simpler case: condition (12) is simplified to

\[ |\sigma(t, x, y, \varphi)|^2 \le \sum_{i=1}^{n} \mu_i G_i^2(t, y). \tag{22} \]

Corollary 3. Under conditions (11) and (22), if there exist constants q̃_i, q̄_i (1 ≤ i ≤ n) and ρ such that λ_M(H) ≤ −ρ and condition (19) is satisfied, and moreover,

\[ s_i < \rho\, (2 + \eta_i^{-1}) \quad (1 \le i \le n), \tag{23} \]

where H is defined by (18), then the conclusions of Theorem 2 hold.

Corollary 4. Under conditions (11) and (22), if

\[ 2b_i > \sum_{j=1}^{n} |d_{ij}| + \sum_{j=1}^{n} \tilde a_{ij} + \sum_{j=1}^{n} |d_{ji}| + \sum_{j=1}^{n} \tilde a_{ji}\, \eta_i^{-1} + s_i, \tag{24} \]

then the conclusions of Theorem 2 hold.

Remark 2. In [1,2,5], it is assumed that the discrete delays are bounded; here, by introducing the decay factor e^{−εδ_i(t)}, our results can also be used in the case of unbounded delays.

4 A Two-Dimensional Example

Consider the following two-neuron stochastic neural network:

\[ dx(t) = \Big[ -Bx(t) + AG(t, y(t)) + \int_{-\infty}^{0} \frac{D x(t+\theta)}{(1-\theta)^2}\, d\theta \Big] dt + E\, G(t, y(t))\, dw(t), \tag{25} \]

where y(t) = x(t − δ(t)), δ(t) ∈ C¹(R_+; R_+) is a variable delay, and

\[ B = \begin{pmatrix} 6 & 0 \\ 0 & 8 \end{pmatrix}, \quad A = \begin{pmatrix} 1 & -1 \\ 1 & 1 \end{pmatrix}, \quad D = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}, \quad E = \begin{pmatrix} a & 0 \\ 0 & 0.2 \end{pmatrix}, \]

where a is a positive constant. Let dμ(θ) = (1 − θ)^{−2} dθ; then μ ∈ M_ε (0 < ε < 1). For simplicity, let η_i = 1/2 and β_i = 1 (i = 1, 2). Since here μ_1 = a² and μ_2 = 0.04, from (13) we have s_1 = 2a², s_2 = 0.08.

(i) Application of Corollary 3. Choose q̄_1 = q̃_1 = 3, q̄_2 = q̃_2 = 4. By (18) we can compute

\[ H = \begin{pmatrix}
-3 & 0 & 0 & 1 & 1 & 1 \\
0 & -4 & 0 & 0 & 1 & 1 \\
0 & 0 & -3 & 0 & 0 & 0 \\
1 & 0 & 0 & -4 & 0 & 0 \\
1 & 1 & 0 & 0 & -3 & 0 \\
1 & 1 & 0 & 0 & 0 & -4
\end{pmatrix}. \]
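The eigenvalue λ_M(H) quoted below can be verified in a few lines (a minimal sketch, assuming NumPy):

```python
# Sketch: verify the maximal eigenvalue of the 6x6 matrix H above.
import numpy as np

H = np.array([[-3,  0,  0,  1,  1,  1],
              [ 0, -4,  0,  0,  1,  1],
              [ 0,  0, -3,  0,  0,  0],
              [ 1,  0,  0, -4,  0,  0],
              [ 1,  1,  0,  0, -3,  0],
              [ 1,  1,  0,  0,  0, -4]], dtype=float)

lam_M = np.linalg.eigvalsh(H).max()   # H is symmetric, so eigvalsh applies
print(lam_M)                          # approx -1.2375
```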


A direct computation gives λ_M(H) = −1.2375, so we may take ρ = 1.2375. Substituting the parameters into (23) gives 2a² ∨ 0.08 < 4ρ = 4.95, which yields 0 < a < 1.5732. Applying Corollary 3, we conclude that for 0 < a < 1.5732 there exists q > 0 such that, for any given initial data ξ ∈ C(q, 2), Eq. (25) admits a unique global solution x(t, ξ) satisfying (9) and (10).

(ii) Application of Corollary 4. Substituting the parameters into (24) gives 0 < a < 1.5811. Applying Corollary 4, we conclude that for 0 < a < 1.5811 there exists q > 0 such that, for any given initial data ξ ∈ C(q, 2), Eq. (25) admits a unique global solution x(t, ξ) satisfying (9) and (10). Clearly, Corollary 4 is the more convenient criterion in applications.

Choosing the initial data ξ(s) ≡ 1 for s ∈ (−∞, 0], ε = 0.5 and δ(t) = 0.5t (so that G(t, y(t)) = y(t)e^{−εδ(t)} = x(0.5t)e^{−0.25t}), Fig. 1 shows that the trivial solution of Eq. (25) is exponentially stable in mean square.

[Fig. 1. Numerical simulation of (25): E|x(t)|² (vertical axis, 0–1) versus time t (horizontal axis, 0–4).]
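The mean-square decay shown in the figure can be reproduced (up to Monte Carlo error) with a simple Euler–Maruyama scheme. The sketch below is an illustration under stated assumptions: NumPy, the distributed delay truncated at θ = −10, and the step size, path count and a = 1 are our choices, not taken from the paper.

```python
# A sketch reproducing the mean-square decay of Eq. (25); the truncation
# horizon, step size, path count and a = 1 are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
a, dt = 1.0, 0.01                                   # a = 1 < 1.5732
B = np.diag([6.0, 8.0]); A = np.array([[1.0, -1.0], [1.0, 1.0]])
D = np.array([[0.0, 1.0], [0.0, 0.0]]); E = np.diag([a, 0.2])

hist, N, paths = int(10.0 / dt), int(4.0 / dt), 100
theta = -dt * np.arange(hist)[::-1]                 # grid on [-10, 0]
w_mu = dt / (1.0 - theta) ** 2                      # dmu = (1 - theta)^{-2} dtheta

msq = np.zeros(N)
for _ in range(paths):
    x = np.ones((hist + N, 2))                      # xi(s) = 1 on (-infty, 0]
    for k in range(N):
        t = k * dt
        cur = x[hist + k - 1]
        y = x[hist - 1 + k // 2]                    # y(t) = x(t - 0.5t) = x(t/2)
        g = y * np.exp(-0.25 * t)                   # G(t, y(t)) = y(t) e^{-eps*delta(t)}
        dist = w_mu @ x[k : hist + k]               # distributed-delay integral
        dw = rng.normal(0.0, np.sqrt(dt))           # scalar Brownian increment
        x[hist + k] = cur + (-B @ cur + A @ g + D @ dist) * dt + (E @ g) * dw
        msq[k] += x[hist + k] @ x[hist + k]
msq /= paths                                        # ~ E|x(t)|^2, decaying in t
```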

Acknowledgments. The authors would like to thank the referees for their helpful suggestions and comments.

References

1. Balasubramaniam, P., Rakkiyappan, R.: Global Asymptotic Stability of Stochastic Recurrent Neural Networks with Multiple Discrete Delays and Unbounded Distributed Delays. Applied Mathematics and Computation 204, 680–686 (2008)
2. Cao, J., Yuan, K., Li, H.: Global Asymptotic Stability of Recurrent Neural Networks with Multiple Discrete Delays and Distributed Delays. IEEE Trans. Neural Networks 17, 1646–1651 (2006)
3. Cao, J., Zhou, D.: Stability Analysis of Delayed Cellular Neural Networks. Neural Networks 11, 1601–1605 (1998)
4. Huang, C., He, Y., Wang, H.: Mean Square Exponential Stability of Stochastic Recurrent Neural Networks with Time-varying Delays. Computers and Mathematics with Applications 56, 1773–1778 (2008)
5. Li, X., Fu, X.: Stability Analysis of Stochastic Functional Differential Equations with Infinite Delay and its Application to Recurrent Neural Networks. Journal of Computational and Applied Mathematics 234(2), 407–417 (2010)
6. Liao, X., Mao, X.: Exponential Stability and Instability of Stochastic Neural Networks. Stochastic Anal. Appl. 14(2), 165–185 (1996)
7. Liao, X., Mao, X.: Stability of Stochastic Neural Networks. Neural, Parallel Sci. Comput. 4(2), 205–224 (1996)
8. Liu, Y., Wang, Z., Liu, X.: Stability Criteria for Periodic Neural Networks with Discrete and Distributed Delays. Nonlinear Dyn. 49, 93–103 (2007)
9. Liu, Y., Wang, Z., Liu, X.: Design of Exponential State Estimators for Neural Networks with Mixed Time Delays. Phys. Lett. A 364, 401–412 (2007)
10. Mao, X.: Stochastic Differential Equations and Applications. Horwood, Chichester (1997)
11. Mohamad, S., Gopalsamy, K.: Exponential Stability of Continuous-time and Discrete-time Cellular Neural Networks with Delays. Appl. Math. Comput. 135, 17–38 (2003)
12. Roska, T., Wu, C.W., Balsi, M., Chua, L.O.: Stability and Dynamics of Delay-type General and Cellular Neural Networks. IEEE Trans. Circuits Syst. I 39(6), 487–490 (1992)
13. Sun, Y., Cao, J.: pth Moment Exponential Stability of Stochastic Recurrent Neural Networks with Time-varying Delays. Nonlinear Analysis: Real World Applications 8, 1171–1185 (2007)
14. Tank, D.W., Hopfield, J.J.: Neural Computation by Concentrating Information in Time. Proc. Natl. Acad. Sci. USA 84, 1896–1900 (1987)
15. Wang, Z., Liu, Y., Liu, X.: On Global Asymptotic Stability of Neural Networks with Discrete and Distributed Delays. Phys. Lett. A 345, 299–308 (2005)
16. Zeng, Z., Wang, J., Liao, X.: Global Asymptotic Stability and Global Exponential Stability of Neural Networks with Unbounded Time-varying Delays. IEEE Trans. Circuits Syst. II, Express Briefs 52(3), 168–173 (2005)