RAZUMIKHIN-TYPE THEOREMS ON STABILITY OF STOCHASTIC NEURAL NETWORKS WITH DELAYS

Steve Blythe, Xuerong Mao1 and Anita Shah2
Department of Statistics and Modelling Science, University of Strathclyde, Glasgow G1 1XH, Scotland, U.K.

ABSTRACT

Although the stability of neural networks has been studied by many authors, the problem of stochastic effects on the stability was not investigated until recently by Liao and Mao [7, 8]. In this paper we shall investigate the stability problem for stochastic neural networks with time-varying delay. The main technique employed in this paper is the well-known Razumikhin argument, which is completely different from the techniques used in Liao and Mao [7, 8].

1. INTRODUCTION

Theoretical understanding of neural-network dynamics has advanced greatly in the past ten years (cf. Cohen and Grossberg [1], Denker [2], Hopfield [4, 5], Hopfield and Tank [6], Guez et al. [13]). In many networks, time

1 For any correspondence regarding this paper please address it to this author.
2 Supported by the R&D fund of Strathclyde University.

delays can not be avoided. For example, in electronic neural networks, time delays will be present due to the finite switching speed of amplifiers. Marcus and Westervelt [11] proposed, in a similar way to Hopfield [4], a model for a network with delay as follows:

Ci ẋi(t) = −(1/Ri) xi(t) + Σ_{j=1}^n Tij gj(xj(t − τ)),   1 ≤ i ≤ n,   (1.1)

on t ≥ 0. The variable xi(t) represents the voltage on the input of the ith neuron. Each neuron is characterized by an input capacitance Ci, a time delay τ and a transfer function gi(·). The connection-matrix element Tij has the value +1/Rij when the noninverting output of the jth neuron is connected to the input of the ith neuron through a resistance Rij, and the value −1/Rij when the inverting output of the jth neuron is connected to the input of the ith neuron through a resistance Rij. The parallel resistance at the input of each neuron is defined by Ri = (Σ_{j=1}^n |Tij|)^{−1}. The nonlinear transfer function gi(u) is sigmoidal, saturating at ±1 with maximum slope at u = 0. That is, in mathematical terms, gi(u) is nondecreasing and

|gi(u)| ≤ 1 ∧ βi|u|   for all −∞ < u < ∞,   (1.2)

where βi is the finite slope of gi(u) at u = 0. By defining

bi = 1/(Ci Ri),   aij = Tij/Ci,

equation (1.1) can be re-written as

ẋi(t) = −bi xi(t) + Σ_{j=1}^n aij gj(xj(t − τ)),   1 ≤ i ≤ n,   (1.3)

or, in matrix form,

ẋ(t) = −Bx(t) + Ag(x(t − τ)),   t ≥ 0,   (1.4)

where x(t) = (x1(t), ···, xn(t))^T, g(x) = (g1(x1), ···, gn(xn))^T, A = (aij)_{n×n}, B = diag(b1, ···, bn), with

bi = Σ_{j=1}^n |aij|,   1 ≤ i ≤ n.   (1.5)
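In computational terms, the passage from (1.1) to (1.3)–(1.5) is a simple rescaling; the sketch below (with hypothetical circuit values Tij, Ci, not taken from the paper) checks the identity (1.5) numerically.

```python
import numpy as np

# From the circuit model (1.1) to the normalized form (1.3)-(1.5).
# T and C are hypothetical circuit values; T_ij = +-1/R_ij as in the text.
T = np.array([[0.5, -0.25],
              [0.25, 0.5]])            # connection matrix (T_ij)
C = np.array([1.0, 2.0])               # input capacitances C_i

R = 1.0 / np.abs(T).sum(axis=1)        # R_i = (sum_j |T_ij|)^(-1)
b = 1.0 / (C * R)                      # b_i = 1/(C_i R_i)
A = T / C[:, None]                     # a_ij = T_ij / C_i

# Relation (1.5): b_i = sum_j |a_ij| holds automatically.
print(b, np.abs(A).sum(axis=1))
```

By construction b_i = (1/C_i) Σ_j |T_ij| = Σ_j |a_ij|, so the two printed vectors coincide.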

It is clear that for any given initial data x(θ) = ξ(θ) on −τ ≤ θ ≤ 0, which is in C([−τ, 0]; R^n), equation (1.4) has a unique global solution on t ≥ 0. Suppose that there exists a stochastic perturbation to the neural network and the stochastically perturbed network may be described by a stochastic differential delay equation

dx(t) = [−Bx(t) + Ag(x(t − τ))]dt + σ(x(t), x(t − τ))dw(t).   (1.6)

Moreover, under even closer scrutiny, it turns out that the time delay is often time-dependent rather than constant. Then, a more realistic model for a stochastic neural network would be

dx(t) = [−Bx(t) + Ag(x(t − δ(t)))]dt + σ(x(t), x(t − δ(t)))dw(t).   (1.7)
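Although the analysis that follows is entirely analytic, equation (1.7) is easy to explore numerically with an Euler–Maruyama scheme. The following is a minimal sketch for a hypothetical two-neuron network; the particular choices of B, A, g, δ and σ are illustrative assumptions only.

```python
import numpy as np

# Euler-Maruyama simulation of the stochastic delay network (1.7),
#   dx = [-B x + A g(x(t - delta(t)))] dt + sigma(x, x(t - delta(t))) dw,
# for an illustrative 2-neuron network (all parameter values hypothetical).
rng = np.random.default_rng(0)

B = np.diag([4.0, 3.0])                  # decay rates b_i
A = np.array([[1.0, 0.5], [0.5, 1.0]])   # connection weights a_ij
g = np.tanh                              # sigmoidal transfer, beta_i = 1
tau = 1.0                                # bound on the delay
delta = lambda t: 0.5 * (1 + np.sin(t))  # time-varying delay in [0, tau]
sigma = lambda x, y: 0.1 * (np.abs(x) + np.abs(y))  # diagonal noise, sigma(0,0)=0

dt = 1e-3
n_steps = 5000
n_hist = int(tau / dt)                    # history points for the initial segment
x = np.zeros((n_hist + n_steps + 1, 2))
x[: n_hist + 1] = np.array([1.0, -1.0])   # constant initial data xi on [-tau, 0]

for k in range(n_steps):
    t = k * dt
    cur = n_hist + k
    lag = cur - int(round(delta(t) / dt))  # index of x(t - delta(t))
    drift = -B @ x[cur] + A @ g(x[lag])
    dw = rng.normal(0.0, np.sqrt(dt), 2)
    x[cur + 1] = x[cur] + drift * dt + sigma(x[cur], x[lag]) * dw

print("|x(T)| =", np.linalg.norm(x[-1]))
```

For these (stable) parameter values the trajectory contracts toward the equilibrium position 0, which is the behaviour the stability theory below guarantees.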

Liao and Mao [7, 8] have investigated the exponential stability of equation (1.6) via the method of Lyapunov functionals. However, their technique does not work for the more general equation (1.7). The aim of this paper is to develop the Razumikhin argument (cf. Razumikhin [14, 15]) used in the study of stability of deterministic differential delay equations to cope with the stability of the stochastic delay neural network (1.7).

2. MAIN RESULTS

Throughout this paper, unless otherwise specified, we let τ > 0 and δ : R+ → [0, τ] be a continuous function. Denote by C([−τ, 0]; R^n) the family of continuous functions ϕ from [−τ, 0] to R^n with the norm ||ϕ|| = sup_{−τ≤θ≤0} |ϕ(θ)|,

where | · | is the Euclidean norm in R^n. If A is a vector or matrix, its transpose is denoted by A^T. If A is a matrix, its operator norm ||A|| is defined by ||A|| = sup{|Ax| : |x| = 1} (without any confusion with ||ϕ||). Moreover, let w(t) = (w1(t), ···, wm(t))^T be an m-dimensional Brownian motion defined on a complete probability space (Ω, F, P) with the natural filtration {Ft}_{t≥0} (i.e. Ft = σ{w(s) : 0 ≤ s ≤ t}). For every t ≥ 0, denote by L²_{Ft}([−τ, 0]; R^n) the family of all Ft-measurable C([−τ, 0]; R^n)-valued random variables φ = {φ(θ) : −τ ≤ θ ≤ 0} such that ||φ||²_{L²} := sup_{−τ≤θ≤0} E|φ(θ)|² < ∞. Let L²(Ω; R^n) denote the family of all R^n-valued random variables X such that E|X|² < ∞. Let σ : R^n × R^n → R^{n×m} (i.e. σ(x, y) = (σij(x, y))_{n×m}) be locally Lipschitz continuous and satisfy the linear growth condition (cf. Mao [9, 10] or Mohammed [12]). Let ξ = {ξ(θ) : −τ ≤ θ ≤ 0} ∈ L²_{F0}([−τ, 0]; R^n). Consider the stochastic delay neural network (1.7), namely

dx(t) = [−Bx(t) + Ag(x(t − δ(t)))]dt + σ(x(t), x(t − δ(t)))dw(t)   (2.1)

on t ≥ 0 with initial data x(θ) = ξ(θ) for θ ∈ [−τ, 0], where B, A, g have been defined before. It is well known (cf. Mao [9, 10] or Mohammed [12]) that equation (2.1) has a unique global solution on t ≥ 0, which is denoted by x(t; ξ) in this paper, and that the second moment of the solution is continuous. For the stability purposes of this paper we will also assume that σ(0, 0) = 0, so equation (2.1) admits the solution x(t; 0) ≡ 0 corresponding to the initial data x(θ) = 0 on [−τ, 0]. This solution is called the trivial solution or equilibrium position. Let C²(R^n; R+) denote the family of all C²-functions from R^n to R+. If V ∈ C²(R^n; R+), define an operator LV : R^n × R^n → R by

LV(x, y) = V_x(x)[−Bx + Ag(y)] + (1/2) trace[σ^T(x, y) V_xx(x) σ(x, y)],

where V_x(x) = (∂V(x)/∂x1, ···, ∂V(x)/∂xn) and V_xx(x) = (∂²V(x)/∂xi∂xj)_{n×n}.
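For the quadratic choice V(x) = x^T Qx (used for Corollary 2.2 below), V_x(x) = 2x^T Q and V_xx(x) = 2Q, so LV can be evaluated directly. A minimal numerical sketch, with all network data hypothetical:

```python
import numpy as np

# The operator LV evaluated for the quadratic Lyapunov function
# V(x) = x^T Q x, for which V_x(x) = 2 x^T Q and V_xx(x) = 2Q:
#   LV(x, y) = 2 x^T Q [-B x + A g(y)] + trace(sigma^T(x,y) Q sigma(x,y)).
def LV(x, y, Q, B, A, g, sigma):
    drift = -B @ x + A @ g(y)
    S = sigma(x, y)                      # n x m diffusion matrix
    return 2 * x @ Q @ drift + np.trace(S.T @ Q @ S)

# Hypothetical 2-neuron data, just to exercise the formula.
Q = np.eye(2)
B = np.diag([4.0, 3.0])
A = np.array([[1.0, 0.5], [0.5, 1.0]])
g = np.tanh
sigma = lambda x, y: 0.1 * np.diag(np.abs(x) + np.abs(y))  # sigma(0,0) = 0

x = np.array([0.2, -0.1])
y = np.array([0.1, 0.3])
print(LV(x, y, Q, B, A, g, sigma))       # negative near the equilibrium
```

Note that LV(0, 0) = 0, as it must be for the trivial solution, and that for this stable choice of data LV is negative at the sample point.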

The main idea of the Razumikhin technique is to use a Lyapunov function, rather than a functional, to investigate the stability of the delay system. Applying the well-known Itô formula to e^{λt}V(x(t)) we have

e^{λt} EV(x(t)) = EV(ξ(0)) + ∫_0^t e^{λs} [λ EV(x(s)) + E LV(x(s), x(s − δ(s)))] ds.

Exponential stability in mean square would require that

E LV(x(t), x(t − δ(t))) ≤ −λ EV(x(t))   (2.2)

for all t ≥ 0. As a result, one would be forced to impose very severe restrictions on the functions g and σ, to the extent that the state x(t) plays a dominant role while the history x(t − δ(t)) has little impact. Therefore, the results would apply only to networks that are very similar to non-delay ones. Fortunately, by the Razumikhin argument, one needs (2.2) to hold only for those t ≥ 0 for which EV(x(t − δ(t))) ≤ qEV(x(t)), where q > 1 is a constant, but not necessarily for all t ≥ 0. Hence the restrictions on the functions can be much weakened. This is the basic idea used in this paper. Let us now start to establish the Razumikhin-type theorems.

Theorem 2.1 Let λ, c1, c2 all be positive numbers, and q > 1. Assume that there exists a function V ∈ C²(R^n; R+) such that

c1|x|² ≤ V(x) ≤ c2|x|²   for all x ∈ R^n   (2.3)

and, moreover, that

E LV(X, Y) ≤ −λ EV(X)   (2.4)

for those X, Y ∈ L²(Ω; R^n) which satisfy EV(Y) ≤ qEV(X). Then, for all ξ ∈ L²_{F0}([−τ, 0]; R^n), we have

E|x(t; ξ)|² ≤ (c2/c1) ||ξ||²_{L²} e^{−γt}   on t ≥ 0,   (2.5)

where γ = min{λ, log(q)/τ}. In other words, the trivial solution of equation (2.1) is exponentially stable in mean square.

Proof. Fix any initial data ξ ∈ L²_{F0}([−τ, 0]; R^n) and write x(t; ξ) = x(t) to simplify notation. Without loss of generality, we may assume that ||ξ||²_{L²} > 0; otherwise ξ = 0 a.s., hence x(t) ≡ 0, and (2.5) holds already. Therefore, by condition (2.3), we see that sup_{−τ≤θ≤0} EV(ξ(θ)) > 0. Let γ̄ ∈ (0, γ) be arbitrary. It is easy to show that

0 < γ̄ < λ   and   q > e^{γ̄τ}.   (2.6)

We now claim that

e^{γ̄t} EV(x(t)) ≤ sup_{−τ≤θ≤0} EV(ξ(θ))   for all t ≥ 0.   (2.7)

Suppose this is not true. Then there is a number ρ ≥ 0 such that

e^{γ̄t} EV(x(t)) ≤ e^{γ̄ρ} EV(x(ρ)) = sup_{−τ≤θ≤0} EV(ξ(θ))   (2.8)

for all 0 ≤ t ≤ ρ and, further, there is a sequence {tk}_{k≥1} such that tk ↓ ρ and

e^{γ̄tk} EV(x(tk)) > e^{γ̄ρ} EV(x(ρ)).   (2.9)

Noting that (2.8) holds for −τ ≤ t ≤ 0 as well, we find e^{γ̄(ρ−δ(ρ))} EV(x(ρ − δ(ρ))) ≤ e^{γ̄ρ} EV(x(ρ)). Therefore, by (2.6),

EV(x(ρ − δ(ρ))) ≤ e^{γ̄δ(ρ)} EV(x(ρ)) ≤ e^{γ̄τ} EV(x(ρ)) ≤ q EV(x(ρ)).

By condition (2.4), E LV(x(ρ), x(ρ − δ(ρ))) ≤ −λ EV(x(ρ)). Recall the fact that γ̄ < λ and EV(x(ρ)) > 0. Using the continuity of the solution and of the functions V, δ etc., one then sees that for all sufficiently small h > 0,

E LV(x(t), x(t − δ(t))) ≤ −γ̄ EV(x(t))   if ρ ≤ t ≤ ρ + h.

Now, by Itô's formula, we can derive that, for all sufficiently small h > 0,

e^{γ̄(ρ+h)} EV(x(ρ + h)) − e^{γ̄ρ} EV(x(ρ)) = ∫_ρ^{ρ+h} e^{γ̄t} [E LV(x(t), x(t − δ(t))) + γ̄ EV(x(t))] dt ≤ 0.

But this is in contradiction with (2.9), so (2.7) must hold. Finally, we obtain from (2.7) and condition (2.3) that

E|x(t)|² ≤ (c2/c1) ||ξ||²_{L²} e^{−γ̄t},

and the desired assertion (2.5) follows by letting γ̄ → γ. The proof is complete.

Corollary 2.2 Assume that there exists a symmetric positive-definite n × n matrix Q, and constants λ > 0, q > 1, such that

E{2X^T Q[−BX + Ag(Y)] + trace[σ^T(X, Y) Q σ(X, Y)]} ≤ −λ E(X^T QX)

for all of those X, Y ∈ L²(Ω; R^n) satisfying E(Y^T QY) ≤ q E(X^T QX). Then, for all ξ ∈ L²_{F0}([−τ, 0]; R^n), we have

E|x(t; ξ)|² ≤ (λmax(Q)/λmin(Q)) ||ξ||²_{L²} e^{−γt}   on t ≥ 0,

where γ = min{λ, log(q)/τ}. Here λmax(Q) and λmin(Q) denote the largest and smallest eigenvalues of Q, respectively. This corollary follows from Theorem 2.1 by using V(x) = x^T Qx.

We now establish a theorem on the almost sure exponential stability of the stochastic delay neural network (2.1).

Theorem 2.3 Assume that there is a positive constant K such that

trace[σ^T(x, y)σ(x, y)] ≤ K(|x|² + |y|²)   for all x, y ∈ R^n.   (2.10)

Then (2.5) implies that

lim sup_{t→∞} (1/t) log|x(t; ξ)| ≤ −γ/2   a.s.   (2.11)

In particular, if all of the assumptions of Theorem 2.1 are fulfilled and in addition (2.10) holds, then the trivial solution of equation (2.1) is almost surely exponentially stable. This theorem can be proved in the same way as Lemma 4.6 of Liao and Mao [8], so the details are omitted.

3. USEFUL COROLLARIES

We shall now employ the main results obtained in the previous section to establish a number of useful corollaries.

Corollary 3.1 Let λ1 > λ2 > 0 and c2 ≥ c1 > 0. Assume that there exists a function V ∈ C²(R^n; R+) such that

c1|x|² ≤ V(x) ≤ c2|x|²   for all x ∈ R^n   (3.1)

and

LV(x, y) ≤ −λ1 V(x) + λ2 V(y)   for all (x, y) ∈ R^n × R^n.   (3.2)

Then, for all ξ ∈ L²_{F0}([−τ, 0]; R^n), we have

E|x(t; ξ)|² ≤ (c2/c1) ||ξ||²_{L²} e^{−(λ1 − qλ2)t}   on t ≥ 0,   (3.3)

where q ∈ (1, λ1/λ2) is the unique root to the equation λ1 − qλ2 = log(q)/τ.

Proof. For any pair of X, Y ∈ L²(Ω; R^n) with EV(Y) ≤ qEV(X), we have from condition (3.2) that

E LV(X, Y) ≤ −λ1 EV(X) + λ2 EV(Y) ≤ −(λ1 − qλ2) EV(X).

An application of Theorem 2.1 (with λ = λ1 − qλ2) yields the desired assertion (3.3).

Corollary 3.2 Assume that there exists a symmetric positive-definite n × n matrix Q, and constants λ1 > λ2 > 0, such that

2x^T Q[−Bx + Ag(y)] + trace[σ^T(x, y) Q σ(x, y)] ≤ −λ1 x^T Qx + λ2 y^T Qy   (3.4)

for all (x, y) ∈ R^n × R^n. Then, for all ξ ∈ L²_{F0}([−τ, 0]; R^n),

E|x(t; ξ)|² ≤ (λmax(Q)/λmin(Q)) ||ξ||²_{L²} e^{−(λ1 − qλ2)t}   on t ≥ 0,   (3.5)

where q ∈ (1, λ1/λ2) is as defined in Corollary 3.1. This corollary follows from Corollary 3.1 by using V(x) = x^T Qx.

So far we have not used conditions (1.2) and (1.5) explicitly, but we shall do so from now on. To make the statements clearer, we will mention these conditions when they are used explicitly, although they are the standing hypotheses.

Corollary 3.3 Let (1.2) hold. Assume that there are nonnegative constants νi, µi, 1 ≤ i ≤ n, such that

trace[σ^T(x, y)σ(x, y)] ≤ Σ_{i=1}^n (νi xi² + µi yi²)   for all (x, y) ∈ R^n × R^n.   (3.6)

If, furthermore, there is a matrix (εij)_{n×n} with all the elements positive such that

λ1 := min_{1≤i≤n} (2bi − νi − Σ_{j=1}^n |aij| βj εij) > λ2 := max_{1≤j≤n} (µj + Σ_{i=1}^n |aij| βj/εij),   (3.7)

then the trivial solution of equation (2.1) is exponentially stable in mean square and is also almost surely exponentially stable.

Proof. Let V(x) = |x|². Then

LV(x, y) = 2x^T[−Bx + Ag(y)] + trace[σ^T(x, y)σ(x, y)]
  ≤ −2 Σ_{i=1}^n bi xi² + 2 Σ_{i,j=1}^n xi aij gj(yj) + Σ_{i=1}^n (νi xi² + µi yi²)
  ≤ −Σ_{i=1}^n (2bi − νi) xi² + 2 Σ_{i,j=1}^n |aij| βj |xi| |yj| + Σ_{j=1}^n µj yj².

Noting that

2|xi| |yj| ≤ εij xi² + yj²/εij,

we obtain that

LV(x, y) ≤ −Σ_{i=1}^n (2bi − νi) xi² + Σ_{i,j=1}^n |aij| βj (εij xi² + yj²/εij) + Σ_{j=1}^n µj yj²
  ≤ −Σ_{i=1}^n (2bi − νi − Σ_{j=1}^n |aij| βj εij) xi² + Σ_{j=1}^n (µj + Σ_{i=1}^n |aij| βj/εij) yj²
  ≤ −λ1 |x|² + λ2 |y|².

By Corollary 3.1, we see that for all ξ ∈ L²_{F0}([−τ, 0]; R^n),

E|x(t; ξ)|² ≤ ||ξ||²_{L²} e^{−(λ1 − qλ2)t}   on t ≥ 0,   (3.8)

where q ∈ (1, λ1/λ2) is the unique root to the equation λ1 − qλ2 = log(q)/τ. That is, the trivial solution is exponentially stable in mean square. Finally, almost sure exponential stability follows from Theorem 2.3, (3.8) and condition (3.6).

The use of this corollary depends on the choice of the matrix (εij)_{n×n}, which should be selected based on the structure of the given stochastic delay neural network. Although it can give better conditions on stability, it is somewhat inconvenient in application when the dimension n of the network is large. We shall now establish a number of easier-to-use criteria.

Corollary 3.4 Let (1.2), (1.5) and (3.6) hold. Assume that the network is symmetric in the sense that

|aij| = |aji|,   1 ≤ i, j ≤ n.   (3.9)

If

max_{0<ε<2} { min_{1≤i≤n} [(2 − ε)bi − νi] − max_{1≤j≤n} (µj + bj βj²/ε) } > 0,   (3.10)

then the trivial solution of equation (2.1) is exponentially stable in mean square and is also almost surely exponentially stable. In particular, in the case when νi = ν and µi = µ for all 1 ≤ i ≤ n, condition (3.10) holds if

κ > (1/2){ζ + ν + µ + √(ζ[ζ + 2(ν + µ)])},   (3.11)

where

κ = min_{1≤i≤n} bi   and   ζ = max_{1≤j≤n} bj βj².

Proof. By (3.10), one can find an ε ∈ (0, 2) for which

min_{1≤i≤n} [(2 − ε)bi − νi] > max_{1≤j≤n} (µj + bj βj²/ε).   (3.12)

Set the elements of the matrix (εij) by εij = ε/βj. Then, using (1.5) and (3.9), we have

λ1 := min_{1≤i≤n} (2bi − νi − Σ_{j=1}^n |aij| βj εij) = min_{1≤i≤n} (2bi − νi − ε Σ_{j=1}^n |aij|) = min_{1≤i≤n} [(2 − ε)bi − νi],

and

λ2 := max_{1≤j≤n} (µj + Σ_{i=1}^n |aij| βj/εij) = max_{1≤j≤n} (µj + (βj²/ε) Σ_{i=1}^n |aij|) = max_{1≤j≤n} (µj + bj βj²/ε).

By (3.12), we see that λ1 > λ2, so the stability assertions follow from Corollary 3.3. In the case νi = ν and µi = µ, we have

max_{0<ε<2} { min_{1≤i≤n} [(2 − ε)bi − ν] − (µ + max_{1≤j≤n} bj βj²/ε) } = max_{0<ε<2} [(2 − ε)κ − ν − µ − ζ/ε] = 2κ − ν − µ − 2√(κζ),

the maximum being attained at ε = √(ζ/κ), and an elementary computation shows that this maximum is positive if and only if (3.11) holds. The proof is complete.

Corollary 3.5 Let (1.2) and (3.6) hold. If

min_{1≤i≤n} (2bi − νi) > min_{ε>0} max_{1≤j≤n} (ε + µj + ||A||² βj²/ε),   (3.13)

then the trivial solution of equation (2.1) is exponentially stable in mean square and is also almost surely exponentially stable.

Proof. By (3.13), one can find an ε > 0 for which λ1 > λ2, where

λ1 := min_{1≤i≤n} (2bi − νi) − ε

and

λ2 := max_{1≤j≤n} (µj + (1/ε)||A||² βj²).

Let V(x) = |x|². Then

LV(x, y) = 2x^T[−Bx + Ag(y)] + trace[σ^T(x, y)σ(x, y)]
  ≤ −2 Σ_{i=1}^n bi xi² + 2|x| ||A|| |g(y)| + Σ_{i=1}^n (νi xi² + µi yi²)
  ≤ −Σ_{i=1}^n (2bi − νi) xi² + ε|x|² + (1/ε)||A||² |g(y)|² + Σ_{j=1}^n µj yj²
  ≤ −Σ_{i=1}^n (2bi − νi − ε) xi² + Σ_{j=1}^n (µj + (1/ε)||A||² βj²) yj²   (3.15)
  ≤ −λ1 |x|² + λ2 |y|².

Therefore mean square exponential stability follows from Corollary 3.1, while almost sure exponential stability follows from Theorem 2.3. In the case

µj = µ for all 1 ≤ j ≤ n, we have

min_{ε>0} max_{1≤j≤n} (ε + µj + ||A||² βj²/ε) = µ + min_{ε>0} (ε + ||A||² max_{1≤j≤n} βj²/ε) = µ + 2||A|| max_{1≤j≤n} βj.

So (3.13) becomes

min_{1≤i≤n} (2bi − νi) > µ + 2||A|| max_{1≤j≤n} βj,   (3.14)

as required. The proof is complete.

Before a discussion of examples, let us establish one more corollary.

Corollary 3.6 Let (1.2) hold. Assume that there is a symmetric nonnegative-definite n × n matrix H, and nonnegative constants µi, 1 ≤ i ≤ n, such that

trace[σ^T(x, y)σ(x, y)] ≤ x^T Hx + Σ_{i=1}^n µi yi²   for all (x, y) ∈ R^n × R^n.   (3.16)

If there is an ε > 0 such that

λmin(2B − H − εAA^T) > max_{1≤i≤n} (βi²/ε + µi),   (3.17)

then the trivial solution of equation (2.1) is exponentially stable in mean square and is also almost surely exponentially stable.

Proof. Let V(x) = |x|². Then

LV(x, y) = 2x^T[−Bx + Ag(y)] + trace[σ^T(x, y)σ(x, y)]
  ≤ −2x^T Bx + 2x^T Ag(y) + x^T Hx + Σ_{i=1}^n µi yi²
  ≤ −x^T(2B − H)x + ε x^T AA^T x + (1/ε)|g(y)|² + Σ_{i=1}^n µi yi²
  ≤ −x^T(2B − H − εAA^T)x + Σ_{i=1}^n (βi²/ε + µi) yi²
  ≤ −λmin(2B − H − εAA^T)|x|² + max_{1≤i≤n} (βi²/ε + µi)|y|².

Therefore the conclusions follow from Corollary 3.1 and Theorem 2.3.

4. EXAMPLES

In this section we shall discuss a number of examples in order to illustrate the theory. We shall not mention the initial data since they always belong to L²_{F0}([−τ, 0]; R^n).

Example 4.1 Let us first consider a 3-dimensional symmetric stochastic delay neural network

dx(t) = [−Bx(t) + Ag(x(t − δ(t)))]dt + σ(x(t), x(t − δ(t)))dw(t).   (4.1)

Here we choose

B = diag(4, 4, 4),   A = [1 2 1; 2 1 1; 1 1 2],

gi(xi) = (0.5xi ∧ 1) ∨ (−1),   g(x) = (g1(x1), g2(x2), g3(x3))^T.

Moreover, σ : R³ × R³ → R^{3×m} satisfies

trace[σ^T(x, y)σ(x, y)] ≤ ν|x|² + µ|y|²   (4.2)

for some nonnegative constants ν and µ. Note that (1.2) and (1.5) are satisfied and, in particular, that b1 = b2 = b3 = 4 and β1 = β2 = β3 = 0.5. In this case, criterion (3.11) becomes

4 > (1/2)[1 + ν + µ + √(1 + 2(ν + µ))],

which yields

ν + µ < 4.   (4.3)
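The criterion just obtained, and the computation of ||A|| used below, can be checked mechanically (a verification sketch assuming numpy):

```python
import numpy as np

# Example 4.1: verify the operator norm of A and the reduction of
# criterion (3.11) to nu + mu < 4.
A = np.array([[1, 2, 1],
              [2, 1, 1],
              [1, 1, 2]], dtype=float)
print(np.linalg.norm(A, 2))            # operator norm ||A|| = 4

kappa, zeta = 4.0, 4.0 * 0.5**2        # kappa = min b_i, zeta = max b_j beta_j^2
crit = lambda s: kappa - 0.5 * (zeta + s + np.sqrt(zeta * (zeta + 2 * s)))
# (3.11) holds precisely for s = nu + mu below 4:
print(crit(3.999) > 0, crit(4.001) > 0)
```

The sign change of `crit` at s = 4 confirms that (3.11) is equivalent here to ν + µ < 4.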

Therefore, by Corollary 3.4, we see that (4.3) is a sufficient condition for mean square and a.s. exponential stability. On the other hand, it is easy to show that ||A|| = 4. Thus criterion (3.14) becomes

2 × 4 − ν > µ + 2 × 4 × 0.5,

which yields (4.3) too. In other words, Corollary 3.5 gives the same stability condition as Corollary 3.4 in this example.

Example 4.2 Let us still consider network (4.1) but with different A and B as follows:

B = diag(2, 3, 4),   A = [0 1 1; 1 1 1; 1 1 2].

Besides, condition (4.2) is replaced with

trace[σ^T(x, y)σ(x, y)] ≤ x^T Hx + µ|y|²,   (4.4)

where µ ≥ 0 and H is a symmetric nonnegative-definite 3 × 3 matrix. Note that the network is still symmetric and β1 = β2 = β3 = 0.5, but b1 = 2, b2 = 3, b3 = 4. It is easy to show that ||A|| = 3.215. If we apply Corollary 3.5, we can show that a sufficient condition for exponential stability is ||H|| + µ < 0.785. On the other hand, by Corollary 3.4, we can obtain a better condition ||H|| + µ < 4 − 2√2 = 1.172. However, we shall now apply Corollary 3.6 to obtain an even better condition. According to Corollary 3.6, we need to seek an ε > 0 such that

λmin(2B − H − εAA^T) > 0.25/ε + µ.   (4.5)

Since λmin(2B − H − εAA^T) ≥ λmin(2B − εAA^T) − ||H||, it is enough to find an ε > 0 for which

λmin(2B − εAA^T) − ||H|| > 0.25/ε + µ.   (4.6)

Say we choose ε = 0.216 and compute λmin(2B − εAA^T) = 3.25792. Substituting these into (4.6) yields the stability condition

||H|| + µ < 2.1005,   (4.7)

which improves the above condition by about 79%. Let us further specify H as

H = [0 0 0; 0 1.25 1.25; 0 1.25 2.5].

Compute ||H|| = 3.27254 and we therefore see that condition (4.7) is not satisfied. However, we can still use (4.5) to show exponential stability if µ is sufficiently small. For example, choose ε = 0.16 and compute λmin(2B − H − εAA^T) = 2.28382. Substituting these into (4.5), we see that a sufficient condition for the stability in this case is now

µ < 0.72132.   (4.8)
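All of the norms and eigenvalues quoted in this example can be reproduced in a few lines (a verification sketch assuming numpy):

```python
import numpy as np

# Example 4.2: reproduce the quoted norms and minimal eigenvalues.
B = np.diag([2.0, 3.0, 4.0])
A = np.array([[0, 1, 1],
              [1, 1, 1],
              [1, 1, 2]], dtype=float)
H = np.array([[0, 0, 0],
              [0, 1.25, 1.25],
              [0, 1.25, 2.5]])

lam = lambda M: np.linalg.eigvalsh(M).min()   # smallest eigenvalue (symmetric M)

print(np.linalg.norm(A, 2))                   # ||A||, quoted as 3.215
print(np.linalg.norm(H, 2))                   # ||H||, quoted as 3.27254
print(lam(2 * B - 0.216 * A @ A.T))           # quoted as 3.25792, used in (4.6)
print(lam(2 * B - H - 0.16 * A @ A.T))        # quoted as 2.28382, used in (4.5)
```

Since all four matrices are symmetric, `eigvalsh` and the 2-norm give exactly the quantities used in the text.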

Note that ||H|| + µ < 3.99386, which improves (4.7) greatly, but of course H is known in this case.

Example 4.3 Consider a 2-dimensional stochastic delay neural network

dx(t) = [−Bx(t) + Ag(x(t − δ(t)))]dt + σ(x(t), x(t − δ(t)))dw(t).   (4.9)

Here w(t) is an m-dimensional Brownian motion,

B = [5 0; 0 3],   A = [1 4; 2 1],

gi(xi) = 0.5(e^{xi} − e^{−xi})/(e^{xi} + e^{−xi}),   g(x) = (g1(x1), g2(x2))^T,

and, moreover, σ : R² × R² → R^{2×m} satisfies

trace[σ^T(x, y)σ(x, y)] ≤ ν1 x1² + ν2 x2² + 3y1² + y2².

We shall apply our results to obtain restrictions on the parameters ν1 and ν2 in order to have the required exponential stability. First of all, note that β1 = β2 = 0.5. By Corollary 3.6, we need to seek an ε > 0 such that

λmin(2B − H − εAA^T) > 3 + 0.25/ε,   (4.10)

where H = diag(ν1, ν2). It is sufficient to have

λmin(2B − εAA^T) − ν1 ∨ ν2 > 3 + 0.25/ε.   (4.11)

Numerically we find the optimal ε ≈ 0.155 and compute the corresponding eigenvalue λmin(2B − εAA^T) ≈ 4.87733. Hence, (4.11) yields the stability condition

ν1 ∨ ν2 < 0.2644.   (4.12)
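The search for the optimal ε in (4.11) is a one-dimensional concave maximization and can be reproduced by a short grid search (a verification sketch assuming numpy):

```python
import numpy as np

# Example 4.3, Corollary 3.6: maximize over eps > 0 the function
#   f(eps) = lambda_min(2B - eps*A*A^T) - 3 - 0.25/eps;
# the maximum value is the admissible bound on nu1 v nu2 in (4.12).
B = np.diag([5.0, 3.0])
A = np.array([[1.0, 4.0], [2.0, 1.0]])

def f(eps):
    return np.linalg.eigvalsh(2 * B - eps * A @ A.T).min() - 3 - 0.25 / eps

eps_grid = np.linspace(0.01, 1.0, 991)        # step 0.001
vals = np.array([f(e) for e in eps_grid])
best = eps_grid[vals.argmax()]
print(best, f(best))   # eps ~ 0.155, bound ~ 0.2644
```

Since λmin of an affine symmetric family is concave in ε and −0.25/ε is concave on ε > 0, f is concave, so the grid maximum is the global one up to the grid resolution.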

We now demonstrate how to apply Corollary 3.3 to improve this result. According to this corollary, we need to seek four positive numbers εij, 1 ≤ i, j ≤ 2, such that

min{10 − ν1 − 0.5(ε11 + 4ε12), 6 − ν2 − 0.5(2ε21 + ε22)} > max{3 + 0.5(1/ε11 + 4/ε12), 1 + 0.5(2/ε21 + 1/ε22)}.   (4.13)

By choosing ε11 = ε22 = 1, (4.13) reduces to

min{9.5 − ν1 − 2ε12, 5.5 − ν2 − ε21} > max{3.5 + 2/ε12, 1.5 + 1/ε21}.   (4.14)

Now look for ε12 and ε21 such that

3.5 + 2/ε12 = 1.5 + 1/ε21,   i.e.   ε21 = ε12/(2(ε12 + 1)).

By setting ε12 = ε, (4.14) becomes

min{9.5 − ν1 − 2ε, 5.5 − ν2 − ε/(2(ε + 1))} > 3.5 + 2/ε.   (4.15)
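The choice of ε in (4.15) can be automated with elementary root finding; the following sketch (plain-Python bisection, no external libraries) reproduces the numbers used below.

```python
# Balancing the two min-terms in (4.15): choose eps with
#   9.5 - 2*eps = 5.5 - eps/(2*(eps + 1)),
# then read off the bound nu1 v nu2 < 9.5 - 2*eps - (3.5 + 2/eps).
# h is decreasing on [1, 3] with a sign change, so bisection suffices.
def h(e):
    return (9.5 - 2 * e) - (5.5 - e / (2 * (e + 1)))

lo, hi = 1.0, 3.0                       # h(1) > 0 > h(3)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if h(mid) > 0 else (lo, mid)
eps = 0.5 * (lo + hi)

bound = 9.5 - 2 * eps - (3.5 + 2 / eps)
print(eps, bound)                       # eps ~ 2.17116, bound ~ 0.736, cf. (4.16)
```

Equivalently, clearing denominators turns the balancing equation into the quadratic 4ε² − 5ε − 8 = 0, whose positive root is ε = (5 + √153)/8.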

If we do not know which of ν1 and ν2 is larger, it would be better to select ε such that

9.5 − 2ε = 5.5 − ε/(2(ε + 1)),

which yields ε = 2.17116. Substituting this into (4.15) gives the stability condition

ν1 ∨ ν2 < 0.7364,   (4.16)

which improves condition (4.12) by 178%. Alternatively, if we know that ν1 would be larger than ν2, then we can choose, for example, ε = 2 to obtain the stability condition

ν1 < 1   and   ν2 < 2/3.   (4.17)

On the other hand, for ν1 and ν2 at these levels one can check that

λmin(2B − H − εAA^T) < 3 + 0.25/ε   for every ε > 0,   (4.18)

which means that (4.10) will never hold for any ε > 0. In other words, Corollary 3.6 will not give any better result than Corollary 3.3 in this particular example. However, as demonstrated, it is not easy to select the εij even in this example of dimension 2, and it could be very difficult indeed in the case of higher dimensions. To conclude this section, let us stress that the examples above show the advantages and disadvantages of the different corollaries. In theory, they complement each other. Therefore, in application, one should use one or the other based on the structure of the given network in order to obtain better stability conditions.

REFERENCES

[1] Cohen, M.A. and Grossberg, S., Absolute stability of global pattern formation and parallel memory storage by competitive neural networks, IEEE Trans. on Systems, Man and Cybernetics 13 (1983), 815–826.
[2] Denker, J.S. (Ed.), Neural Networks for Computing (Snowbird, UT, 1986), Proceedings of the Conference on Neural Networks for Computing, AIP, New York, 1986.
[3] Hendler, J., Spreading activation over distributed microfeatures, in D.S. Touretzky (Ed.), "Advances in Neural Information Processing Systems," Morgan Kaufmann, San Mateo, U.S.A. (1989), 553–559.
[4] Hopfield, J.J., Neural networks and physical systems with emergent collective computational abilities, Proc. Natl. Acad. Sci. USA 79 (1982), 2554–2558.
[5] Hopfield, J.J., Neurons with graded response have collective computational properties like those of two-state neurons, Proc. Natl. Acad. Sci. USA 81 (1984), 3088–3092.
[6] Hopfield, J.J. and Tank, D.W., Computing with neural circuits: a model, Science 233 (1986), 625–633.
[7] Liao, X.X. and Mao, X., Exponential stability and instability of stochastic neural networks, Stochastic Analysis and Applications 14 (1996), 165–185.
[8] Liao, X.X. and Mao, X., Stability of stochastic neural networks, Neural, Parallel and Scientific Computations 4 (1996), 205–224.
[9] Mao, X., Stability of Stochastic Differential Equations with Respect to Semimartingales, Longman Scientific and Technical, 1991.
[10] Mao, X., Exponential Stability of Stochastic Differential Equations, Marcel Dekker, 1994.
[11] Marcus, C.M. and Westervelt, R.M., Stability of analog neural networks with delay, Physical Review A 39 (1989), 347–359.
[12] Mohammed, S.-E.A., Stochastic Functional Differential Equations, Longman Scientific and Technical, 1986.
[13] Guez, A., Protopopescu, V. and Barhen, J., On the stability, storage capacity and design of nonlinear continuous neural networks, IEEE Trans. on Systems, Man and Cybernetics 18 (1988), 80–87.

[14] Razumikhin, B.S., On the stability of systems with a delay, Prikl. Mat. Meh. 20 (1956), 500–512.
[15] Razumikhin, B.S., Application of Lyapunov's method to problems in the stability of systems with a delay, Automat. i Telemeh. 21 (1960), 740–749. (Translated into English in Automat. Remote Control 21 (1960), 515–520.)