Exponential Stability of Stochastic Differential Delay Equations
Xuerong Mao and Anita Shah
Department of Statistics and Modelling Science, University of Strathclyde, Glasgow G1 1XH, Scotland, U.K.
Abstract: In this paper we study both $p$th moment and almost sure exponential stability of the stochastic differential delay equation $dx(t) = f(t,x(t),x(t-\tau))\,dt + g(t,x(t),x(t-\tau))\,dw(t)$. Introduce the corresponding stochastic differential equation (without delay) $dx(t) = f(t,x(t),x(t))\,dt + g(t,x(t),x(t))\,dw(t)$ and assume it is exponentially stable, which is guaranteed by the existence of a Lyapunov function. We shall show that the original stochastic differential delay equation remains exponentially stable provided the time lag $\tau$ is sufficiently small, and a bound for such $\tau$ is obtained.
Key Words: stochastic differential delay equations, Lyapunov function, Lyapunov exponent, Borel-Cantelli lemma.
AMS 1991 Classifications: 60H10, 34K30
1. Introduction

In many branches of science and industry stochastic differential delay equations have been used to model evolution phenomena, because the measurements of time-varying variables and their dynamics usually contain some delays (cf. Kolmanovskii & Myshkis [4], Mohammed [12]). Stability is one of the most important problems in the study of stochastic differential delay equations, and many authors, e.g. Arnold, Kolmanovskii, Mao, Mohammed, have devoted their interest to this area. One of the powerful techniques employed in the study of the stability problem is the method of Lyapunov functions or functionals (cf. Kolmanovskii & Nosov [5], Mao [6, 7]). However, it is generally much more difficult to construct Lyapunov functionals in the delay case than Lyapunov functions in the non-delay case. Therefore another useful technique has been developed, namely to compare the stochastic differential delay equation with the corresponding non-delay equation. To explain, let us look at a stochastic differential delay equation
$$dx(t) = f(t,x(t),x(t-\tau))\,dt + g(t,x(t),x(t-\tau))\,dw(t). \eqno(1.1)$$
Re-write equation (1.1) as
$$dx(t) = f(t,x(t),x(t))\,dt + g(t,x(t),x(t))\,dw(t) - \big[f(t,x(t),x(t)) - f(t,x(t),x(t-\tau))\big]\,dt - \big[g(t,x(t),x(t)) - g(t,x(t),x(t-\tau))\big]\,dw(t)$$
(Supported by the RD fund of Strathclyde University.)
and regard it as the perturbed system of the corresponding stochastic differential equation (without delay)
$$dx(t) = f(t,x(t),x(t))\,dt + g(t,x(t),x(t))\,dw(t). \eqno(1.2)$$
Obviously, if the time lag $\tau$ is sufficiently small, then the perturbation term
$$\big[f(t,x(t),x(t)) - f(t,x(t),x(t-\tau))\big]\,dt + \big[g(t,x(t),x(t)) - g(t,x(t),x(t-\tau))\big]\,dw(t)$$
could be so small that the perturbed equation (1.1) would behave asymptotically in a similar way to equation (1.2). For example, one could hope that if equation (1.2) is exponentially stable and the time lag $\tau$ is sufficiently small, then equation (1.1) will remain exponentially stable. This paper is devoted to proving this fact and giving a bound for such $\tau$. As a result, to find out whether the delay equation (1.1) is exponentially stable, one can first check the exponential stability of the non-delay equation (1.2) and then see if the time lag $\tau$ is sufficiently small. In other words, we have transferred a complicated problem, the stability of the delay equation, into a relatively simpler one, the stability of the non-delay equation. The idea of comparing the delay equation (1.1) with the corresponding non-delay equation (1.2) was originally developed in the study of asymptotic stability for the deterministic linear differential delay equation
$$\dot x(t) = Ax(t) + Bx(t-\tau) = (A+B)x(t) - B\big(x(t) - x(t-\tau)\big), \eqno(1.3)$$
which was regarded as the perturbed system of
$$\dot y(t) = (A+B)y(t). \eqno(1.4)$$
Several authors, e.g. Hale & Lunel [2], Mori & Kokame [13, 14], Su, Fong & Tseng [15-17], have shown that if $A+B$ is a stable matrix and $\tau$ is sufficiently small, then the delay equation (1.3) is asymptotically stable. It is crucial to find the best bound for such $\tau$. In the stochastic case, Mao [9] investigated the exponential stability of the linear Itô delay equation
$$dx(t) = Ax(t)\,dt + B\big(x(t) - x(t-\tau)\big)\,dw(t) \eqno(1.5)$$
when $A$ is a stable matrix, while Mao [10] compared equation (1.1) with the ordinary differential equation
$$\dot x(t) = f(t,x(t),x(t)) \eqno(1.6)$$
and obtained some sufficient criteria for the exponential stability of equation (1.1). In this paper we will generalize our previous results considerably. Comparisons between our new results and those in Mao [10] will be given in Section 3, while the linear stochastic differential delay equation discussed in Example 4.1 below generalizes equation (1.5).
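The comparison idea can be seen numerically on the scalar special case $\dot x(t) = -x(t-\tau)$ of (1.3), i.e. $A = 0$, $B = -1$. This illustration is ours, not the paper's; for this particular equation it is classical that asymptotic stability holds exactly for $\tau < \pi/2$, so a small lag preserves the stability of $\dot y = -y$ while a large lag destroys it.

```python
def simulate_delay_ode(tau: float, T: float = 40.0, dt: float = 0.01) -> list:
    """Euler scheme for the scalar delay equation x'(t) = -x(t - tau),
    with constant history x(t) = 1 for t <= 0 (case A = 0, B = -1 of (1.3))."""
    d = round(tau / dt)                      # delay measured in time steps
    x = [1.0]
    for i in range(round(T / dt)):
        lagged = x[i - d] if i >= d else 1.0  # x(t - tau), using the history for t < tau
        x.append(x[i] - lagged * dt)
    return x

# Lag well below the classical threshold pi/2: the solution decays;
# lag beyond it: the solution oscillates with growing amplitude.
small = simulate_delay_ode(0.1)
large = simulate_delay_ode(2.0)
```

The run with $\tau = 0.1$ converges to zero, while the run with $\tau = 2 > \pi/2$ exhibits growing oscillations, in the spirit of the "sufficiently small lag" results discussed above.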
2. Exponential Stability

Throughout this paper, unless otherwise specified, let $w(t) = (w_1(t),\ldots,w_m(t))^T$ be an $m$-dimensional Brownian motion defined on a complete probability space $(\Omega,\mathcal F,P)$ with a natural filtration $\{\mathcal F_t\}_{t\ge 0}$ (i.e. $\mathcal F_t = \sigma\{w(s) : 0\le s\le t\}$). Denote by $|\cdot|$ the Euclidean norm. If $A$ is a vector or matrix, its transpose is denoted by $A^T$. If $A$ is a matrix, denote by $\|A\|$ its operator norm, i.e. $\|A\| = \sup\{|Ax| : |x| = 1\}$. Let $C^{2,1}(R_+\times R^n; R_+)$ denote the family of all functions $V(t,x)$ from $R_+\times R^n$ to $R_+$ which are continuously twice differentiable in $x$ and once in $t$. For any such function $V(t,x)$, define
$$V_x(t,x) = \big(V_{x_1}(t,x),\ldots,V_{x_n}(t,x)\big)\qquad\text{and}\qquad V_{xx}(t,x) = \big(V_{x_i x_j}(t,x)\big)_{n\times n}.$$
Let $\tau > 0$ and $p \ge 2$. Denote by $L^p_{\mathcal F_0}([-\tau,0]; R^n)$ the family of $R^n$-valued stochastic processes $\xi(s,\omega)$, $-\tau \le s \le 0$, such that $\xi(s,\omega)$ is $\mathcal B([-\tau,0])\times\mathcal F_0$-measurable (i.e. jointly measurable in $s$ and $\omega$) and $\int_{-\tau}^0 E|\xi(s)|^p\,ds < \infty$. Consider the $n$-dimensional stochastic differential delay equation
$$dx(t) = f(t,x(t),x(t-\tau))\,dt + g(t,x(t),x(t-\tau))\,dw(t) \eqno(2.1)$$
on $t \ge 0$ with initial data $x(t) = \xi(t)$ for $-\tau \le t \le 0$. Here $\xi := \{\xi(s) : -\tau \le s \le 0\} \in L^p_{\mathcal F_0}([-\tau,0]; R^n)$. Moreover, $f : R_+\times R^n\times R^n \to R^n$ and $g : R_+\times R^n\times R^n \to R^{n\times m}$ are locally Lipschitz continuous and satisfy the linear growth condition. That is, for each pair of $i > 0$ and $T > 0$ there is a constant $K_{i,T} > 0$ such that
$$|f(t,x,y) - f(t,\bar x,\bar y)| \vee \|g(t,x,y) - g(t,\bar x,\bar y)\| \le K_{i,T}\big(|x-\bar x| + |y-\bar y|\big)$$
if $0 \le t \le T$ and $x,\bar x,y,\bar y \in R^n$ with $|x|\vee|y|\vee|\bar x|\vee|\bar y| \le i$; moreover, for each $T > 0$ there is a constant $K_T$ such that
$$|f(t,x,y)| \vee \|g(t,x,y)\| \le K_T\big(1 + |x| + |y|\big)$$
if $0 \le t \le T$ and $x,y \in R^n$. It is well known (cf. Mao [6, 7], Mohammed [12]) that equation (2.1) then has a unique solution, denoted by $x(t;\xi)$ in this paper, and the $p$th moment of the solution is finite. For the purpose of this paper, assume also that $f(t,0,0) \equiv 0$ and $g(t,0,0) \equiv 0$, so equation (2.1) admits a trivial solution $x(t;0) \equiv 0$. The trivial solution of equation (2.1), or simply equation (2.1), is said to be $p$th moment exponentially stable if there is a constant $\lambda > 0$ such that
$$\limsup_{t\to\infty}\frac 1t\log\big(E|x(t;\xi)|^p\big) \le -\lambda$$
for all $\xi \in L^p_{\mathcal F_0}([-\tau,0]; R^n)$. Introduce the corresponding stochastic differential equation (without delay)
$$dx(t) = f(t,x(t),x(t))\,dt + g(t,x(t),x(t))\,dw(t). \eqno(2.2)$$
The associated diffusion operator $L$ acting on a function $V(t,x) \in C^{2,1}(R_+\times R^n; R_+)$ is given by
$$LV(t,x) = V_t(t,x) + V_x(t,x)f(t,x,x) + \tfrac 12\,\mathrm{trace}\big[g^T(t,x,x)V_{xx}(t,x)g(t,x,x)\big].$$
As explained in Section 1, to study the $p$th moment exponential stability of equation (2.1) we shall compare it with equation (2.2). It is therefore natural to assume that equation (2.2) is $p$th moment exponentially stable. By the well-known theorem of Has'minskii (cf. Has'minskii [3], Theorem 5.7.2), this is almost equivalent to the following hypothesis (Has'minskii's conditions are slightly stronger, but for the purpose of this paper we prefer to impose the following one):

(H1) There exist a function $V(t,x) \in C^{2,1}(R_+\times R^n; R_+)$ and positive constants $k_i$, $1\le i\le 5$, such that
$$k_1|x|^p \le V(t,x) \le k_2|x|^p,\qquad LV(t,x) \le -k_3|x|^p,\qquad |V_x(t,x)| \le k_4|x|^{p-1},\qquad \|V_{xx}(t,x)\| \le k_5|x|^{p-2}$$
for all $(t,x) \in R_+\times R^n$.

Besides, we impose another hypothesis:

(H2) There exist non-negative constants $\beta_i$, $1\le i\le 2$, and $\gamma_j$, $1\le j\le 4$, such that
$$|f(t,x,x) - f(t,x,y)| \le \beta_1|x-y|, \eqno(2.3)$$
$$\mathrm{trace}\big[(g(t,x,y) - g(t,x,x))^T(g(t,x,y) - g(t,x,x))\big] \le \beta_2^2|x-y|^2, \eqno(2.4)$$
$$|f(t,x,y)|^p \le \gamma_1|x|^p + \gamma_2|y|^p, \eqno(2.5)$$
$$\big[\mathrm{trace}\big(g(t,x,y)g^T(t,x,y)\big)\big]^{p/2} \le \gamma_3|x|^p + \gamma_4|y|^p \eqno(2.6)$$
for all $t \ge 0$ and $x,y \in R^n$. Clearly, hypothesis (H2) is not very restrictive. For example, if both $f$ and $g$ are Lipschitz continuous, that is, there exists a $K > 0$ such that
$$|f(t,x,y) - f(t,\bar x,\bar y)| \le K\big(|x-\bar x| + |y-\bar y|\big),\qquad \mathrm{trace}\big[(g(t,x,y) - g(t,\bar x,\bar y))^T(g(t,x,y) - g(t,\bar x,\bar y))\big] \le K\big(|x-\bar x|^2 + |y-\bar y|^2\big),$$
then conditions (2.3) and (2.4) follow directly, while conditions (2.5) and (2.6) follow from this and our standing hypothesis $f(t,0,0) \equiv 0$, $g(t,0,0) \equiv 0$. In other words, hypothesis (H2) is weaker than Lipschitz continuity.
Theorem 2.1. Let hypotheses (H1) and (H2) hold. Then equation (2.1) is $p$th moment exponentially stable if
$$\tau < \bar\tau := \left(\frac{\sqrt{\theta_3^2 + 2^{p+1}(\gamma_1+\gamma_2)\theta_4}\; -\; \theta_3}{2^p(\gamma_1+\gamma_2)}\right)^{2/p}, \eqno(2.7)$$
where
$$\theta_1 = \beta_1 k_4 + \beta_2 k_5(\gamma_3+\gamma_4)^{1/p},\qquad \theta_2 = \beta_2^2 k_5,\qquad \theta_3 = (2)^{(p-2)/2}\big[p(p-1)\big]^{p/2}(\gamma_3+\gamma_4),$$
$$\theta_4 = \begin{cases}\left(\dfrac{\sqrt{\theta_1^2 + 2\theta_2 k_3}\; -\; \theta_1}{\theta_2}\right)^p & \text{if }\beta_2 \ne 0,\\[3mm] \left(\dfrac{k_3}{\beta_1 k_4}\right)^p & \text{if }\beta_2 = 0.\end{cases}$$
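The bound (2.7) is fully explicit and easy to evaluate. The following sketch is our own illustration, not code from the paper; it simply transcribes the constants of Theorem 2.1 as stated above (function and argument names are ours).

```python
import math

def tau_bar(p, k3, k4, k5, b1, b2, g1, g2, g3, g4):
    """Evaluate the admissible-lag bound (2.7) of Theorem 2.1.

    p >= 2; k3, k4, k5 come from the Lyapunov function in (H1);
    b1, b2, g1..g4 are the constants beta_1, beta_2, gamma_1..gamma_4 of (H2)."""
    theta1 = b1 * k4 + b2 * k5 * (g3 + g4) ** (1.0 / p)
    if b2 != 0:
        theta2 = b2 * b2 * k5
        z = (math.sqrt(theta1 ** 2 + 2.0 * theta2 * k3) - theta1) / theta2
    else:
        z = k3 / (b1 * k4)
    theta4 = z ** p
    theta3 = 2.0 ** ((p - 2) / 2.0) * (p * (p - 1)) ** (p / 2.0) * (g3 + g4)
    y = (math.sqrt(theta3 ** 2 + 2.0 ** (p + 1) * (g1 + g2) * theta4)
         - theta3) / (2.0 ** p * (g1 + g2))
    return y ** (2.0 / p)
```

For instance, for the one-dimensional equation discussed in Section 3 below ($f = -y$, $g = x - 0.5y$, $G = 1$, so $p = 2$, $k_3 = 1.75$, $k_4 = k_5 = 2$, $\beta_1 = 1$, $\beta_2 = 0.5$, $\gamma_1 = 0$, $\gamma_2 = 1$, $\gamma_3 = \gamma_4 = 1.25$), `tau_bar` returns approximately $0.044$, consistent up to rounding with the bound $\tau < 0.043$ quoted there.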
The proof is rather technical. To make it more understandable, we divide it into several lemmas.

Lemma 2.2. Let $u, v, \varepsilon_1, \varepsilon_2$ be positive numbers. Then
$$u^{p-1}v \le \frac{\varepsilon_1(p-1)}{p}u^p + \frac{v^p}{p\varepsilon_1^{p-1}}, \eqno(2.8)$$
$$u^{p-2}v^2 \le \frac{\varepsilon_2(p-2)}{p}u^p + \frac{2v^p}{p\varepsilon_2^{(p-2)/2}}. \eqno(2.9)$$
This lemma can easily be proved by using the elementary inequality $u^\alpha v^{1-\alpha} \le \alpha u + (1-\alpha)v$ for $0 < \alpha < 1$, so we omit the details.

Lemma 2.3. Let $\Phi(t) = (\Phi_{ij}(t))_{n\times m}$ be an $n\times m$-matrix-valued progressively measurable stochastic process such that
$$E\int_0^t\big[\mathrm{trace}\big(\Phi(s)^T\Phi(s)\big)\big]^{p/2}\,ds < \infty$$
for all $t \ge 0$. Then
$$E\left|\int_s^t \Phi(r)\,dw(r)\right|^p \le \left[\frac{p(p-1)}{2}\right]^{p/2}(t-s)^{(p-2)/2}\,E\int_s^t\big[\mathrm{trace}\big(\Phi(r)^T\Phi(r)\big)\big]^{p/2}\,dr$$
for all $0 \le s < t < \infty$. This lemma can be proved in the same way as Theorem 4.6.3 in Friedman [1], where he showed the inequality in the case when $p \ge 2$ is an integer and $\Phi(t)$ is a real-valued progressively measurable process. The details of the proof can be found in the forthcoming book Mao [11] (Theorem 1.7.1).

Lemma 2.4. Let hypothesis (H2) hold and $\varepsilon > 0$. Write $x(t;\xi) = x(t)$ simply. Then
$$\int_0^t e^{\varepsilon s}E|x(s) - x(s-\tau)|^p\,ds \le c_1 + \tau e^{\varepsilon\tau}\big(\alpha_1 + \alpha_2 e^{\varepsilon\tau}\big)\int_0^t e^{\varepsilon s}E|x(s)|^p\,ds \eqno(2.10)$$
for all $t \ge 0$, where
$$\alpha_1 = \gamma_1(2\tau)^{p-1} + \gamma_3(2\tau)^{(p-2)/2}\big[p(p-1)\big]^{p/2},\qquad \alpha_2 = \gamma_2(2\tau)^{p-1} + \gamma_4(2\tau)^{(p-2)/2}\big[p(p-1)\big]^{p/2},$$
and $c_1$ is a constant larger than $\int_0^\tau e^{\varepsilon s}E|x(s) - x(s-\tau)|^p\,ds$.

Proof. Obviously we need only to show (2.10) for $t \ge \tau$. Note that for $s \ge \tau$,
$$E|x(s) - x(s-\tau)|^p \le 2^{p-1}E\left|\int_{s-\tau}^s f(r,x(r),x(r-\tau))\,dr\right|^p + 2^{p-1}E\left|\int_{s-\tau}^s g(r,x(r),x(r-\tau))\,dw(r)\right|^p.$$
By the Hölder inequality, Lemma 2.3 and hypothesis (H2) one can then derive that
$$E|x(s) - x(s-\tau)|^p \le (2\tau)^{p-1}E\int_{s-\tau}^s\big(\gamma_1|x(r)|^p + \gamma_2|x(r-\tau)|^p\big)\,dr + (2\tau)^{(p-2)/2}\big[p(p-1)\big]^{p/2}E\int_{s-\tau}^s\big(\gamma_3|x(r)|^p + \gamma_4|x(r-\tau)|^p\big)\,dr$$
$$= \alpha_1\int_{s-\tau}^s E|x(r)|^p\,dr + \alpha_2\int_{s-\tau}^s E|x(r-\tau)|^p\,dr, \eqno(2.11)$$
where $\alpha_1$ and $\alpha_2$ have been defined before. Hence for any $t \ge \tau$,
$$\int_0^t e^{\varepsilon s}E|x(s) - x(s-\tau)|^p\,ds = \int_0^\tau e^{\varepsilon s}E|x(s) - x(s-\tau)|^p\,ds + \int_\tau^t e^{\varepsilon s}E|x(s) - x(s-\tau)|^p\,ds$$
$$\le c_2 + \alpha_1\int_\tau^t e^{\varepsilon s}\int_{s-\tau}^s E|x(r)|^p\,dr\,ds + \alpha_2\int_\tau^t e^{\varepsilon s}\int_{s-\tau}^s E|x(r-\tau)|^p\,dr\,ds, \eqno(2.12)$$
where $c_2 = \int_0^\tau e^{\varepsilon s}E|x(s) - x(s-\tau)|^p\,ds$. But
$$\int_\tau^t e^{\varepsilon s}\int_{s-\tau}^s E|x(r)|^p\,dr\,ds = \int_0^t E|x(r)|^p\left(\int_{r\vee\tau}^{(r+\tau)\wedge t} e^{\varepsilon s}\,ds\right)dr \le \tau e^{\varepsilon\tau}\int_0^t e^{\varepsilon r}E|x(r)|^p\,dr \eqno(2.13)$$
and similarly
$$\int_\tau^t e^{\varepsilon s}\int_{s-\tau}^s E|x(r-\tau)|^p\,dr\,ds \le \tau e^{\varepsilon\tau}\int_0^t e^{\varepsilon r}E|x(r-\tau)|^p\,dr. \eqno(2.14)$$
On the other hand,
$$\int_0^t e^{\varepsilon r}E|x(r-\tau)|^p\,dr \le \int_0^\tau e^{\varepsilon r}E|x(r-\tau)|^p\,dr + \int_\tau^t e^{\varepsilon r}E|x(r-\tau)|^p\,dr$$
$$\le e^{\varepsilon\tau}\int_{-\tau}^0 E|\xi(r)|^p\,dr + e^{\varepsilon\tau}\int_\tau^t e^{\varepsilon(r-\tau)}E|x(r-\tau)|^p\,dr \le c_3 + e^{\varepsilon\tau}\int_0^t e^{\varepsilon r}E|x(r)|^p\,dr,$$
where $c_3 = e^{\varepsilon\tau}\int_{-\tau}^0 E|\xi(r)|^p\,dr$. Substituting this into (2.14) gives
$$\int_\tau^t e^{\varepsilon s}\int_{s-\tau}^s E|x(r-\tau)|^p\,dr\,ds \le c_3\tau e^{\varepsilon\tau} + \tau e^{2\varepsilon\tau}\int_0^t e^{\varepsilon r}E|x(r)|^p\,dr. \eqno(2.15)$$
Finally, substituting (2.13) and (2.15) into (2.12) yields
$$\int_0^t e^{\varepsilon s}E|x(s) - x(s-\tau)|^p\,ds \le c_2 + \alpha_2 c_3\tau e^{\varepsilon\tau} + \tau e^{\varepsilon\tau}\big(\alpha_1 + \alpha_2 e^{\varepsilon\tau}\big)\int_0^t e^{\varepsilon r}E|x(r)|^p\,dr,$$
and the required inequality (2.10) follows by letting $c_1 = c_2 + \alpha_2 c_3\tau e^{\varepsilon\tau}$. The proof is complete.

Lemma 2.5. Let hypotheses (H1) and (H2) hold. Let $\varepsilon_1$ and $\varepsilon_2$ be two arbitrary positive constants. Write $x(t;\xi) = x(t)$ simply. Then
$$dV(t,x(t)) \le \big[(-k_3 + \lambda_3)|x(t)|^p + \lambda_4|x(t) - x(t-\tau)|^p\big]dt + V_x(t,x(t))g(t,x(t),x(t-\tau))\,dw(t) \eqno(2.16)$$
for all $t \ge 0$, where
$$\lambda_3 = \frac{\varepsilon_1(p-1)}{p}\big[\beta_1 k_4 + \beta_2 k_5(\gamma_3+\gamma_4)^{1/p}\big] + \frac{\varepsilon_2(p-2)}{2p}\beta_2^2 k_5,\qquad \lambda_4 = \frac{\beta_1 k_4 + \beta_2 k_5(\gamma_3+\gamma_4)^{1/p}}{p\varepsilon_1^{p-1}} + \frac{\beta_2^2 k_5}{p\varepsilon_2^{(p-2)/2}}.$$

Proof. By Itô's formula,
$$dV(t,x(t)) = \Big[V_t(t,x(t)) + V_x(t,x(t))f(t,x(t),x(t-\tau)) + \tfrac 12\mathrm{trace}\big[g^T(t,x(t),x(t-\tau))V_{xx}(t,x(t))g(t,x(t),x(t-\tau))\big]\Big]dt + V_x(t,x(t))g(t,x(t),x(t-\tau))\,dw(t). \eqno(2.17)$$
By hypotheses (H1) and (H2), one sees easily that
$$V_x(t,x(t))f(t,x(t),x(t-\tau)) \le V_x(t,x(t))f(t,x(t),x(t)) + \beta_1 k_4|x(t)|^{p-1}|x(t) - x(t-\tau)|. \eqno(2.18)$$
Also, writing $g = g(t,x(t),x(t))$ and $\hat g = g(t,x(t),x(t-\tau))$, one can show that
$$\mathrm{trace}\big[\hat g^T V_{xx}\hat g\big] = \mathrm{trace}\big[(g - (g-\hat g))^T V_{xx}(g - (g-\hat g))\big] = \mathrm{trace}\big(g^T V_{xx}g\big) - 2\,\mathrm{trace}\big[g^T V_{xx}(g-\hat g)\big] + \mathrm{trace}\big[(g-\hat g)^T V_{xx}(g-\hat g)\big]$$
$$\le \mathrm{trace}\big(g^T V_{xx}g\big) + 2\|V_{xx}\|\sqrt{\mathrm{trace}\big[g^T g\big]\;\mathrm{trace}\big[(g-\hat g)^T(g-\hat g)\big]} + \|V_{xx}\|\,\mathrm{trace}\big[(g-\hat g)^T(g-\hat g)\big]$$
$$\le \mathrm{trace}\big(g^T V_{xx}g\big) + 2\beta_2 k_5(\gamma_3+\gamma_4)^{1/p}|x(t)|^{p-1}|x(t) - x(t-\tau)| + \beta_2^2 k_5|x(t)|^{p-2}|x(t) - x(t-\tau)|^2. \eqno(2.19)$$
Substituting (2.18) and (2.19) into (2.17) yields
$$dV(t,x(t)) \le \Big[LV(t,x(t)) + \big[\beta_1 k_4 + \beta_2 k_5(\gamma_3+\gamma_4)^{1/p}\big]|x(t)|^{p-1}|x(t) - x(t-\tau)| + \tfrac 12\beta_2^2 k_5|x(t)|^{p-2}|x(t) - x(t-\tau)|^2\Big]dt + V_x(t,x(t))g(t,x(t),x(t-\tau))\,dw(t). \eqno(2.20)$$
Applying hypothesis (H1) and Lemma 2.2, one obtains the required inequality (2.16). The proof is complete.

We can now start to prove Theorem 2.1.

Proof of Theorem 2.1. Note that $\lambda_3 + \lambda_4\tau(\alpha_1+\alpha_2)$ is a function of $\varepsilon_1 > 0$ and $\varepsilon_2 > 0$. It is not difficult to show that when
$$\varepsilon_1 = \big[\tau(\alpha_1+\alpha_2)\big]^{1/p}\qquad\text{and}\qquad \varepsilon_2 = \big[\tau(\alpha_1+\alpha_2)\big]^{2/p}, \eqno(2.21)$$
the function $\lambda_3 + \lambda_4\tau(\alpha_1+\alpha_2)$ attains its minimum, namely
$$\lambda_3 + \lambda_4\tau(\alpha_1+\alpha_2) = \big[\tau(\alpha_1+\alpha_2)\big]^{1/p}\big[\beta_1 k_4 + \beta_2 k_5(\gamma_3+\gamma_4)^{1/p}\big] + \tfrac 12\big[\tau(\alpha_1+\alpha_2)\big]^{2/p}\beta_2^2 k_5. \eqno(2.22)$$
Let $\bar\tau$ denote the right-hand-side term of (2.7). One can verify that $\bar\tau$ is the unique root on $R_+$ of the equation (of $\tau$)
$$\big[\tau(\alpha_1+\alpha_2)\big]^{1/p}\big[\beta_1 k_4 + \beta_2 k_5(\gamma_3+\gamma_4)^{1/p}\big] + \tfrac 12\big[\tau(\alpha_1+\alpha_2)\big]^{2/p}\beta_2^2 k_5 = k_3.$$
By condition (2.7), i.e. $\tau < \bar\tau$, one then sees that
$$\lambda_3 + \lambda_4\tau(\alpha_1+\alpha_2) < k_3 \eqno(2.23)$$
since the left-hand side above is an increasing function of $\tau$. Consequently, one can find an $\varepsilon > 0$ such that
$$\lambda_3 + \varepsilon k_2 + \lambda_4\tau e^{\varepsilon\tau}\big(\alpha_1 + \alpha_2 e^{\varepsilon\tau}\big) = k_3. \eqno(2.24)$$
Now fix the initial data $\xi$ arbitrarily and write $x(t;\xi) = x(t)$ simply. By Itô's formula, Lemmas 2.4 and 2.5, hypothesis (H1) as well as (2.24), one can then derive that
$$E\big[e^{\varepsilon t}V(t,x(t))\big] \le EV(0,\xi(0)) + (-k_3 + \lambda_3 + \varepsilon k_2)\int_0^t e^{\varepsilon s}E|x(s)|^p\,ds + \lambda_4\int_0^t e^{\varepsilon s}E|x(s) - x(s-\tau)|^p\,ds$$
$$\le EV(0,\xi(0)) + \lambda_4 c_1 + \big[-k_3 + \lambda_3 + \varepsilon k_2 + \lambda_4\tau e^{\varepsilon\tau}\big(\alpha_1 + \alpha_2 e^{\varepsilon\tau}\big)\big]\int_0^t e^{\varepsilon s}E|x(s)|^p\,ds = EV(0,\xi(0)) + \lambda_4 c_1.$$
This, together with hypothesis (H1), implies immediately that
$$E|x(t)|^p \le \frac 1{k_1}\big[EV(0,\xi(0)) + \lambda_4 c_1\big]e^{-\varepsilon t},$$
that is,
$$\limsup_{t\to\infty}\frac 1t\log\big(E|x(t)|^p\big) \le -\varepsilon. \eqno(2.25)$$
In other words, we have shown that equation (2.1) is $p$th moment exponentially stable. The proof is now complete.

In the proof above we even obtain the estimate (2.25) for the $p$th moment Lyapunov exponent. In practice, one can determine the $\varepsilon$ by solving equation (2.24).

Theorem 2.6. Let hypotheses (H1) and (H2) hold. If (2.7) is satisfied, then equation (2.1) is almost surely exponentially stable. More precisely, we have
$$\limsup_{t\to\infty}\frac 1t\log|x(t;\xi)| \le -\frac{\varepsilon}{p}\quad a.s., \eqno(2.26)$$
where $\varepsilon > 0$ is the unique root of equation (2.24).
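Since the left-hand side of (2.24) is strictly increasing in $\varepsilon$, the decay rate $\varepsilon$ can be found numerically by bisection. The sketch below is our own illustration; the constants in the usage example are hypothetical, chosen only so that (2.23) holds.

```python
import math

def decay_rate(k2, k3, lam3, lam4, tau, a1, a2, eps_hi=10.0, tol=1e-10):
    """Solve equation (2.24),
        lam3 + eps*k2 + lam4*tau*exp(eps*tau)*(a1 + a2*exp(eps*tau)) = k3,
    for eps by bisection; the left-hand side is increasing in eps."""
    def h(eps):
        e = math.exp(eps * tau)
        return lam3 + eps * k2 + lam4 * tau * e * (a1 + a2 * e) - k3

    assert h(0.0) < 0.0, "condition (2.23) must hold for a root eps > 0 to exist"
    lo, hi = 0.0, eps_hi
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if h(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

For example, with the (hypothetical) values $k_2 = 1$, $k_3 = 0.67$, $\lambda_3 = 0.5$, $\lambda_4 = 1$, $\tau = 0.01$, $\alpha_1 = \alpha_2 = 1$, one obtains $\varepsilon \approx 0.15$, and (2.25)-(2.26) then give the moment and almost sure Lyapunov exponents.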
This theorem follows from Theorem 2.1 and the following lemma.

Lemma 2.7. Assume that there exists a positive constant $K$ such that
$$|f(t,x,y)|^2 \vee \mathrm{trace}\big(g(t,x,y)g^T(t,x,y)\big) \le K\big(|x|^2 + |y|^2\big) \eqno(2.27)$$
for all $(t,x,y) \in R_+\times R^n\times R^n$. Let $\varepsilon > 0$. If
$$\limsup_{t\to\infty}\frac 1t\log\big(E|x(t;\xi)|^p\big) \le -\varepsilon,$$
then
$$\limsup_{t\to\infty}\frac 1t\log|x(t;\xi)| \le -\frac{\varepsilon}{p}\quad a.s.$$
In other words, under hypothesis (2.27), the $p$th moment exponential stability of the delay equation (2.1) implies the almost sure exponential stability.

This lemma can be proved in the same way as Propositions 8.2.1 and 8.2.3 in Mao [7], where the lemma was shown in the case of $p = 2$.
3. Comparisons

In practice, the exponential stability in mean square (i.e. the case of $p = 2$) is very important. In the study of mean square stability, one often uses a quadratic function as the Lyapunov function, that is, $V(t,x) = x^T Gx$, where $G$ is a symmetric positive definite $n\times n$ matrix. It is easy to verify that the operator $L$ acting on such a quadratic function has the form
$$LV(t,x) = 2x^T Gf(t,x,x) + \mathrm{trace}\big[g^T(t,x,x)Gg(t,x,x)\big].$$
Let us now state a corollary and then compare it with our previous result.

Corollary 3.1. Let hypothesis (H2) hold with $p = 2$. Assume that there exist a symmetric positive definite $n\times n$ matrix $G$ and a positive constant $\lambda$ such that
$$2x^T Gf(t,x,x) + \mathrm{trace}\big[g^T(t,x,x)Gg(t,x,x)\big] \le -\lambda|x|^2 \eqno(3.1)$$
for all $t \ge 0$ and $x \in R^n$. Then equation (2.1) is both mean square and almost surely exponentially stable provided
$$\tau < \frac{\sqrt{(\gamma_3+\gamma_4)^2 + 2\theta_2(\gamma_1+\gamma_2)}\; -\; (\gamma_3+\gamma_4)}{2(\gamma_1+\gamma_2)}, \eqno(3.2)$$
where
$$\theta_1 = 2\|G\|\big(\beta_1 + \beta_2\sqrt{\gamma_3+\gamma_4}\big),\qquad \theta_2 = \begin{cases}\left(\dfrac{\sqrt{\theta_1^2 + 4\|G\|\beta_2^2\lambda}\; -\; \theta_1}{2\|G\|\beta_2^2}\right)^2 & \text{if }\beta_2 \ne 0,\\[3mm] \dfrac{\lambda^2}{4\beta_1^2\|G\|^2} & \text{if }\beta_2 = 0.\end{cases}$$
This corollary follows from Theorems 2.1 and 2.6 directly by noting that hypothesis (H1) holds with
$$k_1 = \lambda_{\min}(G),\qquad k_2 = \lambda_{\max}(G),\qquad k_3 = \lambda,\qquad k_4 = k_5 = 2\|G\|,$$
where (and in the sequel) $\lambda_{\min}(G)$ and $\lambda_{\max}(G)$ denote the smallest and the largest eigenvalue of $G$, respectively.

Let us now compare this corollary with our previous result obtained in Mao [10]. The basic idea of Mao [10] was to compare the stochastic differential delay equation with the (deterministic) ordinary differential equation (without delay)
$$\dot x(t) = f(t,x(t),x(t)). \eqno(3.3)$$
Instead of condition (3.1), Mao [10] assumed that there exist a symmetric positive definite $n\times n$ matrix $G$ and a positive constant $\lambda$ such that
$$2x^T Gf(t,x,x) \le -\lambda|x|^2 \eqno(3.4)$$
for all $t \ge 0$ and $x \in R^n$. Under hypotheses (H2) (with $p = 2$) and (3.4) he showed that equation (2.1) is both mean square and almost surely exponentially stable if
$$\lambda - \|G\|(\gamma_3+\gamma_4) > 2\beta_1\|G\|\sqrt{8(\gamma_1+\gamma_2)\tau^2 + 2(\gamma_3+\gamma_4)\tau}. \eqno(3.5)$$
On the other hand, to apply Corollary 3.1 we note that under hypotheses (H2) and (3.4),
$$2x^T Gf(t,x,x) + \mathrm{trace}\big[g^T(t,x,x)Gg(t,x,x)\big] \le -\big(\lambda - \|G\|(\gamma_3+\gamma_4)\big)|x|^2.$$
Regard $\lambda - \|G\|(\gamma_3+\gamma_4)$ as the $\lambda$ in Corollary 3.1, and note that (3.2) is then equivalent to
$$\lambda - \|G\|(\gamma_3+\gamma_4) > 4\beta_2^2\|G\|\big[(\gamma_1+\gamma_2)\tau^2 + (\gamma_3+\gamma_4)\tau\big] + 2\|G\|\big(\beta_1 + \beta_2\sqrt{\gamma_3+\gamma_4}\big)\sqrt{2(\gamma_1+\gamma_2)\tau^2 + 2(\gamma_3+\gamma_4)\tau}. \eqno(3.6)$$
Thus by Corollary 3.1 we can conclude that under hypotheses (H2) and (3.4), equation (2.1) is both mean square and almost surely exponentially stable if (3.6) is satisfied. Comparing (3.6) with (3.5), one sees that when $\beta_2$ is relatively small, (3.6) gives a better bound; otherwise (3.5) gives a better one. In other words, Corollary 3.1 and the result of Mao [10] complement each other. Moreover, condition (3.5) requires that $\lambda > \|G\|(\gamma_3+\gamma_4)$. Sometimes this is not satisfied, and hence the result of Mao [10] cannot be used, while it may still be possible to use Corollary 3.1. To see this point, let us consider a simple one-dimensional delay equation
$$dx(t) = -x(t-\tau)\,dt + \big[x(t) - 0.5x(t-\tau)\big]\,dw(t), \eqno(3.7)$$
where $w(t)$ is a one-dimensional Brownian motion. That is, we let $f(t,x,y) = -y$ and $g(t,x,y) = x - 0.5y$. It is easy to see that hypothesis (H2) is satisfied with $p = 2$ and
$$\beta_1 = 1,\qquad \beta_2 = 0.5,\qquad \gamma_1 = 0,\qquad \gamma_2 = 1,\qquad \gamma_3 = \gamma_4 = 1.25.$$
Clearly, one can let $G = 1$ in this one-dimensional case. Hence (3.4) is satisfied with $\lambda = 2$, but $\lambda \not> \|G\|(\gamma_3+\gamma_4)$. In other words, (3.5) will never be satisfied, so the result of Mao [10] cannot be used. On the other hand, it is easy to see that condition (3.1) is satisfied with $\lambda = 1.75$, and (3.2) becomes $\tau < 0.043$. Therefore, by Corollary 3.1, we can conclude that equation (3.7) is both mean square and almost surely exponentially stable provided $\tau < 0.043$. This example shows another advantage of our new results.
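As a sanity check (ours, not the paper's, and assuming NumPy is available), one can simulate the scalar delay equation above by the Euler-Maruyama method with a lag below the bound and watch the sample mean square decay.

```python
import numpy as np

# Euler-Maruyama for dx = -x(t - tau) dt + [x(t) - 0.5 x(t - tau)] dw,
# history x(t) = 1 on [-tau, 0]; tau = 0.04 lies below the bound 0.043.
rng = np.random.default_rng(0)
tau, dt, T, n_paths = 0.04, 0.002, 5.0, 1000
d, n = round(tau / dt), round(T / dt)       # delay in steps, number of steps
x = np.ones((n_paths, n + 1))
for i in range(n):
    x_lag = x[:, i - d] if i >= d else np.ones(n_paths)   # x(t_i - tau)
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    x[:, i + 1] = x[:, i] - x_lag * dt + (x[:, i] - 0.5 * x_lag) * dw

ms0 = float(np.mean(x[:, 0] ** 2))   # E|x(0)|^2 = 1
msT = float(np.mean(x[:, -1] ** 2))  # sample estimate of E|x(T)|^2
```

With these (illustrative) discretization parameters the estimated $E|x(T)|^2$ is far below its initial value, consistent with mean square exponential stability for $\tau < 0.043$.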
4. Examples

For illustration, let us discuss some examples in this section.

Example 4.1. First consider a linear stochastic differential delay equation
$$dx(t) = \big[A_0x(t) + B_0x(t-\tau)\big]dt + \sum_{i=1}^m\big[A_ix(t) + B_ix(t-\tau)\big]dw_i(t) \eqno(4.1)$$
on $t \ge 0$ with initial data $x(t) = \xi(t)$ for $-\tau \le t \le 0$. Here $A_i, B_i$, $0\le i\le m$, are all $n\times n$ matrices, and $\xi \in L^2_{\mathcal F_0}([-\tau,0]; R^n)$. The corresponding non-delay equation is
$$dx(t) = (A_0 + B_0)x(t)\,dt + \sum_{i=1}^m (A_i + B_i)x(t)\,dw_i(t). \eqno(4.2)$$
Assume that there exist two symmetric positive definite $n\times n$ matrices $G$ and $Q$ such that
$$G(A_0 + B_0) + (A_0 + B_0)^T G + \sum_{i=1}^m (A_i + B_i)^T G(A_i + B_i) = -Q. \eqno(4.3)$$
Define the Lyapunov function $V(t,x) = x^T Gx$. It is easy to verify that hypothesis (H1) is satisfied with
$$k_1 = \lambda_{\min}(G),\qquad k_2 = \lambda_{\max}(G),\qquad k_3 = \lambda_{\min}(Q),\qquad k_4 = k_5 = 2\|G\|.$$
In other words, condition (4.3) guarantees both mean square and almost sure exponential stability of equation (4.2). Moreover, noting that
$$f(t,x,y) = A_0x + B_0y\qquad\text{and}\qquad g(t,x,y) = \big(A_1x + B_1y,\ldots,A_mx + B_my\big),$$
one can easily verify that hypothesis (H2) is satisfied with $p = 2$ and
$$\beta_1 = \|B_0\|,\qquad \beta_2 = \Big(\sum_{i=1}^m\|B_i\|^2\Big)^{1/2},\qquad \gamma_1 = 2\|A_0\|^2,\qquad \gamma_2 = 2\|B_0\|^2,\qquad \gamma_3 = 2\sum_{i=1}^m\|A_i\|^2,\qquad \gamma_4 = 2\sum_{i=1}^m\|B_i\|^2.$$
Consequently, condition (2.7) becomes
$$\tau < \frac{\sqrt{\theta_1^2 + 2\theta_3\big(\|A_0\|^2 + \|B_0\|^2\big)}\; -\; \theta_1}{2\big(\|A_0\|^2 + \|B_0\|^2\big)}, \eqno(4.4)$$
where
$$\theta_1 = \sum_{i=1}^m\big(\|A_i\|^2 + \|B_i\|^2\big),\qquad \theta_2 = 2\|G\|\bigg[\|B_0\| + \Big(2\theta_1\sum_{i=1}^m\|B_i\|^2\Big)^{1/2}\bigg],$$
$$\theta_3 = \left(\frac{\sqrt{\theta_2^2 + 4\lambda_{\min}(Q)\|G\|\sum_{i=1}^m\|B_i\|^2}\; -\; \theta_2}{2\|G\|\sum_{i=1}^m\|B_i\|^2}\right)^2.$$
Therefore, by Theorems 2.1 and 2.6, we conclude that under conditions (4.3) and (4.4), equation (4.1) is both mean square and almost surely exponentially stable.

Example 4.2. Consider a nonlinear stochastic differential delay equation
$$dx(t) = \bar f(x(t-\tau))\,dt + \bar g(x(t-\tau))\,dw(t) \eqno(4.5)$$
on $t \ge 0$ with initial data $x(t) = \xi(t)$ for $-\tau \le t \le 0$. Here $\bar f : R^n \to R^n$, $\bar g : R^n \to R^{n\times m}$ and $\xi \in L^p_{\mathcal F_0}([-\tau,0]; R^n)$. The corresponding non-delay equation is
$$dx(t) = \bar f(x(t))\,dt + \bar g(x(t))\,dw(t). \eqno(4.6)$$
Assume that both $\bar f$ and $\bar g$ are Lipschitz continuous, that is, there exist two positive constants $a$ and $b$ such that
$$|\bar f(x) - \bar f(y)| \le a|x-y|, \eqno(4.7)$$
$$\mathrm{trace}\big[(\bar g(x) - \bar g(y))^T(\bar g(x) - \bar g(y))\big] \le b^2|x-y|^2 \eqno(4.8)$$
for all $x,y \in R^n$. Assume also that $\bar f(0) = 0$ and $\bar g(0) = 0$. Besides, assume that there exists a constant $\lambda > b^2/2$ such that
$$x^T\bar f(x) \le -\lambda|x|^2 \eqno(4.9)$$
for all $x \in R^n$. Now let $2 \le p < 1 + 2\lambda/b^2$, and use $V(t,x) = |x|^p$ as the Lyapunov function. Note that
$$V_x = p|x|^{p-2}x^T\qquad\text{and}\qquad V_{xx} = p|x|^{p-2}I + p(p-2)|x|^{p-4}xx^T,$$
where $I$ is the $n\times n$ identity matrix. Compute that
$$L|x|^p = V_x\bar f(x) + \tfrac 12\mathrm{trace}\big[\bar g^T(x)V_{xx}\bar g(x)\big] \le p|x|^{p-2}x^T\bar f(x) + \tfrac 12 p(p-1)|x|^{p-2}\mathrm{trace}\big[\bar g^T(x)\bar g(x)\big] \le -p\big[\lambda - (p-1)b^2/2\big]|x|^p.$$
One therefore sees that hypothesis (H1) holds with
$$k_1 = k_2 = 1,\qquad k_3 = p\big[\lambda - (p-1)b^2/2\big],\qquad k_4 = p,\qquad k_5 = p(p-1).$$
Moreover, setting $f(t,x,y) = \bar f(y)$ and $g(t,x,y) = \bar g(y)$, one can easily verify that hypothesis (H2) holds with
$$\beta_1 = a,\qquad \beta_2 = b,\qquad \gamma_1 = 0,\qquad \gamma_2 = a^p,\qquad \gamma_3 = 0,\qquad \gamma_4 = b^p.$$
Hence condition (2.7) becomes
$$\tau < \left(\frac{\sqrt{2^{p-2}\big[p(p-1)\big]^p b^{2p} + 2^{p+1}a^p\Delta}\; -\; 2^{(p-2)/2}\big[p(p-1)\big]^{p/2}b^p}{2^p a^p}\right)^{2/p}, \eqno(4.10)$$
where
$$\Delta = \left(\frac{\sqrt{\big(a + b^2(p-1)\big)^2 + 2b^2(p-1)\big[\lambda - (p-1)b^2/2\big]}\; -\; \big(a + b^2(p-1)\big)}{b^2(p-1)}\right)^p.$$
By Theorem 2.1 we conclude that the delay equation (4.5) is $p$th moment exponentially stable if conditions (4.7)-(4.10) are satisfied. To get the almost sure exponential stability, it is enough to consider the case of $p = 2$. In this case, (4.10) becomes
$$\tau < \frac 1{2a^2}\left[\sqrt{b^4 + \frac{2a^2}{b^4}\Big(\sqrt{(a+b^2)^2 + 2b^2\big[\lambda - b^2/2\big]}\; -\; (a+b^2)\Big)^2}\; -\; b^2\right]. \eqno(4.11)$$
Therefore, by Theorems 2.1 and 2.6, we conclude that the delay equation (4.5) is almost surely exponentially stable if conditions (4.7)-(4.9) and (4.11) are satisfied.

Example 4.3. Finally, consider a stochastic delay oscillator
$$\ddot z(t) + 2\dot z(t-\tau) + z(t-\tau) = 0.1\sin(z(t))\dot w(t) \eqno(4.12)$$
on $t \ge 0$, where $w(t)$ is a one-dimensional Brownian motion, i.e. $\dot w(t)$ is a white noise. Introducing $x = (x_1,x_2)^T = (z,\dot z)^T$, one can write equation (4.12) as the stochastic differential delay equation
$$dx(t) = \begin{pmatrix} 0 & 1\\ 0 & 0\end{pmatrix}x(t)\,dt + \begin{pmatrix} 0 & 0\\ -1 & -2\end{pmatrix}x(t-\tau)\,dt + \begin{pmatrix} 0\\ 0.1\sin(x_1(t))\end{pmatrix}dw(t). \eqno(4.13)$$
For the mean square exponential stability, we try the quadratic function $V(x) = x_1^2 + ax_1x_2 + bx_2^2$ as the Lyapunov function. It is easy to show that
$$LV(x) \le -(a - 0.01b)x_1^2 + 2(1 - a - b)x_1x_2 - (4b - a)x_2^2. \eqno(4.14)$$
Choose $a, b$ such that $1 - a - b = 0$ and $a - 0.01b = 4b - a$, which imply $a = 0.67$, $b = 0.33$. One can then verify that (H1) holds with $p = 2$ and
$$k_1 = 0.19,\qquad k_2 = 1.14,\qquad k_3 = 0.67,\qquad k_4 = k_5 = 2.28.$$
One can also verify that (H2) is satisfied with $p = 2$ and
$$\beta_1 = \sqrt 5,\qquad \beta_2 = 0,\qquad \gamma_1 = 1 + \sqrt 5,\qquad \gamma_2 = 5 + \sqrt 5,\qquad \gamma_3 = 0.01,\qquad \gamma_4 = 0.$$
Consequently, (2.7) gives $\tau < 0.06$. By Theorems 2.1 and 2.6, we therefore conclude that the stochastic delay oscillator (4.12) is both mean square and almost surely exponentially stable provided $\tau < 0.06$.
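The constants $k_1, k_2, k_4, k_5$ in Example 4.3 come from the eigenvalues of the symmetric matrix $G$ with $V(x) = x^T Gx = x_1^2 + ax_1x_2 + bx_2^2$, i.e. $G = \begin{pmatrix}1 & a/2\\ a/2 & b\end{pmatrix}$. A quick numerical check (ours, assuming NumPy is available):

```python
import numpy as np

a, b = 0.67, 0.33
G = np.array([[1.0, a / 2], [a / 2, b]])   # V(x) = x^T G x
lam_min, lam_max = np.linalg.eigvalsh(G)   # eigvalsh returns eigenvalues in ascending order
k1, k2 = lam_min, lam_max                  # k1 = lambda_min(G), k2 = lambda_max(G)
k45 = 2 * lam_max                          # k4 = k5 = 2||G||; ||G|| = lambda_max for symmetric PD G
```

This reproduces $k_1 \approx 0.19$, $k_2 \approx 1.14$ and $k_4 = k_5 \approx 2.28$ as stated in the example.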
Acknowledgements

The authors would like to thank the anonymous referee for his/her very helpful remarks and suggestions. The authors would also like to acknowledge the financial support from the University of Strathclyde.
REFERENCES
[1] Friedman, A., Stochastic Differential Equations and Applications, Vol. 1, Academic Press, New York, 1975.
[2] Hale, J.K. and Lunel, S.M.V., Introduction to Functional Differential Equations, Springer-Verlag, 1993.
[3] Has'minskii, R.Z., Stochastic Stability of Differential Equations, Sijthoff and Noordhoff, 1981.
[4] Kolmanovskii, V.B. and Myshkis, A., Applied Theory of Functional Differential Equations, Kluwer Academic Publishers, 1992.
[5] Kolmanovskii, V.B. and Nosov, V.R., Stability of Functional Differential Equations, Academic Press, 1986.
[6] Mao, X., Stability of Stochastic Differential Equations with Respect to Semimartingales, Longman Scientific and Technical, 1991.
[7] Mao, X., Exponential Stability of Stochastic Differential Equations, Marcel Dekker, 1994.
[8] Mao, X., Robustness of stability of nonlinear systems with stochastic delay perturbations, Systems and Control Letters 19 (1992), 391-400.
[9] Mao, X., Exponential stability for stochastic differential delay equations in Hilbert space, Quarterly J. Math. Oxford (2), 42 (1991), 77-85.
[10] Mao, X., Robustness of exponential stability of stochastic differential delay equations, IEEE Trans. Automat. Control AC-41(3) (1996).
[11] Mao, X., Stochastic Differential Equations and Their Applications, Albion Publishing, 1997 (forthcoming book).
[12] Mohammed, S.-E.A., Stochastic Functional Differential Equations, Longman Scientific and Technical, 1986.
[13] Mori, T., Criteria for asymptotic stability of linear time-delay systems, IEEE Trans. Automat. Control AC-30 (1985), 158-161.
[14] Mori, T. and Kokame, H., Stability of $\dot x(t) = Ax(t) + Bx(t-\tau)$, IEEE Trans. Automat. Control AC-34 (1989), 460-462.
[15] Su, J.-H., Further results on the robust stability of linear systems with a single time delay, Systems and Control Letters 23 (1994), 375-379.
[16] Su, J.-H., Fong, I.-K. and Tseng, C.-L., Stability analysis of linear systems with time delay, IEEE Trans. Automat. Control AC-39 (1994), 1341-1344.
[17] Tseng, C.-L., Fong, I.-K. and Su, J.-H., Robust stability analysis for uncertain delay systems with output feedback controller, Systems and Control Letters 23 (1994), 271-278.