Hindawi Publishing Corporation, Journal of Applied Mathematics, Volume 2012, Article ID 675651, 32 pages. doi:10.1155/2012/675651

Research Article

Approximations of Numerical Method for Neutral Stochastic Functional Differential Equations with Markovian Switching

Hua Yang¹ and Feng Jiang²

¹ School of Mathematics and Computer Science, Wuhan Polytechnic University, Wuhan 430023, China
² School of Statistics and Mathematics, Zhongnan University of Economics and Law, Wuhan 430073, China

Correspondence should be addressed to Feng Jiang, [email protected]

Received 9 April 2012; Accepted 17 September 2012

Academic Editor: Zhenyu Huang

Copyright © 2012 H. Yang and F. Jiang. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Stochastic systems with Markovian switching have been used in a variety of application areas, including biology, epidemiology, mechanics, economics, and finance. In this paper, we study the Euler-Maruyama (EM) method for neutral stochastic functional differential equations with Markovian switching. The main aim is to show that the numerical solutions converge to the true solutions. Moreover, we obtain the convergence order of the approximate solutions.

1. Introduction

Stochastic systems with Markovian switching have been successfully used in a variety of application areas, including biology, epidemiology, mechanics, economics, and finance [1]. As with deterministic neutral functional differential equations and stochastic functional differential equations (SFDEs), most neutral stochastic functional differential equations with Markovian switching (NSFDEsMS) cannot be solved explicitly, so numerical methods become one of the most powerful techniques. A number of papers have studied the numerical analysis of deterministic neutral functional differential equations; see, for example, [2, 3] and the references therein. The numerical solutions of SFDEs and of stochastic systems with Markovian switching have also been studied extensively by many authors; here we mention some of them, for example, [4–16]. Moreover, many well-known theorems on SFDEs have been successfully extended to NSFDEs; for example, [17–24] discussed the stability analysis of the true solutions. However, to the best of our knowledge, little is yet known about numerical solutions for NSFDEsMS. In this paper we will extend the method developed by [5, 16] to
NSFDEsMS and study strong convergence of the Euler-Maruyama approximations under the local Lipschitz condition, the linear growth condition, and a contractive-mapping condition on the neutral term. These three conditions are standard for the existence and uniqueness of the true solutions. Although the method of analysis borrows from [5], the presence of the neutral term and of the Markovian switching essentially complicates the problem, and we develop several new techniques to cope with the difficulties arising from these two terms. Moreover, we also generalize the results in [19]. In Section 2, we describe some preliminaries, define the EM method for NSFDEsMS, and state our main result that the approximate solutions strongly converge to the true solutions. The proof of the result is rather technical, so we present several lemmas in Section 3 and then complete the proof in Section 4. In Section 5, under the global Lipschitz condition, we reveal the order of convergence of the approximate solutions. Finally, a conclusion is made in Section 6.

2. Preliminaries and EM Scheme

Throughout this paper let $(\Omega,\mathcal{F},P)$ be a complete probability space with a filtration $\{\mathcal{F}_t\}_{t\ge 0}$ satisfying the usual conditions, that is, it is right continuous and increasing while $\mathcal{F}_0$ contains all $P$-null sets. Let $w(t)=(w_1(t),\dots,w_m(t))^T$ be an $m$-dimensional Brownian motion defined on the probability space. Let $|\cdot|$ be the Euclidean norm in $\mathbb{R}^n$. Let $\mathbb{R}_+=[0,\infty)$ and let $\tau>0$. Denote by $C([-\tau,0];\mathbb{R}^n)$ the family of continuous functions from $[-\tau,0]$ to $\mathbb{R}^n$ with the norm $\|\varphi\|=\sup_{-\tau\le\theta\le 0}|\varphi(\theta)|$. Let $p>0$ and let $L^p_{\mathcal{F}_0}([-\tau,0];\mathbb{R}^n)$ be the family of $\mathcal{F}_0$-measurable $C([-\tau,0];\mathbb{R}^n)$-valued random variables $\xi$ such that $E\|\xi\|^p<\infty$. If $x(t)$ is an $\mathbb{R}^n$-valued stochastic process on $t\in[-\tau,\infty)$, we let $x_t=\{x(t+\theta):-\tau\le\theta\le 0\}$ for $t\ge 0$.

Let $r(t)$, $t\ge 0$, be a right-continuous Markov chain on the probability space taking values in a finite state space $S=\{1,2,\dots,N\}$ with generator $\Gamma=(\gamma_{ij})_{N\times N}$ given by

$$P\bigl(r(t+\Delta)=j \mid r(t)=i\bigr) = \begin{cases} \gamma_{ij}\Delta + o(\Delta) & \text{if } i\ne j,\\ 1+\gamma_{ii}\Delta + o(\Delta) & \text{if } i=j, \end{cases} \tag{2.1}$$

where $\Delta>0$. Here $\gamma_{ij}\ge 0$ is the transition rate from $i$ to $j$ if $i\ne j$, while $\gamma_{ii}=-\sum_{j\ne i}\gamma_{ij}$. We assume that the Markov chain $r(\cdot)$ is independent of the Brownian motion $w(\cdot)$. It is well known that almost every sample path of $r(\cdot)$ is a right-continuous step function with a finite number of simple jumps in any finite subinterval of $[0,\infty)$.

In this paper, we consider the $n$-dimensional NSFDEsMS

$$d\bigl[x(t)-u(x_t,r(t))\bigr] = f(x_t,r(t))\,dt + g(x_t,r(t))\,dw(t), \quad t\ge 0, \tag{2.2}$$

with initial data $x_0=\xi\in L^p_{\mathcal{F}_0}([-\tau,0];\mathbb{R}^n)$ and $r(0)=i_0\in S$, where $f:C([-\tau,0];\mathbb{R}^n)\times S\to\mathbb{R}^n$, $g:C([-\tau,0];\mathbb{R}^n)\times S\to\mathbb{R}^{n\times m}$, and $u:C([-\tau,0];\mathbb{R}^n)\times S\to\mathbb{R}^n$. As a standing hypothesis we assume that both $f$ and $g$ are sufficiently smooth so that (2.2) has a unique solution. We refer the reader to Mao [10, 12] for conditions on the existence and uniqueness of the solution $x(t)$. The initial data $\xi$ and $i_0$ could be random, but the Markov property ensures that it is sufficient to consider only the case when both $x_0$ and $i_0$ are constants.

To analyze the Euler-Maruyama (EM) method, we need the following lemma (see [6, 7, 10–12, 16]).

Lemma 2.1. Given $\Delta>0$, let $r_k^\Delta = r(k\Delta)$ for $k\ge 0$. Then $\{r_k^\Delta,\ k=0,1,2,\dots\}$ is a discrete Markov chain with the one-step transition probability matrix

$$P(\Delta) = \bigl(P_{ij}(\Delta)\bigr)_{N\times N} = e^{\Delta\Gamma}. \tag{2.3}$$

For completeness, we give the simulation of the Markov chain as follows. Given a stepsize $\Delta>0$, we compute the one-step transition probability matrix

$$P(\Delta) = \bigl(P_{ij}(\Delta)\bigr)_{N\times N} = e^{\Delta\Gamma}. \tag{2.4}$$

Let $r_0^\Delta=i_0$ and generate a random number $\xi_1$ which is uniformly distributed on $[0,1]$. Define

$$r_1^\Delta = \begin{cases} i_1 & \text{if } i_1\in S-\{N\} \text{ is such that } \sum_{j=1}^{i_1-1}P_{i_0,j}(\Delta)\le\xi_1<\sum_{j=1}^{i_1}P_{i_0,j}(\Delta),\\ N & \text{if } \sum_{j=1}^{N-1}P_{i_0,j}(\Delta)\le\xi_1, \end{cases} \tag{2.5}$$

where we set $\sum_{j=1}^{0}P_{i_0,j}(\Delta)=0$ as usual. Generating further independent uniform random numbers $\xi_2,\xi_3,\dots$ and repeating this procedure, we obtain $r_2^\Delta, r_3^\Delta$, and so on. This procedure can be carried out independently of the computation of the EM approximate solution.
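The simulation procedure above can be sketched in code. The following is a minimal illustration of ours, not taken from the paper: the function names, the two-state generator `Gamma`, and the stepsize are all assumed for the example, and the matrix exponential uses a plain truncated Taylor series, which is adequate only for the small matrices $\Delta\Gamma$ used here (a scaling-and-squaring routine would be preferable in general).

```python
import numpy as np

def matrix_exp(A, terms=40):
    """e^A by truncated Taylor series (fine for small Delta*Gamma)."""
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

def simulate_chain(Gamma, delta, i0, steps, rng):
    """Sample r_0, r_1, ..., r_steps of the discrete chain whose one-step
    transition matrix is P(Delta) = exp(Delta * Gamma); states are 0-based here
    (the paper uses S = {1, ..., N})."""
    P = matrix_exp(delta * Gamma)
    r = [i0]
    for _ in range(steps):
        u = rng.random()                       # the uniform xi_k on [0, 1)
        cdf = np.cumsum(P[r[-1]])              # partial sums of P_{i,j}(Delta)
        # first state whose cumulative probability exceeds u; clamping to the
        # last state mirrors the "otherwise N" clause (and absorbs rounding)
        j = min(int(np.searchsorted(cdf, u, side="right")), len(cdf) - 1)
        r.append(j)
    return r

Gamma = np.array([[-1.0, 1.0],
                  [2.0, -2.0]])               # illustrative 2-state generator
P = matrix_exp(0.1 * Gamma)
path = simulate_chain(Gamma, 0.1, 0, 50, np.random.default_rng(0))
```

Each row of the computed matrix is a probability distribution, so the row sums should be (numerically) one; the sampled path stays in the state space.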
Let $T>0$ and let the stepsize $\Delta\in(0,1)$ satisfy $\Delta=\tau/N=T/M$ for two positive integers $N$ and $M$ (with a slight abuse of notation, $N$ here also denotes the number of subintervals of $[-\tau,0]$). The explicit discrete EM approximate solution $y(k\Delta)$, $k\ge -N$, is defined as follows:

$$y(k\Delta)=\xi(k\Delta), \quad -N\le k\le 0,$$

$$y((k+1)\Delta) = y(k\Delta) + u\bigl(\bar y_{k\Delta},r_k^\Delta\bigr) - u\bigl(\bar y_{(k-1)\Delta},r_{k-1}^\Delta\bigr) + f\bigl(\bar y_{k\Delta},r_k^\Delta\bigr)\Delta + g\bigl(\bar y_{k\Delta},r_k^\Delta\bigr)\Delta w_k, \quad 0\le k\le M-1, \tag{2.7}$$

where $\Delta w_k = w((k+1)\Delta)-w(k\Delta)$ and $\bar y_{k\Delta}=\{\bar y_{k\Delta}(\theta):-\tau\le\theta\le 0\}$ is a $C([-\tau,0];\mathbb{R}^n)$-valued random variable defined by

$$\bar y_{k\Delta}(\theta) = y((k+i)\Delta) + \frac{\theta-i\Delta}{\Delta}\bigl(y((k+i+1)\Delta)-y((k+i)\Delta)\bigr) = \frac{\Delta-(\theta-i\Delta)}{\Delta}\,y((k+i)\Delta) + \frac{\theta-i\Delta}{\Delta}\,y((k+i+1)\Delta), \tag{2.8}$$

for $i\Delta\le\theta\le(i+1)\Delta$, $i=-N,-N+1,\dots,-1$, where, in order for $\bar y_{-\Delta}$ to be well defined, we set $y(-(N+1)\Delta)=\xi(-N\Delta)$. That is, $\bar y_{k\Delta}(\cdot)$ is the linear interpolation of $y((k-N)\Delta), y((k-N+1)\Delta),\dots,y(k\Delta)$. We hence have

$$|\bar y_{k\Delta}(\theta)| = \Bigl|\frac{\Delta-(\theta-i\Delta)}{\Delta}\,y((k+i)\Delta)+\frac{\theta-i\Delta}{\Delta}\,y((k+i+1)\Delta)\Bigr| \le |y((k+i)\Delta)| \vee |y((k+i+1)\Delta)|. \tag{2.9}$$

We therefore obtain

$$\|\bar y_{k\Delta}\| = \sup_{-\tau\le\theta\le 0}|\bar y_{k\Delta}(\theta)| = \max_{-N\le i\le 0}|y((k+i)\Delta)|, \tag{2.10}$$

for any $k=-1,0,1,\dots,M-1$.
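To make the discrete scheme concrete, here is a hedged sketch of one EM trajectory for a toy scalar special case of (2.2). The coefficient choices $u(\varphi,i)=\kappa_i\varphi(-\tau)$, $f(\varphi,i)=a_i\varphi(0)+b_i\varphi(-\tau)$, $g(\varphi,i)=\sigma_i\varphi(0)$ are our own illustrations, not from the paper; because the single delay $-\tau$ falls on a grid point, the linear interpolation (2.8) reduces to table lookups, and we set $y(-(N+1)\Delta):=\xi(-N\Delta)$ exactly as in the text. The switching path `r_path` is taken as given (it can be produced by simulating the chain of Lemma 2.1).

```python
import numpy as np

def em_nsfde(kappa, a, b, sigma, r_path, tau, T, dt, xi, rng):
    """One EM run (scheme (2.7)) for the toy scalar equation
    d[x(t) - kappa_r x(t-tau)] = (a_r x(t) + b_r x(t-tau)) dt + sigma_r x(t) dw(t).
    y is indexed so that y[N + k] = y(k*dt), k = -N..M."""
    N, M = int(round(tau / dt)), int(round(T / dt))
    y = np.empty(N + M + 1)
    for k in range(-N, 1):                   # initial segment: y(k dt) = xi(k dt)
        y[N + k] = xi(k * dt)
    lag_prev = y[0]                          # y(-(N+1) dt) := xi(-N dt)
    for k in range(M):
        i, j = r_path[k], r_path[max(k - 1, 0)]
        yk, lag = y[N + k], y[k]             # y(k dt) and y((k-N) dt)
        lag_m1 = y[k - 1] if k >= 1 else lag_prev
        dw = rng.normal(0.0, np.sqrt(dt))    # Brownian increment Delta w_k
        # scheme (2.7): neutral-term difference + drift*dt + diffusion*dW
        y[N + k + 1] = (yk
                        + kappa[i] * lag - kappa[j] * lag_m1
                        + (a[i] * yk + b[i] * lag) * dt
                        + sigma[i] * yk * dw)
    return y

# Noise-free, single-regime sanity check: kappa = b = sigma = 0 reduces the
# scheme to the Euler recursion y_{k+1} = (1 + a*dt) * y_k.
M = 100
y = em_nsfde(kappa=[0.0], a=[-0.5], b=[0.0], sigma=[0.0],
             r_path=[0] * M, tau=0.25, T=1.0, dt=0.01,
             xi=lambda t: 1.0, rng=np.random.default_rng(1))
```

For a genuinely distributed delay one would evaluate the segment at off-grid points through the interpolation formula (2.8) rather than by table lookup.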

It is obvious that $\|\bar y_{-\Delta}\|\le\|\bar y_0\|$. In our analysis it will be more convenient to use continuous-time approximations. We hence introduce the $C([-\tau,0];\mathbb{R}^n)$-valued step process

$$\bar y_t = \sum_{k=0}^{M-2}\bar y_{k\Delta}\,1_{[k\Delta,(k+1)\Delta)}(t) + \bar y_{(M-1)\Delta}\,1_{[(M-1)\Delta,M\Delta]}(t), \qquad \bar r(t) = \sum_{k=0}^{M-1} r_k^\Delta\,1_{[k\Delta,(k+1)\Delta)}(t), \tag{2.11}$$

and we define the continuous EM approximate solution as follows: let $y(t)=\xi(t)$ for $-\tau\le t\le 0$, while for $t\in[k\Delta,(k+1)\Delta)$, $k=0,1,\dots,M-1$,

$$y(t) = \xi(0) + u\Bigl(\bar y_{(k-1)\Delta}+\frac{t-k\Delta}{\Delta}\bigl(\bar y_{k\Delta}-\bar y_{(k-1)\Delta}\bigr),\,r_k^\Delta\Bigr) - u\bigl(\bar y_{-\Delta},r_0^\Delta\bigr) + \int_0^t f(\bar y_s,\bar r(s))\,ds + \int_0^t g(\bar y_s,\bar r(s))\,dw(s). \tag{2.12}$$

Clearly, (2.12) can also be written as

$$y(t) = y(k\Delta) + u\Bigl(\bar y_{(k-1)\Delta}+\frac{t-k\Delta}{\Delta}\bigl(\bar y_{k\Delta}-\bar y_{(k-1)\Delta}\bigr),\,r_k^\Delta\Bigr) - u\bigl(\bar y_{(k-1)\Delta},r_{k-1}^\Delta\bigr) + \int_{k\Delta}^t f(\bar y_s,\bar r(s))\,ds + \int_{k\Delta}^t g(\bar y_s,\bar r(s))\,dw(s). \tag{2.13}$$

In particular, this shows that $y(k\Delta)=\bar y_{k\Delta}(0)$, that is, the discrete and continuous EM approximate solutions coincide at the grid points. We know that $y(t)$ is not computable, because it requires knowledge of the entire Brownian path, not just its $\Delta$-increments. However, since $y(k\Delta)=\bar y_{k\Delta}(0)$, the error bound for $y(t)$ will automatically imply the error bound for the discrete solution $y(k\Delta)$. It is then obvious that

$$\|\bar y_{k\Delta}\| \le \|y_{k\Delta}\|, \quad \forall k=0,1,2,\dots,M-1. \tag{2.14}$$

Moreover, for any $t\in[0,T]$,

$$\sup_{0\le t\le T}\|\bar y_t\| = \sup_{0\le k\le M-1}\|\bar y_{k\Delta}\| \le \sup_{0\le k\le M-1}\|y_{k\Delta}\| = \sup_{0\le k\le M-1}\,\sup_{-\tau\le\theta\le 0}|y(k\Delta+\theta)| \le \sup_{0\le t\le T}\,\sup_{-\tau\le\theta\le 0}|y(t+\theta)| \le \sup_{-\tau\le s\le T}|y(s)|, \tag{2.15}$$

and, letting $[t/\Delta]$ be the integer part of $t/\Delta$,

$$\|\bar y_t\| = \|\bar y_{[t/\Delta]\Delta}\| \le \|y_{[t/\Delta]\Delta}\| \le \sup_{-\tau\le s\le t}|y(s)|. \tag{2.16}$$

These properties will be used frequently in what follows, without further explanation. For the existence and uniqueness of the solution of (2.2) and the boundedness of the solution's moments, we impose the following hypotheses (e.g., see [11]).

Assumption 2.2. For each integer $j\ge 1$ and $i\in S$, there exists a positive constant $C_j$ such that

$$\bigl|f(\varphi,i)-f(\psi,i)\bigr|^2 \vee \bigl|g(\varphi,i)-g(\psi,i)\bigr|^2 \le C_j\|\varphi-\psi\|^2 \tag{2.17}$$

for $\varphi,\psi\in C([-\tau,0];\mathbb{R}^n)$ with $\|\varphi\|\vee\|\psi\|\le j$.

Assumption 2.3. There is a constant $K>0$ such that

$$|f(\varphi,i)|^2 \vee |g(\varphi,i)|^2 \le K\bigl(1+\|\varphi\|^2\bigr) \tag{2.18}$$

for $\varphi\in C([-\tau,0];\mathbb{R}^n)$ and $i\in S$.

Assumption 2.4. There exists a constant $\kappa\in(0,1)$ such that

$$\bigl|u(\varphi,i)-u(\psi,i)\bigr| \le \kappa\|\varphi-\psi\| \tag{2.19}$$

for all $\varphi,\psi\in C([-\tau,0];\mathbb{R}^n)$ and $i\in S$, and $u(0,i)=0$.

We also impose the following condition on the initial data.

Assumption 2.5. $\xi\in L^p_{\mathcal{F}_0}([-\tau,0];\mathbb{R}^n)$ for some $p\ge 2$, and there exists a nondecreasing function $\alpha(\cdot)$ such that

$$E\Bigl(\sup_{-\tau\le s\le t\le 0}|\xi(t)-\xi(s)|^2\Bigr) \le \alpha(t-s), \tag{2.20}$$

with the property that $\alpha(s)\to 0$ as $s\to 0$.

From Mao and Yuan [11], we may therefore state the following theorem.

Theorem 2.6. Let $p\ge 2$. If Assumptions 2.3–2.5 are satisfied, then

$$E\Bigl(\sup_{-\tau\le t\le T}|x(t)|^p\Bigr) \le H_{\kappa,p,T,K,\xi}, \tag{2.21}$$

for any $T>0$, where $H_{\kappa,p,T,K,\xi}$ is a constant depending on $\kappa$, $p$, $T$, $K$, and $\xi$.

The primary aim of this paper is to establish the following strong mean square convergence theorem for the EM approximations.

Theorem 2.7. If Assumptions 2.2–2.5 hold, then

$$\lim_{\Delta\to 0} E\Bigl(\sup_{0\le t\le T}|x(t)-y(t)|^2\Bigr) = 0. \tag{2.22}$$

The proof of this theorem is very technical, so we present some lemmas in the next section and then complete the proof in the subsequent section.
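The convergence asserted by Theorem 2.7 can be observed numerically in a degenerate special case. The sketch below is our own illustration and not part of the paper: it takes $g\equiv 0$, a single regime, and $u(\varphi)=\kappa\varphi(-\tau)$, so the EM recursion (2.7) becomes a deterministic scheme for the neutral delay equation $d[x(t)-\kappa x(t-\tau)]=-x(t)\,dt$ with $\xi\equiv 1$; the error at $t=T$ against a fine-step reference solution shrinks as $\Delta$ decreases.

```python
import numpy as np

def euler_neutral(kappa, tau, T, dt):
    """Deterministic EM (g = 0, one regime) for d[x(t) - kappa*x(t-tau)] = -x(t) dt
    with initial segment xi(t) = 1 on [-tau, 0]; returns y(k*dt) on [0, T]."""
    N, M = int(round(tau / dt)), int(round(T / dt))
    y = np.ones(N + M + 1)                    # y[N+k] = y(k*dt); initial segment = 1
    for k in range(M):
        lag_m1 = y[k - 1] if k >= 1 else 1.0  # y(-(N+1)dt) := xi(-tau)
        # scheme (2.7) with f = -x, g = 0: neutral difference + drift*dt
        y[N + k + 1] = y[N + k] + kappa * (y[k] - lag_m1) - y[N + k] * dt
    return y[N:]

ref = euler_neutral(0.2, 0.5, 1.0, 1.0 / 1024)[-1]          # fine-step reference x(1)
errors = [abs(euler_neutral(0.2, 0.5, 1.0, dt)[-1] - ref)
          for dt in (1.0 / 16, 1.0 / 32, 1.0 / 64)]         # shrinks with dt
```

With noise present, the analogous experiment would require a Monte Carlo estimate of the mean-square error over Brownian paths coupled across stepsizes; the deterministic case merely visualizes the limit statement.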

3. Lemmas

Lemma 3.1. If Assumptions 2.3–2.5 hold, then for any $p\ge 2$ there exists a constant $H_p$, independent of $\Delta$, such that

$$E\Bigl(\sup_{-\tau\le t\le T}|y(t)|^p\Bigr) \le H_p. \tag{3.1}$$

Proof. For $t\in[k\Delta,(k+1)\Delta)$, $k=0,1,2,\dots,M-1$, set

$$\tilde y(t) := y(t) - u\Bigl(\bar y_{(k-1)\Delta}+\frac{t-k\Delta}{\Delta}\bigl(\bar y_{k\Delta}-\bar y_{(k-1)\Delta}\bigr),\,r_k^\Delta\Bigr),$$

and

$$h(t) := E\Bigl(\sup_{-\tau\le s\le t}|y(s)|^p\Bigr), \qquad \tilde h(t) := E\Bigl(\sup_{0\le s\le t}|\tilde y(s)|^p\Bigr). \tag{3.2}$$

Recall the inequality that, for $p\ge 1$ and any $\varepsilon>0$, $|x+y|^p \le (1+\varepsilon)^{p-1}\bigl(|x|^p+\varepsilon^{1-p}|y|^p\bigr)$. Then we have, from Assumption 2.4,

$$|y(t)|^p \le (1+\varepsilon)^{p-1}\Bigl(|\tilde y(t)|^p + \varepsilon^{1-p}\Bigl|u\Bigl(\bar y_{(k-1)\Delta}+\frac{t-k\Delta}{\Delta}(\bar y_{k\Delta}-\bar y_{(k-1)\Delta}),\,r_k^\Delta\Bigr)\Bigr|^p\Bigr) \le (1+\varepsilon)^{p-1}\Bigl(|\tilde y(t)|^p + \varepsilon^{1-p}\kappa^p\Bigl\|\bar y_{(k-1)\Delta}+\frac{t-k\Delta}{\Delta}(\bar y_{k\Delta}-\bar y_{(k-1)\Delta})\Bigr\|^p\Bigr). \tag{3.3}$$

By $\|\bar y_{-\Delta}\|\le\|\bar y_0\|$ and noting $k=0,1,2,\dots,M-1$,

$$\Bigl\|\bar y_{(k-1)\Delta}+\frac{t-k\Delta}{\Delta}(\bar y_{k\Delta}-\bar y_{(k-1)\Delta})\Bigr\|^p \le \Bigl(\frac{t-k\Delta}{\Delta}\|\bar y_{k\Delta}\|+\frac{(k+1)\Delta-t}{\Delta}\|\bar y_{(k-1)\Delta}\|\Bigr)^p \le \Bigl(\frac{t-k\Delta}{\Delta}\sup_{-\tau\le s\le t}|y(s)|+\frac{(k+1)\Delta-t}{\Delta}\sup_{-\tau\le s\le t}|y(s)|\Bigr)^p \le \sup_{-\tau\le s\le t}|y(s)|^p. \tag{3.4}$$

Consequently, choosing $\varepsilon=\kappa/(1-\kappa)$,

$$|y(t)|^p \le (1-\kappa)^{1-p}|\tilde y(t)|^p + \kappa\sup_{-\tau\le s\le t}|y(s)|^p. \tag{3.5}$$

Hence

$$h(t) \le E\|\xi\|^p + E\Bigl(\sup_{0\le s\le t}|y(s)|^p\Bigr) \le E\|\xi\|^p + (1-\kappa)^{1-p}\tilde h(t) + \kappa h(t), \tag{3.6}$$

which implies

$$h(t) \le \frac{E\|\xi\|^p}{1-\kappa} + \frac{\tilde h(t)}{(1-\kappa)^p}. \tag{3.7}$$

Since

$$\tilde y(t) = \tilde y(0) + \int_0^t f(\bar y_s,\bar r(s))\,ds + \int_0^t g(\bar y_s,\bar r(s))\,dw(s), \tag{3.8}$$

with $\tilde y(0)=y(0)-u(\bar y_{-\Delta},r_0^\Delta)$, by the Hölder inequality we have

$$|\tilde y(t)|^p \le 3^{p-1}\Bigl(|\tilde y(0)|^p + t^{p-1}\int_0^t|f(\bar y_s,\bar r(s))|^p\,ds + \Bigl|\int_0^t g(\bar y_s,\bar r(s))\,dw(s)\Bigr|^p\Bigr). \tag{3.9}$$

Hence, for any $t_1\in[0,T]$,

$$\tilde h(t_1) \le 3^{p-1}\Bigl(E|\tilde y(0)|^p + T^{p-1}E\int_0^{t_1}|f(\bar y_s,\bar r(s))|^p\,ds + E\Bigl(\sup_{0\le t\le t_1}\Bigl|\int_0^t g(\bar y_s,\bar r(s))\,dw(s)\Bigr|^p\Bigr)\Bigr). \tag{3.10}$$

By Assumption 2.4 and the fact $\|\bar y_{-\Delta}\|\le\|\bar y_0\|$, we compute that

$$E|\tilde y(0)|^p = E\bigl|y(0)-u(\bar y_{-\Delta},r_0^\Delta)\bigr|^p \le E\bigl(|y(0)|+\kappa\|\bar y_{-\Delta}\|\bigr)^p \le E\bigl(|y(0)|+\kappa\|\bar y_0\|\bigr)^p \le E\bigl(|\xi(0)|+\kappa\|\xi\|\bigr)^p \le (1+\kappa)^p E\|\xi\|^p. \tag{3.11}$$

Assumption 2.3 and the Hölder inequality give

$$E\int_0^{t_1}|f(\bar y_s,\bar r(s))|^p\,ds \le E\int_0^{t_1}K^{p/2}\bigl(1+\|\bar y_s\|^2\bigr)^{p/2}ds \le K^{p/2}2^{(p-2)/2}E\int_0^{t_1}\bigl(1+\|\bar y_s\|^p\bigr)ds \le K^{p/2}2^{(p-2)/2}\Bigl(T + \int_0^{t_1}E\Bigl(\sup_{-\tau\le t\le s}|y(t)|^p\Bigr)ds\Bigr). \tag{3.12}$$

Applying the Burkholder-Davis-Gundy inequality, the Hölder inequality, and Assumption 2.3 yields

$$E\Bigl(\sup_{0\le t\le t_1}\Bigl|\int_0^t g(\bar y_s,\bar r(s))\,dw(s)\Bigr|^p\Bigr) \le C_p E\Bigl(\int_0^{t_1}|g(\bar y_s,\bar r(s))|^2\,ds\Bigr)^{p/2} \le C_p T^{(p-2)/2}E\int_0^{t_1}|g(\bar y_s,\bar r(s))|^p\,ds \le C_p T^{(p-2)/2}K^{p/2}2^{(p-2)/2}E\int_0^{t_1}\bigl(1+\|\bar y_s\|^p\bigr)ds \le C_p T^{(p-2)/2}K^{p/2}2^{(p-2)/2}\Bigl(T + \int_0^{t_1}E\Bigl(\sup_{-\tau\le t\le s}|y(t)|^p\Bigr)ds\Bigr), \tag{3.13}$$

where $C_p$ is a constant depending only on $p$. Substituting (3.11), (3.12), and (3.13) into (3.10) gives

$$\tilde h(t_1) \le 3^{p-1}\Bigl((1+\kappa)^p E\|\xi\|^p + K^{p/2}2^{(p-2)/2}\bigl(T^{p-1}+C_pT^{(p-2)/2}\bigr)T\Bigr) + 3^{p-1}K^{p/2}2^{(p-2)/2}\bigl(T^{p-1}+C_pT^{(p-2)/2}\bigr)\int_0^{t_1}E\Bigl(\sup_{-\tau\le t\le s}|y(t)|^p\Bigr)ds =: C_1 + C_2\int_0^{t_1}h(s)\,ds. \tag{3.14}$$

Hence, from (3.7), we have

$$h(t_1) \le \frac{E\|\xi\|^p}{1-\kappa} + \frac{1}{(1-\kappa)^p}\Bigl(C_1 + C_2\int_0^{t_1}h(s)\,ds\Bigr) \le \frac{E\|\xi\|^p}{1-\kappa} + \frac{C_1}{(1-\kappa)^p} + \frac{C_2}{(1-\kappa)^p}\int_0^{t_1}h(s)\,ds. \tag{3.15}$$

By the Gronwall inequality, we find that

$$h(T) \le \Bigl(\frac{E\|\xi\|^p}{1-\kappa} + \frac{C_1}{(1-\kappa)^p}\Bigr)e^{C_2T/(1-\kappa)^p}. \tag{3.16}$$

From the expressions of $C_1$ and $C_2$, we know that they are positive constants depending only on $\xi$, $\kappa$, $K$, $p$, and $T$, but independent of $\Delta$. The proof is complete.

Lemma 3.2. If Assumptions 2.3–2.5 hold, then for any integer $l>1$,

$$E\Bigl(\sup_{0\le k\le M-1}\|\bar y_{k\Delta}-\bar y_{(k-1)\Delta}\|^2\Bigr) \le \bar c_1\alpha(\Delta) + \hat c_1 + c_1(l)\Delta^{(l-1)/l} =: \gamma(\Delta), \tag{3.17}$$

where $\bar c_1 = 1/(1-2\kappa)$, $\hat c_1 = 8\kappa H_2/(1-2\kappa)$, and $c_1(l)$ is a constant depending on $l$ but independent of $\Delta$.

Proof. For $\theta\in[i\Delta,(i+1)\Delta)$, where $i=-N,-N+1,\dots,-1$, from (2.8),

$$\bigl|\bar y_{k\Delta}(\theta)-\bar y_{(k-1)\Delta}(\theta)\bigr| \le \frac{(i+1)\Delta-\theta}{\Delta}\bigl|y((k+i)\Delta)-y((k-1+i)\Delta)\bigr| + \frac{\theta-i\Delta}{\Delta}\bigl|y((k+i+1)\Delta)-y((k+i)\Delta)\bigr| \le \bigl|y((k+i)\Delta)-y((k-1+i)\Delta)\bigr| \vee \bigl|y((k+i+1)\Delta)-y((k+i)\Delta)\bigr|, \tag{3.18}$$

so

$$\|\bar y_{k\Delta}-\bar y_{(k-1)\Delta}\| \le \sup_{-N\le i\le 0}\bigl|y((k+i)\Delta)-y((k-1+i)\Delta)\bigr|. \tag{3.19}$$

We therefore have

$$E\Bigl(\sup_{0\le k\le M-1}\|\bar y_{k\Delta}-\bar y_{(k-1)\Delta}\|^2\Bigr) \le E\Bigl(\sup_{0\le k\le M-1}\,\sup_{-N\le i\le 0}\bigl|y((k+i)\Delta)-y((k-1+i)\Delta)\bigr|^2\Bigr) \le E\Bigl(\sup_{-N\le k\le M-1}\bigl|y(k\Delta)-y((k-1)\Delta)\bigr|^2\Bigr). \tag{3.20}$$

When $-N\le k\le 0$, by Assumption 2.5 and $y(-(N+1)\Delta)=\xi(-N\Delta)$,

$$E\Bigl(\sup_{-N\le k\le 0}\bigl|y(k\Delta)-y((k-1)\Delta)\bigr|^2\Bigr) \le E\Bigl(\sup_{-N\le k\le 0}\bigl|\xi(k\Delta)-\xi((k-1)\Delta)\bigr|^2\Bigr) \le \alpha(\Delta). \tag{3.21}$$

When $1\le k\le M-1$, from (2.7) we have

$$y(k\Delta)-y((k-1)\Delta) = u\bigl(\bar y_{(k-1)\Delta},r_{k-1}^\Delta\bigr) - u\bigl(\bar y_{(k-2)\Delta},r_{k-2}^\Delta\bigr) + f\bigl(\bar y_{(k-1)\Delta},r_{k-1}^\Delta\bigr)\Delta + g\bigl(\bar y_{(k-1)\Delta},r_{k-1}^\Delta\bigr)\Delta w_{k-1}. \tag{3.22}$$

Recall the elementary inequality: for any $x,y>0$ and $\varepsilon\in(0,1)$, $(x+y)^2\le x^2/\varepsilon + y^2/(1-\varepsilon)$. Then

$$\bigl|y(k\Delta)-y((k-1)\Delta)\bigr|^2 \le \frac{1}{\varepsilon}\Bigl|u\bigl(\bar y_{(k-1)\Delta},r_{k-1}^\Delta\bigr)-u\bigl(\bar y_{(k-2)\Delta},r_{k-1}^\Delta\bigr) + u\bigl(\bar y_{(k-2)\Delta},r_{k-1}^\Delta\bigr)-u\bigl(\bar y_{(k-2)\Delta},r_{k-2}^\Delta\bigr)\Bigr|^2 + \frac{1}{1-\varepsilon}\Bigl|f\bigl(\bar y_{(k-1)\Delta},r_{k-1}^\Delta\bigr)\Delta + g\bigl(\bar y_{(k-1)\Delta},r_{k-1}^\Delta\bigr)\Delta w_{k-1}\Bigr|^2 \le \frac{2\kappa^2}{\varepsilon}\|\bar y_{(k-1)\Delta}-\bar y_{(k-2)\Delta}\|^2 + \frac{8\kappa^2}{\varepsilon}\|\bar y_{(k-2)\Delta}\|^2 + \frac{2\Delta^2}{1-\varepsilon}\bigl|f\bigl(\bar y_{(k-1)\Delta},r_{k-1}^\Delta\bigr)\bigr|^2 + \frac{2}{1-\varepsilon}\bigl|g\bigl(\bar y_{(k-1)\Delta},r_{k-1}^\Delta\bigr)\Delta w_{k-1}\bigr|^2. \tag{3.23}$$

Consequently,

$$E\Bigl(\sup_{1\le k\le M-1}\bigl|y(k\Delta)-y((k-1)\Delta)\bigr|^2\Bigr) \le \frac{2\kappa^2}{\varepsilon}E\Bigl(\sup_{1\le k\le M-1}\|\bar y_{(k-1)\Delta}-\bar y_{(k-2)\Delta}\|^2\Bigr) + \frac{8\kappa^2}{\varepsilon}E\Bigl(\sup_{1\le k\le M-1}\|\bar y_{(k-2)\Delta}\|^2\Bigr) + \frac{2\Delta^2}{1-\varepsilon}E\Bigl(\sup_{1\le k\le M-1}\bigl|f\bigl(\bar y_{(k-1)\Delta},r_{k-1}^\Delta\bigr)\bigr|^2\Bigr) + \frac{2}{1-\varepsilon}E\Bigl(\sup_{1\le k\le M-1}\bigl|g\bigl(\bar y_{(k-1)\Delta},r_{k-1}^\Delta\bigr)\Delta w_{k-1}\bigr|^2\Bigr). \tag{3.24}$$

We deal with these four terms separately. By (3.21),

$$E\Bigl(\sup_{1\le k\le M-1}\|\bar y_{(k-1)\Delta}-\bar y_{(k-2)\Delta}\|^2\Bigr) \le E\Bigl(\sup_{-N\le k\le M-1}\bigl|y(k\Delta)-y((k-1)\Delta)\bigr|^2\Bigr) \le \alpha(\Delta) + E\Bigl(\sup_{1\le k\le M-1}\bigl|y(k\Delta)-y((k-1)\Delta)\bigr|^2\Bigr). \tag{3.25}$$

Noting that $E(\sup_{0\le t\le T}\|\bar y_t\|^2)\le E(\sup_{-\tau\le t\le T}|y(t)|^2)\le H_2$, where $H_p$ was defined in Lemma 3.1, by Assumption 2.3 and (2.15),

$$E\Bigl(\sup_{1\le k\le M-1}\bigl|f\bigl(\bar y_{(k-1)\Delta},r_{k-1}^\Delta\bigr)\bigr|^2\Bigr) \le E\Bigl(\sup_{1\le k\le M-1}K\bigl(1+\|\bar y_{(k-1)\Delta}\|^2\bigr)\Bigr) \le K + KE\Bigl(\sup_{-N\le k\le M-1}|y(k\Delta)|^2\Bigr) \le K + KE\Bigl(\sup_{-\tau\le t\le T}|y(t)|^2\Bigr) \le K(1+H_2). \tag{3.26}$$

By the Hölder inequality, for any integer $l>1$,

$$E\Bigl(\sup_{1\le k\le M-1}\bigl|g\bigl(\bar y_{(k-1)\Delta},r_{k-1}^\Delta\bigr)\Delta w_{k-1}\bigr|^2\Bigr) \le \Bigl[E\Bigl(\sup_{0\le k\le M-1}\bigl|g\bigl(\bar y_{k\Delta},r_k^\Delta\bigr)\bigr|^{2l/(l-1)}\Bigr)\Bigr]^{(l-1)/l}\Bigl[E\Bigl(\sup_{1\le k\le M-1}|\Delta w_{k-1}|^{2l}\Bigr)\Bigr]^{1/l} \le \Bigl[2^{1/(l-1)}K^{l/(l-1)}\Bigl(1+E\sup_{0\le k\le M-1}\|\bar y_{k\Delta}\|^{2l/(l-1)}\Bigr)\Bigr]^{(l-1)/l}\Bigl[\sum_{k=0}^{M-1}E|\Delta w_k|^{2l}\Bigr]^{1/l} \le \Bigl[2^{1/(l-1)}K^{l/(l-1)}\bigl(1+H_{2l/(l-1)}\bigr)\Bigr]^{(l-1)/l}\bigl[(2l-1)!!\,T\Delta^{l-1}\bigr]^{1/l} \le D(l)\Delta^{(l-1)/l}, \tag{3.27}$$

where $D(l)$ is a constant depending on $l$, and we used $E|\Delta w_k|^{2l}=(2l-1)!!\Delta^l$ together with $M\Delta=T$. Substituting (3.25), (3.26), and (3.27) into (3.24), choosing $\varepsilon=\kappa$, and noting $\Delta\in(0,1)$, we have

$$E\Bigl(\sup_{1\le k\le M-1}\bigl|y(k\Delta)-y((k-1)\Delta)\bigr|^2\Bigr) \le \frac{2\kappa}{1-2\kappa}\alpha(\Delta) + \frac{8\kappa}{1-2\kappa}H_2 + \frac{2K(1+H_2)+2D(l)}{(1-2\kappa)^2}\Delta^{(l-1)/l}. \tag{3.28}$$

Combining (3.21) with (3.28), from (3.20) we have

$$E\Bigl(\sup_{0\le k\le M-1}\|\bar y_{k\Delta}-\bar y_{(k-1)\Delta}\|^2\Bigr) \le E\Bigl(\sup_{-N\le k\le 0}\bigl|y(k\Delta)-y((k-1)\Delta)\bigr|^2\Bigr) + E\Bigl(\sup_{1\le k\le M-1}\bigl|y(k\Delta)-y((k-1)\Delta)\bigr|^2\Bigr) \le \frac{1}{1-2\kappa}\alpha(\Delta) + \frac{8\kappa}{1-2\kappa}H_2 + \frac{2K(1+H_2)+2D(l)}{(1-2\kappa)^2}\Delta^{(l-1)/l}, \tag{3.29}$$

as required.

Lemma 3.3. If Assumptions 2.3–2.5 hold, then

$$E\Bigl(\sup_{0\le s\le T}\|y_s-\bar y_s\|^2\Bigr) \le \bar c_2 + \hat c_2\,\alpha(2\Delta) + c_2(l)\Delta^{(l-1)/l} =: \beta(\Delta), \tag{3.30}$$

where $\bar c_2$ and $\hat c_2$ are constants independent of $l$ and $\Delta$, and $c_2(l)$ is a constant depending on $l$ but independent of $\Delta$.

Proof. Fix any $s\in[0,T]$ and $\theta\in[-\tau,0]$. Let $k_s\in\{0,1,2,\dots,M-1\}$, $k_\theta\in\{-N,-N+1,\dots,-1\}$, and $k_{s+\theta}\in\{-N,-N+1,\dots,M-1\}$ be the integers for which $s\in[k_s\Delta,(k_s+1)\Delta)$, $\theta\in[k_\theta\Delta,(k_\theta+1)\Delta)$, and $s+\theta\in[k_{s+\theta}\Delta,(k_{s+\theta}+1)\Delta)$, respectively. Clearly,

$$0 \le s+\theta-(k_s+k_\theta)\Delta \le 2\Delta, \qquad k_{s+\theta}-(k_s+k_\theta)\in\{0,1,2\}. \tag{3.31}$$

From (2.8),

$$\bar y_s(\theta) = \bar y_{k_s\Delta}(\theta) = y\bigl((k_s+k_\theta)\Delta\bigr) + \frac{\theta-k_\theta\Delta}{\Delta}\Bigl(y\bigl((k_s+k_\theta+1)\Delta\bigr)-y\bigl((k_s+k_\theta)\Delta\bigr)\Bigr), \tag{3.32}$$

which yields

$$\bigl|y_s(\theta)-\bar y_s(\theta)\bigr| = \bigl|y(s+\theta)-\bar y_{k_s\Delta}(\theta)\bigr| \le \bigl|y(s+\theta)-y((k_s+k_\theta)\Delta)\bigr| + \bigl|y((k_s+k_\theta+1)\Delta)-y((k_s+k_\theta)\Delta)\bigr|, \tag{3.33}$$

so, by (3.20) and Lemma 3.2, noting from (2.11) that $\bar y_t=\bar y_{(M-1)\Delta}$ for $t\in[(M-1)\Delta,M\Delta]$,

$$E\Bigl(\sup_{0\le s\le T}\|y_s-\bar y_s\|^2\Bigr) \le 2E\Bigl(\sup_{-\tau\le\theta\le 0,\,0\le s\le T}\bigl|y(s+\theta)-y((k_s+k_\theta)\Delta)\bigr|^2\Bigr) + 2E\Bigl(\sup_{0\le k_s\le M-1}\,\sup_{-N\le k_\theta\le 0}\bigl|y((k_s+k_\theta+1)\Delta)-y((k_s+k_\theta)\Delta)\bigr|^2\Bigr) \le 2E\Bigl(\sup_{-\tau\le\theta\le 0,\,0\le s\le T}\bigl|y(s+\theta)-y((k_s+k_\theta)\Delta)\bigr|^2\Bigr) + 2\gamma(\Delta). \tag{3.34}$$

Therefore, the key is to compute $E(\sup_{-\tau\le\theta\le 0,\,0\le s\le T}|y(s+\theta)-y((k_s+k_\theta)\Delta)|^2)$. We discuss the following four possible cases.

Case 1 ($k_s+k_\theta\ge 0$). We again divide this case into three possible subcases according to $k_{s+\theta}-(k_s+k_\theta)\in\{0,1,2\}$.

Subcase 1 ($k_{s+\theta}-(k_s+k_\theta)=0$). From (2.13),

$$y(s+\theta)-y\bigl((k_s+k_\theta)\Delta\bigr) = u\Bigl(\bar y_{(k_{s+\theta}-1)\Delta}+\frac{s+\theta-k_{s+\theta}\Delta}{\Delta}\bigl(\bar y_{k_{s+\theta}\Delta}-\bar y_{(k_{s+\theta}-1)\Delta}\bigr),\,r_{k_{s+\theta}}^\Delta\Bigr) - u\bigl(\bar y_{(k_{s+\theta}-1)\Delta},r_{k_{s+\theta}-1}^\Delta\bigr) + \int_{k_{s+\theta}\Delta}^{s+\theta} f(\bar y_v,\bar r(v))\,dv + \int_{k_{s+\theta}\Delta}^{s+\theta} g(\bar y_v,\bar r(v))\,dw(v), \tag{3.35}$$

which yields

$$E\Bigl(\sup_{-\tau\le\theta\le 0,\,0\le s\le T,\,k_s+k_\theta\ge 0}\bigl|y(s+\theta)-y((k_s+k_\theta)\Delta)\bigr|^2\Bigr) \le 3E\Bigl(\sup\Bigl|u\Bigl(\bar y_{(k_{s+\theta}-1)\Delta}+\frac{s+\theta-k_{s+\theta}\Delta}{\Delta}(\bar y_{k_{s+\theta}\Delta}-\bar y_{(k_{s+\theta}-1)\Delta}),\,r_{k_{s+\theta}}^\Delta\Bigr)-u\bigl(\bar y_{(k_{s+\theta}-1)\Delta},r_{k_{s+\theta}-1}^\Delta\bigr)\Bigr|^2\Bigr) + 3E\Bigl(\sup\Bigl|\int_{k_{s+\theta}\Delta}^{s+\theta} f(\bar y_r,\bar r(r))\,dr\Bigr|^2\Bigr) + 3E\Bigl(\sup\Bigl|\int_{k_{s+\theta}\Delta}^{s+\theta} g(\bar y_r,\bar r(r))\,dw(r)\Bigr|^2\Bigr), \tag{3.36}$$

where each supremum is taken over $-\tau\le\theta\le 0$, $0\le s\le T$ with $k_{s+\theta}\ge 0$. From Assumption 2.4, (3.24), (3.26), and Lemma 3.2, we have

$$E\Bigl(\sup\Bigl|u\Bigl(\bar y_{(k_{s+\theta}-1)\Delta}+\frac{s+\theta-k_{s+\theta}\Delta}{\Delta}(\bar y_{k_{s+\theta}\Delta}-\bar y_{(k_{s+\theta}-1)\Delta}),\,r_{k_{s+\theta}}^\Delta\Bigr)-u\bigl(\bar y_{(k_{s+\theta}-1)\Delta},r_{k_{s+\theta}-1}^\Delta\bigr)\Bigr|^2\Bigr) \le 2\kappa^2 E\Bigl(\sup\Bigl\|\frac{s+\theta-k_{s+\theta}\Delta}{\Delta}\bigl(\bar y_{k_{s+\theta}\Delta}-\bar y_{(k_{s+\theta}-1)\Delta}\bigr)\Bigr\|^2\Bigr) + 8\kappa^2 H_2 \le 2\kappa^2 E\Bigl(\sup_{0\le k_{s+\theta}\le M-1}\|\bar y_{k_{s+\theta}\Delta}-\bar y_{(k_{s+\theta}-1)\Delta}\|^2\Bigr) + 8\kappa^2 H_2 \le 2\kappa^2\gamma(\Delta) + 8\kappa^2 H_2. \tag{3.37}$$

By the Hölder inequality, Assumption 2.3, and Lemma 3.1,

$$E\Bigl(\sup\Bigl|\int_{k_{s+\theta}\Delta}^{s+\theta} f(\bar y_r,\bar r(r))\,dr\Bigr|^2\Bigr) \le \Delta\,E\Bigl(\sup\int_{k_{s+\theta}\Delta}^{s+\theta}\bigl|f(\bar y_r,\bar r(r))\bigr|^2dr\Bigr) \le K\Delta\,E\int_0^T\Bigl(1+\sup_{-\tau\le t\le r}|y(t)|^2\Bigr)dr \le K\Delta\int_0^T(1+H_2)\,dr \le KT(1+H_2)\Delta. \tag{3.38}$$

Setting $v=s+\theta$ and $k_v=k_{s+\theta}$, and noting that in this subcase $\bar y_r$ and $\bar r(r)$ are constant on $[k_v\Delta,v]$, the Hölder inequality yields

$$E\Bigl(\sup\Bigl|\int_{k_{s+\theta}\Delta}^{s+\theta} g(\bar y_r,\bar r(r))\,dw(r)\Bigr|^2\Bigr) = E\Bigl(\sup_{0\le v\le T,\,0\le k_v\le M-1}\bigl|g(\bar y_{k_v\Delta},\bar r(v))\bigl(w(v)-w(k_v\Delta)\bigr)\bigr|^2\Bigr) \le \Bigl[E\Bigl(\sup_{0\le v\le T,\,0\le k_v\le M-1}\bigl|g(\bar y_{k_v\Delta},\bar r(v))\bigr|^{2l/(l-1)}\Bigr)\Bigr]^{(l-1)/l}\Bigl[E\Bigl(\sup_{0\le v\le T,\,0\le k_v\le M-1}|w(v)-w(k_v\Delta)|^{2l}\Bigr)\Bigr]^{1/l}. \tag{3.39}$$

The Doob martingale inequality gives

$$E\Bigl(\sup_{0\le v\le T,\,0\le k_v\le M-1}|w(v)-w(k_v\Delta)|^{2l}\Bigr) = E\Bigl(\sup_{0\le k_v\le M-1}\,\sup_{k_v\Delta\le v\le(k_v+1)\Delta}|w(v)-w(k_v\Delta)|^{2l}\Bigr) \le \sum_{k_v=0}^{M-1}E\Bigl(\sup_{k_v\Delta\le v\le(k_v+1)\Delta}|w(v)-w(k_v\Delta)|^{2l}\Bigr) \le \Bigl(\frac{2l}{2l-1}\Bigr)^{2l}\sum_{k_v=0}^{M-1}E\bigl|w((k_v+1)\Delta)-w(k_v\Delta)\bigr|^{2l}. \tag{3.40}$$

By (3.27), we therefore have

$$E\Bigl(\sup\Bigl|\int_{k_{s+\theta}\Delta}^{s+\theta} g(\bar y_r,\bar r(r))\,dw(r)\Bigr|^2\Bigr) \le \Bigl(\frac{2l}{2l-1}\Bigr)^2\Bigl[E\Bigl(\sup_{0\le k_v\le M-1}\bigl|g(\bar y_{k_v\Delta},\bar r(v))\bigr|^{2l/(l-1)}\Bigr)\Bigr]^{(l-1)/l}\Bigl[\sum_{k_v=0}^{M-1}E\bigl|w((k_v+1)\Delta)-w(k_v\Delta)\bigr|^{2l}\Bigr]^{1/l} \le \Bigl(\frac{2l}{2l-1}\Bigr)^2 D(l)\Delta^{(l-1)/l}. \tag{3.41}$$

Substituting (3.37), (3.38), and (3.41) into (3.36) and noting $\Delta\in(0,1)$ give

$$E\Bigl(\sup_{-\tau\le\theta\le 0,\,0\le s\le T,\,k_s+k_\theta\ge 0}\bigl|y(s+\theta)-y((k_s+k_\theta)\Delta)\bigr|^2\Bigr) \le 6\kappa^2\gamma(\Delta) + 24\kappa^2 H_2 + 3\Bigl(KT(1+H_2)+\Bigl(\frac{2l}{2l-1}\Bigr)^2 D(l)\Bigr)\Delta^{(l-1)/l} =: 6\kappa^2\gamma(\Delta) + 24\kappa^2 H_2 + \tilde c_1(l)\Delta^{(l-1)/l}. \tag{3.42}$$

Subcase 2 ($k_{s+\theta}-(k_s+k_\theta)=1$). From (2.7) and (2.13),

$$y(s+\theta)-y\bigl((k_s+k_\theta)\Delta\bigr) = \Bigl(y(k_{s+\theta}\Delta)-y((k_s+k_\theta)\Delta)\Bigr) + \Bigl(y(s+\theta)-y(k_{s+\theta}\Delta)\Bigr) = u\bigl(\bar y_{(k_s+k_\theta)\Delta},r_{k_s+k_\theta}^\Delta\bigr) - u\bigl(\bar y_{(k_s+k_\theta-1)\Delta},r_{k_s+k_\theta-1}^\Delta\bigr) + f\bigl(\bar y_{(k_s+k_\theta)\Delta},r_{k_s+k_\theta}^\Delta\bigr)\Delta + g\bigl(\bar y_{(k_s+k_\theta)\Delta},r_{k_s+k_\theta}^\Delta\bigr)\Delta w_{k_s+k_\theta} + \Bigl(y(s+\theta)-y(k_{s+\theta}\Delta)\Bigr), \tag{3.43}$$

so we have

$$E\Bigl(\sup_{-\tau\le\theta\le 0,\,0\le s\le T,\,k_s+k_\theta\ge 0}\bigl|y(s+\theta)-y((k_s+k_\theta)\Delta)\bigr|^2\Bigr) \le 4\Bigl\{E\Bigl(\sup\bigl|u(\bar y_{(k_s+k_\theta)\Delta},r_{k_s+k_\theta}^\Delta)-u(\bar y_{(k_s+k_\theta-1)\Delta},r_{k_s+k_\theta-1}^\Delta)\bigr|^2\Bigr) + E\Bigl(\sup\bigl|f(\bar y_{(k_s+k_\theta)\Delta},r_{k_s+k_\theta}^\Delta)\Delta\bigr|^2\Bigr) + E\Bigl(\sup\bigl|g(\bar y_{(k_s+k_\theta)\Delta},r_{k_s+k_\theta}^\Delta)\Delta w_{k_s+k_\theta}\bigr|^2\Bigr) + E\Bigl(\sup\bigl|y(s+\theta)-y(k_{s+\theta}\Delta)\bigr|^2\Bigr)\Bigr\}. \tag{3.44}$$

Since

$$E\Bigl(\sup\bigl|u(\bar y_{(k_s+k_\theta)\Delta},r_{k_s+k_\theta}^\Delta)-u(\bar y_{(k_s+k_\theta-1)\Delta},r_{k_s+k_\theta-1}^\Delta)\bigr|^2\Bigr) \le 2\kappa^2\gamma(\Delta) + 8\kappa^2 H_2, \tag{3.45}$$

from (3.26), (3.27), and Subcase 1, noting $\Delta\in(0,1)$, we have

$$E\Bigl(\sup_{-\tau\le\theta\le 0,\,0\le s\le T,\,k_s+k_\theta\ge 0}\bigl|y(s+\theta)-y((k_s+k_\theta)\Delta)\bigr|^2\Bigr) \le 4\Bigl[2\kappa^2\gamma(\Delta)+8\kappa^2 H_2 + K(1+H_2)\Delta^2 + D(l)\Delta^{(l-1)/l} + 6\kappa^2\gamma(\Delta)+24\kappa^2 H_2+\tilde c_1(l)\Delta^{(l-1)/l}\Bigr] =: 32\kappa^2\gamma(\Delta) + 128\kappa^2 H_2 + \tilde c_2(l)\Delta^{(l-1)/l}. \tag{3.46}$$

Subcase 3 ($k_{s+\theta}-(k_s+k_\theta)=2$). From (2.13), we have

$$y(s+\theta)-y\bigl((k_s+k_\theta)\Delta\bigr) = \Bigl(y(s+\theta)-y((k_{s+\theta}-1)\Delta)\Bigr) + \Bigl(y((k_{s+\theta}-1)\Delta)-y((k_s+k_\theta)\Delta)\Bigr), \tag{3.47}$$

so, from Subcase 2, we have

$$E\Bigl(\sup_{-\tau\le\theta\le 0,\,0\le s\le T,\,k_s+k_\theta\ge 0}\bigl|y(s+\theta)-y((k_s+k_\theta)\Delta)\bigr|^2\Bigr) \le 2E\Bigl(\sup\bigl|y(s+\theta)-y((k_{s+\theta}-1)\Delta)\bigr|^2\Bigr) + 2E\Bigl(\sup\bigl|y((k_{s+\theta}-1)\Delta)-y((k_s+k_\theta)\Delta)\bigr|^2\Bigr) \le 2\Bigl(32\kappa^2\gamma(\Delta)+128\kappa^2 H_2+\tilde c_2(l)\Delta^{(l-1)/l}\Bigr) + 2\Bigl(2\kappa^2\gamma(\Delta)+8\kappa^2 H_2+K(1+H_2)\Delta^2+D(l)\Delta^{(l-1)/l}\Bigr) =: 68\kappa^2\gamma(\Delta) + 272\kappa^2 H_2 + \tilde c_3(l)\Delta^{(l-1)/l}. \tag{3.48}$$

From these three subcases, we have

$$E\Bigl(\sup_{-\tau\le\theta\le 0,\,0\le s\le T,\,k_s+k_\theta\ge 0}\bigl|y(s+\theta)-y((k_s+k_\theta)\Delta)\bigr|^2\Bigr) \le 106\kappa^2\gamma(\Delta) + 424\kappa^2 H_2 + \bigl(\tilde c_1(l)+\tilde c_2(l)+\tilde c_3(l)\bigr)\Delta^{(l-1)/l}. \tag{3.49}$$

Case 2 ($k_s+k_\theta=-1$ and $0\le s+\theta\le\Delta$). In this case, applying Assumption 2.5 and Case 1, we have

$$E\Bigl(\sup_{-\tau\le\theta\le 0,\,0\le s\le T}\bigl|y(s+\theta)-y((k_s+k_\theta)\Delta)\bigr|^2\Bigr) \le 2E\Bigl(\sup_{-\tau\le\theta\le 0,\,0\le s\le T}\bigl|y(s+\theta)-y(0)\bigr|^2\Bigr) + 2E\bigl|y(0)-y(-\Delta)\bigr|^2 \le 212\kappa^2\gamma(\Delta) + 848\kappa^2 H_2 + 2\bigl(\tilde c_1(l)+\tilde c_2(l)+\tilde c_3(l)\bigr)\Delta^{(l-1)/l} + 2\alpha(\Delta). \tag{3.50}$$

Case 3 ($k_s+k_\theta=-1$ and $-\Delta\le s+\theta\le 0$). In this case, using Assumption 2.5,

$$E\Bigl(\sup_{-\tau\le\theta\le 0,\,0\le s\le T}\bigl|y(s+\theta)-y((k_s+k_\theta)\Delta)\bigr|^2\Bigr) \le \alpha(\Delta). \tag{3.51}$$

Case 4 ($k_s+k_\theta\le -2$). In this case $s+\theta\le 0$, so, using Assumption 2.5 and (3.31),

$$E\Bigl(\sup_{-\tau\le\theta\le 0,\,0\le s\le T}\bigl|y(s+\theta)-y((k_s+k_\theta)\Delta)\bigr|^2\Bigr) \le \alpha(2\Delta). \tag{3.52}$$

Substituting these four cases into (3.34) and noting the expression for $\gamma(\Delta)$, we conclude that there exist constants $\bar c_2$, $\hat c_2$, and $c_2(l)$ such that

$$E\Bigl(\sup_{0\le s\le T}\|y_s-\bar y_s\|^2\Bigr) \le \bar c_2 + \hat c_2\,\alpha(2\Delta) + c_2(l)\Delta^{(l-1)/l}. \tag{3.53}$$

The proof is complete.

Remark 3.4. It should be pointed out that much simpler proofs of Lemmas 3.2 and 3.3 can be obtained by choosing $l=2$ if we only want to prove Theorem 2.7. However, here we choose general $l>1$ so as to control the stochastic terms in $\beta(\Delta)$ and $\gamma(\Delta)$ by $\Delta^{(l-1)/l}$, which will be used in Section 5 to show the order of the strong convergence.

Lemma 3.5. If Assumption 2.4 holds,

$$E\Bigl(\sup_{0\le t\le T}\bigl|u(\bar y_t,r(t))-u(\bar y_t,\bar r(t))\bigr|^2\Bigr) \le 8\kappa^2 H_2 L\Delta =: \zeta(\Delta), \tag{3.54}$$

where $L$ is a positive constant independent of $\Delta$.

Proof. Let $n=[T/\Delta]$, the integer part of $T/\Delta$, set $t_k=k\Delta$, and let $1_G$ be the indicator function of a set $G$. Then, by Assumption 2.4 and Lemma 3.1, we obtain

$$E\Bigl(\sup_{0\le t\le T}\bigl|u(\bar y_t,r(t))-u(\bar y_t,\bar r(t))\bigr|^2\Bigr) \le \max_{0\le k\le n}E\Bigl(\sup_{s\in[t_k,t_{k+1})}\bigl|u(\bar y_s,r(s))-u(\bar y_s,\bar r(s))\bigr|^2\Bigr) \le 2\max_{0\le k\le n}E\Bigl(\sup_{s\in[t_k,t_{k+1})}\bigl|u(\bar y_s,r(s))-u(\bar y_s,r(t_k))\bigr|^2\,1_{\{r(s)\ne r(t_k)\}}\Bigr) \le 4\max_{0\le k\le n}E\Bigl(\sup_{s\in[t_k,t_{k+1})}\bigl(|u(\bar y_s,r(s))|^2+|u(\bar y_s,r(t_k))|^2\bigr)1_{\{r(s)\ne r(t_k)\}}\Bigr) \le 8\kappa^2\max_{0\le k\le n}E\Bigl(\Bigl(\sup_{0\le t\le T}\|\bar y_t\|^2\Bigr)1_{\{r(s)\ne r(t_k)\}}\Bigr)$$

$$\le 8\kappa^2 H_2\,E\bigl(1_{\{r(s)\ne r(t_k)\}}\bigr) = 8\kappa^2 H_2\,E\Bigl(E\bigl[1_{\{r(s)\ne r(t_k)\}} \mid r(t_k)\bigr]\Bigr), \tag{3.55}$$

where in the last step we use the fact that $\bar y_{t_k}$ and $1_{\{r(s)\ne r(t_k)\}}$ are conditionally independent with respect to the $\sigma$-algebra generated by $r(t_k)$. But, by the Markov property,

$$E\bigl[1_{\{r(s)\ne r(t_k)\}} \mid r(t_k)\bigr] = \sum_{i\in S}1_{\{r(t_k)=i\}}P\bigl(r(s)\ne i \mid r(t_k)=i\bigr) = \sum_{i\in S}1_{\{r(t_k)=i\}}\sum_{j\ne i}\bigl(\gamma_{ij}(s-t_k)+o(s-t_k)\bigr) \le \sum_{i\in S}1_{\{r(t_k)=i\}}\Bigl(\max_{1\le i\le N}(-\gamma_{ii})\Delta+o(\Delta)\Bigr) \le L\Delta, \tag{3.56}$$

where $L$ is a positive constant independent of $\Delta$. Then

$$E\Bigl(\sup_{0\le t\le T}\bigl|u(\bar y_t,r(t))-u(\bar y_t,\bar r(t))\bigr|^2\Bigr) \le 8\kappa^2 H_2 L\Delta. \tag{3.57}$$

The proof is complete.

Lemma 3.6. If Assumption 2.3 holds, there is a constant $C$, independent of $\Delta$, such that

$$E\int_0^T\bigl|f(\bar y_s,r(s))-f(\bar y_s,\bar r(s))\bigr|^2\,ds \le C\Delta, \tag{3.58}$$

$$E\int_0^T\bigl|g(\bar y_s,r(s))-g(\bar y_s,\bar r(s))\bigr|^2\,ds \le C\Delta. \tag{3.59}$$

Proof. Let $n=[T/\Delta]$, the integer part of $T/\Delta$. Then

$$E\int_0^T\bigl|f(\bar y_s,r(s))-f(\bar y_s,\bar r(s))\bigr|^2\,ds = \sum_{k=0}^{n}E\int_{t_k}^{t_{k+1}}\bigl|f(\bar y_{t_k},r(s))-f(\bar y_{t_k},r(t_k))\bigr|^2\,ds, \tag{3.60}$$

with $t_{n+1}:=T$. Let $1_G$ be the indicator function of a set $G$. Moreover, in what follows $C$ denotes a generic positive constant independent of $\Delta$, whose values may vary from line to line. With these notations we derive, using Assumption 2.3, that

$$E\int_{t_k}^{t_{k+1}}\bigl|f(\bar y_{t_k},r(s))-f(\bar y_{t_k},r(t_k))\bigr|^2\,ds \le 2E\int_{t_k}^{t_{k+1}}\bigl(|f(\bar y_{t_k},r(s))|^2+|f(\bar y_{t_k},r(t_k))|^2\bigr)1_{\{r(s)\ne r(t_k)\}}\,ds \le CE\int_{t_k}^{t_{k+1}}\bigl(1+\|\bar y_{t_k}\|^2\bigr)1_{\{r(s)\ne r(t_k)\}}\,ds \le C\int_{t_k}^{t_{k+1}}E\Bigl(E\bigl[(1+\|\bar y_{t_k}\|^2)1_{\{r(s)\ne r(t_k)\}} \mid r(t_k)\bigr]\Bigr)ds \le C\int_{t_k}^{t_{k+1}}E\Bigl(E\bigl[1+\|\bar y_{t_k}\|^2 \mid r(t_k)\bigr]\,E\bigl[1_{\{r(s)\ne r(t_k)\}} \mid r(t_k)\bigr]\Bigr)ds, \tag{3.61}$$

where in the last step we use the fact that $\bar y_{t_k}$ and $1_{\{r(s)\ne r(t_k)\}}$ are conditionally independent with respect to the $\sigma$-algebra generated by $r(t_k)$. But, by the Markov property,

$$E\bigl[1_{\{r(s)\ne r(t_k)\}} \mid r(t_k)\bigr] = \sum_{i\in S}1_{\{r(t_k)=i\}}P\bigl(r(s)\ne i \mid r(t_k)=i\bigr) = \sum_{i\in S}1_{\{r(t_k)=i\}}\sum_{j\ne i}\bigl(\gamma_{ij}(s-t_k)+o(s-t_k)\bigr) \le \sum_{i\in S}1_{\{r(t_k)=i\}}\Bigl(\max_{1\le i\le N}(-\gamma_{ii})\Delta+o(\Delta)\Bigr) \le C\Delta. \tag{3.62}$$

So, by Lemma 3.1,

$$E\int_{t_k}^{t_{k+1}}\bigl|f(\bar y_{t_k},r(s))-f(\bar y_{t_k},r(t_k))\bigr|^2\,ds \le C\Delta\int_{t_k}^{t_{k+1}}\bigl(1+E\|\bar y_{t_k}\|^2\bigr)ds \le C\Delta^2. \tag{3.63}$$

Therefore, summing (3.63) over $k=0,\dots,n$ and noting $(n+1)\Delta\le T+\Delta$,

$$E\int_0^T\bigl|f(\bar y_s,r(s))-f(\bar y_s,\bar r(s))\bigr|^2\,ds \le C\Delta. \tag{3.64}$$

Similarly, we can show (3.59).
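Lemmas 3.5 and 3.6 both rest on the same one-step switching estimate (3.56)/(3.62): the probability that the chain leaves its current state within one step is at most $L\Delta$ with $L=\max_i(-\gamma_{ii})$, up to $o(\Delta)$. Since $P_{ii}(\Delta)\ge e^{\gamma_{ii}\Delta}\ge 1+\gamma_{ii}\Delta$, the bound in fact holds without the $o(\Delta)$ slack, and it is easy to confirm numerically. The generator below is an illustrative choice of ours, not from the paper.

```python
import numpy as np

def matrix_exp(A, terms=40):
    """e^A by truncated Taylor series (fine for the small Delta*Gamma used here)."""
    E = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        E = E + term
    return E

Gamma = np.array([[-1.0, 1.0],
                  [2.0, -2.0]])                    # illustrative generator
L = max(-Gamma[i, i] for i in range(len(Gamma)))   # candidate L = max_i(-gamma_ii)

# (1 - P_ii(Delta)) / Delta is the per-step switching probability per unit
# stepsize; it should never exceed L, for any of the stepsizes tried.
ratios = []
for delta in (0.1, 0.01, 0.001):
    P = matrix_exp(delta * Gamma)
    ratios.extend((1.0 - P[i, i]) / delta for i in range(len(Gamma)))
```

As $\Delta\to 0$ the worst ratio approaches $L$ from below, which matches the first-order expansion in (2.1).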

4. Convergence of the EM Method

In this section we use the lemmas above to prove Theorem 2.7. From Lemma 3.1 and Theorem 2.6, there exists a positive constant $\tilde H$ such that

$$E\Bigl(\sup_{-\tau\le t\le T}|x(t)|^p\Bigr) \vee E\Bigl(\sup_{-\tau\le t\le T}|y(t)|^p\Bigr) \le \tilde H. \tag{4.1}$$

Let $j$ be a sufficiently large integer. Define the stopping times

$$u_j := \inf\{t\ge 0 : \|x_t\|\ge j\}, \qquad v_j := \inf\{t\ge 0 : \|y_t\|\ge j\}, \qquad \rho_j := u_j\wedge v_j, \tag{4.2}$$

where, as usual, we set $\inf\emptyset=\infty$. Let

$$e(t) := x(t)-y(t). \tag{4.3}$$

Obviously,

$$E\Bigl(\sup_{0\le t\le T}|e(t)|^2\Bigr) = E\Bigl(\sup_{0\le t\le T}|e(t)|^2\,1_{\{u_j>T,\,v_j>T\}}\Bigr) + E\Bigl(\sup_{0\le t\le T}|e(t)|^2\,1_{\{u_j\le T \text{ or } v_j\le T\}}\Bigr). \tag{4.4}$$

Recall the following elementary inequality:

$$a^{\gamma}b^{1-\gamma} \le \gamma a + (1-\gamma)b, \qquad \forall a,b>0,\ \gamma\in(0,1). \tag{4.5}$$

We thus have, for any $\delta>0$,

$$E\Bigl(\sup_{0\le t\le T}|e(t)|^2\,1_{\{u_j\le T\text{ or }v_j\le T\}}\Bigr) \le E\Bigl[\Bigl(\delta\sup_{0\le t\le T}|e(t)|^p\Bigr)^{2/p}\Bigl(\delta^{-2/(p-2)}1_{\{u_j\le T\text{ or }v_j\le T\}}\Bigr)^{(p-2)/p}\Bigr] \le \frac{2\delta}{p}E\Bigl(\sup_{0\le t\le T}|e(t)|^p\Bigr) + \frac{p-2}{p\,\delta^{2/(p-2)}}P\bigl(u_j\le T\text{ or }v_j\le T\bigr). \tag{4.6}$$

Hence

$$E\Bigl(\sup_{0\le t\le T}|e(t)|^2\Bigr) \le E\Bigl(\sup_{0\le t\le T}|e(t)|^2\,1_{\{\rho_j>T\}}\Bigr) + \frac{2\delta}{p}E\Bigl(\sup_{0\le t\le T}|e(t)|^p\Bigr) + \frac{p-2}{p\,\delta^{2/(p-2)}}P\bigl(u_j\le T\text{ or }v_j\le T\bigr). \tag{4.7}$$

Now,

$$P\bigl(u_j\le T\bigr) \le E\Bigl(1_{\{u_j\le T\}}\frac{\|x_{u_j}\|^p}{j^p}\Bigr) \le \frac{1}{j^p}E\Bigl(\sup_{-\tau\le t\le T}|x(t)|^p\Bigr) \le \frac{\tilde H}{j^p}. \tag{4.8}$$

Similarly,

$$P\bigl(v_j\le T\bigr) \le \frac{\tilde H}{j^p}. \tag{4.9}$$

Thus

$$P\bigl(u_j\le T\text{ or }v_j\le T\bigr) \le P\bigl(u_j\le T\bigr) + P\bigl(v_j\le T\bigr) \le \frac{2\tilde H}{j^p}. \tag{4.10}$$

We also have

$$E\Bigl(\sup_{0\le t\le T}|e(t)|^p\Bigr) \le 2^{p-1}E\Bigl(\sup_{0\le t\le T}\bigl(|x(t)|^p+|y(t)|^p\bigr)\Bigr) \le 2^p\tilde H. \tag{4.11}$$

Moreover,

$$E\Bigl(\sup_{0\le t\le T}|e(t)|^2\,1_{\{\rho_j>T\}}\Bigr) = E\Bigl(\sup_{0\le t\le T}\bigl|e(t\wedge\rho_j)\bigr|^2\,1_{\{\rho_j>T\}}\Bigr) \le E\Bigl(\sup_{0\le t\le T}\bigl|e(t\wedge\rho_j)\bigr|^2\Bigr). \tag{4.12}$$

Using these bounds in (4.7) yields

$$E\Bigl(\sup_{0\le t\le T}|e(t)|^2\Bigr) \le E\Bigl(\sup_{0\le t\le T}\bigl|e(t\wedge\rho_j)\bigr|^2\Bigr) + \frac{2^{p+1}\delta\tilde H}{p} + \frac{2(p-2)\tilde H}{p\,\delta^{2/(p-2)}\,j^p}. \tag{4.13}$$

Setting $v:=t\wedge\rho_j$, for any $\varepsilon\in(0,1)$, by the elementary inequality $(x+y)^2\le x^2/\varepsilon+y^2/(1-\varepsilon)$ and the Hölder inequality, we have for $v\in[k\Delta,(k+1)\Delta)$, $k=0,1,2,\dots,M-1$,

$$|e(v)|^2 = |x(v)-y(v)|^2 \le \frac{1}{\varepsilon}\Bigl|u\bigl(x_v,r(v)\bigr)-u\Bigl(\bar y_{(k-1)\Delta}+\frac{v-k\Delta}{\Delta}(\bar y_{k\Delta}-\bar y_{(k-1)\Delta}),\,r_k^\Delta\Bigr)\Bigr|^2 + \frac{2}{1-\varepsilon}\Bigl(T\int_0^v\bigl|f(x_s,r(s))-f(\bar y_s,\bar r(s))\bigr|^2ds + \Bigl|\int_0^v\bigl(g(x_s,r(s))-g(\bar y_s,\bar r(s))\bigr)dw(s)\Bigr|^2\Bigr). \tag{4.14}$$

Assumption 2.4 yields

$$\Bigl|u\bigl(x_v,r(v)\bigr)-u\Bigl(\bar y_{(k-1)\Delta}+\frac{v-k\Delta}{\Delta}(\bar y_{k\Delta}-\bar y_{(k-1)\Delta}),\,r_k^\Delta\Bigr)\Bigr|^2 \le 2\kappa^2\Bigl\|x_v-\bar y_{(k-1)\Delta}-\frac{v-k\Delta}{\Delta}(\bar y_{k\Delta}-\bar y_{(k-1)\Delta})\Bigr\|^2 + 2\Bigl|u\Bigl(\bar y_{(k-1)\Delta}+\frac{v-k\Delta}{\Delta}(\bar y_{k\Delta}-\bar y_{(k-1)\Delta}),\,r(v)\Bigr)-u\Bigl(\bar y_{(k-1)\Delta}+\frac{v-k\Delta}{\Delta}(\bar y_{k\Delta}-\bar y_{(k-1)\Delta}),\,r_k^\Delta\Bigr)\Bigr|^2 \le \frac{2\kappa^2}{\varepsilon}\|x_v-y_v\|^2 + \frac{4\kappa^2}{1-\varepsilon}\bigl(\|y_v-\bar y_v\|^2+\|\bar y_{k\Delta}-\bar y_{(k-1)\Delta}\|^2\bigr) + 2\Bigl|u\Bigl(\bar y_{(k-1)\Delta}+\frac{v-k\Delta}{\Delta}(\bar y_{k\Delta}-\bar y_{(k-1)\Delta}),\,r(v)\Bigr)-u\Bigl(\bar y_{(k-1)\Delta}+\frac{v-k\Delta}{\Delta}(\bar y_{k\Delta}-\bar y_{(k-1)\Delta}),\,r_k^\Delta\Bigr)\Bigr|^2, \tag{4.15}$$

where we used $\bar y_v=\bar y_{k\Delta}$ and $\bigl\|\bar y_{k\Delta}-\bar y_{(k-1)\Delta}-\frac{v-k\Delta}{\Delta}(\bar y_{k\Delta}-\bar y_{(k-1)\Delta})\bigr\|\le\|\bar y_{k\Delta}-\bar y_{(k-1)\Delta}\|$.

Then we have

$$|e(v)|^2 \le \frac{2\kappa^2}{\varepsilon^2}\|x_v-y_v\|^2 + \frac{4\kappa^2}{\varepsilon(1-\varepsilon)}\bigl(\|y_v-\bar y_v\|^2+\|\bar y_{k\Delta}-\bar y_{(k-1)\Delta}\|^2\bigr) + \frac{2}{\varepsilon}\Bigl|u\Bigl(\bar y_{(k-1)\Delta}+\frac{v-k\Delta}{\Delta}(\bar y_{k\Delta}-\bar y_{(k-1)\Delta}),\,r(v)\Bigr)-u\Bigl(\bar y_{(k-1)\Delta}+\frac{v-k\Delta}{\Delta}(\bar y_{k\Delta}-\bar y_{(k-1)\Delta}),\,r_k^\Delta\Bigr)\Bigr|^2 + \frac{2}{1-\varepsilon}\Bigl(T\int_0^v\bigl|f(x_s,r(s))-f(\bar y_s,\bar r(s))\bigr|^2ds + \Bigl|\int_0^v\bigl(g(x_s,r(s))-g(\bar y_s,\bar r(s))\bigr)dw(s)\Bigr|^2\Bigr). \tag{4.16}$$

Hence, for any $t_1\in[0,T]$, by Lemmas 3.2–3.5,

$$E\Bigl(\sup_{0\le t\le t_1}\bigl|e(t\wedge\rho_j)\bigr|^2\Bigr) \le \frac{2\kappa^2}{\varepsilon^2}E\Bigl(\sup_{0\le t\le t_1}\|x_{t\wedge\rho_j}-y_{t\wedge\rho_j}\|^2\Bigr) + \frac{4\kappa^2}{\varepsilon(1-\varepsilon)}\bigl(\beta(\Delta)+\gamma(\Delta)\bigr) + \frac{2}{\varepsilon}\zeta(\Delta) + \frac{2T}{1-\varepsilon}E\int_0^{t_1\wedge\rho_j}\bigl|f(x_s,r(s))-f(\bar y_s,\bar r(s))\bigr|^2ds + \frac{2}{1-\varepsilon}E\Bigl(\sup_{0\le t\le t_1}\Bigl|\int_0^{t\wedge\rho_j}\bigl(g(x_s,r(s))-g(\bar y_s,\bar r(s))\bigr)dw(s)\Bigr|^2\Bigr). \tag{4.17}$$

Since $x(t)=y(t)=\xi(t)$ when $t\in[-\tau,0]$, we have

$$E\Bigl(\sup_{0\le t\le t_1}\|x_{t\wedge\rho_j}-y_{t\wedge\rho_j}\|^2\Bigr) \le E\Bigl(\sup_{-\tau\le\theta\le 0}\,\sup_{0\le t\le t_1}\bigl|x((t\wedge\rho_j)+\theta)-y((t\wedge\rho_j)+\theta)\bigr|^2\Bigr) \le E\Bigl(\sup_{-\tau\le t\le t_1}\bigl|x(t\wedge\rho_j)-y(t\wedge\rho_j)\bigr|^2\Bigr) = E\Bigl(\sup_{0\le t\le t_1}\bigl|e(t\wedge\rho_j)\bigr|^2\Bigr). \tag{4.18}$$

By Assumption 2.2, Lemma 3.3, and Lemma 3.6, we may compute

$$E\int_0^{t_1\wedge\rho_j}\bigl|f(x_s,r(s))-f(\bar y_s,\bar r(s))\bigr|^2ds \le E\int_0^{t_1\wedge\rho_j}\Bigl|f(x_s,r(s))-f(\bar y_s,r(s)) + f(\bar y_s,r(s))-f(\bar y_s,\bar r(s))\Bigr|^2ds \le 2C_jE\int_0^{t_1\wedge\rho_j}\|x_s-\bar y_s\|^2ds + 2C\Delta \le 4C_jE\int_0^{t_1\wedge\rho_j}\|x_s-y_s\|^2ds + 4C_jE\int_0^{t_1\wedge\rho_j}\|y_s-\bar y_s\|^2ds + 2C\Delta \le 4C_j\int_0^{t_1}E\Bigl(\sup_{0\le r\le s}\bigl|x(r\wedge\rho_j)-y(r\wedge\rho_j)\bigr|^2\Bigr)ds + 4C_jT\beta(\Delta) + 2C\Delta. \tag{4.19}$$

By the Doob martingale inequality, Lemma 3.3, Lemma 3.6, and Assumption 2.2, we similarly compute

$$E\Bigl(\sup_{0\le t\le t_1}\Bigl|\int_0^{t\wedge\rho_j}\bigl(g(x_s,r(s))-g(\bar y_s,\bar r(s))\bigr)dw(s)\Bigr|^2\Bigr) = E\Bigl(\sup_{0\le t\le t_1}\Bigl|\int_0^{t\wedge\rho_j}\Bigl(g(x_s,r(s))-g(\bar y_s,r(s)) + g(\bar y_s,r(s))-g(\bar y_s,\bar r(s))\Bigr)dw(s)\Bigr|^2\Bigr) \le 8C_jE\int_0^{t_1\wedge\rho_j}\|x_s-\bar y_s\|^2ds + 8C\Delta \le 16C_j\int_0^{t_1}E\Bigl(\sup_{0\le r\le s}\bigl|x(r\wedge\rho_j)-y(r\wedge\rho_j)\bigr|^2\Bigr)ds + 16C_jT\beta(\Delta) + 8C\Delta. \tag{4.20}$$

Therefore, substituting (4.18), (4.19), and (4.20) into (4.17), (4.17) can be written as

$$\Bigl(1-\frac{2\kappa^2}{\varepsilon^2}\Bigr)E\Bigl(\sup_{0\le t\le t_1}\bigl|e(t\wedge\rho_j)\bigr|^2\Bigr) \le \frac{4\kappa^2}{\varepsilon(1-\varepsilon)}\bigl(\beta(\Delta)+\gamma(\Delta)\bigr) + \frac{2}{\varepsilon}\zeta(\Delta) + \frac{8C_jT(T+4)}{1-\varepsilon}\beta(\Delta) + \frac{4C(T+4)\Delta}{1-\varepsilon} + \frac{8C_j(T+4)}{1-\varepsilon}\int_0^{t_1}E\Bigl(\sup_{0\le v\le s}\bigl|e(v\wedge\rho_j)\bigr|^2\Bigr)ds. \tag{4.21}$$

Choosing $\varepsilon=(1+\sqrt{2}\kappa)/2$ and noting $\kappa\in(0,1)$, so that $1-2\kappa^2/\varepsilon^2 = (1-\sqrt{2}\kappa)(1+3\sqrt{2}\kappa)/(1+\sqrt{2}\kappa)^2 > 0$, we have

$$E\Bigl(\sup_{0\le t\le t_1}\bigl|e(t\wedge\rho_j)\bigr|^2\Bigr) \le \frac{16C_jT(T+4)}{(1-\sqrt{2}\kappa)^2}\beta(\Delta) + \frac{16}{(1-\sqrt{2}\kappa)^2}\bigl(\beta(\Delta)+\gamma(\Delta)\bigr) + \frac{8C(T+4)\Delta}{(1-\sqrt{2}\kappa)^2} + \frac{4\zeta(\Delta)}{1-\sqrt{2}\kappa} + \frac{16C_j(T+4)}{(1-\sqrt{2}\kappa)^2}\int_0^{t_1}E\Bigl(\sup_{0\le s\le v}\bigl|e(s\wedge\rho_j)\bigr|^2\Bigr)dv. \tag{4.22}$$

By the Gronwall inequality, we have

$$E\Bigl(\sup_{0\le t\le t_1}\bigl|e(t\wedge\rho_j)\bigr|^2\Bigr) \le \Bigl[\frac{16C_jT(T+4)}{(1-\sqrt{2}\kappa)^2}\beta(\Delta) + \frac{16}{(1-\sqrt{2}\kappa)^2}\bigl(\beta(\Delta)+\gamma(\Delta)\bigr) + \frac{8C(T+4)\Delta}{(1-\sqrt{2}\kappa)^2} + \frac{4\zeta(\Delta)}{1-\sqrt{2}\kappa}\Bigr]\,e^{16C_jT(T+4)/(1-\sqrt{2}\kappa)^2}. \tag{4.23}$$

By (4.13),

$$E\Bigl(\sup_{0\le t\le T}|e(t)|^2\Bigr) \le \Bigl[\frac{16C_jT(T+4)}{(1-\sqrt{2}\kappa)^2}\beta(\Delta) + \frac{16}{(1-\sqrt{2}\kappa)^2}\bigl(\beta(\Delta)+\gamma(\Delta)\bigr) + \frac{8C(T+4)\Delta}{(1-\sqrt{2}\kappa)^2} + \frac{4\zeta(\Delta)}{1-\sqrt{2}\kappa}\Bigr]e^{16C_jT(T+4)/(1-\sqrt{2}\kappa)^2} + \frac{2^{p+1}\delta\tilde H}{p} + \frac{2(p-2)\tilde H}{p\,\delta^{2/(p-2)}\,j^p}. \tag{4.24}$$

Given any $\epsilon>0$, we can now choose $\delta$ sufficiently small such that $2^{p+1}\delta\tilde H/p \le \epsilon/3$, then choose $j$ sufficiently large such that $2(p-2)\tilde H/(p\,\delta^{2/(p-2)}j^p) \le \epsilon/3$, and finally choose $\Delta$ sufficiently small such that the first term on the right-hand side of (4.24) is also at most $\epsilon/3$. Then $E(\sup_{0\le t\le T}|e(t)|^2) \le \epsilon$, and since $\epsilon$ is arbitrary, the assertion (2.22) of Theorem 2.7 follows.