Open Journal of Discrete Mathematics, 2011, 1, 66-84 doi:10.4236/ojdm.2011.12009 Published Online July 2011 (http://www.SciRP.org/journal/ojdm)
Mean Square Numerical Methods for Initial Value Random Differential Equations

Magdy A. El-Tawil1*, Mohammed A. Sohaly2
1 Engineering Mathematics Department, Faculty of Engineering, Cairo University, Giza, Egypt
2 Mathematics Department, Faculty of Science, Mansoura University, Mansoura, Egypt
E-mail: [email protected], [email protected]
Received April 1, 2011; revised April 30, 2011; accepted May 9, 2011
Abstract
In this paper, the random Euler and random Runge-Kutta (second order) methods are used to solve first order random differential initial value problems. The conditions for mean square convergence of the numerical solutions are studied, and the statistical properties of the numerical solutions are computed through numerical case studies.

Keywords: Random Differential Equations, Mean Square Sense, Second Order Random Variable, Initial Value Problems, Random Euler Method, Random Runge-Kutta-2 Method
1. Introduction
Random differential equations (RDEs) are differential equations involving random inputs. In recent years, increasing interest in the numerical solution of RDEs has led to the progressive development of several numerical methods. This paper studies the following random differential initial value problem (RIVP) of first order:

  dX(t)/dt = f(t, X(t)),   X(t_0) = X_0.                         (1.1)

Randomness may exist in the initial value, in the differential operator, or in both. In [1,2], the authors discussed the general order conditions and gave a global convergence proof for stochastic Runge-Kutta methods applied to stochastic ordinary differential equations (SODEs) of Stratonovich type. In [3,4], the authors discussed the random Euler method and the conditions for its mean square convergence. In [5], the authors considered a very simple adaptive algorithm based on controlling only the drift component of a time step. Platen [6] discussed discrete-time strong and weak approximation methods suitable for different applications. Other numerical methods are discussed in [7-12]. In this paper the random Euler and random Runge-Kutta (second order) methods are used to obtain an approximate solution of Equation (1.1). This paper is
organized as follows. In Section 2, some important preliminaries are discussed. In Section 3, the existence and uniqueness of the solution of the random differential initial value problem is established, together with the convergence of the random Euler and random Runge-Kutta (second order) methods. In Section 4, the statistical properties of the exact and numerical solutions are studied. Section 5 presents the solution of some numerical examples of first order random differential equations using the random Euler and random Runge-Kutta (second order) methods, showing the convergence of the numerical solutions to the exact ones (where possible). The general conclusions are presented in the final section.
2. Preliminaries

2.1. Mean Square Calculus [13]
Definition 1. Consider a class of real random variables X_1, X_2, ..., X_n whose second moments E[X_1^2], E[X_2^2], ... are finite. In this case they are called "second order random variables" (2-r.v.'s).
Definition 2. The linear vector space of second order random variables, with inner product, norm and distance, is called an L_2-space. A stochastic process {X(t), t ∈ T} is called a "second order stochastic process" (2-s.p.) if, for every t_1, t_2, ..., t_n, the r.v.'s X(t_1), X(t_2), ..., X(t_n) are elements of the L_2-space.
A second order s.p. {X(t), t ∈ T} is characterized by E[X(t)^2] < ∞, i.e.

  ||X(t)|| = (E[X(t)^2])^{1/2} < ∞,   t ∈ T.

2.1.1. The Convergence in Mean Square
A sequence of r.v.'s {X_n} converges in mean square (m.s.) to a random variable X, written X_n → X in m.s., if

  lim_{n→∞} ||X_n − X|| = 0,

i.e. X = l.i.m._{n→∞} X_n, where l.i.m. denotes the limit in the mean square sense.

2.1.2. Mean Square Differentiability
The random process X(t) is mean square differentiable at t if lim_{h→0} (X(t+h) − X(t))/h exists in the m.s. sense; it is denoted by

  X'(t) = l.i.m._{h→0} (X(t+h) − X(t))/h.
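For illustration (this sketch is not part of the original paper), the m.s. norm ||X|| = (E[X^2])^{1/2}, and hence m.s. convergence, can be checked numerically by Monte Carlo sampling. The following minimal Python sketch assumes NumPy; the test sequence X_n = X + Z/n is an arbitrary choice used only to show that the estimated m.s. distance ||X_n − X|| decays like 1/n.

```python
import numpy as np

def ms_norm(samples):
    """Monte Carlo estimate of the mean square norm ||X|| = sqrt(E[X^2])."""
    return np.sqrt(np.mean(samples ** 2))

rng = np.random.default_rng(0)
m = 100_000                      # number of Monte Carlo samples
X = rng.normal(size=m)           # target 2-r.v. X
Z = rng.normal(size=m)           # perturbation

for n in (1, 10, 100, 1000):
    X_n = X + Z / n              # a sequence that converges to X in m.s.
    print(n, ms_norm(X_n - X))   # estimated ||X_n - X|| is roughly 1/n
```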
3. Random Initial Value Problem (RIVP)

3.1. Existence and Uniqueness
Let us have the random initial value problem

  dX(t)/dt = b(t, X(t)),   t ∈ T = [t_0, t],   X(t_0) = X_0,        (3.1)

where X(t) is a second order random process. This equation is equivalent to the integral equation

  X(t) = X_0 + ∫_{t_0}^{t} b(s, X(s)) ds.                           (3.2)

Theorem (3.1.1). Consider the random initial value problem (3.1) and suppose the right-hand side function b(t, X) is continuous and satisfies a mean square (m.s.) Lipschitz condition in its second argument:

  ||b(t, X) − b(t, Y)|| ≤ c ||X − Y||                               (3.3)

where c is a constant, or

  ||b(t, X) − b(t, Y)|| ≤ c(t) ||X − Y|| ≤ c ||X − Y||              (3.4)

where c(t) is a continuous function (in every finite interval c(t) is bounded by a constant). Then the solution of Equation (3.1) exists and is unique.

The proof. Existence can be proved by successive approximations. Let

  X_t^0 = X_0,                                                      (3.5)

and for n ≥ 1

  X_t^n = X_0 + ∫_{t_0}^{t} b(s, X_s^{n-1}) ds.                     (3.6)

For n = 1 we obtain

  ||X_t^1 − X_t^0|| = ||∫_{t_0}^{t} b(s, X_0) ds|| ≤ k (t − t_0),    (3.7)

where ||b(t, X_0)|| ≤ k. For n > 1 we obtain

  ||X_t^n − X_t^{n-1}|| = ||∫_{t_0}^{t} [b(s, X_s^{n-1}) − b(s, X_s^{n-2})] ds||
                        ≤ ∫_{t_0}^{t} c ||X_s^{n-1} − X_s^{n-2}|| ds.       (3.8)

Successively, we can obtain the following:

  ||X_t^n − X_t^{n-1}|| ≤ ∫_{t_0}^{t} c ||X_s^{n-1} − X_s^{n-2}|| ds
                        ≤ ∫_{t_0}^{t} ∫_{t_0}^{s} c^2 ||X_u^{n-2} − X_u^{n-3}|| du ds
                        ≤ ...
                        ≤ k c^{n-1} (t − t_0)^n / n!                        (3.9)

Since

  Σ_{n≥1} k c^{n-1} (t − t_0)^n / n! = (k/c) (e^{c(t − t_0)} − 1)            (3.10)

is convergent for finite t, we have

  ||X_t^1 − X_t^0|| + ||X_t^2 − X_t^1|| + ... + ||X_t^n − X_t^{n-1}|| + ...
    ≤ k(t − t_0)/1! + k c (t − t_0)^2/2! + ... + k c^{n-1}(t − t_0)^n/n! + ...
    ≤ (k/c)(e^{c(t − t_0)} − 1).                                            (3.11)

Accordingly, the series X_t^0 + (X_t^1 − X_t^0) + (X_t^2 − X_t^1) + ... converges in mean square, so lim_{n→∞} X_t^n exists, i.e.

  X(t) = l.i.m._{n→∞} X_t^n,                                                (3.12)

since X_t^n is the general solution of Equation (3.6) and X(t) is the general solution of Equation (3.2).

To prove the uniqueness of the solution, let X(t) be a solution of the initial value problem (3.1), or, which is the same, of the integral Equation (3.2), and let Y(t) be the solution of

  dY(t)/dt = b(t, Y(t)),   t ∈ T = [t_0, t],   Y(t_0) = Y_0.                (3.13)

We want to prove that

  X(t) = Y(t).                                                              (3.14)

By subtracting (3.2) and the corresponding integral equation for Y(t),

  X(t) − Y(t) = X_0 − Y_0 + ∫_{t_0}^{t} [b(s, X(s)) − b(s, Y(s))] ds.       (3.15)

Since X_0 = Y_0, then

  ||X(t) − Y(t)|| ≤ ∫_{t_0}^{t} c ||X(s) − Y(s)|| ds,                        (3.16)

i.e.

  U(t) ≤ c ∫_{t_0}^{t} U(s) ds,                                              (3.17)

where U(t) = ||X(t) − Y(t)||. From Equation (3.17), bounding the integrand by its maximum over [t_0, t], we have

  U(t) ≤ c (t − t_0) max_{t_0 ≤ s ≤ t} U(s),                                 (3.18)

and in particular, at a point where U attains its maximum,

  U(t) ≤ c (t − t_0) U(t).                                                   (3.19)

Note that at t = t_0 we obtain U(t_0) = 0. From (3.19), a nonzero U would require c to satisfy the condition

  c ≥ 1 / (t − t_0),                                                         (3.20)

which contradicts c being an independent free constant; hence the only solution of the integral Equation (3.17) is

  U(t) = 0.                                                                  (3.21)

Hence X(t) = Y(t), i.e., the solution of Equation (3.1) exists and is unique.
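The successive approximations (3.5)-(3.6) used in the existence proof can be imitated numerically, realization by realization. The following is a minimal illustrative Python sketch (not from the paper): it assumes NumPy, uses trapezoidal quadrature for the integral in (3.6), and the right-hand side b(t, x) = k x with a random exponential coefficient k is an arbitrary test choice whose exact solution is exp(k t).

```python
import numpy as np

def picard_iterates(b, x0, t, n_iter):
    """Successive approximations (3.5)-(3.6) on a fixed time grid t for one
    realization of the random data; integrals use the trapezoidal rule."""
    x = np.full_like(t, x0)                 # X^0(t) = X_0
    for _ in range(n_iter):
        integrand = b(t, x)
        # cumulative trapezoidal integral from t[0] to each grid point
        integral = np.concatenate(([0.0],
            np.cumsum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(t))))
        x = x0 + integral                   # X^n(t) = X_0 + int b(s, X^{n-1}(s)) ds
    return x

rng = np.random.default_rng(1)
k = rng.exponential(1.0)                    # illustrative random coefficient
t = np.linspace(0.0, 1.0, 201)
x = picard_iterates(lambda s, y: k * y, x0=1.0, t=t, n_iter=30)
print(np.max(np.abs(x - np.exp(k * t))))    # small: iterates approach exp(k t)
```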
3.2. The Convergence of the Euler Scheme for Random Differential Equations in the Mean Square Sense
Let us have the random differential equation

  dX(t)/dt = f(X(t), t),   t ∈ T = [t_0, t_1],   X(t_0) = X_0,        (3.22)

where X_0 is a random variable and the unknown X(t), as well as the right-hand side f(X, t), are stochastic processes defined on the same probability space.

Definitions [6,7]. Let g: T → L_2 be an m.s. bounded function and let h > 0. The "m.s. modulus of continuity of g" is the function

  W(g, h) = sup_{|t − t*| ≤ h} ||g(t) − g(t*)||,   t, t* ∈ T.

The function g is said to be m.s. uniformly continuous in T if

  lim_{h→0} W(g, h) = 0.

(Note that the limit depends only on h, because g is defined at every t, so we may write W(g, h) = W(h).) In problem (3.22) the convergence depends on the right-hand side f(X(t), t), so we apply the previous definition to f(X(t), t). Let f(X(t), t) be defined on S × T, where S is a bounded set in L_2. We say that f is "randomly bounded uniformly continuous" in S if

  lim_{h→0} W(f(x, ·), h) = 0   uniformly for x ∈ S.

(Note that W(f(X(·), ·), h) = W(h).)
3.2.1. Random Mean Value Theorem for Stochastic Processes
The aim of this section is to establish a relationship between the increment X(t) − X(t_0) of a 2-s.p. and its m.s. derivative X'(ξ) for some ξ lying in the interval [t_0, t], t > t_0. This result will be used in the next section to prove the convergence of the random Euler method.

Lemma (3.3.2) [6,7]. Let Y(t) be a 2-s.p., m.s. continuous on the interval T = [t_0, t]. Then there exists ξ ∈ [t_0, t] such that

  ∫_{t_0}^{t} Y(s) ds = Y(ξ) (t − t_0),   t_0 ≤ ξ ≤ t.                       (3.25)

The proof. Since Y(t) is m.s. continuous, the integral process ∫_{t_0}^{t} Y(s) ds is well defined and the correlation function Γ_Y(r, s) = E[Y(r) Y(s)] is a deterministic continuous function on T × T. For each fixed r, the function Γ_Y(r, s) is continuous in s, and by the classical mean value theorem for integrals it follows that

  ∫_{t_0}^{t} Γ_Y(r, s) ds = Γ_Y(r, ξ) (t − t_0),   ξ ∈ [t_0, t].            (3.26)

Note that, by the definition of Γ_Y(r, s), expression (3.26) can be written in the form

  ∫_{t_0}^{t} E[Y(r) Y(s)] ds = E[Y(r) Y(ξ)] (t − t_0).

We must prove that, for the value ξ satisfying (3.26),

  || ∫_{t_0}^{t} Y(s) ds − Y(ξ)(t − t_0) ||^2 = 0.                            (3.27)

The proof of (3.27). Expanding the square,

  E[ ( ∫_{t_0}^{t} Y(s) ds − Y(ξ)(t − t_0) )^2 ]
    = E[ ( ∫_{t_0}^{t} Y(s) ds )^2 ]
      − 2 E[ ( ∫_{t_0}^{t} Y(s) ds ) Y(ξ) ] (t − t_0)
      + E[ Y(ξ)^2 ] (t − t_0)^2.                                              (3.28)

Since

  E[ ( ∫_{t_0}^{t} Y(s) ds )^2 ] = ∫_{t_0}^{t} ∫_{t_0}^{t} E[Y(s) Y(r)] dr ds

and, by (3.26),

  ∫_{t_0}^{t} E[Y(r) Y(s)] ds = E[Y(r) Y(ξ)] (t − t_0),

substituting in (3.28) gives

  E[ ( ∫_{t_0}^{t} Y(s) ds − Y(ξ)(t − t_0) )^2 ]
    = ∫_{t_0}^{t} E[Y(s) Y(ξ)] ds (t − t_0) − 2 ∫_{t_0}^{t} E[Y(s) Y(ξ)] ds (t − t_0) + E[Y(ξ)^2](t − t_0)^2
    = − E[Y(ξ)^2](t − t_0)^2 + E[Y(ξ)^2](t − t_0)^2 = 0,                       (3.29)

i.e. || ∫_{t_0}^{t} Y(s) ds − Y(ξ)(t − t_0) ||^2 = 0, and we obtain

  ∫_{t_0}^{t} Y(s) ds = Y(ξ)(t − t_0).

Theorem (3.3.1) [6,7]. Let X(s) be a m.s. differentiable 2-s.p. in [t_0, t_1] and m.s. continuous in T = [t_0, t]. Then there exists ξ ∈ [t_0, t_1] such that

  X(t) − X(t_0) = X'(ξ)(t − t_0),   t_0 ≤ ξ ≤ t_1.

The proof. The result is a direct consequence of Lemma (3.3.2) applied to the 2-s.p. Y(t) = X'(t), together with

  ∫_{t_0}^{t} X'(s) ds = X(t) − X(t_0).                                        (3.30)

The proof of (3.30). Let X(t) be m.s. differentiable on T and let the ordinary function f(t, s) be continuous on T × T with a continuous partial derivative ∂f(t, s)/∂s. If

  Y(t) = ∫_{a}^{t} f(t, s) X'(s) ds,                                           (3.31)

then the integral formula

  Y(t) = [ f(t, s) X(s) ]_{s=a}^{s=t} − ∫_{a}^{t} (∂f(t, s)/∂s) X(s) ds         (3.32)

holds. Letting f(t, s) = 1 in Equations (3.31) and (3.32), we have the useful result that if X(t) is m.s. Riemann integrable on T, then

  ∫_{a}^{t} X'(s) ds = X(t) − X(a),   a, t ∈ T.

Then we have X(t) − X(t_0) = X'(ξ)(t − t_0).
3.2.2. The Convergence of the Random Euler Scheme
In this section we are interested in the mean square convergence, in the fixed station sense, of the random Euler method defined by

  X_{n+1} = X_n + h f(X_n, t_n),   X(t_0) = X_0,   n ≥ 0,                      (3.33)

where X_n and f(X_n, t_n) are 2-r.v.'s, h = t_{n+1} − t_n, t_n = t_0 + nh, and f: S × T → L_2, S ⊂ L_2, satisfies the following conditions:
C1: f(X, t) is randomly bounded uniformly continuous;
C2: f(X, t) satisfies the m.s. Lipschitz condition

  ||f(x, t) − f(y, t)|| ≤ k(t) ||x − y||,   where ∫_{t_0}^{t_1} k(t) dt < ∞.    (3.34)

Note that under hypotheses C1 and C2 we are interested in the m.s. convergence to zero of the error

  e_n = X_n − X(t_n),                                                           (3.35)

where X(t) is the theoretical solution 2-s.p. of problem (3.22) and t_n = t_0 + nh. Taking into account (3.22) and Theorem (3.3.1): from (3.22) we have, at t = ξ, X'(ξ) = f(X(ξ), ξ), and Theorem (3.3.1) with ξ ∈ [t_0, t_1] gives X(t) − X(t_0) = X'(ξ)(t − t_0), i.e.

  X(t) − X(t_0) = f(X(ξ), ξ)(t − t_0).

Note that we now work on the interval [t_n, t_{n+1}], with ξ ∈ [t_n, t_{n+1}]: t_0 was the starting point in problem (3.22), while here t_n is the starting point, and since the Euler method builds each solution value from the previous one, replacing X(t_0) by X(t_n) and X(t) by X(t_{n+1}) gives the final form of problem (3.22):

  X(t_{n+1}) = X(t_n) + h f(X(ξ), ξ),   for some ξ ∈ [t_n, t_{n+1}].            (3.36)

The solution of problem (3.22) at t = t_n is X(t_n) and the solution of the Euler method (3.33) is X_n, so we can define the error e_n = X_n − X(t_n). By (3.33) and (3.36) it follows that

  X_{n+1} − X(t_{n+1}) = X_n + h f(X_n, t_n) − X(t_n) − h f(X(ξ), ξ).

This implies

  e_{n+1} = X_n − X(t_n) + h [ f(X_n, t_n) − f(X(ξ), ξ) ],

and hence

  ||e_{n+1}|| ≤ ||X_n − X(t_n)|| + h || f(X(ξ), ξ) − f(X_n, t_n) ||.             (3.37)

Since

  f(X(ξ), ξ) − f(X_n, t_n) = [ f(X(ξ), ξ) − f(X(ξ), t_n) ]
        + [ f(X(ξ), t_n) − f(X(t_n), t_n) ] + [ f(X(t_n), t_n) − f(X_n, t_n) ],  (3.38)

and since the theoretical solution X is m.s. bounded in [t_0, t_1], with sup_{t_0 ≤ t ≤ t_1} ||X'(t)|| = M, we obtain under hypotheses C1 and C2:

  ||f(X(ξ), ξ) − f(X(ξ), t_n)|| ≤ W(h),
  ||f(X(ξ), t_n) − f(X(t_n), t_n)|| ≤ k(t_n) ||X(ξ) − X(t_n)|| ≤ k(t_n) M h,     (*)

since k(t_n) is the Lipschitz constant (from C2) and, from Theorem (3.3.1) applied to the two points X(ξ) and X(t_n), ||X(ξ) − X(t_n)|| ≤ M (ξ − t_n) ≤ M h, because ξ − t_n ≤ h and M = sup_{t_0 ≤ t ≤ t_1} ||X'(t)||; finally

  ||f(X(t_n), t_n) − f(X_n, t_n)|| ≤ k(t_n) ||X(t_n) − X_n|| = k(t_n) ||e_n||.

Then, substituting in (3.38), we have

  || f(X(ξ), ξ) − f(X_n, t_n) || ≤ W(h) + k(t_n) M h + k(t_n) ||e_n||.            (3.39)

Substituting in (3.37),

  ||e_{n+1}|| ≤ [1 + k(t_n) h] ||e_n|| + h [ W(h) + k(t_n) M h ],

and, iterating this recursion,

  ||e_{n+1}|| ≤ [1 + k(t_n) h]^2 ||e_{n-1}|| + h [ W(h) + k(t_n) M h ] [ 1 + (1 + k(t_n) h) ]
             ≤ ...
             ≤ [1 + k(t_n) h]^{n+1} ||e_0||
               + h [ W(h) + k(t_n) M h ] [ 1 + (1 + k(t_n) h) + ... + (1 + k(t_n) h)^n ].

Since 1 + (1 + k(t_n) h) + (1 + k(t_n) h)^2 + ... + (1 + k(t_n) h)^n is a geometric sum,

  1 + (1 + k(t_n) h) + ... + (1 + k(t_n) h)^n = [ (1 + k(t_n) h)^{n+1} − 1 ] / (k(t_n) h),

we get

  ||e_{n+1}|| ≤ [1 + k(t_n) h]^{n+1} ||e_0||
               + [ W(h) + k(t_n) M h ] [ (1 + k(t_n) h)^{n+1} − 1 ] / k(t_n).      (3.40)

Taking into account that ||e_0|| = ||X_0 − X(t_0)|| = 0,

  ||e_{n+1}|| ≤ [ W(h) + k(t_n) M h ] [ (1 + k(t_n) h)^{n+1} − 1 ] / k(t_n),        (3.41)

and hence

  lim_{h→0} ||e_{n+1}|| ≤ lim_{h→0} { [ W(h) + k(t_n) M h ] / k(t_n) } (1 + k(t_n) h)^{n+1}
                          − lim_{h→0} [ W(h) + k(t_n) M h ] / k(t_n).               (3.42)

The second limit in (3.42) equals zero, since W(h) → 0 as h → 0. For the first limit, [ W(h) + k(t_n) M h ] / k(t_n) → 0 as h → 0, while lim_{h→0} (1 + k(t_n) h)^n is computed as follows. Let y = lim_{h→0} (1 + k(t_n) h)^n; taking the logarithm of both sides (with n = (t_n − t_0)/h),

  ln y = lim_{h→0} n ln(1 + k(t_n) h) = lim_{h→0} (t_n − t_0) ln(1 + k(t_n) h) / h.

By L'Hospital's rule,

  ln y = lim_{h→0} (t_n − t_0) k(t_n) / (1 + k(t_n) h) = (t_n − t_0) k(t_n),        (3.43)

which implies that

  y = e^{(t_n − t_0) k(t_n)}.                                                        (3.44)

Substituting in (3.42),

  lim_{h→0} { [ W(h) + k(t_n) M h ] / k(t_n) } (1 + k(t_n) h)^{n+1} = 0 · e^{(t_n − t_0) k(t_n)} = 0.

Substituting from (3.44) and (3.42) in (3.40), lim_{h→0} ||e_{n+1}|| = 0, i.e. e_n converges in m.s. to zero as h → 0; hence X_n → X(t_n) = X(t) in the mean square sense.
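For illustration, the random Euler scheme (3.33) can be realized in practice by Monte Carlo sampling of the random data and applying the recursion to every realization. The sketch below is not from the paper; it assumes NumPy, the helper name random_euler is ours, and the test problem dX/dt = −X with a Gaussian initial condition is an arbitrary check whose exact mean at t = 1 is e^{−1}.

```python
import numpy as np

def random_euler(f, x0_samples, t0, t1, n_steps):
    """Random Euler scheme X_{n+1} = X_n + h f(X_n, t_n), applied to every
    Monte Carlo realization of the random initial condition simultaneously."""
    h = (t1 - t0) / n_steps
    x = np.array(x0_samples, dtype=float)
    t = t0
    for _ in range(n_steps):
        x = x + h * f(x, t)
        t += h
    return x            # samples of X_n at t1; sample moments approximate E[X(t1)], etc.

# toy check: dX/dt = -X, X(0) = X0 with X0 ~ N(1, 0.1^2); exact mean is e^{-1}
rng = np.random.default_rng(2)
x0 = rng.normal(1.0, 0.1, size=50_000)
xn = random_euler(lambda x, t: -x, x0, 0.0, 1.0, 200)
print(xn.mean(), np.exp(-1.0))   # close for small h and many samples
```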
3.3. The Convergence of the Runge-Kutta Scheme of Second Order for Random Differential Equations in the Mean Square Sense
In this section we are interested in the mean square convergence, in the fixed station sense, of the random Runge-Kutta method of second order defined by

  X_{n+1} = X_n + (h/2) [ f(X_n, t_n) + f(X_n + h f(X_n, t_n), t_{n+1}) ],
  X(t_0) = X_0,   n ≥ 0,                                                          (3.45)

where X_n and f(X_n, t_n) are 2-r.v.'s, h = t_{n+1} − t_n, t_n = t_0 + nh, and f: S × T → L_2, S ⊂ L_2, satisfies the following conditions:
C1: f(X, t) is randomly bounded uniformly continuous;
C2: f(X, t) satisfies the m.s. Lipschitz condition

  ||f(x, t) − f(y, t)|| ≤ k(t) ||x − y||,   where ∫_{t_0}^{t_1} k(t) dt < ∞.       (3.46)

Note that under hypotheses C1 and C2 we are interested in the m.s. convergence to zero of the error

  e_n = X_n − X(t_n),                                                              (3.47)

where X(t) is the theoretical solution 2-s.p. of problem (3.22) and t_n = t_0 + nh. Exactly as in Section 3.2.2, from (3.22) and Theorem (3.3.1) we have, for some ξ ∈ [t_n, t_{n+1}],

  X(t_{n+1}) = X(t_n) + h f(X(ξ), ξ).                                               (3.48)

The solution of problem (3.22) at t = t_n is X(t_n) and the solution of the Runge-Kutta method (3.45) is X_n, so, with e_n = X_n − X(t_n), (3.45) and (3.48) give

  X_{n+1} − X(t_{n+1}) = X_n − X(t_n)
      + (h/2) [ f(X_n, t_n) − f(X(ξ), ξ) ]
      + (h/2) [ f(X_n + h f(X_n, t_n), t_{n+1}) − f(X(ξ), ξ) ].

Taking the norm of both sides,

  ||e_{n+1}|| ≤ ||e_n|| + (h/2) || f(X(ξ), ξ) − f(X_n, t_n) ||
                        + (h/2) || f(X(ξ), ξ) − f(X_n + h f(X_n, t_n), t_{n+1}) ||.   (3.49)

For the first difference, exactly as in (3.38)-(3.39),

  f(X(ξ), ξ) − f(X_n, t_n) = [ f(X(ξ), ξ) − f(X(ξ), t_n) ]
        + [ f(X(ξ), t_n) − f(X(t_n), t_n) ] + [ f(X(t_n), t_n) − f(X_n, t_n) ],       (3.50)

and, since the theoretical solution X is m.s. bounded in [t_0, t_1] with sup ||X'(t)|| = M, hypotheses C1 and C2 and Theorem (3.3.1) (applied to the points X(ξ) and X(t_n), so that ||X(ξ) − X(t_n)|| ≤ M h) give

  || f(X(ξ), ξ) − f(X_n, t_n) || ≤ W(h) + k(t_n) M h + k(t_n) ||e_n||.                 (3.51)

For the second difference,

  f(X(ξ), ξ) − f(X_n + h f(X_n, t_n), t_{n+1})
    = [ f(X(ξ), ξ) − f(X(ξ), t_{n+1}) ] + [ f(X(ξ), t_{n+1}) − f(X(t_n), t_{n+1}) ]
      + [ f(X(t_n), t_{n+1}) − f(X_n + h f(X_n, t_n), t_{n+1}) ],

where, as above, ||f(X(ξ), ξ) − f(X(ξ), t_{n+1})|| ≤ W(h) and ||f(X(ξ), t_{n+1}) − f(X(t_n), t_{n+1})|| ≤ k(t_{n+1}) M h, while for the last term the Lipschitz condition gives

  || f(X(t_n), t_{n+1}) − f(X_n + h f(X_n, t_n), t_{n+1}) ||
      ≤ k(t_{n+1}) || X(t_n) − X_n − h f(X_n, t_n) ||
      ≤ k(t_{n+1}) ( ||e_n|| + M h ),

bounding h ||f(X_n, t_n)|| by M h (f being randomly bounded). Hence

  || f(X(ξ), ξ) − f(X_n + h f(X_n, t_n), t_{n+1}) || ≤ W(h) + 2 k(t_{n+1}) M h + k(t_{n+1}) ||e_n||.

Substituting in (3.49),

  ||e_{n+1}|| ≤ [ 1 + (h/2)(k(t_n) + k(t_{n+1})) ] ||e_n||
              + (h/2) [ 2 W(h) + h k(t_n) M + 2 h k(t_{n+1}) M ].

Writing A = 1 + (h/2)(k(t_n) + k(t_{n+1})) and B = (h/2)[ 2 W(h) + h k(t_n) M + 2 h k(t_{n+1}) M ] and iterating as in Section 3.2.2,

  ||e_{n+1}|| ≤ A^{n+1} ||e_0|| + B (1 + A + A^2 + ... + A^n)
             = A^{n+1} ||e_0|| + B (A^{n+1} − 1) / (A − 1),                              (3.52)

since 1 + A + ... + A^n is a geometric sum with ratio A and A − 1 = (h/2)(k(t_n) + k(t_{n+1})). Taking into account that ||e_0|| = ||X_0 − X(t_0)|| = 0,

  ||e_{n+1}|| ≤ [ 2 W(h) + h k(t_n) M + 2 h k(t_{n+1}) M ] (A^{n+1} − 1) / [ k(t_n) + k(t_{n+1}) ],

and hence

  lim_{h→0} ||e_{n+1}|| ≤ lim_{h→0} { [ 2 W(h) + h k(t_n) M + 2 h k(t_{n+1}) M ] / [ k(t_n) + k(t_{n+1}) ] } A^{n+1}
                          − lim_{h→0} [ 2 W(h) + h k(t_n) M + 2 h k(t_{n+1}) M ] / [ k(t_n) + k(t_{n+1}) ].   (3.53)

The second limit in (3.53) equals zero, since W(h) → 0 as h → 0. For the first limit, the bracketed factor tends to zero, while lim_{h→0} A^n = lim_{h→0} [ 1 + (h/2)(k(t_n) + k(t_{n+1})) ]^n is computed as follows. Let y denote this limit; taking the logarithm of both sides (with n = (t_n − t_0)/h),

  ln y = lim_{h→0} (t_n − t_0) ln[ 1 + (h/2)(k(t_n) + k(t_{n+1})) ] / h.

By L'Hospital's rule,

  ln y = lim_{h→0} (t_n − t_0) (1/2)(k(t_n) + k(t_{n+1})) / [ 1 + (h/2)(k(t_n) + k(t_{n+1})) ]
       = (t_n − t_0) k(t_n),

using k(t_{n+1}) → k(t_n) as h → 0, which implies

  y = e^{(t_n − t_0) k(t_n)}.                                                             (3.54)

Substituting in (3.53),

  lim_{h→0} { [ 2 W(h) + h k(t_n) M + 2 h k(t_{n+1}) M ] / [ k(t_n) + k(t_{n+1}) ] } A^{n+1}
      = 0 · e^{(t_n − t_0) k(t_n)} = 0.                                                    (3.55)

Substituting from (3.55) and (3.53) in (3.52), we obtain lim_{h→0} ||e_{n+1}|| = 0, i.e. e_n converges in m.s. to zero as h → 0; hence X_n → X(t_n) = X(t) in the mean square sense.
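A corresponding illustrative sketch of the second order random Runge-Kutta step (3.45), again applied realization-wise (assumptions as before: NumPy, our helper name random_rk2, and the same arbitrary test problem), is the following; its mean error at the final time behaves like O(h^2) instead of the O(h) of the Euler scheme.

```python
import numpy as np

def random_rk2(f, x0_samples, t0, t1, n_steps):
    """Random Runge-Kutta 2 scheme
    X_{n+1} = X_n + (h/2)[ f(X_n, t_n) + f(X_n + h f(X_n, t_n), t_{n+1}) ],
    applied to every Monte Carlo realization of the random data."""
    h = (t1 - t0) / n_steps
    x = np.array(x0_samples, dtype=float)
    t = t0
    for _ in range(n_steps):
        k1 = f(x, t)
        k2 = f(x + h * k1, t + h)
        x = x + 0.5 * h * (k1 + k2)
        t += h
    return x

# same toy problem as before: dX/dt = -X, X(0) ~ N(1, 0.1^2), exact mean e^{-1}
rng = np.random.default_rng(3)
x0 = rng.normal(1.0, 0.1, size=50_000)
print(random_rk2(lambda x, t: -x, x0, 0.0, 1.0, 200).mean(), np.exp(-1.0))
```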
4. Some Results
Theorem 4.1. Let {X_n, n = 0, 1, ...} and {Y_n, n = 0, 1, ...} be sequences of 2-r.v.'s over the same probability space and let a and b be deterministic real numbers. Suppose X = l.i.m._{n→∞} X_n and Y = l.i.m._{n→∞} Y_n. Then:
1) a X + b Y = l.i.m._{n→∞} (a X_n + b Y_n);
2) E[X] = lim_{n→∞} E[X_n];
3) E[X Y] = lim_{n→∞} E[X_n Y_n];
4) lim_{n→∞} E[X_n^2] = E[X^2];
5) lim_{n→∞} Var[X_n] = Var[X].

Definition 4.1 [13]. ("Convergence in probability") A sequence of r.v.'s {X_n} converges in probability to a random variable X as n → ∞ if

  lim_{n→∞} P( |X_n − X| > ε ) = 0   for every ε > 0.

Definition 4.2 [13]. ("Convergence in distribution") A sequence of r.v.'s {X_n} converges in distribution to a random variable X as n → ∞ if

  lim_{n→∞} F_{X_n}(x) = F_X(x).

Lemma (4.1) [13]. Convergence in m.s. implies convergence in probability.
Lemma (4.2) [13]. Convergence in probability implies convergence in distribution.

Theorem 4.2. If X_n → X in m.s., then the PDF of X_n converges to the PDF of X, i.e. lim_{n→∞} f_{X_n}(x) = f_X(x).
Proof. Since X_n → X in m.s., by Lemmas (4.1) and (4.2) X_n → X in distribution, i.e. lim_{n→∞} F_{X_n}(x) = F_X(x). Then

  lim_{n→∞} (d/dx) F_{X_n}(x) = (d/dx) F_X(x),   i.e.   lim_{n→∞} f_{X_n}(x) = f_X(x).

5. Numerical Examples
Example (5.1). Consider the differential equation with a random coefficient and a random initial condition

  y' = K x,   y(x_0) = D,   x ∈ [x_0, x_n],

where K, D are independent Poisson random variables with joint PDF

  f_{K,D}(k, d) = e^{-4} 2^{k+d} / (k! d!),   k, d = 0, 1, 2, ...

1) The exact solution:

  y = D + K (x^2 − x_0^2) / 2.

2) The numerical solution, using the random Euler method

  y_n = y_{n-1} + h f(y_{n-1}, x_{n-1}),   y(x_0) = y_0 = D:

at n = 1,

  y_1 = y_0 + h f(y_0, x_0) = D + h K x_0.
At n = 2,

  y_2 = y_1 + h f(y_1, x_1) = D + h K x_0 + h K x_1 = D + h K x_0 + h K (x_0 + h).

At n = 3,

  y_3 = y_2 + h f(y_2, x_2) = D + h K x_0 + h K (x_0 + h) + h K (x_0 + 2h).

At n = 4,

  y_4 = y_3 + h f(y_3, x_3) = D + h K x_0 + h K (x_0 + h) + h K (x_0 + 2h) + h K (x_0 + 3h),

and so on. Then the general numerical solution is

  y_n = D + h K [ x_0 + (x_0 + h) + (x_0 + 2h) + ... + (x_0 + (n−1) h) ],

i.e.

  y_n = D + h K Σ_{i=0}^{n-1} (x_0 + i h),   where h = (x_n − x_0)/n.

This can be written in another form:

  y_n = D + n h K x_0 + (n(n−1)/2) K h^2.

We can prove that:
1) l.i.m._{n→∞} y_n = y.
Proof. Since l.i.m. y_n = y if and only if lim_{n→∞} E[(y_n − y)^2] = 0, we compute, using h = (x_n − x_0)/n,

  y_n − y = K [ x_0 (x_n − x_0) + ((n−1)/(2n)) (x_n − x_0)^2 − (x_n^2 − x_0^2)/2 ]
          = − K (x_n − x_0)^2 / (2n),

so that, with E[K^2] = 6 (K being Poisson with mean 2),

  E[(y_n − y)^2] = 6 (x_n − x_0)^4 / (4 n^2) → 0   as n → ∞,

i.e. l.i.m._{n→∞} y_n = y.

We can verify Theorem (4.1) as follows.
2) lim_{n→∞} E[y_n] = E[y].
Proof. Write S_n = x_0 (x_n − x_0) + ((n−1)/(2n)) (x_n − x_0)^2, so that y_n = D + K S_n. Then

  E[y_n] = E[D] + E[K] S_n = 2 + 2 S_n,

and since S_n → (x^2 − x_0^2)/2 as n → ∞,

  lim_{n→∞} E[y_n] = 2 + (x^2 − x_0^2) = E[D] + E[K] (x^2 − x_0^2)/2 = E[y].

3) lim_{n→∞} E[y_n^2] = E[y^2].
Proof. Since y_n = D + K S_n with K, D independent, E[D] = E[K] = 2 and E[D^2] = E[K^2] = 6,

  E[y_n^2] = E[D^2] + 2 E[D] E[K] S_n + E[K^2] S_n^2 = 6 + 8 S_n + 6 S_n^2,

and letting n → ∞ (so that S_n → (x^2 − x_0^2)/2),

  lim_{n→∞} E[y_n^2] = 6 + 4 (x^2 − x_0^2) + (3/2)(x^2 − x_0^2)^2 = E[y^2].

4) lim_{n→∞} Var[y_n] = Var[y].
Proof. lim_{n→∞} Var[y_n] = lim_{n→∞} E[y_n^2] − ( lim_{n→∞} E[y_n] )^2 = E[y^2] − (E[y])^2 = Var[y].

5) lim_{n→∞} PDF(y_n) = PDF(y).
Proof. For the exact solution y = D + K (x^2 − x_0^2)/2, define Z = D. The inverse transformation is

  K = 2 (y − Z) / (x^2 − x_0^2),   D = Z,

with Jacobian |J| = 2 / (x^2 − x_0^2). Hence

  f_{Y,Z}(y, z) = f_{K,D}( 2(y − z)/(x^2 − x_0^2), z ) |J|
                = [ 2/(x^2 − x_0^2) ] e^{-4} 2^{z + 2(y − z)/(x^2 − x_0^2)} / [ z! ( 2(y − z)/(x^2 − x_0^2) )! ],

and, since D ≥ 0 implies z ≥ 0 and K ≥ 0 implies y ≥ z,

  f_Y(y) = Σ_{z ≥ 0} [ 2/(x^2 − x_0^2) ] e^{-4} 2^{z + 2(y − z)/(x^2 − x_0^2)} / [ z! ( 2(y − z)/(x^2 − x_0^2) )! ] = PDF(y).

For the numerical solution y_n = D + K S_n, with S_n = n h x_0 + (n(n−1)/2) h^2, let z_n = D; then K = (y_n − z_n)/S_n with Jacobian 1/S_n, and

  f_{y_n}(y_n) = Σ_{z_n ≥ 0} (1/S_n) e^{-4} 2^{z_n + (y_n − z_n)/S_n} / [ z_n! ( (y_n − z_n)/S_n )! ].

Since S_n = x_0 (x_n − x_0) + ((n−1)/(2n)) (x_n − x_0)^2 → (x^2 − x_0^2)/2 as n → ∞, taking the limit gives

  lim_{n→∞} f_{y_n}(y_n) = Σ_{z ≥ 0} [ 2/(x^2 − x_0^2) ] e^{-4} 2^{z + 2(y − z)/(x^2 − x_0^2)} / [ z! ( 2(y − z)/(x^2 − x_0^2) )! ] = f_Y(y),

i.e. lim_{n→∞} PDF(y_n) = PDF(y).
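Before turning to the Runge-Kutta treatment, the statistical properties just verified for Example (5.1) can also be checked by direct simulation. The following is a minimal Python sketch (illustrative only, not part of the paper); it assumes NumPy and fixes x_0 = 0, x_n = 1 and the sample size arbitrarily. For this choice the sample mean and variance of y_n should approach E[y] = 3 and Var[y] = 2.5 as h → 0 and the number of samples grows.

```python
import numpy as np

rng = np.random.default_rng(4)
m = 200_000
K = rng.poisson(2.0, size=m)      # K ~ Poisson(2)
D = rng.poisson(2.0, size=m)      # D ~ Poisson(2), independent of K

x0, xn, n = 0.0, 1.0, 100
h = (xn - x0) / n

# random Euler applied to y' = K x, y(x0) = D, realization-wise
y = D.astype(float)
x = x0
for _ in range(n):
    y = y + h * K * x
    x += h

y_exact = D + K * (xn**2 - x0**2) / 2.0
print(y.mean(), y_exact.mean())   # both near E[D] + E[K]/2 = 3
print(y.var(),  y_exact.var())    # sample variances agree as h -> 0, m -> infinity
```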
B. Using the random Runge-Kutta method

  y_{n+1} = y_n + (h/2) [ f(y_n, x_n) + f(y_n + h f(y_n, x_n), x_{n+1}) ]:

at n = 0,

  y_1 = y_0 + (h/2) [ f(y_0, x_0) + f(y_0 + h f(y_0, x_0), x_1) ]
      = D + (h/2) (K x_0 + K x_1) = D + (h K / 2)(x_0 + x_1).

At n = 1,

  y_2 = y_1 + (h/2) (K x_1 + K x_2) = D + (h K / 2)(x_0 + 2 x_1 + x_2).

At n = 2,

  y_3 = y_2 + (h/2) (K x_2 + K x_3) = D + (h K / 2)(x_0 + 2 x_1 + 2 x_2 + x_3),

and so on. Then the general solution is

  y_n = D + (h K / 2)( x_0 + 2 x_1 + 2 x_2 + ... + 2 x_{n-1} + x_n )
      = D + h K Σ_{i=0}^{n-1} (x_0 + i h) + (1/2) n h^2 K
      = D + n h K x_0 + (n^2/2) h^2 K,   where h = (x_n − x_0)/n.

We can prove that:
1) l.i.m._{n→∞} y_n = y.
Proof. Since l.i.m. y_n = y if and only if lim_{n→∞} E[(y_n − y)^2] = 0, and

  y_n − y = K [ x_0 (x_n − x_0) + (1/2)(x_n − x_0)^2 − (x_n^2 − x_0^2)/2 ] = 0

(the trapezoidal sum integrates the linear function K x exactly), we have E[(y_n − y)^2] = 0 for every n, so certainly

  lim_{n→∞} E[(y_n − y)^2] = 0,   i.e.   l.i.m._{n→∞} y_n = y.

Verification of Theorem (4.1):
2) lim_{n→∞} E[y_n] = E[y].
Proof. With S_n = x_0 (x_n − x_0) + (1/2)(x_n − x_0)^2 = (x_n^2 − x_0^2)/2, y_n = D + K S_n and

  E[y_n] = E[D] + E[K] S_n = 2 + (x^2 − x_0^2) = E[y].

3) lim_{n→∞} E[y_n^2] = E[y^2].
Proof. E[y_n^2] = E[D^2] + 2 E[D] E[K] S_n + E[K^2] S_n^2 = 6 + 4 (x^2 − x_0^2) + (3/2)(x^2 − x_0^2)^2 = E[y^2].

4) lim_{n→∞} Var[y_n] = Var[y].
Proof. lim_{n→∞} Var[y_n] = lim_{n→∞} E[y_n^2] − ( lim_{n→∞} E[y_n] )^2 = E[y^2] − (E[y])^2 = Var[y].

5) lim_{n→∞} PDF(y_n) = PDF(y).
Proof. Exactly as in the Euler case, let z_n = D; the inverse transformation for the numerical solution y_n = D + K S_n is K = (y_n − z_n)/S_n, D = z_n, with Jacobian 1/S_n, so

  f_{y_n}(y_n) = Σ_{z_n ≥ 0} (1/S_n) e^{-4} 2^{z_n + (y_n − z_n)/S_n} / [ z_n! ( (y_n − z_n)/S_n )! ],

with S_n = n h x_0 + (n^2/2) h^2 = (x_n^2 − x_0^2)/2. Hence

  lim_{n→∞} f_{y_n}(y_n) = Σ_{z ≥ 0} [ 2/(x^2 − x_0^2) ] e^{-4} 2^{z + 2(y − z)/(x^2 − x_0^2)} / [ z! ( 2(y − z)/(x^2 − x_0^2) )! ] = f_Y(y),

i.e. lim_{n→∞} PDF(y_n) = PDF(y).
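The two schemes can also be compared directly in the m.s. error on Example (5.1). A short illustrative sketch (same assumptions as in the earlier sketches; not from the paper) follows: since the right-hand side K x is linear in x and independent of y, the Runge-Kutta (trapezoidal) sum reproduces the exact solution at the grid points, while the Euler error is O(h) in the m.s. norm.

```python
import numpy as np

rng = np.random.default_rng(5)
m = 200_000
K = rng.poisson(2.0, size=m).astype(float)
D = rng.poisson(2.0, size=m).astype(float)

x0, xn, n = 0.0, 1.0, 100
h = (xn - x0) / n

y_euler, y_rk2, x = D.copy(), D.copy(), x0
for _ in range(n):
    y_euler += h * K * x                       # Euler: left endpoint
    y_rk2   += 0.5 * h * K * (x + (x + h))     # RK2: trapezoid in x
    x += h

y_exact = D + K * (xn**2 - x0**2) / 2.0
print(np.sqrt(np.mean((y_euler - y_exact)**2)))  # m.s. error O(h)
print(np.sqrt(np.mean((y_rk2   - y_exact)**2)))  # ~ 0: trapezoid is exact here
```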
Example (5.2). Solve the problem

  dy/dt = y^2 − K y,   y(0) = K,   t ∈ [0, t_n],   K ~ exp(1).

The exact solution is

  y = K.

The numerical solution by the Euler method, y_n = y_{n-1} + h f(y_{n-1}, t_{n-1}), y(0) = K:

at n = 1,

  y_1 = y_0 + h f(y_0, t_0) = K + h (y_0^2 − K y_0) = K.

At n = 2,

  y_2 = y_1 + h f(y_1, t_1) = K + h (y_1^2 − K y_1) = K.

At n = 3,

  y_3 = y_2 + h f(y_2, t_2) = K + h (y_2^2 − K y_2) = K,

and so on. Then the general numerical solution is y_n = K. It is clear that:
1) l.i.m._{n→∞} y_n = y, since lim_{n→∞} E[(y_n − y)^2] = E[(K − K)^2] = 0.

Verification of Theorem (4.1). It is clear that:
2) lim_{n→∞} E[y_n] = E[K] = E[y];
3) lim_{n→∞} E[y_n^2] = E[K^2] = E[y^2], since y_n = K;
4) lim_{n→∞} Var[y_n] = lim_{n→∞} ( E[y_n^2] − (E[y_n])^2 ) = E[y^2] − (E[y])^2 = Var[y];
5) lim_{n→∞} PDF(y_n) = PDF(y): since y = K, the transformation has |J| = 1, which implies

  PDF(y) = |J| PDF_K(y) = e^{-y},

and likewise y_n = K gives |J| = 1, which implies

  PDF(y_n) = |J| PDF_K(y_n) = e^{-y_n}.

Then lim_{n→∞} PDF(y_n) = PDF(y).
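Example (5.2) can likewise be checked by simulation. The following minimal Python sketch (illustrative only; assumes NumPy, and the interval [0, 1] and step count are arbitrary choices) confirms that the Euler iterates stay equal to K realization by realization, so the sample moments and the empirical distribution trivially match those of the exact solution y = K.

```python
import numpy as np

rng = np.random.default_rng(6)
m = 100_000
K = rng.exponential(1.0, size=m)   # K ~ Exp(1)

t0, tn, n = 0.0, 1.0, 100
h = (tn - t0) / n

y = K.copy()                       # y(0) = K
for _ in range(n):
    y = y + h * (y**2 - K * y)     # Euler step for dy/dt = y^2 - K y

print(np.max(np.abs(y - K)))       # 0: the iterates remain equal to K
print(y.mean(), y.var())           # ~ 1 and ~ 1, matching K ~ Exp(1)
```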
6. Conclusions
Initial value first order random differential equations can be solved numerically using the random Euler and random Runge-Kutta methods in the mean square sense. The existence and uniqueness of the solution have been proved. The convergence of the presented numerical techniques has been proved in the mean square sense. The results of the paper have been illustrated through some examples.

7. References
[1] K. Burrage and P. M. Burrage, "High Strong Order Explicit Runge-Kutta Methods for Stochastic Ordinary Differential Equations," Applied Numerical Mathematics, Vol. 22, No. 1-3, 1996, pp. 81-101. doi:10.1016/S0168-9274(96)00027-X
[2] K. Burrage and P. M. Burrage, "General Order Conditions for Stochastic Runge-Kutta Methods for Both Commuting and Non-Commuting Stochastic Ordinary Equations," Applied Numerical Mathematics, Vol. 28, No. 2-4, 1998, pp. 161-177. doi:10.1016/S0168-9274(98)00042-7
[3] J. C. Cortes, L. Jodar and L. Villafuerte, "Numerical Solution of Random Differential Equations, a Mean Square Approach," Mathematical and Computer Modelling, Vol. 45, No. 7, 2007, pp. 757-765. doi:10.1016/j.mcm.2006.07.017
[4] J. C. Cortes, L. Jodar and L. Villafuerte, "A Random Euler Method for Solving Differential Equations with Uncertainties," Progress in Industrial Mathematics at ECMI, Madrid, 2006.
[5] H. Lamba, J. C. Mattingly and A. Stuart, "An Adaptive Euler-Maruyama Scheme for SDEs, Convergence and Stability," IMA Journal of Numerical Analysis, Vol. 27, No. 3, 2007, pp. 479-506. doi:10.1093/imanum/drl032
[6] E. Platen, "An Introduction to Numerical Methods for Stochastic Differential Equations," Acta Numerica, Vol. 8, 1999, pp. 197-246. doi:10.1017/S0962492900002920
[7] D. J. Higham, "An Algorithmic Introduction to Numerical Simulation of SDE," SIAM Review, Vol. 43, No. 3, 2001, pp. 525-546. doi:10.1137/S0036144500378302
[8] D. Talay and L. Tubaro, "Expansion of the Global Error for Numerical Schemes Solving Stochastic Differential Equations," Stochastic Analysis and Applications, Vol. 8, No. 4, 1990, pp. 483-509. doi:10.1080/07362999008809220
[9] P. M. Burrage, "Numerical Methods for SDE," Ph.D. Thesis, The University of Queensland, Brisbane, 1999.
[10] P. E. Kloeden, E. Platen and H. Schurz, "Numerical Solution of SDE Through Computer Experiments," 2nd Edition, Springer, Berlin, 1997.
[11] M. A. El-Tawil, "The Approximate Solutions of Some Stochastic Differential Equations Using Transformations," Applied Mathematics and Computation, Vol. 164, No. 1, 2005, pp. 167-178. doi:10.1016/j.amc.2004.04.062
[12] P. E. Kloeden and E. Platen, "Numerical Solution of Stochastic Differential Equations," Springer, Berlin, 1999.
[13] T. T. Soong, "Random Differential Equations in Science and Engineering," Academic Press, New York, 1973.