Adams-type Methods for the Numerical Solution of Stochastic Ordinary Differential Equations∗

L. Brugnano†    K. Burrage‡    P. M. Burrage‡

Abstract

The modelling of many real life phenomena for which either the parameter estimation is difficult, or which are subject to random noisy perturbations, is often carried out by using stochastic ordinary differential equations (SODEs). For this reason, in recent years much attention has been devoted to deriving numerical methods for approximating their solution. In particular, in this paper we consider the use of linear multistep formulae (LMF). Strong order convergence conditions up to order 1 are stated, for both commutative and noncommutative problems. The case of additive noise is further investigated, in order to obtain order improvements. The implementation of the methods is also considered, leading to a predictor-corrector approach. Some numerical tests on problems taken from the literature are also included.

Keywords: stochastic ODEs, strong convergence, Linear Multistep Formulae, Adams methods, predictor-corrector methods.

1 Introduction

Many real world phenomena are (or appear to be) liable to random noisy perturbations. This is the case, for example, in investment finance, turbulent diffusion, chemical kinetics, VLSI circuit design, etc. (see, for example, [7, 8, 10]). The mathematical modelling of such phenomena is, therefore, not well matched by deterministic equations, and stochastic equations are preferable instead.

∗ Work supported by CNR, contract n. 98.01037.CT01, and by the Università di Firenze.
† Dipartimento di Matematica "U. Dini", Università di Firenze, 50134 Firenze, Italy.
‡ Department of Mathematics, University of Queensland, Brisbane 4072, Australia.

When the evolution of such phenomena has to be studied, one must often handle a system of stochastic ordinary differential equations (SODEs). As in the case of deterministic ODEs, only a few, very simple SODEs can be solved analytically. As a consequence, there is a need for numerical methods approximating their solutions. However, this is a relatively new field of investigation and, for this reason, there are not yet general purpose codes for handling SODEs. In [1, 2, 3, 4, 5, 6] the numerical solution of SODEs by means of suitably modified Runge-Kutta methods has been considered. We are here concerned with the use of linear multistep formulae (LMF) for approximating a SODE in the form

    dy(t) = f(y(t))\,dt + \sum_{j=1}^{d} g_j(y(t))\,dW_j(t), \quad t \in [0, T], \qquad y(0) = y_0 \in \mathbb{R}^m,    (1)

which, without loss of generality, we have assumed to be autonomous, in order to simplify the notation. In the formulation (1), the W_j(t), j = 1, ..., d, are independent Wiener processes, modelling independent Brownian motions, which satisfy the initial condition W_j(0) = 0 with probability 1 [8]. The deterministic term f(y) is sometimes called the drift. The Wiener processes are known to be Gaussian processes satisfying

    E(W_j(t)) = 0, \qquad E(W_j(t) W_j(s)) = \min\{t, s\},

whose increments \int_s^t dW_j are, if not overlapping, independent and N(0, |t - s|) distributed. The solution of (1) can be formally written as

    y(t) = y_0 + \int_0^t f(y(s))\,ds + \sum_{j=1}^{d} \int_0^t g_j(y(s))\,dW_j(s),

where the integrals

    \int_0^t g_j(y(s))\,dW_j(s), \qquad j = 1, ..., d,

are stochastic integrals (see, for example, [8]). They are defined as the limit (in the mean square sense), as n \to \infty, of the approximating sums

    \sum_{i=1}^{n} g_j(y(\tau_i)) \left( W_j(t_i) - W_j(t_{i-1}) \right),

where \tau_i = \theta t_i + (1 - \theta) t_{i-1}, for a fixed \theta \in [0, 1] and, for simplicity, t_i = i t / n, i = 0, ..., n. For stochastic integrals, different choices of \theta usually result in different values for the integral. The most common choices for the parameter \theta are

- \theta = 0, which gives an Itô integral, and
- \theta = 1/2, which gives a Stratonovich integral.
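As a small illustrative check (not part of the original paper), the following Python sketch evaluates the approximating sums for \int_0^1 W\,dW on a single simulated path, once with \theta = 0 and once with \theta = 1/2; the two limits differ, and their gap approaches (b - a)/2 = 1/2:

```python
import numpy as np

# Approximating sums for int_0^1 W dW with evaluation points
# tau_i = theta*t_i + (1-theta)*t_{i-1}; theta = 0 (Ito) vs theta = 1/2 (Stratonovich).
rng = np.random.default_rng(1)
n = 4000                     # number of subintervals
h = 1.0 / n

# Simulate W on the half-grid so both endpoints and midpoints are available.
dW_half = np.sqrt(h / 2) * rng.standard_normal(2 * n)
W = np.concatenate(([0.0], np.cumsum(dW_half)))  # W at t = 0, h/2, h, 3h/2, ...

W_left = W[0:2*n:2]          # W(t_{i-1}):   theta = 0 evaluation points
W_mid  = W[1:2*n:2]          # W(midpoint):  theta = 1/2 evaluation points
dW = W[2::2] - W[0:2*n:2]    # increments W(t_i) - W(t_{i-1})

ito_sum   = np.sum(W_left * dW)
strat_sum = np.sum(W_mid * dW)
print(strat_sum - ito_sum)   # approaches (1/2 - 0) * (b - a) = 1/2
```

On a fixed path the gap converges (in mean square) to 1/2, so the two notions of stochastic integral genuinely disagree already for this simple integrand.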

The Itô formulation has the advantage of preserving the martingale property of the Wiener process, so that

    E\left( \int_a^b q(t)\,dW_j(t) \right) = 0, \qquad E\left( \left( \int_a^b q(t)\,dW_j(t) \right)^2 \right) = \int_a^b E\|q(t)\|^2\,dt.
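These identities can be checked by simulation; a minimal sketch (ours, not from the paper) with q(t) = W_j(t) on [0, 1], for which the second moment equals \int_0^1 t\,dt = 1/2:

```python
import numpy as np

# Monte Carlo check of the two identities above for q(t) = W(t) on [0, 1]:
#   E( int_0^1 W dW ) = 0   and   E( (int_0^1 W dW)^2 ) = int_0^1 E(W(t)^2) dt = 1/2.
rng = np.random.default_rng(6)
n_paths, n = 10_000, 128
h = 1.0 / n

dW = np.sqrt(h) * rng.standard_normal((n_paths, n))
W = np.cumsum(dW, axis=1)
W_left = np.hstack([np.zeros((n_paths, 1)), W[:, :-1]])  # W at left endpoints (Ito)

I = np.sum(W_left * dW, axis=1)   # one Ito sum per path
print(I.mean(), (I**2).mean())
```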

On the other hand, the Stratonovich integrals formally satisfy the usual rules of calculus. For example,

    \int_a^b W_j(t)\,dW_j(t) = \frac{1}{2}\left( W_j(b)^2 - W_j(a)^2 \right) + \left( \theta - \frac{1}{2} \right)(b - a) = \frac{1}{2}\left( W_j(b)^2 - W_j(a)^2 \right),

because \theta = 1/2 for Stratonovich integrals. The latter are usually denoted by

    \int_a^b q(t) \circ dW_j(t),

whereas the plain notation refers to Itô integrals. As a consequence, we may reformulate equation (1) in its equivalent Stratonovich form. Considering that in general (see, for example, [8]) one has

    \int_0^t q(W_j) \circ dW_j = \int_0^t q(W_j)\,dW_j + \frac{1}{2} \int_0^t \frac{dq}{dW_j}(W_j(s))\,ds,

it follows that the Stratonovich formulation of (1) is given by

    dy(t) = g_0(y(t))\,dt + \sum_{j=1}^{d} g_j(y(t)) \circ dW_j(t),    (2)

where

    g_0(y(t)) = f(y(t)) - \frac{1}{2} \sum_{j=1}^{d} g_j'(y(t)) g_j(y(t)),

with g_j' denoting the Jacobian matrix of g_j. In the following, we shall always use Stratonovich calculus. For this reason, we shall assume that the problems have been recast in the corresponding Stratonovich formulations.
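As an illustration of this change of drift (the example and all parameter values are ours, not the paper's), consider the scalar linear problem dy = \lambda y\,dt + \sigma y\,dW in Itô form, whose Stratonovich drift is g_0(y) = (\lambda - \sigma^2/2) y. An Euler step on the Itô form and a Heun (trapezoidal) step on the Stratonovich form, driven by the same Brownian increments, should then track the same pathwise exact solution:

```python
import numpy as np

# Scalar linear SDE, Ito form: dy = lam*y dt + sig*y dW.
# Stratonovich drift from the correction above: g0(y) = (lam - sig**2/2) * y,
# since g'(y) g(y) = sig**2 * y for g(y) = sig * y.
lam, sig, T, n, y0 = 1.0, 0.5, 1.0, 4000, 1.0
h = T / n
rng = np.random.default_rng(2)
dW = np.sqrt(h) * rng.standard_normal(n)

g0 = lambda y: (lam - 0.5 * sig**2) * y   # corrected Stratonovich drift

y_ito, y_str = y0, y0
for k in range(n):
    # Euler-Maruyama step on the Ito form.
    y_ito = y_ito + lam * y_ito * h + sig * y_ito * dW[k]
    # Heun (trapezoidal) step on the Stratonovich form with drift g0.
    y_pred = y_str + g0(y_str) * h + sig * y_str * dW[k]
    y_str = y_str + 0.5 * (g0(y_str) + g0(y_pred)) * h \
                  + 0.5 * sig * (y_str + y_pred) * dW[k]

# Pathwise exact solution: y(T) = y0 * exp((lam - sig**2/2) * T + sig * W(T)).
y_exact = y0 * np.exp((lam - 0.5 * sig**2) * T + sig * dW.sum())
print(y_ito, y_str, y_exact)
```

Both endpoints land close to the exact pathwise value, which shows the two formulations describe the same process once the drift is corrected.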

We are now concerned with the numerical approximation of (2), by means of a numerical method in the form

    \sum_{i=0}^{k} \alpha_i y_{n+i} = \sum_{j=0}^{d} \sum_{i=0}^{k} \beta_{in}^j g_{j,n+i}, \qquad n = 0, 1, ...,    (3)

y_0, ..., y_{k-1} fixed, where, as usual, if y(t) is the continuous solution to (2), y_{n+i} is the numerical approximation to y(t_{n+i}) and g_{j,n+i} = g_j(y_{n+i}). We shall only consider here the case of a uniform partition of the integration interval, t_n = nh, n = 0, ..., N, h = T/N. The coefficients \{\alpha_i\} are assumed to be independent of n (moreover, we shall fix the usual scaling \alpha_k = 1), whereas the remaining coefficients \{\beta_{in}^j\} are in general stochastic variables.
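For concreteness, the simplest member of family (3) is obtained with k = 1, \alpha_1 = 1, \alpha_0 = -1, \beta_{0n}^0 = h, \beta_{0n}^j = \Delta W_j^n and \beta_{1n}^j = 0, an explicit Euler-type scheme. The sketch below (ours, with illustrative parameters) applies it to a problem with additive noise, where the Itô and Stratonovich forms coincide; with the noise switched off it reduces to the deterministic Euler method:

```python
import numpy as np

# One-step instance of scheme (3): y_{n+1} = y_n + h f(y_n) + sum_j DeltaW_j^n g_j(y_n),
# applied to dy = -y dt + sigma dW (additive noise, Ito = Stratonovich).
sigma, T, n, y0 = 0.2, 1.0, 1000, 1.0
h = T / n
rng = np.random.default_rng(3)

y, y_det = y0, y0
for _ in range(n):
    dW = np.sqrt(h) * rng.standard_normal()
    y = y + (-y) * h + sigma * dW    # stochastic Euler-type step
    y_det = y_det + (-y_det) * h     # same scheme with the noise switched off
print(y, y_det)                      # y_det approaches exp(-1)
```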

In the next section, we shall obtain conditions for the coecients, in order to meet suitable accuracy requirements.

2 Strong order conditions

When speaking about the accuracy of numerical methods for SODEs, we distinguish between two kinds of convergence:

Weak convergence: this case concerns the situations where one is interested in the moments. One then requires that there exist constants C, \delta, p > 0 such that

    \max_n \left\| E\left( q(y_n) - q(y(t_n)) \right) \right\| \le C h^p,

for all stepsizes h < \delta and polynomials q. In such a case, it is said that the method has weak order p.

Strong convergence: in this case, one is interested in the mean square convergence of the trajectories, which means that

    \max_n E\left( \| y_n - y(t_n) \| \right) \le C h^p,

for all stepsizes h < \delta, for methods having strong order p.
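The strong order of a scheme can be estimated empirically by comparing it against a fine-grid solution driven by the same Brownian increments. The sketch below (an illustration under our own choices, not an experiment from the paper) does this for an Euler-type scheme on the additive-noise problem dy = -y\,dt + dW, for which strong order 1 is expected; the fitted log-log slope approximates the order:

```python
import numpy as np

# Empirical strong order: mean endpoint error of Euler steps on
# dy = -y dt + dW against a fine-grid reference on the same Brownian paths.
rng = np.random.default_rng(4)
T, n_fine, n_paths = 1.0, 2**11, 500
h_fine = T / n_fine

def euler(dW, h):
    # Vectorised Euler recursion over all paths: y <- y - y*h + dW_k.
    y = np.ones(dW.shape[0])
    for k in range(dW.shape[1]):
        y = y + (-y) * h + dW[:, k]
    return y

dW_fine = np.sqrt(h_fine) * rng.standard_normal((n_paths, n_fine))
y_ref = euler(dW_fine, h_fine)                 # reference endpoints

levels = [2**4, 2**5, 2**6, 2**7]              # coarse step counts
errs = []
for n_c in levels:
    dW_c = dW_fine.reshape(n_paths, n_c, -1).sum(axis=2)  # same paths, coarser grid
    errs.append(np.mean(np.abs(euler(dW_c, T / n_c) - y_ref)))

hs = np.array([T / n_c for n_c in levels])
slope = np.polyfit(np.log(hs), np.log(errs), 1)[0]
print(slope)   # fitted strong order
```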

The second requirement is more critical, and will be our matter of investigation, for methods in the form (3). The usual procedure, for deriving methods of strong order p, is to require that the truncation error of the scheme, obtained by inserting the continuous solution evaluated at the grid-points into the discrete equation,

    \tau_n = \sum_{i=0}^{k} \alpha_i y(t_{n+i}) - \sum_{j=0}^{d} \sum_{i=0}^{k} \beta_{in}^j g_j(y(t_{n+i})),    (4)

has O(h^{p+1/2}) mean square expectation [2, 3]. That is, it satisfies

    E(\|\tau_n\|) = O(h^{p+1/2}).    (5)

In order to do this, we need to introduce some preliminary results and notations. First of all, we recall the following stochastic Taylor expansions for the solution y(t) of (2) (see, for example, [8]):

    y(t + h) = y(t) + \sum_{j=0}^{d} g_j(y(t)) J_j(t) + \sum_{\ell,j=0}^{d} g_j'(y(t)) g_\ell(y(t)) J_{\ell j}(t)
             + \sum_{r,\ell,j=0}^{d} \left( g_j''(y(t))\left( g_\ell(y(t)), g_r(y(t)) \right) + g_j'(y(t)) g_\ell'(y(t)) g_r(y(t)) \right) J_{r\ell j}(t) + \dots

where, by setting W_0(t) = t,

    J_j(t) = \int_t^{t+h} dW_j, \qquad J_{\ell j}(t) = \int_t^{t+h} \int_t^{s} dW_\ell(s_1) \circ dW_j(s),

    J_{r\ell j}(t) = \int_t^{t+h} \int_t^{s} \int_t^{s_1} dW_r(s_2) \circ dW_\ell(s_1) \circ dW_j(s).

More generally, for a suitably smooth function g(y), one obtains that

    g(y(t + h)) = g(y(t)) + \sum_{\ell=0}^{d} g'(y(t)) g_\ell(y(t)) J_\ell(t)
                + \sum_{r,\ell=0}^{d} \left( g''(y(t))\left( g_\ell(y(t)), g_r(y(t)) \right) + g'(y(t)) g_\ell'(y(t)) g_r(y(t)) \right) J_{r\ell}(t) + \dots

By using the above expansions, we can evaluate the truncation error (4) as follows (all functions are evaluated at y(t_n)):

    \tau_n = \sum_{i=0}^{k} \alpha_i \left[ y(t_n) + \sum_{j=0}^{d} g_j J_j^{ni} + \sum_{\ell,j=0}^{d} g_j' g_\ell J_{\ell j}^{ni} + \sum_{r,\ell,j=0}^{d} \left( g_j''(g_\ell, g_r) + g_j' g_\ell' g_r \right) J_{r\ell j}^{ni} + \dots \right]
           - \sum_{j=0}^{d} \sum_{i=0}^{k} \beta_{in}^j \left[ g_j + \sum_{\ell=0}^{d} g_j' g_\ell J_\ell^{ni} + \sum_{r,\ell=0}^{d} \left( g_j''(g_\ell, g_r) + g_j' g_\ell' g_r \right) J_{r\ell}^{ni} + \dots \right],    (6)

where, for all r, \ell, j = 0, ..., d, and i = 0, ..., k,

    J_j^{ni} = \int_{t_n}^{t_{n+i}} dW_j, \qquad J_{\ell j}^{ni} = \int_{t_n}^{t_{n+i}} \int_{t_n}^{s} dW_\ell(s_1) \circ dW_j(s),

    J_{r\ell j}^{ni} = \int_{t_n}^{t_{n+i}} \int_{t_n}^{s} \int_{t_n}^{s_1} dW_r(s_2) \circ dW_\ell(s_1) \circ dW_j(s).

In order to derive the conditions on the coefficients of the method to satisfy (5), let us denote, for any string \{j_1, ..., j_n\} with j_i \in \{0, ..., d\}, by z(j_1, ..., j_n) the number of zeros in the string. Then, by considering (see P. Burrage [6], for example) that

    E(|J_j^{ni}|) = O(h^{(1+z(j))/2}), \qquad E(|J_{\ell j}^{ni}|) = O(h^{(2+z(\ell,j))/2}), \qquad E(|J_{r\ell j}^{ni}|) = O(h^{(3+z(r,\ell,j))/2}),

the following strong order conditions are derived from (6):

deterministic consistency and strong order 1/2:

    \sum_{i=0}^{k} \alpha_i = 0, \quad (\alpha_k = 1),    (7)

    \sum_{i=0}^{k} \left( \alpha_i J_j^{ni} - \beta_{in}^j \right) = 0, \qquad j = 0, ..., d;    (8)

strong order 1: all the previous ones and, moreover,

    \sum_{\ell,j=1}^{d} g_j' g_\ell \sum_{i=1}^{k} \left( \alpha_i J_{\ell j}^{ni} - \beta_{in}^j J_\ell^{ni} \right) = 0;    (9)

strong order 3/2: all the previous ones and, moreover,

    \sum_{j=1}^{d} \left[ g_0' g_j \sum_{i=1}^{k} \left( \alpha_i J_{j0}^{ni} - \beta_{in}^0 J_j^{ni} \right) + g_j' g_0 \sum_{i=1}^{k} \left( \alpha_i J_{0j}^{ni} - \beta_{in}^j J_0^{ni} \right) \right] = 0,    (10)

    \sum_{r,\ell,j=1}^{d} \left( g_j''(g_\ell, g_r) + g_j' g_\ell' g_r \right) \sum_{i=1}^{k} \left( \alpha_i J_{r\ell j}^{ni} - \beta_{in}^j J_{r\ell}^{ni} \right) = 0.    (11)

We now look for methods (3) such that:

    \beta_{in}^0 = h \beta_i, \qquad \beta_{in}^j = \sum_{r=1}^{k} J_j^{nr} d_{ir}, \qquad i = 0, ..., k, \quad j = 1, ..., d,    (12)

where the scalars \{\beta_i\} and \{d_{ir}\} are to be determined. Consequently, condition (8) becomes

    0 = \sum_{i=0}^{k} \alpha_i J_j^{ni} - \sum_{i=0}^{k} \sum_{r=1}^{k} J_j^{nr} d_{ir}
      = \sum_{i=1}^{k} \alpha_i J_j^{ni} - \sum_{r=1}^{k} J_j^{nr} \sum_{i=0}^{k} d_{ir} \equiv (\hat\alpha^T - e^T D) \hat J_j^n,

where

    \hat\alpha = (\alpha_1, ..., \alpha_k)^T, \qquad \hat J_j^n = (J_j^{n1}, ..., J_j^{nk})^T, \qquad e = (1, ..., 1)^T \in \mathbb{R}^{k+1},

and

    D = [\hat d_0 \; \hat D]^T, \qquad \hat D = [\hat d_1, ..., \hat d_k], \qquad \hat d_i = (d_{i1}, ..., d_{ik})^T, \quad i = 0, ..., k.

The above condition then reads \hat\alpha = D^T e, that is,

    \sum_{i=0}^{k} d_{ir} = \alpha_r, \qquad r = 1, ..., k.    (13)
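Condition (13) can be sanity-checked numerically (an illustrative check of the algebra, not from the paper): pick any matrix of d_{ir} whose column sums equal \alpha_r, build the \beta_{in}^j from arbitrary values J_j^{nr} via (12), and the residual of condition (8) vanishes identically, since J_j^{n0} = 0:

```python
import numpy as np

# If sum_i d_{ir} = alpha_r (condition (13)) and beta_{in}^j = sum_r J_j^{nr} d_{ir}
# (ansatz (12)), then sum_{i=0}^k (alpha_i J_j^{ni} - beta_{in}^j) = 0 for any J
# values (condition (8)).  Sizes and values here are arbitrary illustrations.
rng = np.random.default_rng(5)
k = 3
alpha = np.array([0.0, 0.0, -1.0, 1.0])     # alpha_0..alpha_k, alpha_k = 1, sum = 0

D = rng.standard_normal((k + 1, k))          # rows d_i^T, i = 0..k; columns r = 1..k
D += (alpha[1:] - D.sum(axis=0)) / (k + 1)   # enforce column sums: sum_i d_{ir} = alpha_r

J = np.concatenate(([0.0], rng.standard_normal(k)))  # J^{n0} = 0, J^{n1..nk} arbitrary
beta = D @ J[1:]                             # beta_{in}^j for i = 0..k

residual = alpha @ J - beta.sum()            # left-hand side of condition (8)
print(residual)                              # vanishes up to rounding
```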

Let us now examine condition (9). For this, we shall distinguish between the following two cases:

the commutative case, for which

    [g_j, g_\ell](y) \equiv g_j'(y) g_\ell(y) - g_\ell'(y) g_j(y) = 0, \qquad \text{for all } j, \ell = 1, ..., d, \text{ and } y \in \mathbb{R}^m;

the non-commutative case, for which [g_j, g_\ell](y) \ne 0 for at least one pair of different indices j, \ell = 1, ..., d.

Let us now consider the first case, for which (9) results in

    0 = \sum_{\ell,j=1}^{d} g_j' g_\ell \sum_{i=1}^{k} \left( \alpha_i J_{\ell j}^{ni} - \beta_{in}^j J_\ell^{ni} \right)
      = \sum_{j=1}^{d} g_j' g_j \sum_{i=1}^{k} \left( \alpha_i J_{jj}^{ni} - \beta_{in}^j J_j^{ni} \right) + \sum_{\ell < j} g_j' g_\ell \sum_{i=1}^{k} \left( \alpha_i \left( J_{\ell j}^{ni} + J_{j\ell}^{ni} \right) - \beta_{in}^j J_\ell^{ni} - \beta_{in}^\ell J_j^{ni} \right).