Journal of Modern Methods in Numerical Mathematics 8:1-2 (2017), 99–117

The numerical integration of stiff systems using stable multistep multiderivative methods

Yakubu, D. G.^a, Aminu, M.^a, Aminu, A.^b

^a Department of Mathematical Sciences, Abubakar Tafawa Balewa University, Bauchi, Nigeria.
^b Department of Mathematics, Kano University of Science and Technology, Wudil, Kano, Nigeria.

Abstract

In this paper we describe the construction of stable multistep multiderivative methods designed for the continuous numerical integration of stiff systems of initial value problems in ordinary differential equations. The methods are obtained by the multistep collocation technique and are shown to be A-stable and convergent, with large regions of absolute stability. They are suitable for solving stiff systems of initial value problems whose eigenvalues are large and lie close to the imaginary axis. Numerical experiments illustrate the behaviour of the methods and show that they are competitive with stiff integrators known to have strong stability properties. The solution curves obtained are in good agreement with the exact solutions, which demonstrates the reliability and usefulness of the methods.

Keywords: Continuous scheme, multistep collocation formula, multiderivative method, stiff system.
2010 MSC: 65L04, 65L05, 65L06.

1. Introduction

In this paper our primary objective is to construct stable multistep multiderivative methods (MMDMs) designed for the continuous numerical integration of stiff systems of initial value problems in ordinary differential equations (ODEs) of the form

$$\frac{dy}{dx} = f(x, y(x)), \qquad y(x_0) = y_0, \qquad x \in [x_0, T]. \tag{1.1}$$

Here y : [x_0, T] → R^d, f : [x_0, T] × R^d → R^d is assumed to be sufficiently smooth, and y_0 ∈ R^d is the given initial value. Without loss of generality we assume a constant step-size h > 0 and define the grid points along the x-axis by x_n = x_0 + nh, n = 0, 1, 2, ..., N, where Nh = T − x_0, so that the equally spaced points on the integration interval satisfy x_0 < x_1 < x_2 < ... < x_N = T.

Corresponding author. Email addresses: [email protected] (Yakubu, D. G.), [email protected] (Aminu, M.), [email protected] (Aminu, A.)
Received: 18 August 2017. Accepted: 5 October 2017.
http://dx.doi.org/10.20454/jmmnm.2017.1319
2090-8296 © 2017 Modern Science Publishers. All rights reserved.


It is worth noting that in recent years new methods have been developed for the numerical integration of (1.1) that are characterized by good stability properties and that involve the evaluation of second, third and even higher order derivatives; see for example [6, 1, 10, 11]. A further reason for considering higher derivative methods is that many ordinary methods suffer from induced instability when applied to stiff systems of the form (1.1), because the component with large negative real part is not well represented. Instability usually occurs when such a component is represented by the method as an increasing rather than a decreasing function, and this happens with the well known traditional methods (explicit Runge-Kutta methods and linear multistep methods) unless a very small step size is used. Multiderivative methods yield very interesting results when used to solve equations of the type (1.1). According to Lambert [12], the use of higher derivative methods has a long history dating back to Hermite in 1912 (pages 438-439), and some of the pioneering work shares a common ancestry; Obreshkoff, for example, derived discrete quadrature formulae for integrating functions. When extra derivatives are included, one can obtain methods with excellent properties for the numerical integration of ODEs, including high-order accuracy with fewer function evaluations than would otherwise be required. Hairer and Wanner [8] gave the definition of "multistep, multistage and multiderivative methods", which to date has been a broad definition of numerical methods for solving ODEs; it contains Runge-Kutta methods, linear multistep methods, as well as the general linear methods of John Butcher. In this paper our objective is to construct stable multistep multiderivative methods with good stability properties, which require fewer function evaluations and converge rapidly to the exact solution.

Assumption 1.1. In the ODE (1.1) the function f belongs to the C^1 class and therefore satisfies a Lipschitz condition with constant L, that is, the estimate

$$\| f(x, y(x)) - f(x, \tilde{y}(x)) \| \le L \,\| y - \tilde{y} \|$$

holds; L is called the Lipschitz constant.

Definition 1.1. [5] A numerical method is said to be A-stable if its region of absolute stability contains the whole of the left half of the complex plane, Re(hλ) < 0. Equivalently, a numerical method is A-stable if all solutions tend to zero as n → ∞ when the method is applied with a fixed positive step size h to any differential equation of the form dy/dx = λy, where λ is a complex constant with negative real part.

Definition 1.2. A solution y(x) of (1.1) is said to be stable if, given any ε > 0, there is a δ > 0 such that any other solution ŷ(x) of (1.1) satisfying ‖y(a) − ŷ(a)‖ ≤ δ also satisfies ‖y(x) − ŷ(x)‖ ≤ ε for all x > a. The solution y(x) is asymptotically stable if, in addition, ‖y(x) − ŷ(x)‖ → 0 as x → ∞.

Theorem 1.3. [4] If f satisfies a Lipschitz condition with constant L, then the initial value problem y'(x) = f(x, y(x)), y(x_0) = y_0, possesses a unique solution on the interval [x_0, T].
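The instability described above is easy to reproduce. The following small sketch (ours, not from the paper; λ = −1000 and the step size are illustrative assumptions) applies the explicit forward Euler method and the A-stable backward Euler method to the scalar test problem dy/dx = λy:

```python
# Illustrative sketch (not from the paper): stiff scalar test problem y' = lam*y.
lam, h, N = -1000.0, 0.01, 100      # h*lam = -10 lies outside the forward Euler stability interval

y_explicit, y_implicit = 1.0, 1.0
for _ in range(N):
    y_explicit = y_explicit + h * lam * y_explicit   # forward Euler: amplification factor 1 + h*lam
    y_implicit = y_implicit / (1.0 - h * lam)        # backward Euler: amplification factor 1/(1 - h*lam)

print(y_explicit)   # grows like (1 + h*lam)^N = (-9)^100, i.e. the numerical solution blows up
print(y_implicit)   # decays toward 0, as the exact solution e^{lam*x} does
```

Here the explicit method represents the rapidly decaying component as an increasing function unless h is made extremely small, whereas the A-stable implicit method decays for any positive h.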


2. Formulation of the multistep multiderivative methods

To explain our formulation, we consider an approximation to the exact solution of (1.1) by an interpolant of the form

$$y(x) = \alpha_0 + \alpha_1 x + \alpha_2 x^2 + \cdots + \alpha_{p-1} x^{p-1} = \sum_{i=0}^{p-1} \alpha_i x^i, \tag{2.1}$$

which is continuously differentiable to any order required. We set p = q + r + s + t so as to be able to determine the {α_i} in (2.1); here q denotes the number of interpolation points used and r > 0, s > 0, t > 0 are the numbers of distinct collocation points for the first, second and third derivatives, respectively. Interpolating y(x) in (2.1) at the points {x_{n+j}} and collocating y'(x), y''(x) and y'''(x) at the points {x_{n+j}}, we obtain the following system of equations:

$$y(x_{n+j}) = y_{n+j}, \qquad j = 0, 1, 2, \ldots, q-1, \tag{2.2}$$

$$y'(x_{n+j}) = f_{n+j}, \qquad j = 0, 1, 2, \ldots, r-1, \tag{2.3}$$

$$y''(x_{n+j}) = g_{n+j}, \qquad j = 0, 1, 2, \ldots, s-1, \tag{2.4}$$

$$y'''(x_{n+j}) = l_{n+j}, \qquad j = 0, 1, 2, \ldots, t-1. \tag{2.5}$$

In the sense of [15], equations (2.2)-(2.5) can be expressed in matrix form as

$$D\alpha = y, \tag{2.6}$$

where the matrix D and the vectors α and y are defined as follows:

$$D = \begin{pmatrix}
1 & x_n & x_n^2 & x_n^3 & x_n^4 & \cdots & x_n^{p-1} \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_{n+q-1} & x_{n+q-1}^2 & x_{n+q-1}^3 & x_{n+q-1}^4 & \cdots & x_{n+q-1}^{p-1} \\
0 & 1 & 2x_n & 3x_n^2 & 4x_n^3 & \cdots & D' x_n^{p-2} \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 1 & 2x_{n+r-1} & 3x_{n+r-1}^2 & 4x_{n+r-1}^3 & \cdots & D' x_{n+r-1}^{p-2} \\
0 & 0 & 2 & 6x_n & 12x_n^2 & \cdots & D'' x_n^{p-3} \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 2 & 6x_{n+s-1} & 12x_{n+s-1}^2 & \cdots & D'' x_{n+s-1}^{p-3} \\
0 & 0 & 0 & 6 & 24x_n & \cdots & D''' x_n^{p-4} \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots & \vdots \\
0 & 0 & 0 & 6 & 24x_{n+t-1} & \cdots & D''' x_{n+t-1}^{p-4}
\end{pmatrix}, \tag{2.7}$$

$$\alpha = (\alpha_0, \alpha_1, \cdots, \alpha_{p-1})^T, \qquad
y = \big(y_n, \cdots, y_{n+q-1},\; y'_n, \cdots, y'_{n+r-1},\; y''_n, \cdots, y''_{n+s-1},\; y'''_n, \cdots, y'''_{n+t-1}\big)^T,$$

where D' = (p − 1), D'' = (p − 1)(p − 2) and D''' = (p − 1)(p − 2)(p − 3) in (2.7) represent the first, second and third derivative factors respectively and correspond to differentiation with respect to x. Similar to the Vandermonde matrix, D in (2.6) is non-singular. A closed form of the solution of the system (2.6) is


presented, which has been obtained by considering the inverse of the Vandermonde-type matrix D, that is,

$$\alpha = D^{-1} y. \tag{2.8}$$

Re-arranging equations (2.6)-(2.8) yields a multistep collocation formula of the type in [16], which was a generalization of [14]; here we extend it to second and third derivatives as follows:

$$y(x) = \sum_{j=0}^{q-1} \alpha_j(x)\, y_{n+j}
      + h \sum_{j=0}^{r-1} \beta_j(x)\, f_{n+j}
      + h^2 \sum_{j=0}^{s-1} \gamma_j(x)\, g_{n+j}
      + h^3 \sum_{j=0}^{t-1} \omega_j(x)\, l_{n+j}, \tag{2.9}$$

where

$$y_{n+j} = y(x_n + jh), \qquad f_{n+j} = f\big(x_n + jh,\; y(x_n + jh)\big),$$

$$g_{n+j} = \left.\frac{d}{dx} f(x, y(x))\right|_{x = x_{n+j},\, y = y_{n+j}}, \qquad
l_{n+j} = \left.\frac{d^2}{dx^2} f(x, y(x))\right|_{x = x_{n+j},\, y = y_{n+j}}.$$

The continuous coefficients α_j(x), β_j(x), γ_j(x) and ω_j(x) in (2.9) are parameters of the method which are to be determined. They are assumed to be polynomials of degree p − 1 given by

$$\alpha_j(x) = \sum_{i=0}^{p-1} \alpha_{j,i+1}\, x^i, \qquad
h\beta_j(x) = h \sum_{i=0}^{p-1} \beta_{j,i+1}\, x^i,$$

$$h^2\gamma_j(x) = h^2 \sum_{i=0}^{p-1} \gamma_{j,i+1}\, x^i, \qquad
h^3\omega_j(x) = h^3 \sum_{i=0}^{p-1} \omega_{j,i+1}\, x^i. \tag{2.10}$$

The numerical constant coefficients α_{j,i+1}, β_{j,i+1}, γ_{j,i+1} and ω_{j,i+1} in (2.10) are to be determined. They can be obtained from the components of the matrix D^{-1}, that is, provided the identity in (2.11) holds,


$$C = \begin{pmatrix}
\alpha_{0,1} & \cdots & \alpha_{q-1,1} & h\beta_{0,1} & \cdots & h\beta_{r-1,1} & h^2\gamma_{0,1} & \cdots & h^2\gamma_{s-1,1} & h^3\omega_{0,1} & \cdots & h^3\omega_{t-1,1} \\
\alpha_{0,2} & \cdots & \alpha_{q-1,2} & h\beta_{0,2} & \cdots & h\beta_{r-1,2} & h^2\gamma_{0,2} & \cdots & h^2\gamma_{s-1,2} & h^3\omega_{0,2} & \cdots & h^3\omega_{t-1,2} \\
\alpha_{0,3} & \cdots & \alpha_{q-1,3} & h\beta_{0,3} & \cdots & h\beta_{r-1,3} & h^2\gamma_{0,3} & \cdots & h^2\gamma_{s-1,3} & h^3\omega_{0,3} & \cdots & h^3\omega_{t-1,3} \\
\vdots &  & \vdots & \vdots &  & \vdots & \vdots &  & \vdots & \vdots &  & \vdots \\
\alpha_{0,p} & \cdots & \alpha_{q-1,p} & h\beta_{0,p} & \cdots & h\beta_{r-1,p} & h^2\gamma_{0,p} & \cdots & h^2\gamma_{s-1,p} & h^3\omega_{0,p} & \cdots & h^3\omega_{t-1,p}
\end{pmatrix} \equiv D^{-1}. \tag{2.11}$$

The choice C = D^{-1} leads to the determination of the numerical constant coefficients α_{j,i+1}, β_{j,i+1}, γ_{j,i+1} and ω_{j,i+1} in (2.10). The actual evaluation of the matrices C and D is carried out with a computer algebra system, for example Maple. In the multiderivative methods not only the function f(x, y) is evaluated at the internal intermediate points, but in addition the functions Df, D²f, ..., where D is the differential operator. Hence, in addition to the computation of the f-values at the internal stages of the standard linear multistep methods, the third derivative methods involve computing g-values and l-values, where f, g and l are as defined in (2.9).

Theorem 2.1. Let j = 0, 1, 2, ..., r + s + t + 1 and define the basis functions x^i in (2.1) by P_i(x) = x^i, i = 0, 1, ..., p − 1. Then each system of equations given by CD = I has a unique solution, where D and C are as given in (2.6) and (2.11) respectively and I is an identity matrix of appropriate dimension.

Proof. The proof follows an approach similar to that given in Yakubu et al. [17], with some slight modifications. Inserting (2.10) into (2.9) we have

$$y(x) = \sum_{j=0}^{q-1}\sum_{i=0}^{p-1} \alpha_{j,i+1}\, y_{n+j}\, P_i(x)
      + h \sum_{j=0}^{r-1}\sum_{i=0}^{p-1} \beta_{j,i+1}\, f_{n+j}\, P_i(x)
      + h^2 \sum_{j=0}^{s-1}\sum_{i=0}^{p-1} \gamma_{j,i+1}\, g_{n+j}\, P_i(x)
      + h^3 \sum_{j=0}^{t-1}\sum_{i=0}^{p-1} \omega_{j,i+1}\, l_{n+j}\, P_i(x),$$

which is factorized to obtain

$$y(x) = \sum_{i=0}^{p-1} \left( \sum_{j=0}^{q-1} \alpha_{j,i+1}\, y_{n+j}
      + h \sum_{j=0}^{r-1} \beta_{j,i+1}\, f_{n+j}
      + h^2 \sum_{j=0}^{s-1} \gamma_{j,i+1}\, g_{n+j}
      + h^3 \sum_{j=0}^{t-1} \omega_{j,i+1}\, l_{n+j} \right) P_i(x). \tag{2.12}$$

Expressing equation (2.12) in the form

$$y(x) = \sum_{i=0}^{p-1} \varphi_i\, P_i(x), \tag{2.13}$$


where

$$\varphi_i = \sum_{j=0}^{q-1} \alpha_{j,i+1}\, y_{n+j}
      + h \sum_{j=0}^{r-1} \beta_{j,i+1}\, f_{n+j}
      + h^2 \sum_{j=0}^{s-1} \gamma_{j,i+1}\, g_{n+j}
      + h^3 \sum_{j=0}^{t-1} \omega_{j,i+1}\, l_{n+j}
\qquad \text{and} \qquad P_i(x) = x^i, \quad i = 0, 1, \ldots, p-1,$$

we have

$$y(x) = \Bigg(
  \sum_{j=0}^{q-1}\alpha_{j,1} y_{n+j} + h\sum_{j=0}^{r-1}\beta_{j,1} f_{n+j} + h^{2}\sum_{j=0}^{s-1}\gamma_{j,1} g_{n+j} + h^{3}\sum_{j=0}^{t-1}\omega_{j,1} l_{n+j},\;
  \sum_{j=0}^{q-1}\alpha_{j,2} y_{n+j} + h\sum_{j=0}^{r-1}\beta_{j,2} f_{n+j} + h^{2}\sum_{j=0}^{s-1}\gamma_{j,2} g_{n+j} + h^{3}\sum_{j=0}^{t-1}\omega_{j,2} l_{n+j},\;
  \ldots,\;
  \sum_{j=0}^{q-1}\alpha_{j,p} y_{n+j} + h\sum_{j=0}^{r-1}\beta_{j,p} f_{n+j} + h^{2}\sum_{j=0}^{s-1}\gamma_{j,p} g_{n+j} + h^{3}\sum_{j=0}^{t-1}\omega_{j,p} l_{n+j}
\Bigg)\big(P_0(x), P_1(x), \ldots, P_{p-1}(x)\big)^{T}. \tag{2.14}$$

Recall that p = q + r + s + t, so that if we expand (2.14) fully we obtain the proposed continuous scheme

$$y(x) = \big(y_n, \cdots, y_{n+q-1},\, f_n, \cdots, f_{n+r-1},\, g_n, \cdots, g_{n+s-1},\, l_n, \cdots, l_{n+t-1}\big)\; C^{T}\; \big(1, x, \cdots, x^{q+r+s+t-1}\big)^{T}, \tag{2.15}$$

where T denotes the transpose of the matrix C in (2.11) and of the vector (1, x, ..., x^{q+r+s+t−1}) in (2.15).

Remark 2.2. We call the matrix D in (2.6) the multistep interpolation and collocation matrix; it has a very simple structure. From (2.11), the columns of C, which give the continuous coefficients α_j(x), β_j(x), γ_j(x) and ω_j(x), can be obtained from the corresponding columns of D^{-1}. As can be seen, the entries of C are the constant coefficients of the formula given in (2.9) which are to be determined. The matrix C is the solution (output) and D is termed the data (input), which is assumed to be non-singular so that the inverse matrix C exists.
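To make the recipe in (2.6)-(2.11) concrete, the following small sketch (ours, not the authors' Maple code) assembles the interpolation and collocation matrix D for a chosen configuration of points and recovers C = D^{-1}. For readability it assumes x_n = 0 and h = 1, and it uses the point configuration of Section 3.1 below (interpolation at x_n; collocation of y', y'', y''' at x_{n+1} and x_{n+2}):

```python
# Sketch of (2.6)-(2.11), under the assumption x_n = 0 and unit step h = 1.
import sympy as sp

def collocation_matrix(interp_pts, d1_pts, d2_pts, d3_pts):
    """Rows of D: y at interp_pts, y' at d1_pts, y'' at d2_pts, y''' at d3_pts,
    against the monomial basis 1, x, ..., x^(p-1) with p = q + r + s + t."""
    p = len(interp_pts) + len(d1_pts) + len(d2_pts) + len(d3_pts)
    x = sp.symbols('x')
    basis = [x**i for i in range(p)]
    rows = []
    for order, pts in enumerate([interp_pts, d1_pts, d2_pts, d3_pts]):
        for pt in pts:
            rows.append([(b if order == 0 else sp.diff(b, x, order)).subs(x, pt)
                         for b in basis])
    return sp.Matrix(rows)

# Configuration of Section 3.1: q = 1, and y', y'', y''' collocated at x_{n+1}, x_{n+2}.
D = collocation_matrix([0], [1, 2], [1, 2], [1, 2])
C = D.inv()      # identity (2.11): columns of C carry the constant coefficients
print(C)
```

Up to the powers of h absorbed by the normalization, the columns of the printed C are the constant coefficients α_{j,i+1}, β_{j,i+1}, γ_{j,i+1}, ω_{j,i+1} of (2.10).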

3. Specification of the methods

3.1 A sixth order stable multistep multiderivative method

In this section we determine the coefficients of the first stable multistep multiderivative method. We introduce η = (x − x_n), which we shall use in the formulation of the continuous scheme. To obtain the coefficients of the first method, we use the procedure of Theorem 2.1 and expand the right-hand side of y(x) in (2.9)


as a Taylor series about x, collecting powers of h, to obtain the continuous scheme in the form of (2.15) as follows:

$$y(x) = \alpha_0(x) y_n + h\big[\beta_1(x) f_{n+1} + \beta_2(x) f_{n+2}\big]
      + h^2\big[\gamma_1(x) g_{n+1} + \gamma_2(x) g_{n+2}\big]
      + h^3\big[\omega_1(x) l_{n+1} + \omega_2(x) l_{n+2}\big], \tag{3.1}$$

where α_0(x) = 1 and, with η = x − x_n,

$$h\beta_1(x) = \frac{-2\eta^6 + 18h\eta^5 - 65h^2\eta^4 + 120h^3\eta^3 - 120h^4\eta^2 + 64h^5\eta}{2h^5},$$
$$h\beta_2(x) = \frac{2\eta^6 - 18h\eta^5 + 65h^2\eta^4 - 120h^3\eta^3 + 120h^4\eta^2 - 62h^5\eta}{2h^5},$$
$$h^2\gamma_1(x) = \frac{-5\eta^6 + 46h\eta^5 - 170h^2\eta^4 + 320h^3\eta^3 - 320h^4\eta^2 + 160h^5\eta}{10h^4},$$
$$h^2\gamma_2(x) = \frac{-5\eta^6 + 44h\eta^5 - 155h^2\eta^4 + 280h^3\eta^3 - 275h^4\eta^2 + 140h^5\eta}{10h^4},$$
$$h^3\omega_1(x) = \frac{-10\eta^6 + 96h\eta^5 - 375h^2\eta^4 + 760h^3\eta^3 - 840h^4\eta^2 + 480h^5\eta}{120h^3},$$
$$h^3\omega_2(x) = \frac{10\eta^6 - 84h\eta^5 + 285h^2\eta^4 - 500h^3\eta^3 + 480h^4\eta^2 - 240h^5\eta}{120h^3}.$$

Evaluating the continuous scheme y(x) in (3.1) at the points x = x_{n+1} and x = x_{n+2}, we obtain the first stable multistep multiderivative method as

$$y_{n+1} = y_n + \frac{h}{120}\big[900 f_{n+1} - 780 f_{n+2}\big]
              + \frac{h^2}{120}\big[372 g_{n+1} + 348 g_{n+2}\big]
              + \frac{h^3}{120}\big[111 l_{n+1} - 49 l_{n+2}\big],$$

$$y_{n+2} = y_n + \frac{h}{15}\big[120 f_{n+1} - 90 f_{n+2}\big]
              + \frac{h^2}{15}\big[48 g_{n+1} + 42 g_{n+2}\big]
              + \frac{h^3}{15}\big[14 l_{n+1} - 6 l_{n+2}\big]. \tag{3.2}$$
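As a quick consistency check (our own sketch, not part of the paper), evaluating the continuous coefficients of (3.1) at η = h and η = 2h with a computer algebra system reproduces the discrete weights of (3.2), e.g. 15h/2 = 900h/120 and 8h = 120h/15 for the f_{n+1} terms:

```python
# Sketch: evaluate the continuous coefficients of (3.1) at eta = h and eta = 2h.
import sympy as sp

eta, h = sp.symbols('eta h', positive=True)
coeffs = {
    'h*beta1'  : (-2*eta**6 + 18*h*eta**5 - 65*h**2*eta**4 + 120*h**3*eta**3 - 120*h**4*eta**2 + 64*h**5*eta) / (2*h**5),
    'h*beta2'  : ( 2*eta**6 - 18*h*eta**5 + 65*h**2*eta**4 - 120*h**3*eta**3 + 120*h**4*eta**2 - 62*h**5*eta) / (2*h**5),
    'h2*gamma1': (-5*eta**6 + 46*h*eta**5 - 170*h**2*eta**4 + 320*h**3*eta**3 - 320*h**4*eta**2 + 160*h**5*eta) / (10*h**4),
    'h2*gamma2': (-5*eta**6 + 44*h*eta**5 - 155*h**2*eta**4 + 280*h**3*eta**3 - 275*h**4*eta**2 + 140*h**5*eta) / (10*h**4),
    'h3*omega1': (-10*eta**6 + 96*h*eta**5 - 375*h**2*eta**4 + 760*h**3*eta**3 - 840*h**4*eta**2 + 480*h**5*eta) / (120*h**3),
    'h3*omega2': ( 10*eta**6 - 84*h*eta**5 + 285*h**2*eta**4 - 500*h**3*eta**3 + 480*h**4*eta**2 - 240*h**5*eta) / (120*h**3),
}
for name, expr in coeffs.items():
    vals = [sp.simplify(expr.subs(eta, k*h)) for k in (1, 2)]
    print(name, vals)   # e.g. h*beta1 -> [15*h/2, 8*h], matching 900/120 and 120/15 in (3.2)
```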

3.2 A twelfth order stable multistep multiderivative method

In the second stable multistep multiderivative method we include the two end points of the integration interval as collocation points, in addition to the interior collocation points, to obtain

$$y(x) = \alpha_0(x) y_n
      + h\big[\beta_0(x) f_n + \beta_1(x) f_{n+1} + \beta_2(x) f_{n+2} + \beta_3(x) f_{n+3}\big]
      + h^2\big[\gamma_0(x) g_n + \gamma_1(x) g_{n+1} + \gamma_2(x) g_{n+2} + \gamma_3(x) g_{n+3}\big]
      + h^3\big[\omega_0(x) l_n + \omega_1(x) l_{n+1} + \omega_2(x) l_{n+2} + \omega_3(x) l_{n+3}\big], \tag{3.3}$$

where α_0(x) = 1 and, with η = x − x_n,

$$h\beta_0(x) = \frac{-15862\eta^{12} + 305928h\eta^{11} - 2575188h^2\eta^{10} + 12390840h^3\eta^{9} - 37435167h^4\eta^{8} + 73216440h^5\eta^{7} - 91544068h^6\eta^{6} + 68196744h^7\eta^{5} - 24022152h^8\eta^{4} + 2395008h^{11}\eta}{2395008h^{11}},$$
$$h\beta_1(x) = \frac{1386\eta^{12} - 25200h\eta^{11} + 197736h^2\eta^{10} - 874720h^3\eta^{9} + 2388771h^4\eta^{8} - 4136880h^5\eta^{7} + 4468464h^6\eta^{6} - 2794176h^7\eta^{5} + 798336h^8\eta^{4}}{29568h^{11}},$$
$$h\beta_2(x) = \frac{-1386\eta^{12} + 24696h\eta^{11} - 189420h^2\eta^{10} + 816200h^3\eta^{9} - 2160081h^4\eta^{8} + 3597000h^5\eta^{7} - 3689532h^6\eta^{6} + 2145528h^7\eta^{5} - 548856h^8\eta^{4}}{29568h^{11}},$$
$$h\beta_3(x) = \frac{15862\eta^{12} - 265104h\eta^{11} + 1901592h^2\eta^{10} - 7650720h^3\eta^{9} + 18911277h^4\eta^{8} - 29486160h^5\eta^{7} + 28450576h^6\eta^{6} - 15656256h^7\eta^{5} + 3814272h^8\eta^{4}}{2395008h^{11}},$$
$$h^2\gamma_0(x) = \frac{-8470\eta^{12} + 164640h\eta^{11} - 1399860h^2\eta^{10} + 6825280h^3\eta^{9} - 20994435h^4\eta^{8} + 42118560h^5\eta^{7} - 54702340h^6\eta^{6} + 43346688h^7\eta^{5} - 17130960h^8\eta^{4} + 1995840h^{10}\eta^{2}}{3991680h^{10}},$$
$$h^2\gamma_1(x) = \frac{2310\eta^{12} - 41160h\eta^{11} + 314160h^2\eta^{10} - 1336720h^3\eta^{9} + 3449985h^4\eta^{8} - 5487240h^5\eta^{7} + 5183640h^6\eta^{6} - 2594592h^7\eta^{5} + 498960h^8\eta^{4}}{147840h^{10}},$$
$$h^2\gamma_2(x) = \frac{2310\eta^{12} - 42000h\eta^{11} + 328020h^2\eta^{10} - 1435280h^3\eta^{9} + 3844995h^4\eta^{8} - 6460080h^5\eta^{7} + 6666660h^6\eta^{6} - 3891888h^7\eta^{5} + 997920h^8\eta^{4}}{147840h^{10}},$$
$$h^2\gamma_3(x) = \frac{-8470\eta^{12} + 140280h\eta^{11} - 997920h^2\eta^{10} + 3985520h^3\eta^{9} - 9788625h^4\eta^{8} + 15178680h^5\eta^{7} - 14577640h^6\eta^{6} + 7990752h^7\eta^{5} - 1940400h^8\eta^{4}}{3991680h^{10}},$$
$$h^3\omega_0(x) = \frac{-770\eta^{12} + 15120h\eta^{11} - 130284h^2\eta^{10} + 646800h^3\eta^{9} - 2040885h^4\eta^{8} + 4253040h^5\eta^{7} - 5875100h^6\eta^{6} + 5222448h^7\eta^{5} - 2744280h^8\eta^{4} + 665280h^{10}\eta^{2}}{3991680h^{9}},$$
$$h^3\omega_1(x) = \frac{2310\eta^{12} - 42840h\eta^{11} + 343728h^2\eta^{10} - 1558480h^3\eta^{9} + 4369365h^4\eta^{8} - 7765560h^5\eta^{7} + 8565480h^6\eta^{6} - 5388768h^7\eta^{5} + 1496880h^8\eta^{4}}{443520h^{9}},$$
$$h^3\omega_2(x) = \frac{-2310\eta^{12} + 40320h\eta^{11} - 302148h^2\eta^{10} + 1268960h^3\eta^{9} - 3267495h^4\eta^{8} + 5290560h^5\eta^{7} - 5280660h^6\eta^{6} + 2993760h^7\eta^{5} - 748440h^8\eta^{4}}{443520h^{9}},$$
$$h^3\omega_3(x) = \frac{770\eta^{12} - 12600h\eta^{11} + 88704h^2\eta^{10} - 351120h^3\eta^{9} + 855855h^4\eta^{8} - 1318680h^5\eta^{7} + 1259720h^6\eta^{6} - 687456h^7\eta^{5} + 166320h^8\eta^{4}}{3991680h^{9}}.$$

Evaluating the continuous scheme y(x) in (3.3) at the points x = x_{n+1}, x_{n+2} and x_{n+3}, we obtain the second stable multistep multiderivative method as follows:

$$y_{n+1} = y_n + \frac{h}{23950080}\big[9125230 f_n + 19210770 f_{n+1} - 4739310 f_{n+2} + 353390 f_{n+3}\big]
              + \frac{h^2}{23950080}\big[1289658 g_n - 1726434 g_{n+1} + 1726434 g_{n+2} - 106938 g_{n+3}\big]
              + \frac{h^3}{23950080}\big[68214 l_n + 1194210 l_{n+1} - 402462 l_{n+2} + 9078 l_{n+3}\big],$$

$$y_{n+2} = y_n + \frac{h}{187110}\big[70310 f_n + 244620 f_{n+1} + 57510 f_{n+2} + 1780 f_{n+3}\big]
              + \frac{h^2}{187110}\big[9792 g_n + 5184 g_{n+1} - 5184 g_{n+2} - 552 g_{n+3}\big]
              + \frac{h^3}{187110}\big[510 l_n + 11448 l_{n+1} - 1026 l_{n+2} + 48 l_{n+3}\big],$$

$$y_{n+3} = y_n + \frac{h}{49280}\big[19245 f_n + 54675 f_{n+1} + 54675 f_{n+2} + 19245 f_{n+3}\big]
              + \frac{h^2}{49280}\big[2799 g_n - 2187 g_{n+1} + 2187 g_{n+2} - 2799 g_{n+3}\big]
              + \frac{h^3}{49280}\big[153 l_n + 2187 l_{n+1} + 2187 l_{n+2} + 153 l_{n+3}\big]. \tag{3.4}$$


4. Analysis of the properties of the stable multistep multiderivative methods

4.1 Order, consistency, zero-stability and convergence of the MMD methods

With the multistep collocation formula (2.9) we associate the linear difference operator ℓ defined by

$$\ell[y(x); h] = \sum_{j=0}^{q} \alpha_j(x)\, y(x + jh)
              + h \sum_{j=0}^{r} \beta_j(x)\, y'(x + jh)
              + h^2 \sum_{j=0}^{s} \gamma_j(x)\, y''(x + jh)
              + h^3 \sum_{j=0}^{t} \omega_j(x)\, y'''(x + jh), \tag{4.1}$$

where y(x) is an arbitrary function, continuously differentiable on [x_0, T]. Following [12], we can expand the terms in (4.1) in a Taylor series about the point x to obtain the expression

$$\ell[y(x); h] = C_0\, y(x) + C_1\, h y'(x) + C_2\, h^2 y''(x) + \cdots + C_p\, h^p y^{(p)}(x) + \cdots, \tag{4.2}$$

where the constant coefficients C_p, p = 0, 1, 2, ..., are given as follows:

$$C_0 = \sum_{j=0}^{q} \alpha_j,$$
$$C_1 = \sum_{j=1}^{q} j\,\alpha_j,$$
$$C_2 = \frac{1}{2!}\sum_{j=1}^{q} j^2 \alpha_j - \sum_{j=0}^{r} \beta_j,$$
$$C_3 = \frac{1}{3!}\sum_{j=1}^{q} j^3 \alpha_j - \sum_{j=1}^{r} j\,\beta_j - \sum_{j=0}^{s} \gamma_j,$$
$$\vdots$$
$$C_p = \frac{1}{p!}\sum_{j=1}^{q} j^{p}\alpha_j
     - \frac{1}{(p-1)!}\sum_{j=1}^{r} j^{p-1}\beta_j
     - \frac{1}{(p-2)!}\sum_{j=1}^{s} j^{p-2}\gamma_j
     - \frac{1}{(p-3)!}\sum_{j=0}^{t} j^{p-3}\omega_j, \qquad p = 4, 5, \cdots.$$

According to [12], the multistep collocation formula (2.9) has order p if

$$\ell[y(x); h] = O\!\left(h^{p+1}\right), \qquad C_0 = C_1 = \cdots = C_p = 0, \qquad C_{p+1} \neq 0.$$

Therefore C_{p+1} is the error constant and C_{p+1} h^{p+1} y^{(p+1)}(x) is the principal local truncation error. The order and error constants calculated for the constructed methods are presented in Table 1. It is clear from the table that the stable multistep multiderivative methods are of high order and hence accurate.

Definition 4.1. [12] A linear multistep method is said to be consistent if the order of the method is greater than or equal to one, that is, if p ≥ 1; equivalently, if (i) ρ(1) = 0 and (ii) ρ'(1) = σ(1), where ρ(z) and σ(z) are respectively the first and second characteristic polynomials. From Table 1 and Definition 4.1 we can attest that the stable multistep multiderivative methods are consistent.


Table 1: Order and error constants of the stable MMD methods.

Method          Order      Error constant
Method (3.2)    p = 6      C_7  =  2.48809 × 10^-1
                p = 6      C_7  =  3.09523 × 10^-2
Method (3.4)    p = 12     C_13 = -2.10780 × 10^-2
                p = 12     C_13 = -1.34032 × 10^-4
                p = 12     C_13 = -7.86713 × 10^-5
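The order claims in Table 1 can also be checked empirically. The following sketch (ours, not from the paper) applies method (3.2) to the scalar linear test problem y' = λy, for which g = λ²y and l = λ³y, so each block step reduces to a 2 × 2 linear solve; it assumes that the value y_{n+2} produced by one block starts the next block, and the problem data λ = −5, T = 1 are illustrative assumptions. Halving the step size should reduce the global error by roughly 2^6, consistent with order p = 6.

```python
# Empirical order check of method (3.2) on y' = lam*y (a sketch, not the paper's code).
import numpy as np

def step_block(y_n, z):
    """Advance one block (two steps) of method (3.2) for y' = lam*y with z = lam*h."""
    A = np.array([
        [1 - 900/120*z - 372/120*z**2 - 111/120*z**3,   780/120*z - 348/120*z**2 + 49/120*z**3],
        [   -120/15*z  -  48/15*z**2  -  14/15*z**3,  1 +  90/15*z -  42/15*z**2 +  6/15*z**3],
    ])
    y1, y2 = np.linalg.solve(A, np.array([y_n, y_n]))
    return y2                                   # y_{n+2} starts the next block

lam, T = -5.0, 1.0
errors = []
for n_blocks in (10, 20, 40, 80):
    h = T / (2 * n_blocks)
    y = 1.0
    for _ in range(n_blocks):
        y = step_block(y, lam * h)
    errors.append(abs(y - np.exp(lam * T)))

orders = [np.log2(errors[i] / errors[i + 1]) for i in range(len(errors) - 1)]
print(errors)
print(orders)    # observed orders should approach 6, consistent with Table 1
```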

Definition 4.2. [12] A linear multistep method is said to be zero-stable if the roots of

$$\rho(\lambda) = \det\!\left[\sum_{i=0}^{k} A^{(i)} \lambda^{k-i}\right] = 0$$

satisfy |λ_j| ≤ 1, j = 1, 2, ..., k, and for those roots with |λ_j| = 1 the multiplicity does not exceed 2. Based on Definition 4.2, the newly constructed stable multistep multiderivative methods are zero-stable.

Definition 4.3. [5] The necessary and sufficient conditions for a linear multistep method to be convergent are that it be consistent and zero-stable. From Definitions 4.1 and 4.2 the stable multistep multiderivative methods are therefore convergent.

4.2 Regions of absolute stability of the stable multistep multiderivative methods

Linear stability is a very important aspect of the design of algorithms for solving ordinary differential equations. For this reason we consider, for the multistep multiderivative methods, the test equation

$$\frac{dy}{dx} = \lambda y, \qquad \lambda \in \mathbb{C}, \quad \operatorname{Re}\lambda < 0, \tag{4.3}$$

with a fixed positive step size h > 0. Since the multistep collocation methods contain the second derivative g(x, y) as well as the third derivative l(x, y), it is natural to take g(x, y) = λ²y and l(x, y) = λ³y. Reformulating (3.2) and (3.4) as general linear methods (see Burrage and Butcher [2]) we can easily plot the regions of absolute stability of the new methods. Hence we use the notation introduced by Butcher [4], where a general linear method is represented by a partitioned (s + r) × (s + r) matrix containing A, U, B and V; for convenience we replace the matrices U with C and V with D. The coefficients of the matrices A, C, B and D indicate the relationship between the various numerical quantities that arise in the computation of the stability regions. The elements of A, C, B and D are substituted into the recurrence

$$y^{[n]} = M(z)\, y^{[n-1]}, \qquad n = 1, 2, 3, \cdots, N-1, \qquad z = \lambda h,$$

where the stability matrix is M(z) = D + zB(I − zA)^{-1}C, and the stability polynomial of the method is obtained as

$$\rho(\eta, z) = \det\!\big(\eta\,(A - Cz - D_1 z^2 - D_2 z^3) - B\big).$$


The region of absolute stability of the method is defined as

$$\mathcal{R} = \{\, z \in \mathbb{C} : \text{every root } \eta \text{ of } \rho(\eta, z) = 0 \text{ satisfies } |\eta| \le 1 \,\}.$$

Computing the stability function gives the stability polynomial of the method, which is plotted to produce the graph of the region of absolute stability of each method, as shown in Figure 1.

Figure 1: Regions of absolute stability of Method (3.2) and Method (3.4), respectively.

Remark 4.4. In the stable multistep multiderivative methods we added the matrices D_1 and D_2, obtained from the coefficients of h² and h³ respectively, to the stability matrices A, C, B and D, which enabled us to plot the regions of absolute stability of the new methods. The stable multistep multiderivative methods are A-stable, since their regions of absolute stability consist of the complex plane outside the enclosed curves shown in Figure 1.
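The A-stability claim can also be probed numerically without forming the full general linear method matrices. On the test equation (4.3), method (3.2) advances one block by mapping y_n to y_{n+2} = R(z) y_n with z = λh, so |R(z)| ≤ 1 characterizes absolute stability of the block iteration. The following sketch (our own check, under the assumption that only y_{n+2} is propagated to the next block) scans the left half-plane:

```python
# Sketch: scan |R(z)| for method (3.2) on the test equation (4.3), z = lam*h.
import numpy as np

def amplification(z):
    A = np.array([
        [1 - 900/120*z - 372/120*z**2 - 111/120*z**3,   780/120*z - 348/120*z**2 + 49/120*z**3],
        [   -120/15*z  -  48/15*z**2  -  14/15*z**3,  1 +  90/15*z -  42/15*z**2 +  6/15*z**3],
    ], dtype=complex)
    y1, y2 = np.linalg.solve(A, np.array([1.0, 1.0], dtype=complex))
    return y2                                  # R(z): the y_{n+2} produced from y_n = 1

xs = np.linspace(-20.0, 20.0, 201)
ys = np.linspace(-20.0, 20.0, 201)
unstable_left = 0
for re in xs:
    for im in ys:
        if re < 0 and abs(amplification(complex(re, im))) > 1.0:
            unstable_left += 1
print(unstable_left)   # a count of 0 on this grid is consistent with the A-stability claim
```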

5. Experimental Results

Numerical results on a wide selection of problems are summarized for the newly derived methods, to see how well they compare with other recently derived third derivative (TD) and second derivative (SD) methods which are also accurate for the numerical solution of stiff systems of ordinary differential equations. We report the numerical results obtained from the application of the new stable multistep multiderivative methods, presented side by side in tables in the formalism of [3]. In the presentation we use nfe to denote the number of function evaluations and Ext to indicate the exact solutions in the solution curves, for comparison purposes.

Example 1: We consider the stiff system

$$y_1'(x) = -10 y_1(x) + \beta y_2(x), \qquad y_1(0) = 1,$$
$$y_2'(x) = -\beta y_1(x) - 10 y_2(x), \qquad y_2(0) = 1,$$
$$y_3'(x) = -\gamma y_3(x), \qquad y_3(0) = 1,$$

where β = 21 and γ = 10. The exact solution of this example is given by

$$y_1(x) = e^{-\gamma x}\big(\cos(\beta x) + \sin(\beta x)\big), \qquad
y_2(x) = e^{-\gamma x}\big(\cos(\beta x) - \sin(\beta x)\big), \qquad
y_3(x) = e^{-\gamma x}.$$

We solve this problem using the new stable MMD methods on the interval [0, 1]; the results obtained are presented side by side in Table 2 and the solution curves are displayed in Figure 2.
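For a linear constant-coefficient system such as Example 1 one has g = A²y and l = A³y, so each block step of method (3.2) amounts to a single linear solve. The following sketch (ours; the driver is an illustrative assumption, with nfe = 500 taken from Figure 2) applies method (3.2) to Example 1 and compares with the exact solution at x = 1:

```python
# Sketch: method (3.2) applied to the linear system of Example 1.
import numpy as np

A = np.array([[-10.0,  21.0,   0.0],
              [-21.0, -10.0,   0.0],
              [  0.0,   0.0, -10.0]])      # Example 1 with beta = 21, gamma = 10
A2, A3 = A @ A, A @ A @ A                  # g = A^2 y and l = A^3 y for a linear system
I = np.eye(3)

def block_step(y, h):
    """One block of method (3.2): solve for (y_{n+1}, y_{n+2}) and return y_{n+2}."""
    B11 = I - h*900/120*A - h**2*372/120*A2 - h**3*111/120*A3
    B12 =     h*780/120*A - h**2*348/120*A2 + h**3* 49/120*A3
    B21 =   - h*120/15 *A - h**2* 48/15 *A2 - h**3* 14/15 *A3
    B22 = I + h* 90/15 *A - h**2* 42/15 *A2 + h**3*  6/15 *A3
    M = np.block([[B11, B12], [B21, B22]])
    sol = np.linalg.solve(M, np.concatenate([y, y]))
    return sol[3:]                          # y_{n+2} starts the next block

h, steps = 1.0 / 500, 500                   # nfe = 500 on [0, 1], as in Figure 2
y = np.array([1.0, 1.0, 1.0])
for _ in range(steps // 2):                 # each block advances two steps of size h
    y = block_step(y, h)

x = 1.0
exact = np.array([np.exp(-10*x) * (np.cos(21*x) + np.sin(21*x)),
                  np.exp(-10*x) * (np.cos(21*x) - np.sin(21*x)),
                  np.exp(-10*x)])
print(np.abs(y - exact))                    # absolute errors at x = 1, cf. Table 2
```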


Table 2: Absolute errors in the numerical integration of Example 1.

x      y_i    Method (3.2)                    Method (3.4)
5      y_1    2.80886425230165 × 10^-12       2.22044604925031 × 10^-16
       y_2    4.18209911146050 × 10^-12       1.11022302462516 × 10^-16
       y_3    9.54791801177635 × 10^-15       0
50     y_1    2.41487108088023 × 10^-11       0
       y_2    6.641798222551754 × 10^-12      3.33066907387547 × 10^-16
       y_3    4.80171458150380 × 10^-14       2.22044604925031 × 10^-16
250    y_1    6.71109001704195 × 10^-13       5.20417042793042 × 10^-18
       y_2    2.21295464591931 × 10^-12       5.63785129692462 × 10^-18
       y_3    4.43742265154867 × 10^-15       2.60208521396521 × 10^-18
500    y_1    3.04139648036640 × 10^-14       1.69406589450860 × 10^-21
       y_2    5.53718312520934 × 10^-15       4.06575814682064 × 10^-20
       y_3    5.92787537806450 × 10^-17       5.42101086242752 × 10^-20

Figure 2: Solution curves using method (3.2) and method (3.4), with nfe = 500.


Table 3: Absolute errors in the numerical integration of Example 2.

x      y_i    Method (3.2)                    Method (3.4)
5      y_1    4.92739848922952 × 10^-10       2.22044604925031 × 10^-16
       y_2    4.92739737900649 × 10^-10       1.11022302462516 × 10^-16
       y_3    1.07522860393061 × 10^-7        2.22044604925031 × 10^-16
50     y_1    6.64531762950560 × 10^-11       1.11022302462516 × 10^-16
       y_2    6.64525986321385 × 10^-11       2.60208521396521 × 10^-18
       y_3    9.26261619488278 × 10^-11       1.73472347597681 × 10^-18
250    y_1    7.77156117237610 × 10^-16       0
       y_2    6.68681061869404 × 10^-19       4.52364397489937 × 10^-26
       y_3    6.68681068331753 × 10^-19       4.52364397489937 × 10^-26
500    y_1    6.66133814775094 × 10^-16       0
       y_2    1.77010873837006 × 10^-29       1.38708333397030 × 10^-36
       y_3    1.77010873837006 × 10^-29       1.34006355993741 × 10^-36

Example 2: This example is a linear stiff problem with the corresponding initial conditions,

$$\begin{pmatrix} y_1'(x) \\ y_2'(x) \\ y_3'(x) \end{pmatrix}
= \begin{pmatrix} -0.1 & -49.9 & 0 \\ 0 & -50 & 0 \\ 0 & 70 & -120 \end{pmatrix}
\begin{pmatrix} y_1(x) \\ y_2(x) \\ y_3(x) \end{pmatrix}, \qquad
\begin{pmatrix} y_1(0) \\ y_2(0) \\ y_3(0) \end{pmatrix}
= \begin{pmatrix} 2 \\ 1 \\ 2 \end{pmatrix}.$$

The exact solution is

$$\begin{pmatrix} y_1(x) \\ y_2(x) \\ y_3(x) \end{pmatrix}
= \begin{pmatrix} \exp(-0.1x) + \exp(-50x) \\ \exp(-50x) \\ \exp(-50x) + \exp(-120x) \end{pmatrix}.$$

We solve this problem on the interval [0, 1]; the results obtained are presented in Table 3 and the solution curves are shown in Figure 3.

Example 3: The third example is a highly stiff problem, see Lambert [13, 9],

$$\begin{pmatrix} y_1'(x) \\ y_2'(x) \\ y_3'(x) \end{pmatrix}
= \begin{pmatrix} -21 & 19 & -20 \\ 19 & -21 & 20 \\ 40 & -40 & -40 \end{pmatrix}
\begin{pmatrix} y_1(x) \\ y_2(x) \\ y_3(x) \end{pmatrix}, \qquad
\begin{pmatrix} y_1(0) \\ y_2(0) \\ y_3(0) \end{pmatrix}
= \begin{pmatrix} 1 \\ 0 \\ -1 \end{pmatrix}.$$

Figure 3: Solution curves using method (3.2) and method (3.4), with nfe = 500.

The exact solution is

$$\begin{pmatrix} y_1(x) \\ y_2(x) \\ y_3(x) \end{pmatrix}
= \begin{pmatrix} 0.5\exp(-2x) + 0.5\exp(-40x)(\cos 40x + \sin 40x) \\
                  0.5\exp(-2x) - 0.5\exp(-40x)(\cos 40x + \sin 40x) \\
                  -\exp(-40x)(\cos 40x - \sin 40x) \end{pmatrix}.$$

We solve the problem on [0, 1]; the computed results are shown in Table 4, while the solution curves obtained are displayed in Figure 4. We compare the numerical solutions obtained with the results from the conventional Radau Runge-Kutta method of order 5 (Radau IIA), which is widely recognized to be among the most efficient methods for stiff systems; see [4], page 226, and [9], page 74, Table 5.6.

Figure 4: Solution curves using Radau IIA [4, 9] and method (3.4), with nfe = 500.

Example 4: We consider another linear problem, which is referred to by some eminent authors (see [7]) as a troublesome problem for some existing methods, because some of the eigenvalues lie close to the imaginary axis, a case where some stiff integrators are known to be inefficient. The reference solutions at the end point of the integration interval [0, 1] are shown in Table 5, while the solution curves are displayed in Figure 5. Only the first four components {y_1, y_2, y_3, y_4} are shown in the table of values.


Table 4: Absolute errors in the numerical integration of Example 3.

x      y_i    Radau IIA [4, 9]                Method (3.4)
5      y_1    2.60451549216612 × 10^-10       2.22044604925031 × 10^-16
       y_2    2.60451687994490 × 10^-10       2.77555756156289 × 10^-17
       y_3    1.07396436188623 × 10^-9        5.55111512312578 × 10^-17
50     y_1    2.78099765438355 × 10^-12       5.55111512312578 × 10^-17
       y_2    2.78099765438355 × 10^-12       0
       y_3    3.96613819020217 × 10^-10       1.60224752302623 × 10^-17
250    y_1    8.32667268468867 × 10^-17       2.77555756156289 × 10^-17
       y_2    1.38777878078145 × 10^-16       0
       y_3    1.99011769660684 × 10^-16       7.94314865659918 × 10^-19
500    y_1    1.11022302462516 × 10^-16       1.38777878078145 × 10^-17
       y_2    1.11022302462516 × 10^-16       1.38777878078145 × 10^-17
       y_3    6.23470886821874 × 10^-18       6.15837525335804 × 10^-18

Table 5: Absolute errors in the numerical integration of Example 4.

x      y_i    Yakubu and Markus [18]          Method (3.4)
5      y_1    2.22044604925031 × 10^-16       4.44089209850063 × 10^-16
       y_2    1.74166236988071 × 10^-15       2.56739074444567 × 10^-16
       y_3    3.33066907387547 × 10^-16       0
       y_4    2.22044604925031 × 10^-16       0
50     y_1    1.666533453693773 × 10^-15      3.33066907387547 × 10^-16
       y_2    8.24340595784179 × 10^-15       1.94289029309402 × 10^-16
       y_3    2.55351295663786 × 10^-15       4.44089209850063 × 10^-16
       y_4    3.77475828372553 × 10^-15       0
250    y_1    5.86336534880161 × 10^-16       2.42861286636753 × 10^-17
       y_2    4.18068357710411 × 10^-16       1.73472347597681 × 10^-18
       y_3    3.63598040564739 × 10^-15       2.77555756156289 × 10^-17
       y_4    5.10702591327572 × 10^-15       8.88178419700125 × 10^-16
500    y_1    8.73121562029733 × 10^-18       5.99699326656045 × 10^-19
       y_2    4.43167638003450 × 10^-18       4.06575814682064 × 10^-20
       y_3    8.15320033709099 × 10^-16       2.42861286636753 × 10^-17
       y_4    1.66533453693773 × 10^-16       5.55111512312578 × 10^-16

                 

$$\begin{pmatrix} y_1'(x) \\ y_2'(x) \\ y_3'(x) \\ y_4'(x) \\ y_5'(x) \\ y_6'(x) \end{pmatrix}
= \begin{pmatrix}
-10 & 100 & 0 & 0 & 0 & 0 \\
-100 & -10 & 0 & 0 & 0 & 0 \\
0 & 0 & -4 & 0 & 0 & 0 \\
0 & 0 & 0 & -1 & 0 & 0 \\
0 & 0 & 0 & 0 & -0.5 & 0 \\
0 & 0 & 0 & 0 & 0 & -0.1
\end{pmatrix}
\begin{pmatrix} y_1(x) \\ y_2(x) \\ y_3(x) \\ y_4(x) \\ y_5(x) \\ y_6(x) \end{pmatrix}, \qquad
\begin{pmatrix} y_1(0) \\ y_2(0) \\ y_3(0) \\ y_4(0) \\ y_5(0) \\ y_6(0) \end{pmatrix}
= \begin{pmatrix} 1 \\ 1 \\ 1 \\ 1 \\ 1 \\ 1 \end{pmatrix}.$$

Figure 5: Solution curves using Yakubu and Markus [18] and method (3.4), with nfe = 500.

6. Concluding remarks

On the basis of the numerical experiments, the methods derived in this paper are efficient, with enhanced stability characteristics suitable for the numerical integration of stiff systems of ordinary differential equations. The methods provide an efficient way of integrating stiff systems of initial value problems when the third derivative terms are cheap to evaluate. We have presented two new stable methods, of orders six and twelve, intended for the accurate integration of stiff systems of equations, and we have compared the numerical solutions obtained with second and third derivative methods which are also accurate for stiff systems. The solution curves are compared graphically with the exact solutions in the figures. The computations associated with the examples in this paper were performed using Matlab.

7. Acknowledgment

The first author gratefully acknowledges the financial support of the Tertiary Education Trust Fund (TETFUND), Ref. No. TETF/DAST&D.D/6.13/NOM-CA/BAS&BNAS.

References

[1] O. A. Akinfenwa, B. Akinnukawe, and S. B. Mudasiru, A family of continuous third derivative block methods for solving stiff systems of first order ordinary differential equations, Journal of the Nigerian Mathematical Society 34 (2015), no. 2, 160-168.
[2] Kevin Burrage and John Charles Butcher, Non-linear stability of a general class of differential equation methods, BIT Numerical Mathematics 20 (1980), no. 2, 185-203.
[3] John C. Butcher and Gholamreza Hojjati, Second derivative methods with RK stability, Numerical Algorithms 40 (2005), no. 4, 415-429.
[4] John Charles Butcher, Numerical methods for ordinary differential equations, John Wiley & Sons, 2008.
[5] Germund G. Dahlquist, A special stability problem for linear multistep methods, BIT Numerical Mathematics 3 (1963), no. 1, 27-43.
[6] Ali K. Ezzeddine and Gholamreza Hojjati, Third derivative multistep methods for stiff systems, International Journal of Nonlinear Science 14 (2012), no. 4, 443-450.
[7] Simeon Ola Fatunla, Numerical integrators for stiff and highly oscillatory differential equations, Mathematics of Computation 34 (1980), no. 150, 373-390.
[8] Ernst Hairer and Gerhard Wanner, Multistep-multistage-multiderivative methods for ordinary differential equations, Computing 11 (1973), no. 3, 287-303.
[9] Ernst Hairer and Gerhard Wanner, Solving ordinary differential equations II: Stiff and differential-algebraic problems, Springer, 1996.
[10] Samuel N. Jator, A. O. Akinfenwa, Solomon A. Okunuga, and Adetokunbo B. Sofoluwe, High-order continuous third derivative formulas with block extensions for y'' = f(x, y, y'), International Journal of Computer Mathematics 90 (2013), no. 9, 1899-1914.
[11] S. N. Jator, Block third derivative method based on trigonometric polynomials for periodic initial-value problems, Afrika Matematika 27 (2016), no. 3-4, 365-377.
[12] John D. Lambert, Computational methods in ordinary differential equations, Wiley, 1973.
[13] John D. Lambert, Numerical methods for ordinary differential systems: the initial value problem, John Wiley, 1991.
[14] Ivar Lie and Syvert P. Nørsett, Superconvergence for multistep collocation, Mathematics of Computation 52 (1989), no. 185, 65-79.
[15] Taketomo Mitsui and Dauda Gulibur Yakubu, Two-step family of look-ahead linear multistep methods for ODEs, The Science and Engineering Review of Doshisha University, Japan 52 (2011), no. 3, 181-188.
[16] P. Onumanyi, D. O. Awoyemi, S. N. Jator, and U. W. Sirisena, New linear multistep methods with continuous coefficients for first order initial value problems, J. Nig. Math. Soc. 13 (1994), no. 7, 37-51.
[17] D. G. Yakubu, A. I. Bakari, and S. Markus, Two-step second-derivative high-order methods with two off-step points for solution of stiff systems, Afrika Matematika 26 (2015), no. 5-6, 1081-1093.
[18] D. G. Yakubu and S. Markus, Second derivative of high-order accuracy methods for the numerical integration of stiff initial value problems, Afrika Matematika 27 (2016), no. 5-6, 963-977.
