ISSN 0965-5425, Computational Mathematics and Mathematical Physics, 2009, Vol. 49, No. 11, pp. 1837–1846. © Pleiades Publishing, Ltd., 2009. Original Russian Text © L.M. Skvortsov, 2009, published in Zhurnal Vychislitel'noi Matematiki i Matematicheskoi Fiziki, 2009, Vol. 49, No. 11, pp. 1920–1930.

A Simple Technique for Constructing Two-Step Runge–Kutta Methods

L. M. Skvortsov

Bauman State Technical University, Vtoraya Baumanskaya ul. 5, Moscow, 105005 Russia
e-mail: [email protected]

Received December 12, 2008; in final form, May 4, 2009

Abstract—A technique is proposed for constructing two-step Runge–Kutta methods on the basis of one-step methods. Explicit and diagonally implicit two-step methods of the second or third stage order are examined. Test problems are presented showing that the proposed methods are superior to conventional one-step techniques.

DOI: 10.1134/S0965542509110025

Key words: two-step Runge–Kutta methods, stage order, explicit methods, diagonally implicit methods, stiff systems of equations.

INTRODUCTION

Two-step Runge–Kutta (TSRK) methods are an extension of conventional one-step methods (see [1–7]). At each integration step, these methods use not only information gathered at the current step but also information from the preceding step. Owing to this feature, two-step methods can have a higher stage order, which makes it possible to improve their accuracy, in particular, when solving stiff and differential-algebraic problems. A general class of two-step Runge–Kutta methods was proposed in [1]. We examine a specific subset of this class for which such methods are easy to construct and implement as software.

Consider the Cauchy problem for the system of ordinary differential equations

    y' = f(t, y),   y(t0) = y0,   (1.1)

where y is the variable vector, f is a vector function, and t is an independent variable. Consider the two-step Runge–Kutta methods defined by the formulas

    Y_n^1 = y_n,   F_n^1 = f(t_n, Y_n^1),   (1.2a)

    Y_n^i = y_n + h_n Σ_{j=1}^{s+1} (a_ij F_{n−1}^j + b_ij F_n^j),   F_n^i = f(t_n + c_i h_n, Y_n^i),   i = 2, 3, …, s,   (1.2b)

    Y_n^{s+1} = y_n + h_n Σ_{j=1}^{s+1} b_{s+1,j} F_n^j,   F_n^{s+1} = f(t_n + h_n, Y_n^{s+1}),   y_{n+1} = Y_n^{s+1},   (1.2c)

where h_n is the current step size, while Y_n^i and F_n^i (i = 1, 2, …, s + 1) are the stage values and their derivatives at the current step. We call (1.2b) the inner stages and (1.2c) the concluding stage. Observe that no calculations are required at stage (1.2a) because its output is the same as that of the concluding stage of the preceding step. For this reason, these methods are said to have the FSAL property (First Same As Last). We combine the coefficients of method (1.2) into two matrices and a vector as follows:

    A = [a_ij],   B = [b_ij],   c = [c_i],   i, j = 1, 2, …, s + 1.

Here, we set c_1 = 0, c_{s+1} = 1, and a_{1i} = a_{s+1,i} = b_{1i} = 0 (i = 1, 2, …, s + 1).


The technique that we propose for constructing two-step methods of type (1.2) is based on the conventional Runge–Kutta method determined by the matrix B and the vector c, which we call the original method. We define A as a matrix of the form

    A = dg^T,   (1.3)

where the vectors d and g are chosen so as to satisfy appropriate conditions for improving the stage order of the original method and to ensure the required stability properties. This approach makes it possible to raise the stage order of the original method by one or two. In general, all the coefficients of a two-step method depend on the ratio of step sizes w = h_n/h_{n−1}. By contrast, in our methods, only the components of the vector g depend on w. This simplifies their implementation as variable-step methods. The first step is performed using the original one-step method; consequently, no special starting procedure is required.

In view of (1.3), formulas (1.2) can be written in the form

    Y_n^1 = y_n,   F_n^1 = f(t_n, Y_n^1),   u = Σ_{j=1}^{s+1} g_j F_{n−1}^j,

    Y_n^i = y_n + h_n (d_i u + Σ_{j=1}^{s+1} b_ij F_n^j),   F_n^i = f(t_n + c_i h_n, Y_n^i),   i = 2, 3, …, s,   (1.4)

    Y_n^{s+1} = y_n + h_n Σ_{j=1}^{s+1} b_{s+1,j} F_n^j,   F_n^{s+1} = f(t_n + h_n, Y_n^{s+1}),   y_{n+1} = Y_n^{s+1},
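Scheme (1.4) translates almost directly into code. The following is a minimal sketch, not the paper's implementation: the function name and the array layout are our own, and it covers only the explicit case (strictly lower-triangular B).

```python
import numpy as np

def tsrk_step(f, t, y, h, F_prev, B, c, d, g):
    """One step of scheme (1.4) for an explicit two-step method.

    F_prev holds the s+1 stage derivatives F_{n-1}^j of the preceding step;
    B, c, d are fixed coefficients, while g = g(w) may depend on the
    step-size ratio.  Returns (y_next, F), where F becomes the next F_prev.
    """
    s1 = len(c)                              # s + 1 stages in total
    F = np.zeros((s1,) + np.shape(y))
    F[0] = f(t, y)                           # stage 1: Y^1 = y_n (FSAL value)
    u = g @ F_prev                           # u = sum_j g_j F_{n-1}^j
    for i in range(1, s1 - 1):               # inner stages (1.4)
        Y = y + h * (d[i] * u + B[i, :i] @ F[:i])
        F[i] = f(t + c[i] * h, Y)
    Y = y + h * (B[-1, :-1] @ F[:-1])        # concluding stage (d_{s+1} = 0)
    F[-1] = f(t + h, Y)
    return Y, F
```

With g set to zero the scheme degenerates to the underlying one-step method, which is exactly how the first step is taken in practice.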

which makes them more convenient to implement.

2. ORDER CONDITIONS

The derivation of the order conditions is based on the comparison of the Taylor expansions for the exact and numerical solutions. Some assumptions are often used to simplify the order conditions. An important concept in this process is the stage order, which is defined as the least order over all the stages. For two-step Runge–Kutta methods, the order conditions were investigated in [1–3]. We present certain order conditions based on these publications for methods of types (1.2) and (1.4). We use the following notation for vectors of dimension s + 1:

    e = [1, …, 1]^T,   e_{s+1} = [0, …, 0, 1]^T,   b = [b_{s+1,1}, …, b_{s+1,s+1}]^T.

Suppose that the stage order of the original method is q̄; i.e., it holds that

    kBc^{k−1} = c^k,   k = 1, 2, …, q̄,   (q̄ + 1)Bc^{q̄} ≠ c^{q̄+1}.   (2.1)

(Hereinafter, we assume that raising a vector to a power is performed componentwise.) We also adopt the condition q̄ ≥ 1, which is conventional for Runge–Kutta methods. Then, c = Be, and the method is uniquely determined by the matrix B. Let p̄ be the order of the original method. We assume that p̄ > q̄.

Consider the two-step method (1.2), and let q be its stage order. Then, we have

    A((c − e)/w)^{k−1} + Bc^{k−1} = c^k/k,   k = 1, 2, …, q.   (2.2)

Assume that q > q̄. Substituting (1.3) into (2.2), we obtain for k = q̄ + 1 the equality

    (q̄ + 1) d g^T ((c − e)/w)^{q̄} = c^{q̄+1} − (q̄ + 1)Bc^{q̄}.


This equality is fulfilled if we set

    d = c^{q̄+1} − (q̄ + 1)Bc^{q̄},   (2.3)

    g^T (c − e)^{q̄} = w^{q̄}/(q̄ + 1).   (2.4)

From (2.1) and (2.2), we conclude that

    g^T c^k = 0,   k = 0, 1, …, q̄ − 1,   (2.5)

while, from (2.4) and (2.5), we obtain

    g^T c^{q̄} = w^{q̄}/(q̄ + 1).   (2.6)
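For a concrete original method, conditions (2.3), (2.5), and (2.6) are easy to impose and check numerically. A hedged sketch (the two-stage second-order method with c₂ = 2/3 from Section 4 is used purely as an example, and the choice of which components of g to zero out is ours):

```python
import numpy as np

# Original two-stage, second-order method (stage order qbar = 1).
B = np.array([[0, 0, 0], [2/3, 0, 0], [1/4, 3/4, 0]])
c = np.array([0.0, 2/3, 1.0])
e = np.ones(3)
qbar = 1
w = 0.8                       # an arbitrary step-size ratio h_n / h_{n-1}

# Condition (2.3): d = c^(qbar+1) - (qbar+1) B c^qbar.
d = c**(qbar + 1) - (qbar + 1) * (B @ c**qbar)

# Conditions (2.5)-(2.6) fix two components of g; zeroing the middle one
# reproduces the simple choice g = (w/2)[-1, 0, 1]^T used later in (4.3).
g = np.array([-w/2, 0.0, w/2])
assert abs(g @ c**0) < 1e-14                      # (2.5), k = 0
assert abs(g @ c - w**qbar / (qbar + 1)) < 1e-14  # (2.6)

# Verify (2.2) for k = qbar + 1 = 2 with A = d g^T:
A = np.outer(d, g)
lhs = A @ ((c - e)/w) + B @ c
assert np.allclose(2 * lhs, c**2)
print("stage order raised to", qbar + 1)
```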

We use equalities (2.3), (2.5), and (2.6) as conditions ensuring the stage order q = q̄ + 1 for method (1.4). To ensure the stage order q = q̄ + 2, the additional conditions

    c^{q̄+2} − (q̄ + 2)Bc^{q̄+1} = α(c^{q̄+1} − (q̄ + 1)Bc^{q̄}),   (2.7)

    g^T c^{q̄+1} = α w^{q̄+1}/(q̄ + 2) + w^{q̄}   (2.8)

must be fulfilled. Here, α is a constant.

If q is the stage order of method (1.2), then its order is (at least) p = q. Method (1.2) has the order p = q + 1 if

    (q + 1) b^T c^q = 1.   (2.9)

Method (1.2) has the order p = q + 2 if, along with (2.9), we have

    (q + 2) b^T c^{q+1} = 1,   (2.10)

    (q + 2)(q + 1) b^T (A((c − e)/w)^q + Bc^q) = 1.   (2.11)

For p̄ > q + 1, relation (2.11) can be simplified. In this case, the order conditions for the one-step method imply that

    b^T d = b^T (c^{q+1} − (q + 1)Bc^q) = 0.

Using this expression and relation (1.3) in (2.11), we obtain

    (q + 2)(q + 1) b^T Bc^q = 1.   (2.12)

Thus, method (1.4) has the order p = q + 2 if p̄ > q + 1 and conditions (2.9), (2.10), and (2.12) are fulfilled.

3. STABILITY

The stability of methods for solving ODEs is normally examined using Dahlquist's model equation y' = λy, y(t0) = y0. Invoking formulas (1.2) to solve this equation, we obtain

    Y_n = (e e_{s+1}^T + zA)Y_{n−1} + zBY_n,   z = h_n λ.   (3.1)


Solving (3.1) with respect to Y_n and taking (1.3) into account, we have

    Y_n = H(z)Y_{n−1},   H(z) = (I − zB)^{−1}(e e_{s+1}^T + z d g^T),   (3.2)

where I is the identity matrix. The stability of the difference equation (3.2) is determined by the spectrum of the matrix H(z), that is, by the roots of the characteristic polynomial P(η, z) = |ηI − H(z)|. In general, P(η, z) regarded as a polynomial in η has s + 1 distinct roots depending on z. However, if A is chosen as a matrix of form (1.3), then the rank of H(z) is two, which implies that

    P(η, z) = η^{s−1}[η² − p_1(z)η + p_0(z)].   (3.3)

Thus, the stability of the proposed methods is determined by the two nonzero roots

    η_{1,2}(z) = (1/2)(p_1(z) ± √(p_1²(z) − 4p_0(z)))   (3.4)

of polynomial (3.3). According to Definition V.1.1 in [8], the stability domain is described by the inequalities

    |η_1(z)| ≤ 1,   |η_2(z)| ≤ 1.   (3.5)

Furthermore, both roots should not be simultaneously equal to 1 or −1. To obtain formulas for p_0(z) and p_1(z), we write H(z) as

    H(z) = XY^T,   X = (I − zB)^{−1}[e  zd],   Y = [e_{s+1}  g].

Using the fact that H(z) and V(z) = Y^T X have the same nonzero eigenvalues, we obtain

    p_1(z) = ν_11(z) + ν_22(z),   p_0(z) = ν_11(z)ν_22(z) − ν_12(z)ν_21(z),

    ν_11(z) = e_{s+1}^T (I − zB)^{−1} e,   ν_12(z) = e_{s+1}^T (I − zB)^{−1} zd,   (3.6)

    ν_21(z) = g^T (I − zB)^{−1} e,   ν_22(z) = g^T (I − zB)^{−1} zd.
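Relations (3.4)–(3.6) are straightforward to evaluate numerically. A sketch of such a scan (using, as an assumed example, the coefficients of the TSRK23-type method of Section 4 with w = 1, i.e., a constant step size) that locates the real-axis stability boundary:

```python
import numpy as np

B = np.array([[0, 0, 0], [2/3, 0, 0], [1/4, 3/4, 0]])
c = np.array([0.0, 2/3, 1.0])
d = c**2 - 2 * (B @ c)                    # formula (4.1)
w = 1.0
g = (w/2) * np.array([-1.0, 0.0, 1.0])    # formula (4.3)
e = np.ones(3)
e_last = np.array([0.0, 0.0, 1.0])

def spectral_radius(z):
    """Largest |eta| of the two nonzero roots (3.4), via nu_ij from (3.6)."""
    M = np.linalg.inv(np.eye(3) - z * B)
    n11 = e_last @ M @ e
    n12 = e_last @ M @ (z * d)
    n21 = g @ M @ e
    n22 = g @ M @ (z * d)
    p1, p0 = n11 + n22, n11 * n22 - n12 * n21
    disc = complex(p1 * p1 - 4 * p0) ** 0.5
    return max(abs((p1 + disc) / 2), abs((p1 - disc) / 2))

# Scan the negative real axis; the leftmost stable point is near z = -2,
# consistent with the stability-interval length l = 2.0 quoted below.
zs = np.linspace(-3, 0, 3001)
stable = [z for z in zs if spectral_radius(z) <= 1 + 1e-12]
print(min(stable))   # close to -2.0
```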

Note that ν_11(z) is the stability function of the original one-step method. We have used relations (3.4)–(3.6) for constructing the stability domains of two-step methods.

4. EXPLICIT METHODS

For explicit methods, we have b_ij = 0 if j ≥ i. In this case, the original method has the first stage order (that is, q̄ = 1) and can be represented by the Butcher table

    0    |
    c_2  | b_21
    …    | …
    c_s  | b_s1        …  b_{s,s−1}
    -----+--------------------------------
         | b_{s+1,1}   …  b_{s+1,s−1}  b_{s+1,s}

where s is the number of stages, which is equal to the number of evaluations of the right-hand side per integration step. In accordance with (2.3), we set

    d = c² − 2Bc.   (4.1)

From (2.5) and (2.6), we obtain

    g^T e = 0,   g^T c = w/2.   (4.2)


Two-step methods of the second stage order can be constructed on the basis of any one-step method of order at least two. To this end, it suffices to define d by formula (4.1) and determine g from (4.2). Only two components of the vector g are used to solve Eqs. (4.2); the remaining components can be assigned zero values or chosen from other considerations. For instance, if g_2, …, g_s are given zero values, then we obtain

    g = (w/2)[−1, 0, …, 0, 1]^T.   (4.3)

Formulas (4.1) and (4.3) specify the simplest way of constructing explicit two-step methods.

Let us discuss a specific method. If s = 2, then the optimal (in accuracy) second-order one-step method is obtained by setting c_2 = 2/3. The two-step method based on this one-step method has the coefficients

    B = [0 0 0; 2/3 0 0; 1/4 3/4 0],   c = [0, 2/3, 1]^T,   d = [0, 4/9, 0]^T,   g = (w/2)[−1, 0, 1]^T.
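A minimal constant-step sketch of this two-step method (our own illustration, not the paper's code; the first step falls back to the original one-step method, as described in Section 1, and y' = −y serves purely as a convergence check):

```python
import numpy as np

B = np.array([[0, 0, 0], [2/3, 0, 0], [1/4, 3/4, 0]])
c = np.array([0.0, 2/3, 1.0])
d = np.array([0.0, 4/9, 0.0])

def integrate_tsrk23(f, t0, y0, h, n_steps):
    """Constant step size, so w = 1 and g = (1/2)[-1, 0, 1]."""
    g = 0.5 * np.array([-1.0, 0.0, 1.0])
    t, y = t0, y0
    F_prev = None
    for n in range(n_steps):
        F = np.empty(3)
        F[0] = f(t, y)
        u = 0.0 if F_prev is None else g @ F_prev   # first step: one-step method
        Y2 = y + h * (d[1] * u + B[1, 0] * F[0])    # inner stage
        F[1] = f(t + c[1] * h, Y2)
        y = y + h * (B[2, 0] * F[0] + B[2, 1] * F[1])   # concluding stage
        F[2] = f(t + h, y)
        t += h
        F_prev = F
    return y

f = lambda t, y: -y
err = [abs(integrate_tsrk23(f, 0.0, 1.0, h, int(round(1/h))) - np.exp(-1))
       for h in (0.1, 0.05, 0.025)]
print(err[0]/err[1], err[1]/err[2])   # ratios near 8 would indicate order 3
```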

Its order is p = 3, and its stage order is q = 2. The length of its stability domain along the real axis (for a constant step size) is l = 2.0. This method is denoted by TSRK23 (the first digit indicates s and the second indicates p).

Two-step methods of the third stage order can be constructed on the basis of one-step methods whose coefficients satisfy relation (2.7) with α = c_2. In this case, we see from (2.8) that

    g^T c² = w(1 + wc_2/3).   (4.4)

The vector d is defined by (4.1), while g is found from (4.2) and (4.4). For s > 2, the number of components of g is greater than what is required for solving (4.2) and (4.4). Consequently, only the three components g_1, g_k, and g_{s+1} were given nonzero values, and k was chosen so as to make the stability domain as large as possible. For the chosen nonzero coefficients, the solution to Eqs. (4.2) and (4.4) is

    g_1 = (w/6)((3 + 2wc_2)/c_k − 3),   g_k = w(3 + 2wc_2)/(6c_k(c_k − 1)),   g_{s+1} = (w/6)((3 + 2wc_2)/(1 − c_k) + 3).

We set s = 3 and construct a fourth-order two-step method of stage order three based on the third-order one-step method. In this case, the original method is determined by the abscissas c_2 and c_3. The fourth-order condition for the final stage implies that

    | c_2   c_3   1/2 |
    | c_2²  c_3²  1/3 | = 0,   (4.5)
    | c_2³  c_3³  1/4 |

while (2.7) and the third-order condition for the original method yield

    c_3³ − 3b_32 c_2² − c_2(c_3² − 2b_32 c_2) = 0,   b_43 = (2 − 3c_2)/(6c_3(c_3 − c_2)).   (4.6)


Equations (4.5) and (4.6) have the unique solution c_2 = 1/2, c_3 = 1. Setting g_3 = 0, we obtain the coefficients

    B = [0 0 0 0; 1/2 0 0 0; −1 2 0 0; 1/6 2/3 1/6 0],
    c = [0, 1/2, 1, 1]^T,   d = [0, 1/4, −1, 0]^T,   g = (w/6)[3 + 2w, −12 − 4w, 0, 9 + 2w]^T.
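The claimed stage order of this construction can be confirmed numerically: with A = dg^T, conditions (2.2) should hold for k = 1, 2, 3 and any step-size ratio w. A hedged check (our own verification script, not part of the paper):

```python
import numpy as np

B = np.array([[0, 0, 0, 0], [1/2, 0, 0, 0], [-1, 2, 0, 0], [1/6, 2/3, 1/6, 0]])
c = np.array([0.0, 1/2, 1.0, 1.0])
d = np.array([0.0, 1/4, -1.0, 0.0])
e = np.ones(4)

for w in (0.5, 1.0, 2.0):
    g = (w/6) * np.array([3 + 2*w, -12 - 4*w, 0.0, 9 + 2*w])
    A = np.outer(d, g)
    for k in (1, 2, 3):
        # Condition (2.2): A((c-e)/w)^(k-1) + B c^(k-1) = c^k / k.
        lhs = A @ ((c - e)/w)**(k - 1) + B @ c**(k - 1)
        assert np.allclose(k * lhs, c**k), (w, k)
print("stage order 3 confirmed for all tested w")
```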

The corresponding two-step method is denoted by TSRK34. The length of its stability domain is l = 2.372.

A convenient choice of the original method for constructing two-step methods of stage order three is a method whose coefficients satisfy the conditions

    Σ_{j=2}^{i−1} b_ij c_j = c_i²/2,   Σ_{j=2}^{i−1} b_ij c_j² = c_i³/3,   i = 3, 4, …, s + 1.   (4.7)

In this case, only the second stage needs to be modified in order to raise the stage order to three. Consequently, the vector d has the only nonzero component d_2 = c_2², which simplifies the implementation of this method. Now, we set s = 4 and use relations (4.7) to construct a fourth-order method. Setting g_2 = g_4 = 0, we obtain the method TSRK44 with the coefficients

    B = [0 0 0 0 0; 1/3 0 0 0 0; 1/8 3/8 0 0 0; 1/2 −3/2 2 0 0; 1/6 0 2/3 1/6 0],
    c = [0, 1/3, 1/2, 1, 1]^T,   d = [0, 1/9, 0, 0, 0]^T,   g = (w/18)[9 + 4w, 0, −36 − 8w, 0, 27 + 4w]^T.
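Conditions (4.7) for this original method are easy to verify directly; the short check below (our own, with 0-based NumPy indexing, so row i of the paper corresponds to `B[i-1]`) confirms them together with d_2 = c_2².

```python
import numpy as np

B = np.array([[0, 0, 0, 0, 0], [1/3, 0, 0, 0, 0], [1/8, 3/8, 0, 0, 0],
              [1/2, -3/2, 2, 0, 0], [1/6, 0, 2/3, 1/6, 0]])
c = np.array([0.0, 1/3, 1/2, 1.0, 1.0])

# Conditions (4.7) for i = 3, ..., s+1 (0-based rows 2..4); the sums run
# over j = 2, ..., i-1, i.e. slices [1:i] here.
for i in range(2, 5):
    assert np.isclose(B[i, 1:i] @ c[1:i], c[i]**2 / 2)
    assert np.isclose(B[i, 1:i] @ c[1:i]**2, c[i]**3 / 3)

# The only nonzero component of d is d_2 = c_2^2 = 1/9.
assert np.isclose(c[1]**2, 1/9)
print("conditions (4.7) hold for TSRK44's original method")
```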

The length of its stability domain is l = 2.479.

A similar approach was used to construct the fifth-order two-step method TSRK65 based on the Dormand–Prince method whose coefficients are given in [9]. In this case, the nonzero components of the vectors d and g are

    d_2 = 1/25,   g_1 = (w/18)(21 + 4w),   g_3 = −(10w/63)(15 + 2w),   g_7 = (w/42)(51 + 4w).

The length of the stability domain of this method is l = 3.011. (For the original Dormand–Prince method, l = 3.307.)

The above methods were tested on a number of nonstiff and moderately stiff problems. Ordinarily, they were more efficient than the conventional Runge–Kutta methods. We present some numerical results and, for comparison, the results obtained using the third-order Bogacki–Shampine method (see [10]) and the Dormand–Prince method, which are denoted by RK33 and RK65. These methods are regarded as the best explicit Runge–Kutta methods of low and moderate accuracy. As an accuracy indicator, we used the value

    scd = −log₁₀(max_i |(ỹ_i − y_i)/y_i|),   (4.8)

where y_i is the exact value of the ith component at the endpoint of the integration range, while ỹ_i is the calculated value of this component. The computational complexity was estimated by the number Nf of evaluations of the right-hand side. In our implementation of the methods with automatic step size selection, we used an auxiliary formula that requires calculating the vector ŷ_{n+1} at each step. The error is estimated by the norm of the vector


Table 1

    Method  | μ = 1 | μ = 10 | μ = 20 | μ = 40 | μ = 80
    --------+-------+--------+--------+--------+--------
    RK33    | 4.91  | 4.59   | 4.34   | 4.03   | 3.78
    TSRK23  | 5.39  | 5.40   | 5.38   | 5.29   | 4.96
    TSRK34  | 6.72  | 7.24   | 7.08   | 6.73   | 6.78
    RK65    | 7.70  | 6.07   | 5.17   | 4.04   | –
    TSRK65  | 9.13  | 8.28   | 7.52   | 6.49   | –

Table 2

            | Rtol = 10⁻³ | Rtol = 10⁻⁴ | Rtol = 10⁻⁶ | Rtol = 10⁻⁸
    Method  |  scd   Nf   |  scd   Nf   |  scd   Nf   |  scd    Nf
    --------+-------------+-------------+-------------+--------------
    RK33    | 3.35   175  | 3.97   166  | 5.53   439  | 6.88   1243
    TSRK23  | 3.68   109  | 4.46   125  | 6.04   221  | 7.83    843
    TSRK34  | 4.88   176  | 6.34   182  | 7.77   171  | 9.90    496
    RK65    | 4.02   211  | 5.29   253  | 6.47   535  | 8.83   1279
    TSRK65  | 4.30   223  | 5.36   223  | 6.88   229  | 8.93    343

Table 3

            | Rtol = 10⁻³ | Rtol = 10⁻⁴ | Rtol = 10⁻⁶ | Rtol = 10⁻⁸
    Method  |  scd   Nf   |  scd   Nf   |  scd   Nf   |  scd    Nf
    --------+-------------+-------------+-------------+--------------
    RK33    | 0.74   856  | 1.36  1273  | 3.22  4927  | 5.19  21433
    TSRK23  | 1.58   731  | 1.76  1135  | 3.88  4635  | 5.82  21347
    TSRK34  | 0.14   630  | 2.23   914  | 4.72  2161  | 6.81   6324
    RK65    | 0.53   979  | 1.93  1339  | 3.79  2647  | 5.67   5815
    TSRK65  | 1.07   949  | 1.79  1039  | 4.62  1873  | 6.22   3301

ŷ_{n+1} − y_{n+1}. We set ŷ_{n+1} = y_n + (h/2)(f_n + f_{n+1}) for the method TSRK23 and ŷ_{n+1} = Y_n^s for the methods TSRK34 and TSRK44. In TSRK65, the same formula as in the Dormand–Prince method was used to calculate ŷ_{n+1}. The same standard procedure for controlling the step size (see [8]) was used in all the methods. The results obtained using TSRK34 and TSRK44 are very close; therefore, we present only the results for the former method.
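The standard controller referred to above can be sketched as follows; this is a generic textbook-style sketch under our own naming, and the safety constants are typical defaults rather than the exact values used in the paper's experiments.

```python
def new_step_size(h, err_norm, order, fac=0.9, fac_min=0.2, fac_max=5.0):
    """Classical step-size control.

    err_norm is the scaled error estimate ||(y_hat - y)/(Atol + Rtol*|y|)||,
    which the controller drives toward 1; `order` is the order of the error
    estimator plus one.  fac, fac_min, fac_max are conventional safety limits.
    """
    if err_norm == 0.0:
        return h * fac_max
    return h * min(fac_max, max(fac_min, fac * err_norm ** (-1.0 / order)))

# A step is accepted when err_norm <= 1; otherwise it is redone with the
# smaller step size returned above.
```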

To investigate the influence of the stiffness on the accuracy of the numerical solution, we used the Kaps problem

    y_1' = −(μ + 2)y_1 + μy_2²,   y_2' = y_1 − y_2 − y_2²,   y_1(0) = y_2(0) = 1,   0 ≤ t ≤ 1.   (4.9)

Its solution y_1(t) = e^{−2t}, y_2(t) = e^{−t} is independent of the stiffness parameter μ. We solved this problem for various μ using the step size h = s/120, which corresponds to 120 evaluations of the right-hand side. The results (the values of scd) are presented in Table 1. The dashes correspond to the occasions where the numerical solution diverged. For μ = 100, the same problem was solved with the automatic step size selection. We assigned the tolerance Rtol for the relative error and calculated the tolerance Atol for the absolute error as Atol = 0.01 × Rtol. For the initial step size h0 = 0.01, the results are presented in Table 2.
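The Kaps problem is easy to reproduce; the quick sanity check below (our own, not one of the paper's experiments) confirms by substitution that the stated exact solution satisfies (4.9) for any μ.

```python
import numpy as np

def kaps_rhs(t, y, mu):
    """Right-hand side of the Kaps problem (4.9)."""
    y1, y2 = y
    return np.array([-(mu + 2)*y1 + mu*y2**2, y1 - y2 - y2**2])

# Exact solution y1 = exp(-2t), y2 = exp(-t), independent of mu.
for mu in (1.0, 100.0, 1e4):
    for t in np.linspace(0.0, 1.0, 5):
        y = np.array([np.exp(-2*t), np.exp(-t)])
        dy = np.array([-2*np.exp(-2*t), -np.exp(-t)])
        assert np.allclose(kaps_rhs(t, y, mu), dy)
print("exact Kaps solution verified for all tested mu")
```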


We also present the results of solving the nonstiff test problem BRUS (see [9]). For this problem, we set Atol = 0.01 × Rtol and h0 = 0.01. The results obtained are shown in Table 3. Thus, the test calculations demonstrate that the two-step methods are most advantageous for moderately stiff problems and increased accuracy requirements.

5. DIAGONALLY IMPLICIT METHODS

Suppose that, in the original method, we set b_ii = γ > 0 (i = 2, 3, …, s + 1) and b_ij = 0 for j > i. Then, diagonally implicit Runge–Kutta methods are obtained. They have an explicit first stage (the so-called ESDIRK methods; see [11–14]) and the Butcher table

    0    |
    c_2  | b_21       γ
    …    | …          …          …
    c_s  | b_s1       b_s2       …  γ                        (5.1)
    1    | b_{s+1,1}  b_{s+1,2}  …  b_{s+1,s}  γ
    -----+--------------------------------------------
         | b_{s+1,1}  b_{s+1,2}  …  b_{s+1,s}  γ

(These methods were called FSALDIRK in [12, 13].) From a formal viewpoint, (5.1) is an (s + 1)-stage method; however, only s implicit stages are actually performed at each step because no calculations are needed at the explicit stage. Implicit methods of this type are the simplest to implement; however, their stage order cannot exceed two, which hinders their efficient use when high accuracy is required.

Suppose that the original method has the stage order two; that is, conditions (2.1) are satisfied with q̄ = 2. In accordance with (2.3), we set

    d = c³ − 3Bc²,   (5.2)

while, from (2.5) and (2.6), we obtain

    g^T e = 0,   g^T c = 0,   g^T c² = w²/3.   (5.3)
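Before turning to the construction, recall what each implicit stage of an ESDIRK method costs in practice: a nonlinear system Y = r + hγf(t + c_i h, Y) must be solved, where r collects y_n and the already-computed stage contributions. A minimal simplified-Newton sketch (our own generic illustration with an assumed dense Jacobian; production codes reuse factorizations across stages and steps):

```python
import numpy as np

def solve_esdirk_stage(f, jac, t_stage, y, rhs_known, h, gamma,
                       tol=1e-10, max_iter=20):
    """Solve Y = rhs_known + h*gamma*f(t_stage, Y) by simplified Newton.

    rhs_known = y_n plus the already-computed stage terms; the iteration
    matrix is frozen at the step's starting point y (simplified Newton).
    """
    n = len(y)
    Y = y.copy()                                   # simple predictor
    M = np.eye(n) - h * gamma * jac(t_stage, y)    # frozen iteration matrix
    for _ in range(max_iter):
        residual = Y - rhs_known - h * gamma * f(t_stage, Y)
        if np.linalg.norm(residual) < tol:
            break
        Y -= np.linalg.solve(M, residual)
    return Y
```

For a linear test problem f = λy the iteration converges in one step, since the frozen matrix is then exact.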

A two-step method of the third stage order can be constructed on the basis of every ESDIRK method of the second stage order having order at least three. To this end, it suffices to choose d in form (5.2) and to determine the components of g so that equalities (5.3) are fulfilled.

We additionally require that the method be L(α)-stable. For this to be true, the coefficients of characteristic polynomial (3.3) must satisfy, for an arbitrary w, the relations p_1(z) → 0 and p_0(z) → 0 as z → ∞. We write the matrix B in block form and define B̂ as

    B = [0  0…0; b̃  B̃],   B̂ = [1  0…0; b̃  B̃].

Then, it holds that

    B̂^{−1} = [1  0…0; −B̃^{−1}b̃  B̃^{−1}],   (I − zB)^{−1} = [1  0…0; (Ĩ − zB̃)^{−1}zb̃  (Ĩ − zB̃)^{−1}],

where Ĩ is the identity matrix of size s × s. Since d_1 = c_1 = 0, we have

    lim_{z→∞} (I − zB)^{−1}e = B̂^{−1}e_1,   e_1 = [1, 0, …, 0]^T,   (5.4)

    lim_{z→∞} (I − zB)^{−1}zd = −B̂^{−1}d = B̂^{−1}(3Bc² − c³) = 3c² − B̂^{−1}c³.


Table 4

    Method    | μ = 10⁰ | μ = 10¹ | μ = 10² | μ = 10³ | μ = 10⁴ | μ = 10⁵
    ----------+---------+---------+---------+---------+---------+---------
    ESDIRK54  |  5.82   |  5.53   |  5.20   |  5.93   |  6.75   |  7.07
    TSDIRK54  |  6.05   |  6.17   |  6.43   |  6.90   |  7.20   |  7.25

Table 5

           |     ESDIRK54      |     TSDIRK54
    h      | scd(y)   scd(z)   | scd(y)   scd(z)
    -------+-------------------+------------------
    1/20   |  4.05     3.31    |  5.00     5.14
    1/50   |  5.14     4.17    |  6.54     6.07
    1/100  |  6.02     4.81    |  7.72     6.89
    1/200  |  6.91     5.44    |  8.91     7.74

Substituting (5.4) and (5.3) into (3.6), we obtain

    ν_11(∞) = e_{s+1}^T B̂^{−1} e_1,   ν_12(∞) = 3 − e_{s+1}^T B̂^{−1} c³,
    ν_21(∞) = g^T B̂^{−1} e_1,   ν_22(∞) = w² − g^T B̂^{−1} c³.   (5.5)

Now, using formulas (5.5) and representation (3.6) of the characteristic polynomial, we can derive convenient necessary conditions for the L(α)-stability. The relation p_1(∞) = 0 yields

    e_{s+1}^T B̂^{−1} e_1 = 0,   (5.6a)

    g^T B̂^{−1} c³ = w².   (5.6b)

To have p_0(∞) = 0, we must additionally satisfy one of the equalities

    e_{s+1}^T B̂^{−1} c³ = 3,   (5.7a)

    g^T B̂^{−1} e_1 = 0.   (5.7b)

Thus, a necessary condition for a two-step method to be L(α)-stable is the fulfillment of (5.6) and one of equalities (5.7). For one-step methods, equality (5.6a) is a necessary condition for the L(α)-stability, while condition (5.7a) ensures improved accuracy in solving stiff problems. Procedures for constructing ESDIRK methods of the fourth and fifth orders satisfying these conditions were given in [14]. It is also reasonable to use these methods as original ones for constructing two-step methods. In this case, the components of the vector g are determined from conditions (5.3) and (5.6b), while (5.7b) is a redundant equality.

We constructed two-step methods of the fourth order using conditions (5.3) and (5.6b), as well as the formulas for calculating the coefficients of ESDIRK methods given in [14]. The method specified by the coefficients

    B = [0 0 0 0 0 0; 1/4 1/4 0 0 0 0; 55/196 2/49 1/4 0 0 0; 17/96 7/12 −49/96 1/4 0 0;
         5/48 13/24 −49/48 9/8 1/4 0; 1/6 0 0 2/3 −1/12 1/4],
    c = [0, 1/2, 4/7, 1/2, 1, 1]^T,   d = [0, −1/16, −61/686, 0, 0, 0]^T,   g = (w²/6)[4, 0, 0, −8, 1, 3]^T

is most convenient to implement. This method, which we denote by TSDIRK54, is L(α)-stable for α = 89.58°.

We present the results obtained by solving three test problems with a constant step size. For comparison, we also show the results obtained for the "optimal" one-step method ESDIRK54. For this method, seven of the nine fifth-order error coefficients are zero when γ = 1/4. (The other two error coefficients cannot be zero in principle.) The coefficients of ESDIRK54 are given in [13, Formula (3.1)].

In the first test, we used the Kaps problem (4.9), which was solved with h = 1/12 for various values of the stiffness parameter μ. The results (the values of the parameter scd) are presented in Table 4. For all μ, the two-step method is superior to the one-step method, which is especially visible for a moderate stiffness (μ = 10²).

The second test problem (the stiff problem PLATE discussed in [8]) contains 80 equations, and its Jacobian matrix has a complex spectrum. Integrating this problem on the interval [0, 7] with the step size h = 1/24, we obtained scd = 4.18 for ESDIRK54 and scd = 5.59 for TSDIRK54.

The two-step method also turned out to be more accurate in solving differential-algebraic equations (DAEs) of indices two and three. As the third test problem, we chose the system of DAEs of index two given in [15]:

    y_1' = y_1 y_2² z²,   y_2' = y_1² y_2² − 3y_2² z,   0 = y_1² y_2 − 1,
    y_1(0) = y_2(0) = z(0) = 1,   0 ≤ t ≤ 1.
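The exact solution stated below can be confirmed by direct substitution; the short check (our own) verifies both differential equations and the algebraic constraint.

```python
import numpy as np

# Exact solution of the index-2 DAE: y1 = exp(t), y2 = exp(-2t), z = exp(2t).
for t in np.linspace(0.0, 1.0, 5):
    y1, y2, z = np.exp(t), np.exp(-2*t), np.exp(2*t)
    assert np.isclose(y1 * y2**2 * z**2, np.exp(t))                  # y1'
    assert np.isclose(y1**2 * y2**2 - 3 * y2**2 * z, -2*np.exp(-2*t))  # y2'
    assert np.isclose(y1**2 * y2, 1.0)                               # constraint
print("index-2 DAE solution verified")
```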

The exact solution to this system is y_1(t) = e^t, y_2(t) = e^{−2t}, z(t) = e^{2t}. Table 5 shows the results obtained for various values of the step size. The symbols scd(y) and scd(z) stand for the values of parameter (4.8) calculated for the differential variables y_1, y_2 and the algebraic variable z, respectively. The results presented above convincingly demonstrate the superiority of the two-step method, which is explained by the fact that its stage order is higher than that of the one-step method.

REFERENCES

1. Z. Jackiewicz and S. Tracogna, "A General Class of Two-Step Runge–Kutta Methods for Ordinary Differential Equations," SIAM J. Numer. Anal. 32, 1390–1427 (1995).
2. J. C. Butcher and S. Tracogna, "Order Conditions for Two-Step Runge–Kutta Methods," Appl. Numer. Math. 24, 351–364 (1997).
3. E. Hairer and G. Wanner, "Order Conditions for General Two-Step Runge–Kutta Methods," SIAM J. Numer. Anal. 34, 2087–2089 (1997).
4. Z. Bartoszewski and Z. Jackiewicz, "Construction of Two-Step Runge–Kutta Methods of High Order for Ordinary Differential Equations," Numer. Algorithms 18 (1), 51–70 (1998).
5. S. Tracogna and B. Welfert, "Two-Step Runge–Kutta: Theory and Practice," BIT 40, 775–799 (2000).
6. Z. Jackiewicz and J. H. Verner, "Derivation and Implementation of Two-Step Runge–Kutta Pairs," Jpn. J. Ind. Appl. Math. 19, 227–248 (2002).
7. J. Chollom and Z. Jackiewicz, "Construction of Two-Step Runge–Kutta Methods with Large Regions of Absolute Stability," J. Comput. Appl. Math. 157 (1), 125–137 (2003).
8. E. Hairer and G. Wanner, Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems (Springer-Verlag, Berlin, 1996; Mir, Moscow, 1999).
9. E. Hairer, S. P. Nørsett, and G. Wanner, Solving Ordinary Differential Equations I: Nonstiff Problems (Springer-Verlag, Berlin, 1987; Mir, Moscow, 1990).
10. P. Bogacki and L. F. Shampine, "A 3(2) Pair of Runge–Kutta Formulas," Appl. Math. Lett. 2, 321–325 (1989).
11. A. Kværnø, "Singly Diagonally Implicit Runge–Kutta Methods with an Explicit First Stage," BIT 44, 489–502 (2004).
12. L. M. Skvortsov, "Diagonally Implicit Runge–Kutta FSAL Methods for Stiff and Differential-Algebraic Systems," Mat. Model. 14 (2), 3–17 (2002).
13. L. M. Skvortsov, "Accuracy of Runge–Kutta Methods Applied to Stiff Problems," Zh. Vychisl. Mat. Mat. Fiz. 43, 1374–1384 (2003) [Comput. Math. Math. Phys. 43, 1320–1330 (2003)].
14. L. M. Skvortsov, "Diagonally Implicit Runge–Kutta Methods for Stiff Problems," Zh. Vychisl. Mat. Mat. Fiz. 46, 2209–2222 (2006) [Comput. Math. Math. Phys. 46, 2110–2123 (2006)].
15. L. Jay, "Convergence of a Class of Runge–Kutta Methods for Differential-Algebraic Systems of Index 2," BIT 33, 137–150 (1993).
