SIAM J. NUMER. ANAL. Vol. 49, No. 6, pp. 2211–2230

© 2011 Society for Industrial and Applied Mathematics

AN ERROR CORRECTED EULER METHOD FOR SOLVING STIFF PROBLEMS BASED ON CHEBYSHEV COLLOCATION*

PHILSU KIM†, XIANGFAN PIAO†, AND SANG DONG KIM†

Abstract. In this paper, we present error corrected Euler methods for solving stiff initial value problems, which not only avoid the iteration process that may be required in most implicit methods but also possess the good stability that implicit methods enjoy. The proposed methods use a Chebyshev collocation technique together with an asymptotically linear first-order ordinary differential equation derived for the difference between the exact solution and the Euler polygon. The methods, with or without the Jacobian, are analyzed in terms of convergence and stability. In particular, it is proved that the proposed methods have a convergence order up to 4 regardless of whether the Jacobian is used. Numerical tests are given to support the theoretical analysis.

Key words. Chebyshev collocation method, Runge–Kutta method, stiff initial value problem, A-stability

AMS subject classifications. 65L04, 65L07, 65L20

DOI. 10.1137/100808691

1. Introduction. Most popular methods for solving stiff initial value problems

    \phi'(t) = f(t, \phi(t)), \quad t > t_0; \qquad \phi(t_0) = \phi_0

are of implicit type. For example, one can refer to many approaches based on Runge–Kutta or power series (see [2, 8, 9, 11, 19, 21, 37, 38], etc.) and also to various schemes based on modifications of the classical BDF (see [5, 12, 13, 15, 18, 20, 22, 25, 30, 35], etc.). For implicit methods, one has to solve nonlinear systems, typically by the reliable but costly Newton's method, which requires the Jacobian matrix to be evaluated at each iteration (see [27], for example). To reduce this cost by iteration techniques, numerous approaches have been reported in [3, 4, 7, 10, 14, 23, 24]. One possibility for obtaining a low-complexity stiff solver is to avoid the iteration process altogether. The Rosenbrock (or linearly implicit) method (see [33]), which has been favorably modified over many years (see [34, 36]), uses the Jacobian matrix directly in the numerical formula rather than within the iterations of Newton's method. In general, an s-stage Rosenbrock method requires the solution of s linear systems containing the Jacobian matrix at each integration step (see [20, p. 111]).

Another important issue for stiff problems is the stability of a numerical scheme. A-stability is important because it allows the step size to be chosen based only on accuracy, without worrying about stability constraints. It is well known that an explicit k-step method cannot be A-stable and that the order of an A-stable linear multistep method cannot exceed 2 (see [16, 6]). Recently, however, Ramos [31] suggested a nonstandard explicit numerical integration method which remains A-stable with second-order accuracy for solving stiff initial value problems. Despite such developments on explicit methods, most of the implicit methods mentioned above have been used to take advantage of their stability for solving stiff problems.

*Received by the editors September 15, 2010; accepted for publication (in revised form) July 29, 2011; published electronically November 1, 2011. This research was supported by the basic science research program through the National Research Foundation of Korea (NRF) funded by the Ministry of Education, Science and Technology (grant 2010-0008317).
http://www.siam.org/journals/sinum/49-6/80869.html
†Department of Mathematics, Kyungpook National University, Daegu 702-701, Korea ([email protected], [email protected], [email protected]).


The primary goal of this paper is to construct error corrected Euler methods which do not require the iteration steps for nonlinear discrete systems needed by most implicit methods, yet which possess stability properties as good as those of implicit methods and raise the order of accuracy up to 4. These goals are achieved by first solving an asymptotically linear first-order ordinary differential equation (ODE) for the difference between the exact solution and the Euler polygon, and then adding its solution to the Euler approximation. More concisely, at each time step [t_m, t_{m+1}], the algorithm we propose can be stated in the explicit form

    y_{m+1} = y_m + h f(t_m, y_m) + \beta_n = \text{Euler approximation} + \text{correction term},

where y_m is the approximation of φ(t) at the time t_m, h = t_{m+1} − t_m is the time step size, and the correction term β_n is obtained by solving a linear system (see (3.19)) whose leading vector depends only on the Euler polygon defined by the previous step approximation y_m. For the approximation of the asymptotically linear ODE, the Chebyshev collocation method (CCM) in [29, 30] is used, which is an implicit-type method employing the Chebyshev interpolation polynomial. Two explicit-type schemes are proposed, according to whether the Jacobian (i.e., the partial derivative f_φ) is used. It is proved that both methods have a convergence order up to 4. Also, the stability analysis is performed and almost L-stability is shown, which coincides with the result in [30].

This paper is organized as follows. In section 2 we first review basic formulations and some properties of the Chebyshev interpolation polynomials required for our development. Section 3 is devoted to the derivation of the approximate scheme based on the CCM. In section 4, we analyze the convergence of the proposed schemes and present a geometric interpretation of the scheme. The stability properties of the developed schemes are discussed in section 5. Three test problems are solved in section 6 to give numerical evidence for the theoretical analysis in sections 4 and 5. Finally, we provide some comments and conclusions in section 7.

2. Review of Chebyshev interpolation polynomials. In this section, we review the Chebyshev interpolation polynomial for a given function g ∈ C^k(−1, 1), where C^k denotes, for the rest of the paper, the space of all functions whose derivatives up to order k are continuous on (−1, 1). Most contents of this section can be found in [26]. Let T_k(s) = \cos(k \cos^{-1}(s)) be the first kind Chebyshev polynomial of degree k and let \{s_j\}_{j=0}^{n} be the Chebyshev–Gauss–Lobatto (CGL) points

(2.1)        s_j := \cos\frac{(n-j)\pi}{n}, \qquad j = 0, 1, \ldots, n.

Let l_k(s) be the Lagrange basis defined by

(2.2)        l_k(s) = \frac{\alpha_k}{n} \sum_{j=0}^{n}{}'' T_j(s_k) T_j(s), \qquad \alpha_k = \begin{cases} 1 & \text{for } k = 0, n, \\ 2 & \text{for } k = 1, \ldots, n-1, \end{cases}

where the double prime indicates that both the first and last terms in the summation are to be halved. Then, the Chebyshev interpolation polynomial for a function g can be written as

    p_n(s) = \sum_{k=0}^{n} g(s_k)\, l_k(s).
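To make the construction concrete, here is a minimal NumPy sketch (our own illustration, not the authors' code) that builds the CGL points (2.1) and the basis (2.2), checks the cardinal property l_k(s_j) = δ_jk, and interpolates a smooth test function; the helper names cgl_points and lagrange_basis are assumptions of this sketch.

```python
import numpy as np

def cgl_points(n):
    """CGL points s_j = cos((n - j) pi / n), j = 0..n, as in (2.1)."""
    j = np.arange(n + 1)
    return np.cos((n - j) * np.pi / n)

def lagrange_basis(n, s):
    """Evaluate l_0(s), ..., l_n(s) from (2.2) at the scalar or array s.

    Double-prime convention: the j = 0 and j = n terms of the Chebyshev
    series are halved; alpha_k = 1 at the endpoints and 2 inside.
    """
    sk = cgl_points(n)
    alpha = np.full(n + 1, 2.0); alpha[[0, n]] = 1.0
    theta = np.arccos(np.clip(np.atleast_1d(s), -1.0, 1.0))
    Ts = np.cos(np.outer(np.arange(n + 1), theta))           # T_j(s)
    Tk = np.cos(np.outer(np.arange(n + 1), np.arccos(sk)))   # T_j(s_k)
    w = np.ones(n + 1); w[[0, n]] = 0.5                      # double-prime weights
    return (alpha[:, None] / n) * ((Tk * w[:, None]).T @ Ts) # row k holds l_k(s)

n = 8
sk = cgl_points(n)
print(np.allclose(lagrange_basis(n, sk), np.eye(n + 1)))     # l_k(s_j) = delta_jk
g = np.exp
ss = np.linspace(-1, 1, 201)
pn = g(sk) @ lagrange_basis(n, ss)                           # p_n(s) = sum_k g(s_k) l_k(s)
print(np.max(np.abs(pn - g(ss))))                            # small interpolation error
```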


The convergence of the Chebyshev interpolation polynomial can be summarized as follows.

Theorem 2.1. Assume that g ∈ C^{n+2}(−1, 1). For a nonnegative integer k ≤ n + 2, let

(2.3)        M_g^k := \max_{s \in [-1,1]} |g^{(k)}(s)|.

Then, the truncation error ρ_n(s) := g(s) − p_n(s) and its derivative can be estimated by

(2.4)        |\rho_n(s)| \le \frac{M_g^{n+1}}{2^{n-1}(n+1)!}, \qquad |\dot\rho_n(s)| \le \frac{1}{2^{n-1}(n+1)!}\left(2n M_g^{n+1} + \frac{M_g^{n+2}}{n+2}\right).

Proof. First we rewrite the interpolation polynomial p_n(s) in the Lagrangian form

    p_n(s) = \sum_{j=0}^{n} \frac{q_n(s)}{(s - s_j)\prod_{k=0,\, k \ne j}^{n}(s_j - s_k)}\, g(s_j),

where

(2.5)        q_n(s) = \prod_{j=0}^{n}(s - s_j) = 2^{-n+1}(s^2 - 1)\,U_{n-1}(s)

and U_{n−1}(s) is the second kind Chebyshev polynomial of degree n − 1. Then, the truncation error ρ_n(s) becomes

(2.6)        \rho_n(s) = \frac{q_n(s)\, g^{(n+1)}(\xi_s)}{(n+1)!}, \qquad \xi_s \in (-1, 1),

where the mean-value point ξ_s depends continuously on s and

(2.7)        \frac{d}{ds}\, g^{(n+1)}(\xi_s) = \frac{g^{(n+2)}(\chi_s)}{n+2}

for some χ_s ∈ (−1, 1) (see [17] and [32, eq. (18)] and the references therein). Thanks to the change of variable s = cos t, it may be shown that the polynomial q_n(s) in (2.5) simplifies to

    q_n(s) = q_n(\cos t) = -\frac{\sin t \,\sin nt}{2^{n-1}}.

This equation yields

(2.8)        |q_n(s)| \le \frac{1}{2^{n-1}} \quad \text{and} \quad \left|\frac{dq_n(s)}{ds}\right| \le \frac{2n}{2^{n-1}}.

Thus, (2.6) and the first inequality of (2.8) show the first inequality of (2.4). Similarly, by differentiating both sides of (2.6), the second inequality of (2.4) can be obtained from (2.7) and the second inequality of (2.8).
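As a quick numerical check of the first bound in (2.4), one can compare the observed interpolation error with the theoretical bound; this sketch assumes the helpers cgl_points and lagrange_basis from the listing above, and uses g(s) = e^s, for which every derivative bound M_g^k equals e.

```python
import numpy as np
from math import factorial

g = np.exp
ss = np.linspace(-1.0, 1.0, 2001)
for n in range(2, 9):
    sk = cgl_points(n)
    err = np.max(np.abs(g(ss) - g(sk) @ lagrange_basis(n, ss)))
    bound = np.e / (2 ** (n - 1) * factorial(n + 1))   # first bound in (2.4)
    print(f"n={n}:  max|rho_n| = {err:.2e}  <=  bound = {bound:.2e}")
```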


Before closing this section, we observe basic properties of the basis functions l_k(s) in (2.2). First, note that the polynomials l_k(s) satisfy

(2.9)        l_k(s_j) = \delta_{jk}, \qquad \sum_{k=0}^{n} l_k(s) \equiv 1, \quad s \in [-1, 1],

where δ_{jk} denotes the Kronecker delta, and that \dot T_k = dT_k/ds satisfies the orthogonality property

(2.10)        \int_0^{\pi} \dot T_k(\cos\theta)\, \dot T_l(\cos\theta)\, \sin^2\theta \, d\theta = \frac{k l \pi}{2}\, \delta_{kl}.

Let η_j be arbitrary distinct points in [−1, 1] arranged so that −1 ≤ η_1 < ⋯ < η_n ≤ 1. Using these points {η_j} and the functions l_k, define the matrix L as

    L = [L_{jk}], \qquad L_{jk} := \dot l_k(\eta_j) = \frac{dl_k}{ds}(\eta_j), \quad j, k = 1, \ldots, n.

Then L is nonsingular, as shown in the following lemma.

Lemma 2.2. The matrix L is nonsingular.

Proof. For a vector x = [x_1, ..., x_n]^T satisfying Lx = 0, where the superscript T denotes the transpose, the polynomial p(s) = \sum_{k=1}^{n} x_k \dot l_k(s) must be identically zero, because p(s) has the n zeros \{\eta_j\}_{j=1}^{n} while its degree is n − 1. Hence, the definition of l_k(s) in (2.2) and the orthogonality (2.10) lead to the linear system

(2.11)        \sum_{k=1}^{n} T_j(s_k)\, x_k \alpha_k = 0, \qquad j = 1, \ldots, n.

Thus, it follows that for each fixed l = 1, ..., n,

    0 = \sum_{j=1}^{n}{}' \left(\sum_{k=1}^{n} T_j(s_k)\, x_k \alpha_k\right) T_j(s_l) = \sum_{k=1}^{n} x_k \alpha_k \left(\sum_{j=0}^{n}{}'' T_j(s_k)\, T_j(s_l) - \frac{1}{2}\right),

where the prime indicates that the last term in the summation is to be halved. Applying the first identity of (2.9) to the last summation above, we have

    0 = \sum_{k=1}^{n} x_k \alpha_k \left(\frac{n\,\delta_{lk}}{\alpha_k} - \frac{1}{2}\right) = n x_l - \frac{1}{2}\sum_{k=1}^{n} x_k \alpha_k, \qquad l = 1, \ldots, n,

which implies that all coefficients x_l are the same, that is, x_1 = x_2 = ⋯ = x_n ≡ C. If C is zero, the proof is done. Otherwise, from (2.11) we would have \sum_{k=1}^{n} T_j(s_k)\alpha_k = 0, which is impossible. Hence C = 0. These arguments complete the proof.

Let us define the vector a as

(2.12)        a := [\dot l_0(\eta_1), \ldots, \dot l_0(\eta_n)]^T.

Then, the above lemma gives the following corollary.

Corollary 2.3. All components of the vector L^{-1}a are exactly −1.

Proof. By differentiating both sides of the second equation of (2.9) and evaluating at η_j, it is easy to show that

    -\dot l_0(\eta_j) = \sum_{k=1}^{n} \dot l_k(\eta_j), \qquad j = 1, \ldots, n.

This system can be written in the matrix form Lx = −a with x = [1, ..., 1]^T. Thus, the invertibility of L leads to the conclusion.
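The matrix L and the vector a can be formed exactly by differentiating the Chebyshev series of each l_k; the sketch below (our own construction using NumPy's Chebyshev utilities, not the authors' code) verifies Lemma 2.2 and Corollary 2.3 for the choice η_j = s_j used in the collocation scheme later.

```python
import numpy as np
from numpy.polynomial import chebyshev as C

def dlk_matrix(n, eta):
    """Table of derivatives: column k holds l_k'(eta_j) for the basis (2.2)."""
    sk = np.cos((n - np.arange(n + 1)) * np.pi / n)          # CGL points (2.1)
    alpha = np.full(n + 1, 2.0); alpha[[0, n]] = 1.0
    w = np.ones(n + 1); w[[0, n]] = 0.5                      # double-prime halving
    Tk = np.cos(np.outer(np.arange(n + 1), np.arccos(sk)))   # T_j(s_k)
    D = np.empty((len(eta), n + 1))
    for k in range(n + 1):
        coef = (alpha[k] / n) * w * Tk[:, k]                 # Chebyshev coefficients of l_k
        D[:, k] = C.chebval(eta, C.chebder(coef))            # exact derivative l_k'(eta_j)
    return D

n = 6
sk = np.cos((n - np.arange(n + 1)) * np.pi / n)
full = dlk_matrix(n, sk[1:])          # eta_j = s_j, j = 1, ..., n
Lmat, a = full[:, 1:], full[:, 0]     # L = [l_k'(eta_j)],  a = [l_0'(eta_j)]
print(np.linalg.cond(Lmat))           # finite condition number: L nonsingular (Lemma 2.2)
print(np.linalg.solve(Lmat, a))       # all components -1 (Corollary 2.3)
```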


3. Derivation of an approximation method. The target problem for developing an approximation method is a scalar stiff initial value problem of the form

(3.1)        \frac{d\phi}{dt} = f(t, \phi(t)), \quad t \in (t_0, T]; \qquad \phi(t_0) = \phi_0.

In the initial value problem (3.1), the function f is assumed to satisfy all the requirements necessary for the existence of a unique solution. In particular, the partial derivative f_φ is assumed to be uniformly bounded in the range of the solution (that is, ‖f_φ‖_∞ < ∞, where ‖·‖_∞ denotes the supremum norm). The goal of this section is to derive an accurate algorithm for obtaining the next approximation y_{m+1} at the point t_{m+1} = t_m + h from a given approximation y_m at the time t_m to the exact solution φ(t) of (3.1). The uniform time step h := t_{m+1} − t_m, m = 0, 1, ..., will be used from now on.

As mentioned in the introduction, a standard approach to obtaining the approximation y_{m+1} for the original equation (3.1) creates a nonlinear system of equations involving the Jacobian of f. Thus, in this section, we begin with the derivation of an asymptotically linear ODE associated with the original problem (3.1). Assuming the approximation y_m at t_m, consider the Euler polygon y(t) (see [20]) on the interval [t_m, t_{m+1}],

(3.2)        y(t) := y_m + (t - t_m)\, f(t_m, y_m), \qquad t \in [t_m, t_{m+1}].

Let ψ(t) be the difference between the exact solution φ(t) and the Euler polygon y(t), that is,

(3.3)        \psi(t) := \phi(t) - y(t).

Then, it can be shown that ψ(t) satisfies the ODE

(3.4)        \psi'(t) = g(t)\psi(t) + F(t), \qquad t \in (t_m, t_{m+1}),

where g(t) and F(t) are given by

(3.5)        g(t) = \int_0^1 f_\phi(t, y(t) + \xi\psi(t))\, d\xi

and

(3.6)        F(t) = f(t, y(t)) - f(t_m, y_m), \qquad t \in (t_m, t_{m+1}).

Using (3.3), an approximation of the solution ψ(t) of (3.4) can be used to get the next approximation y_{m+1} of φ(t_{m+1}) for a given y_m at t_m. Since the function g in (3.5) still contains the unknown ψ(t), a modification is required, as follows. First, by the change of variable t = t_s = t_m + (h/2)(1 + s) from the computational region [t_m, t_{m+1}] to the reference domain [−1, 1], we consider the solution ψ̄(s) on [−1, 1], defined by

(3.7)        \bar\psi(s) := \psi(t) = \psi(t_s), \qquad s \in [-1, 1].

Then, instead of (3.4), one may work with

(3.8)        \bar\psi'(s) = \frac{h}{2}\left[g(t_s)\bar\psi(s) + F(t_s)\right], \qquad s \in (-1, 1).


Due to the mean value theorem, there is a function ν(t, ξ) between y(t) + ξψ(t) and y(t) such that

    g(t) - f_\phi(t, y(t)) = \psi(t)\int_0^1 \xi\, f_{\phi\phi}(t, \nu(t,\xi))\, d\xi.

Hence, we arrive at |g(t) − f_φ(t, y(t))| ≤ C|ψ(t)|, where C = max |f_{φφ}(t, χ)|. Thus, it is possible to replace the function g(·) by f_φ(·, y(·)) in (3.8) to within a term of the order of the difference function ψ. Therefore, one has the following asymptotically linear first-order ODE in the reference domain (−1, 1) instead of (3.8):

(3.9)        \bar\psi'(s) = \frac{h}{2}\left[f_\phi(t_s, y(t_s))\,\bar\psi(s) + F(t_s)\right] + O\!\left(h\bar\psi(s)^2\right).

Note that the original problem (3.1) is nonlinear in general, but (3.9) is linear in ψ̄ once the asymptotic term O(hψ̄(s)²) is ignored. This linearity in (3.9) makes it easier to develop an approximation scheme for y_{m+1}, which will be explored in this paper.

Remark 3.1. The calculation of the partial derivative f_φ in (3.9) may be costly. One way to avoid it is to approximate f_φ in (3.9) with the forward difference quotient

    f_\phi(t_s, y(t_s)) \approx \frac{f(t_s, y(t_s) + h^\lambda) - f(t_s, y(t_s))}{h^\lambda}.

In this case, the asymptotic part in (3.9) is changed to O(hψ̄(s)² + h^{λ+1}ψ̄(s)). Here, λ is a parameter to be determined later.

Now, taking Remark 3.1 into account, we write the asymptotically linear ODEs for the two cases in the single form

(3.10)        \bar\psi'(s) = \frac{h}{2}\left[\varphi(s)\bar\psi(s) + F(t_s)\right] + \omega(s),

where ω(s) is either

(3.11)        \omega(s) = O\!\left(h\bar\psi(s)^2\right) \quad \text{if } \varphi(s) = f_\phi(t_s, y(t_s))

or

(3.12)        \omega(s) = O\!\left(h^{\lambda+1}\bar\psi(s) + h\bar\psi(s)^2\right) \quad \text{if } \varphi(s) = \frac{f(t_s, y(t_s) + h^\lambda) - f(t_s, y(t_s))}{h^\lambda}.

Remark 3.2. It will be shown in the following section that the asymptotic part ω(s) defined in either (3.11) or (3.12) is so small that it can be ignored. Hence, by truncating the term ω(s) from (3.10), the equation becomes completely linear and the resulting approximation scheme is of explicit type and quite easy to solve, which is a remarkable fact.

The remainder of this section is devoted to deriving an approximation scheme for (3.10) based on the Chebyshev interpolation polynomial discussed in section 2, and to the approximation of y_{m+1} via the relations (3.2), (3.3), (3.7), and the Lagrange basis l_k(s). With the CGL points s_k ∈ [−1, 1] in (2.1) and the basis functions


l_k(s) in (2.2), let us now approximate the solution ψ̄(s) of (3.10) by the Chebyshev interpolation polynomial p_n(s),

(3.13)        p_n(s) = \sum_{k=0}^{n} \bar\psi(s_k)\, l_k(s),

whose error ρ_n(s) is given by the relation

    \bar\psi(s) = p_n(s) + \rho_n(s).

From now on, we assume that there is a fixed positive constant N_0 and we use the degree n in the range 1 ≤ n ≤ N_0 for p_n(s). Then, combining (3.13) with (3.10) determines the coefficients ψ̄(s_k), k = 1, ..., n, by collocating the residual r(s) defined by

    \sum_{k=0}^{n}\bar\psi(s_k)\,\dot l_k(s) - \frac{h}{2}\left[\varphi(s)\sum_{k=0}^{n}\bar\psi(s_k)\, l_k(s) + F(t_s)\right] = r(s).

Collocating the residual r(s) at the n points s_j, j = 1, ..., n, we obtain the discrete system

(3.14)        \sum_{k=1}^{n} \bar\psi(s_k)\, a_{jk} = \frac{h}{2} F(t_{s_j}) - \bar\psi(s_0)\, a_{j0} + r_j, \qquad 1 \le j \le n,

where

(3.15)        a_{jk} := \dot l_k(s_j) - \frac{h}{2}\varphi(s_j)\delta_{jk}, \qquad r_j := \frac{h}{2}\varphi(s_j)\rho_n(s_j) - \dot\rho_n(s_j) + \omega(s_j),

and ω(s) is defined in either (3.11) or (3.12). Here, we define the vectors r and b as

(3.16)        r = [r_1, \ldots, r_n]^T, \qquad b = [a_{10}, \ldots, a_{n0}]^T.

Remark 3.3. The two quantities ψ̄(s_0) and r_j in (3.14) can be neglected because (i) ψ̄(s_0) = φ(t_m) − y_m is the error inherited from the previous time step [t_{m−1}, t_m] and is quite small (see Theorem 4.4), and (ii) the first two terms of r_j, which are its main parts, are the errors induced by the truncation error ρ_n(s) of the interpolation polynomial and are quite small (see Corollary 4.1).

For convenience, let us define the matrices

(3.17)        A = [a_{jk}], \quad L = [L_{jk}], \quad J = [J_{jk}], \qquad 1 \le j, k \le n,

where a_{jk} is defined in (3.15) and

    L_{jk} := \dot l_k(s_j), \qquad J_{jk} := \varphi(s_j)\delta_{jk},

and the vectors

(3.18)        d = [\beta_1, \ldots, \beta_n]^T, \qquad f = [F(t_{s_1}), \ldots, F(t_{s_n})]^T.


Then, based on Remark 3.3, instead of solving (3.14) for ψ̄(s_k), we approximate the exact values ψ̄(s_k) by the β_k satisfying the discrete Chebyshev collocation system

(3.19)        A d = \left(L - \frac{h}{2}J\right) d = \frac{h}{2}\, f.

On the other hand, the intrastep error t = [\epsilon_1, \ldots, \epsilon_n]^T satisfies the system

(3.20)        A t = r - \bar\psi(s_0)\, b,

where r and b are defined in (3.16) and

(3.21)        \epsilon_k := \bar\psi(s_k) - \beta_k \quad \text{for } k = 1, 2, \ldots, n.

The solvability of the linear system (3.19) is given in the following lemma.

Lemma 3.4. Assume that f_φ is uniformly bounded and that there is a positive integer N_0 such that 1 ≤ n ≤ N_0. Then, for a sufficiently small time step h, the matrices L and J in (3.17) are uniformly bounded and the matrix A is invertible. Furthermore, A^{-1} is uniformly bounded and

(3.22)        A^{-1} = \left(L - \frac{h}{2}J\right)^{-1} = L^{-1} + O(h).

Proof. Note that L is nonsingular by Lemma 2.2 and that L depends only on the degree n of the Chebyshev interpolation polynomial. Thus ‖L‖ and ‖L^{-1}‖ (here ‖·‖ denotes a matrix norm) are uniformly bounded independently of the step size h. Also, the uniform boundedness of f_φ gives that, for fixed n, ‖J‖ is uniformly bounded independently of the step size h. Hence, the invertibility of A and (3.22) follow from Lemma 2.2 and a geometric series argument. Further, the inequality

    \|A^{-1}\| \le \|L^{-1}\| \left\|\left(I - \frac{h}{2}L^{-1}J\right)^{-1}\right\| \le \frac{\|L^{-1}\|}{1 - \frac{h}{2}\|L^{-1}J\|}

shows the uniform boundedness of A^{-1} provided the time step h is sufficiently small.

Solving the system (3.19), we obtain, in particular, the required approximation ψ(t_{m+1}) = ψ̄(s_n) ≈ β_n at the final point of the interval. Also, the definitions of y(t), ψ(t), ψ̄(s), and ε_n in (3.2), (3.3), (3.7), and (3.21), respectively, give

(3.23)        \phi(t_{m+1}) = y(t_{m+1}) + \psi(t_{m+1}) = y(t_{m+1}) + \bar\psi(s_n) = y_m + h f(t_m, y_m) + \beta_n + \epsilon_n,

where β_n and ε_n are the last components of the vectors d and t in (3.19) and (3.20), respectively. Therefore, one may define the approximation of φ(t_{m+1}) by truncating the intrastep error ε_n in (3.23) as follows:

(3.24)        y_{m+1} = y_m + h f(t_m, y_m) + \beta_n.

Here, one may say that the approximation procedure in (3.24) is of explicit type because the last term β_n is obtained by solving the linear system (3.19), whose right-hand side depends only on the previous value y_m through the definitions of F(t_{s_j}) and y(t) given in (3.6) and (3.2), respectively.


The explicit-type algorithm for calculating y_{m+1} in (3.24) can be summarized as follows:

Algorithm 3.5. ECEM(f, t_0, y_0, t_end, h)
1. Remark: The problem being solved is φ′ = f(t, φ), φ(t_0) = y_0, for t_0 ≤ t ≤ t_end, using the method described earlier in this section. The approximate solution values are printed at each node point.
2. Choose the step size h and the degree n of the interpolation polynomial.
3. Construct the matrix L.
4. Let t_1 := t_0 + h.
5. If t_1 > t_end, then exit.
6. Construct the matrices J, A and the vector f.
7. Calculate the last component β_n of (h/2)A^{-1}f.
8. Calculate y_1 = y_0 + hf(t_0, y_0) + β_n.
9. Print t_1, y_1.
10. Set t_0 := t_1 and y_0 := y_1. Then go to step 4.
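For illustration, a compact NumPy sketch of Algorithm 3.5 follows; it is our reading of the method, not the authors' code. It assumes the helper dlk_matrix from the section 2 sketch, a right-hand side f(t, y) that broadcasts over NumPy arrays in t, and an optional Jacobian dfy; with dfy=None it falls back to the forward difference quotient of Remark 3.1 with λ = 3.

```python
import numpy as np

def ecem_step(f, t0, y0, h, n, dfy=None, lam=3):
    """One step of Algorithm 3.5: Euler prediction plus the correction beta_n of (3.24)."""
    sk = np.cos((n - np.arange(n + 1)) * np.pi / n)    # CGL points (2.1)
    Lmat = dlk_matrix(n, sk[1:])[:, 1:]                # L_{jk} = l_k'(s_j), 1 <= j,k <= n
    ts = t0 + 0.5 * h * (1.0 + sk[1:])                 # collocation times t_{s_j}
    fy0 = f(t0, y0)
    yE = y0 + (ts - t0) * fy0                          # Euler polygon (3.2) at t_{s_j}
    Fv = f(ts, yE) - fy0                               # leading vector F(t_{s_j}), see (3.6)
    if dfy is not None:                                # JECEM: exact partial derivative
        phi = dfy(ts, yE)
    else:                                              # ECEM: quotient of Remark 3.1
        phi = (f(ts, yE + h ** lam) - f(ts, yE)) / h ** lam
    A = Lmat - 0.5 * h * np.diag(phi)                  # A = L - (h/2) J, as in (3.19)
    beta = np.linalg.solve(A, 0.5 * h * Fv)            # d = (h/2) A^{-1} f
    return y0 + h * fy0 + beta[-1]                     # y_{m+1} from (3.24)

def ecem(f, t0, y0, tend, h, n=3, dfy=None):
    """Integrate y' = f(t, y) from t0 to tend with the uniform step h."""
    ts, ys = [t0], [y0]
    while ts[-1] + h <= tend + 1e-12:
        ys.append(ecem_step(f, ts[-1], ys[-1], h, n, dfy))
        ts.append(ts[-1] + h)
    return np.array(ts), np.array(ys)
```

Note that each step costs a single n × n linear solve plus a handful of evaluations of f, with no nonlinear iteration.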

In the following section, we will provide the error analysis for φ(t_m) − y_m by analyzing the intrastep error ε_k = ψ̄(s_k) − β_k.

3.1. Simple extension to systems of ODEs. In this subsection, we give a brief sketch of a direct extension of the proposed scheme (3.24) to a system of ODEs. Consider

(3.25)        \Phi'(t) = \mathrm{F}(t, \Phi(t)), \quad t \in (t_0, T]; \qquad \Phi(t_0) = \Phi_0,

where Φ = [φ_1(t), ..., φ_d(t)]^T and F = [f_1(t, Φ(t)), ..., f_d(t, Φ(t))]^T. Let us assume that Y_m is a given approximation of Φ(t) at time t = t_m and consider the Euler polygon defined by

    Y(t) = Y_m + (t - t_m)\,\mathrm{F}(t_m, Y_m), \qquad t \in [t_m, t_{m+1}].

Also, let G(t) = [g_1(t), ..., g_d(t)]^T be the leading vector-valued function given by G(t) = F(t, Y(t)) − F(t_m, Y_m). Now, applying directly the procedure used to derive the system (3.19), one obtains

(3.26)        \left(I_d \otimes L - \frac{h}{2}\mathcal{J}\right) c = \frac{h}{2}\, g,

where I_d ⊗ L denotes the tensor product of the identity matrix I_d of size d with L; the vector g is given by

    g = [g_1(t_{s_1}), \ldots, g_1(t_{s_n}), g_2(t_{s_1}), \ldots, g_d(t_{s_n})]^T;

and the block matrix \mathcal{J} = (\mathcal{J}^{(\mu,\nu)}), 1 ≤ μ, ν ≤ d, is defined by

    \mathcal{J}^{(\mu,\nu)} = \left[\varphi_{\mu,\nu}(s_j)\,\delta_{jk}\right]_{n \times n},

where ϕ_{μ,ν}(s) is defined either by its forward approximation or by

    \varphi_{\mu,\nu}(s) = \frac{\partial f_\mu}{\partial y_\nu}(t_s, Y(t_s)).


Note that for each i = 1, ..., d, the (n·i)th component of c is an approximation of the ith component of the vector Φ(t_{m+1}) − Y(t_{m+1}). Hence, after solving the system (3.26), we define the vector c^{(n)} := [c_n, c_{2n}, ..., c_{dn}]^T, where c_j denotes the jth component of c. Then one can approximate Φ(t_{m+1}) with the formula

    \Phi(t_{m+1}) \approx Y(t_{m+1}) + c^{(n)} = Y_m + h\,\mathrm{F}(t_m, Y_m) + c^{(n)}.
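A sketch of one step of this system version (3.26), again our own illustration rather than the authors' code: F and Jac are user-supplied callables returning the right-hand side and the d × d Jacobian, and dlk_matrix is the helper from the section 2 sketch.

```python
import numpy as np

def ecem_system_step(F, t0, Y0, h, n, Jac):
    """One step of the extension (3.26) for Y' = F(t, Y) with Y0 in R^d."""
    d = len(Y0)
    sk = np.cos((n - np.arange(n + 1)) * np.pi / n)
    Lmat = dlk_matrix(n, sk[1:])[:, 1:]                   # n x n matrix L
    ts = t0 + 0.5 * h * (1.0 + sk[1:])
    F0 = F(t0, Y0)
    YE = Y0[None, :] + np.outer(ts - t0, F0)              # Euler polygon rows Y(t_{s_j})
    G = np.array([F(t, y) for t, y in zip(ts, YE)]) - F0  # G[j, mu] = g_mu(t_{s_j})
    # assemble (I_d (x) L - h/2 J): block (mu, nu) is diagonal over the nodes j
    A = np.kron(np.eye(d), Lmat)
    for j, (t, y) in enumerate(zip(ts, YE)):
        Jj = Jac(t, y)                                    # d x d Jacobian at node j
        for mu in range(d):
            for nu in range(d):
                A[mu * n + j, nu * n + j] -= 0.5 * h * Jj[mu, nu]
    c = np.linalg.solve(A, 0.5 * h * G.T.reshape(-1))     # g stacked component-wise
    cn = c[n - 1::n]                                      # c^{(n)} = [c_n, c_2n, ..., c_dn]
    return Y0 + h * F0 + cn
```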

4. Convergence analysis. In this section, we analyze the actual error

    e_m = \phi(t_m) - y_m, \qquad m = 1, 2, \ldots,

between the exact solution φ of (3.1) and the approximate solution y_m obtained by Algorithm 3.5.

We begin this section with the estimation of the truncation error ρ_n(s) = ψ̄(s) − p_n(s) for the Chebyshev interpolation p_n(s) defined in (3.13), which follows easily from Theorem 2.1.

Corollary 4.1 (local truncation error estimate). Assume that the solution φ(t) of the problem (3.1) is in the space C^{n+2}((t_0, T]), n ≥ 1, and for a nonnegative integer k ≤ n + 2 and m ≥ 0, let

    N_\phi^{k,m} := \max_{t \in [t_m, t_{m+1}]} |\phi^{(k)}(t)|.

Then, the local truncation error ρ_n(s) = ψ̄(s) − p_n(s) for the Chebyshev interpolation p_n(s) can be estimated by

    |\rho_n(s)| \le \frac{N_\phi^{n+1,m}}{2^{n-1}(n+1)!}\left(\frac{h}{2}\right)^{n+1}, \qquad |\dot\rho_n(s)| \le \frac{1}{2^{n-1}(n+1)!}\left(\frac{h}{2}\right)^{n+1}\left(2nN_\phi^{n+1,m} + \frac{hN_\phi^{n+2,m}}{2(n+2)}\right).

Proof. First note that ψ ∈ C^{n+2}[t_m, t_{m+1}] for each m = 0, 1, .... Hence, according to Theorem 2.1, it is enough to estimate M_{ψ̄}^k in (2.3) for the function ψ̄(s) defined in (3.7). By the chain rule, (3.7) shows

    \bar\psi^{(k)}(s) = \left(\frac{h}{2}\right)^k \psi^{(k)}\!\left(t_m + \frac{h}{2}(1+s)\right).

Thus, it follows that

    M_{\bar\psi}^k = \max_{s \in [-1,1]} |\bar\psi^{(k)}(s)| = \left(\frac{h}{2}\right)^k \max_{t \in [t_m, t_{m+1}]} |\psi^{(k)}(t)| = \left(\frac{h}{2}\right)^k N_\psi^{k,m} = \left(\frac{h}{2}\right)^k N_\phi^{k,m},

where the last equality holds for k > 1, which follows from (3.2) and (3.3). These arguments complete the proof.

From Corollary 4.1, each component r_j of the vector r in (3.16) can be estimated as follows.

Lemma 4.2. Assume that the parameter λ in Remark 3.1 is larger than 2 and that the solution φ(t) of the problem (3.1) is in the space C^{n+2}((t_0, T]), n ≥ 1. Then, for each j = 1, ..., n, the remainder r_j in (3.15) can be estimated by

(4.1)        |r_j| \le \left(\frac{h}{2}\right)^{n+1} E_{f,\phi,n,m} + C_j\left(h\epsilon_j^2 + h^3|\epsilon_j| + h^5\right),


where C_j is a positive constant independent of h, ε_j is the jth component of t defined in (3.21), and E_{f,φ,n,m} is the constant defined by

    E_{f,\phi,n,m} := \frac{h N_\phi^{n+1,m}\, \|f_\phi\|_\infty}{2^{n}(n+1)!} + \frac{1}{2^{n-1}(n+1)!}\left(2nN_\phi^{n+1,m} + \frac{hN_\phi^{n+2,m}}{2(n+2)}\right).

Proof. Using Taylor's expansion of the function F(t) in (3.6) and the definition of the Euler polygon y(t) in (3.2), it follows that F(t) = O(h). Then, from (3.18), (3.19), and Lemma 3.4, one has

(4.2)        \beta_j = O(h^2),

where β_j is the jth component of the solution d of the system (3.19). Thus, for both functions ω(s) defined in (3.11) and (3.12), the term ω(s_j) in (3.15) can be estimated as follows. For the case of ω(s) in (3.11), using (4.2) and (3.21), we have

(4.3)        |\omega(s_j)| = O\!\left(\left|h\bar\psi(s_j)^2\right|\right) = O\!\left(\left|h(\bar\psi(s_j)-\beta_j)^2 + 2h(\bar\psi(s_j)-\beta_j)\beta_j + h\beta_j^2\right|\right) \le C_j\left(h\epsilon_j^2 + h^3|\epsilon_j| + h^5\right),

where C_j is a positive constant independent of h. For the case of ω(s) in (3.12), we have a similar estimate if λ ≥ 2:

(4.4)        |\omega(s_j)| = O\!\left(\left|h\bar\psi(s_j)^2 + h^{\lambda+1}\bar\psi(s_j)\right|\right) = O\!\left(h\left|(\bar\psi(s_j)-\beta_j)^2 + (\bar\psi(s_j)-\beta_j)(2\beta_j + h^{\lambda}) + \beta_j^2 + h^{\lambda}\beta_j\right|\right) \le C_j\left(h\epsilon_j^2 + h^3|\epsilon_j| + h^5\right),

where C_j is also a positive constant independent of h. Note that both functions ϕ(s) defined in (3.11) and (3.12) can be bounded by ‖f_φ‖_∞. Thus, applying Corollary 4.1 to

    q_j := r_j - \omega(s_j) = \frac{h}{2}\varphi(s_j)\rho_n(s_j) - \dot\rho_n(s_j),

one gets

(4.5)        |q_j| \le \left(\frac{h}{2}\right)^{n+1} E_{f,\phi,n,m}, \qquad j = 1, \ldots, n.

Combining the three estimates (4.3), (4.4), and (4.5) leads to (4.1).

From Lemma 4.2, we can estimate the last component ε_n of the intrastep error t in (3.20) as follows.

Lemma 4.3. Assume that the assumptions of Lemma 4.2 hold and that the quantity e_m = φ(t_m) − y_m = ψ(t_m) = ψ̄(s_0) is sufficiently small. Then, for fixed n ≥ 1 and a sufficiently small step size h, the last component ε_n of the vector t can be estimated by

    |\epsilon_n| \le (1 + Ch)^2\left(1 + Ch|\bar\psi(s_0)|\right)|\bar\psi(s_0)| + Dh^{\min\{n+1,5\}},

where C and D are constants independent of the time step size h.


Proof. Before starting the proof, let ā_{ij} denote the (i, j) component of the matrix A^{-1} for simplicity. Note that the vector b in (3.16) is exactly the same as the vector a in (2.12), because (3.15) shows a_{j0} = \dot l_0(s_j). Hence, using Lemma 3.4, A^{-1}b can be written as

    A^{-1}b = A^{-1}a = L^{-1}a + O(h),

and hence, from Corollary 2.3, its jth component (A^{-1}b)_j can be estimated by

(4.6)        |(A^{-1}b)_j| \le 1 + O(h).

Also, if r̄_j is the jth component of the vector A^{-1}r for the vector r in (3.16), then Lemmas 3.4 and 4.2 show that

(4.7)        |\bar r_j| \le \sum_{k=1}^{n} |\bar a_{jk}|\,\varsigma_k + O\!\left(h^{\min\{n+1,5\}}\right),

where ā_{jk} is the (j, k) component of A^{-1} and each ς_k is defined by

(4.8)        \varsigma_k = C_k\left(h\epsilon_k^2 + h^3|\epsilon_k|\right), \qquad k = 1, \ldots, n.

Thus, using (4.6) and (4.7), the jth component ε_j of the intrastep error t = A^{-1}(r − ψ̄(s_0)b) in (3.20) can be estimated by

    |\epsilon_j| \le |\bar r_j| + |\bar\psi(s_0)|\,|(A^{-1}b)_j| \le O\!\left(h^{\min\{n+1,5\}}\right) + \sum_{k=1}^{n} |\bar a_{jk}|\,\varsigma_k + (1 + O(h))|\bar\psi(s_0)|.

Hence, if we let |ε_{j_0}| be the maximum among |ε_j|, j = 1, ..., n, then (4.8) and the estimate above show that for each j = 1, ..., n,

(4.9)        |\epsilon_j| \le (1 + c_1 h)|\bar\psi(s_0)| + c_2 h^{\min\{n+1,5\}} + c_3\sum_{k=1}^{n} h|\epsilon_k|\left(|\epsilon_k| + h^2\right) \le (1 + c_1 h)|\bar\psi(s_0)| + c_2 h^{\min\{n+1,5\}} + nc_3 h|\epsilon_{j_0}|\left(|\epsilon_{j_0}| + h^2\right) \le (1 + c_4 h)|\bar\psi(s_0)| + c_2 h^{\min\{n+1,5\}} + c_4 h|\epsilon_{j_0}|\left(|\epsilon_{j_0}| + h^2\right)

for some positive constants c_i (i = 1, 2, 3) independent of h and c_4 = max{c_1, nc_3}. Note that

    \delta(h) := (1 + c_4 h)|\bar\psi(s_0)| + c_2 h^{\min\{n+1,5\}}

is sufficiently small. Hence the inequality (4.9) gives

(4.10)        c_4 h|\epsilon_{j_0}|^2 - (1 - c_4 h^3)|\epsilon_{j_0}| + \delta(h) \ge 0.

The inequality (4.10) implies that |ε_{j_0}| must be small, because δ(h) is sufficiently small by the assumptions on h and ψ̄(s_0). Further, since the discriminant of the quadratic function on the left-hand side of (4.10) is positive for sufficiently small h and δ(h), the inequality can be solved as follows:

    |\epsilon_{j_0}| \le \frac{1 - c_4 h^3 - g(h)}{2c_4 h}, \qquad g(h) = \sqrt{(1 - c_4 h^3)^2 - 4c_4 h\,\delta(h)}.


[Figure 4.1 appears here: on the interval [t_m, t_{m+1}], the Euler polygon y(t) starting from y_m, the exact solution φ(t), the correction β_n at t_{m+1}, the incoming error e_m, and the bound δ = (1 + Ch)²(1 + Ch e_m)e_m + Dh^{min{n+1,5}}.]

Fig. 4.1. Geometric meaning of the algorithm.

From the Taylor series of g(h), we have

    |\epsilon_{j_0}| = (1 + c_5 h)|\bar\psi(s_0)| + O(h^2)

for some constant c_5. Substituting this into (4.9) and taking j = n, we see that

    |\epsilon_n| \le (1 + c_4 h)|\bar\psi(s_0)| + c_2 h^{\min\{n+1,5\}} + c_4 h\left((1 + c_5 h)|\bar\psi(s_0)| + c_6 h^2\right)^2

for some positive constant c_6 independent of h. Finally, taking

    C = \max\{c_4, c_5, 2c_6\} \quad \text{and} \quad D = 2\max\{c_2, c_4 c_6^2\},

one obtains

    |\epsilon_n| \le Dh^{\min\{n+1,5\}} + (1 + Ch)|\bar\psi(s_0)|\left(1 + c_4 h(1 + Ch)|\bar\psi(s_0)| + Ch^2\right) \le (1 + Ch)^2\left(1 + Ch|\bar\psi(s_0)|\right)|\bar\psi(s_0)| + Dh^{\min\{n+1,5\}},

which completes the proof.

Equation (3.23) and Lemma 4.3 show that the actual error e_{m+1} satisfies the difference inequality

(4.11)        |e_{m+1}| \le (1 + Ch)^2(1 + Ch|e_m|)|e_m| + Dh^{\min\{n+1,5\}}, \qquad m \ge 0; \quad e_0 = 0.

This difference relation gives the geometric interpretation of Algorithm 3.5 shown in Figure 4.1. That is, the algorithm consists of two steps: (1) first, y_{m+1} is predicted with the Euler polygon y(t) defined in (3.2), and (2) then y_{m+1} is corrected with the approximate value β_n of the solution ψ(t) of the ODE (3.4). In this sense, one may regard the algorithm as a type of predictor-corrector method. The Adams–Bashforth and Adams–Moulton pairs are well-known predictor-corrector methods, but they are entirely different from Algorithm 3.5. It is remarkable that Algorithm 3.5 is explicit. Also, if the term β_n in Algorithm 3.5 is removed, the algorithm reduces to Euler's method. That is, one may say that the term β_n is an error correction term for Euler's method.
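A rough empirical check of the order min{n+1, 5} predicted by (4.11) can be run with the ecem sketch from section 3; the test problem and its closed-form solution below are our own choices, not from the paper.

```python
import numpy as np

f = lambda t, y: -y + np.sin(t)                    # smooth test problem (assumed)
exact = lambda t: 0.5 * (np.sin(t) - np.cos(t)) + 1.5 * np.exp(-t)   # solution with y(0) = 1
for n in (1, 2, 3, 4):
    errs = []
    for h in (0.1, 0.05):
        t, y = ecem(f, 0.0, 1.0, 5.0, h, n=n)
        errs.append(np.max(np.abs(exact(t) - y)))
    print(n, np.log2(errs[0] / errs[1]))           # observed order, expect about n + 1
```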


From the difference inequality (4.11), we can estimate the actual error e_m as follows.

Theorem 4.4 (convergence). For sufficiently small h, the actual error e_m satisfies the inequality

(4.12)        |e_m| \le \frac{D}{(1 + Ch)^3 - 1}\left(\exp(3CT) - 1\right)h^{\min\{n+1,5\}}, \qquad m \ge 0,

where C and D are positive constants independent of h and m, and T is the end time in (3.1).

Proof. By mathematical induction, it is easy to show that the difference inequality (4.11) implies

(4.13)        |e_m| \le \frac{(1 + Ch)^{3m} - 1}{(1 + Ch)^3 - 1}\, Dh^{\min\{n+1,5\}}, \qquad m \ge 0.

If mh ≤ T, then 1 + Ch ≤ exp(Ch) and (1 + Ch)^{3m} ≤ exp(3mCh) ≤ exp(3CT). Thus, the inequality (4.13) shows that the actual error e_m has the bound (4.12).

Remark 4.5. Theorem 4.4 shows that one can obtain a scheme of convergence order up to 4, regardless of the choice of the function ϕ in (3.11) or (3.12), by using a Chebyshev interpolation polynomial of degree n with n ≤ 4.

5. Stability analysis. For the stability analysis of Algorithm 3.5, we apply it to Dahlquist's test problem y′ = λy. Let L = (L_{jk}), A = (a_{jk}(λh)), and b = (b_1, b_2, ..., b_n)^T be the matrices and vector whose entries are defined by

    L_{jk} := \dot l_k(s_j), \qquad a_{jk}(z) := L_{jk} - \frac{z}{2}\delta_{jk}, \qquad b_j := 1 + s_j.

Then, we have the following proposition.

Proposition 5.1. Algorithm 3.5 with the discrete Chebyshev collocation system (3.19) applied to y′ = λy yields

(5.1)        y_{m+1} = S_n(\lambda h)\, y_m, \qquad m \ge 0,

with the stability function S_n(z) defined by

(5.2)        S_n(z) = 1 + z + \left(\frac{z}{2}\right)^2 \mu_n(z),

where μ_n(z) is the last component of the vector (L − (z/2)I)^{-1}b. Here, I denotes the n × n identity matrix.

Proof. For Dahlquist's test problem y′ = λy, f(t, y) = λy is linear, and hence the diagonal matrix J in (3.19) becomes J = λI. Further, the vector f on the right-hand side of (3.19) becomes

    f = \frac{h\lambda^2 y_m}{2}\, b

by the definitions of F and the Euler polygon y(t) in (3.6) and (3.2), respectively. Hence, the discrete Chebyshev collocation system (3.19) applied to Dahlquist's test problem y′ = λy becomes

(5.3)        A d = \left(L - \frac{\lambda h}{2} I\right) d = \left(\frac{h\lambda}{2}\right)^2 y_m\, b


with the unknown d = [β_1, ..., β_n]^T. Thus, the formula (3.24) becomes

(5.4)        y_{m+1} = (1 + \lambda h)\, y_m + \beta_n,

where β_n is the last component of the solution of the system (5.3). Therefore, solving (5.3) and inserting the solution into (5.4) leads to (5.1).

Another useful formula for S_n(z) is the following.

Proposition 5.2. The stability function S_n(z) of (5.2) satisfies

(5.5)        S_n(z) = \frac{\det\begin{pmatrix} L - \frac{z}{2}I & \left(\frac{z}{2}\right)^2 b \\ r & 1 + z \end{pmatrix}}{\det\left(L - \frac{z}{2}I\right)},

where r = [0, 0, ..., 0, −1] is a 1 × n vector.

Proof. The two equations (5.3) and (5.4) can be written as

    \begin{pmatrix} L - \frac{z}{2}I & 0 \\ r & 1 \end{pmatrix}\begin{pmatrix} d \\ y_{m+1} \end{pmatrix} = y_m \begin{pmatrix} \left(\frac{z}{2}\right)^2 b \\ 1 + z \end{pmatrix},

where z = λh. Thus, Cramer's rule implies that the denominator of S_n(z) is det(L − (z/2)I) and its numerator is

    \det\begin{pmatrix} L - \frac{z}{2}I & \left(\frac{z}{2}\right)^2 b \\ r & 1 + z \end{pmatrix}.

For the function S_n(z) defined in (5.2) (or (5.5)), the stability domain of the method is defined as (see [20, p. 16])

    \Gamma_n := \{z \in \mathbb{C} : |S_n(z)| < 1\}.

When the left half complex plane is contained in Γ_n, the method is called A-stable [20, p. 42]. Also, a method is said to be A(α)-stable if the sector

    \{z : |\arg(-z)| < \alpha, \; z \ne 0\}

is contained in the stability region Γ_n [20, p. 45]. Further, a method is called L-stable [20, p. 44] if it is A-stable and if, in addition,

(5.6)        \lim_{z \to \infty} S_n(z) = 0.

Using symbolic calculation in Mathematica, the explicit formula for the stability function S_n(z) given in (5.5) can be obtained up to as high an order as the memory available to Mathematica allows. Here, we list the explicit formulas and draw the corresponding stability domains Γ_n. The stability functions S_n(z), n = 1, ..., 4, are

    S_1(z) = \frac{(1)}{(1, -1)}, \qquad S_2(z) = \frac{(4, 1)}{(4, -3, 1)}, \qquad S_3(z) = \frac{(96, 32, 3)}{(96, -64, 19, -3)}, \qquad S_4(z) = \frac{(384, 144, 20, 1)}{(384, -240, 68, -11, 1)},


[Figure 5.1 appears here: stability domains Γ_n in the complex plane (left column) and details near the origin (right column) for n = 2, 3, 4.]

Fig. 5.1. (Left) Stability domain. (Right) Detail near the origin. From the first row to the final row, the convergence order is n = 2, 3, 4.

where the notation (a_0, ..., a_k) denotes the polynomial \sum_{j=0}^{k} a_j z^j in z. Figure 5.1 shows the stability domains for S_n, n = 2, 3, 4, and it can be observed that A-stability holds only for n = 2. However, all methods up to convergence order 4 are A(α)-stable with a large α, even though α decreases as the convergence order increases. Further, (5.6) is satisfied in all cases. Thus, one may say that the proposed scheme is almost L-stable (see [30]).
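The stability function is easy to evaluate numerically from (5.2); the sketch below (assuming the helper dlk_matrix from the section 2 sketch) reproduces the listed S_1(z) = (1)/(1, −1) = 1/(1 − z) and illustrates the near L-stability (5.6) far out on the negative real axis.

```python
import numpy as np

def stability_function(n, z):
    """Evaluate S_n(z) = 1 + z + (z/2)^2 * mu_n(z) from (5.2)."""
    sk = np.cos((n - np.arange(n + 1)) * np.pi / n)
    Lmat = dlk_matrix(n, sk[1:])[:, 1:]
    b = 1.0 + sk[1:]
    mu = np.linalg.solve(Lmat - 0.5 * z * np.eye(n), b)[-1]
    return 1.0 + z + 0.25 * z ** 2 * mu

print(stability_function(1, -2.0), 1.0 / 3.0)    # matches S_1(-2) = 1/(1 - (-2))
for n in (2, 3, 4):
    print(n, abs(stability_function(n, -1e6)))   # near zero, consistent with (5.6)
```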

6. Numerical test. In this section, we test three examples to give numerical evidence for the theoretical results of the proposed method.

Example 1. As the first example, we test the Prothero–Robinson equation, a particular case of the family of scalar equations proposed by Prothero and Robinson in [28] that constitutes a stiff problem,

    \phi'(t) = \nu(\phi(t) - g(t)) + g'(t), \quad t \in (0, 10]; \qquad \phi(0) = 1,

where the eigenvalue is ν = −10^6 and g(t) = sin(t). The exact solution is given by φ(t) = sin(t). In Table 6.1, for different step sizes h = 2^{−n}, the results obtained from the proposed fourth-order method and the fourth-order CCM [30] are compared,


Table 6.1
Results for Example 1 using the fourth-order CCM [30] and the proposed fourth-order method.

              CCM [30]                              Present method
  n    Feval   Err(h)           R(h)      Feval   Err(h)           R(h)
 -1    40      7.6985 × 10^-9             15      7.9576 × 10^-9
  0    80      5.0844 × 10^-10  3.92      30      5.0844 × 10^-10  3.97
  1    160     3.0297 × 10^-11  4.07      60      3.2457 × 10^-11  3.97
  2    320     1.8114 × 10^-12  4.06      120     2.0329 × 10^-12  4.00
  3    640     1.1013 × 10^-13  4.04      240     1.2708 × 10^-13  4.00
  4    1280    6.7723 × 10^-15  4.02      480     1.2708 × 10^-15  4.00

Table 6.2
Results for Example 2 with κ = 1 using three methods, JECEM, ECEM, and CCM [30].

               JECEM                  ECEM                        CCM [30]
  n    Feval   Err(h)        R(h)     Err(h)        R(h)     Feval   Err(h)         R(h)
  1    60      1.62 × 10^-4           9.99 × 10^-4            246     5.63 × 10^-6
  2    120     9.20 × 10^-6  4.14     4.06 × 10^-5  4.62      400     4.77 × 10^-7   3.56
  3    240     5.52 × 10^-7  4.06     1.53 × 10^-6  4.73      800     3.58 × 10^-8   3.74
  4    480     3.29 × 10^-8  4.07     6.40 × 10^-8  4.58      1282    2.39 × 10^-9   3.90
  5    960     2.00 × 10^-9  4.04     3.04 × 10^-9  4.40      2560    1.54 × 10^-10  3.95

where the column Err(h) shows the maximum of the absolute error over the integration interval, Err(h) = max_j{|φ_exact(t_j) − φ_computed(t_j)|}. In the same table, "Feval" denotes the number of evaluations of f at step 6 of Algorithm 3.5 for the function on the right-hand side of (3.1), and the column R(h) gives the convergence order defined by R(h) = log(Err(h)/Err(h/2))/log 2. The results show that both CCM [30] and the proposed method have similar absolute errors. These results also reveal that the convergence order of the proposed method is 4, as expected from our theoretical analysis. However, with respect to function evaluations, the present method is superior to CCM [30] because of its explicit representation.

Example 2. Consider the nonlinear initial value problem in [1],

    \frac{d\phi}{dt} = \frac{\kappa\,\phi(t)(1 - \phi(t))}{2\phi(t) - 1}, \quad t \in (0, 10]; \qquad \phi(0) = \frac{5}{6},

whose solution is

    \phi(t) = \frac{1}{2} + \sqrt{\frac{1}{4} - \frac{5}{36}e^{-\kappa t}}

with a parameter κ. To investigate the effect of using the partial derivative f_φ, we denote the method using f_φ by JECEM and the method using the forward difference approximation with λ = 3 in (3.12) for f_φ by ECEM. For the small parameter κ = 1, we solved the problem with the fourth-order schemes for various step sizes h = 2^{−n}; the results of the three methods JECEM, ECEM, and CCM [30] are compared in Table 6.2. The numerical results, as expected from the theoretical analysis, show that both JECEM and ECEM have the same convergence order 4 with the same cost in function evaluations. Even though CCM [30] has a similar convergence order 4 and a better maximum error than JECEM and ECEM, its cost in function evaluations increases rapidly as the step size h decreases. From this point of view, one may conclude that both JECEM and ECEM are superior to CCM. Also, if the partial derivative f_φ is costly to evaluate, ECEM is preferable to the other two methods. For the quite large parameter κ = 50, we solved the problem with the two third-order methods ECEM and CCM. The numerical results in Table 6.3 show that considerably less CPU time is required for ECEM than for CCM to obtain a similar maximum error.
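As a usage illustration, the κ = 1 runs behind Table 6.2 can be imitated along the following lines with the ecem sketch from section 3; the closed forms for f, f_φ, and the exact solution below are our own, derived from the problem statement above.

```python
import numpy as np

kappa = 1.0
f = lambda t, y: kappa * y * (1.0 - y) / (2.0 * y - 1.0)
dfy = lambda t, y: kappa * (-2.0 * y * y + 2.0 * y - 1.0) / (2.0 * y - 1.0) ** 2
exact = lambda t: 0.5 + np.sqrt(0.25 - (5.0 / 36.0) * np.exp(-kappa * t))

for k in (1, 2, 3, 4, 5):
    h = 2.0 ** (-k)
    t, y = ecem(f, 0.0, 5.0 / 6.0, 10.0, h, n=3, dfy=dfy)   # JECEM; dfy=None gives ECEM
    print(k, np.max(np.abs(exact(t) - y)))                  # expect roughly order 4 decay
```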


Table 6.3
Results for Example 2 with κ = 50 using two third-order methods, ECEM and CCM [30].

              ECEM                                CCM [30]
  n    Err(h)         R(h)   CPU         Err(h)         R(h)   CPU
  7    8.99 × 10^-5          0.15        2.69 × 10^-5          0.14
  8    7.98 × 10^-6   3.49   0.22        4.30 × 10^-6   2.65   0.27
  9    8.23 × 10^-7   3.28   0.42        6.04 × 10^-7   2.83   0.53
 10    9.34 × 10^-8   3.14   0.89        8.02 × 10^-8   2.91   1.03
 11    1.11 × 10^-8   3.07   1.75        1.03 × 10^-8   2.96   2.09

Table 6.4
Results for Example 3 using two third-order methods, ECEM and CCM [30].

              ECEM Err(h)                                 CCM [30] Err(h)
  n    φ1(t)           φ2(t)           CPU        φ1(t)           φ2(t)           CPU
  7    1.03 × 10^-9    3.05 × 10^-10   0.016      3.10 × 10^-10   3.05 × 10^-10   0.047
  8    7.44 × 10^-11   3.82 × 10^-11   0.016      3.83 × 10^-11   3.83 × 10^-11   0.047
  9    6.58 × 10^-12   4.79 × 10^-12   0.047      4.87 × 10^-12   4.85 × 10^-12   0.094
 10    6.96 × 10^-13   5.98 × 10^-13   0.078      7.25 × 10^-13   7.18 × 10^-13   0.187
 11    8.21 × 10^-14   7.48 × 10^-14   0.172      3.20 × 10^-13   3.12 × 10^-13   0.343

Example 3. Consider the stiff system of initial value problems taken from [31],

    \phi_1'(t) = -1002\phi_1(t) + 1000\phi_2^2(t), \qquad \phi_1(0) = 1,
    \phi_2'(t) = \phi_1(t) - \phi_2(t)(1 + \phi_2(t)), \qquad \phi_2(0) = 1,

whose solution is φ_1(t) = e^{−2t} and φ_2(t) = e^{−t}. The problem is solved with the two third-order methods ECEM and CCM. According to Table 6.4, ECEM requires meaningfully less CPU time than CCM to obtain similar maximum errors.
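Similarly, the system of Example 3 can be fed to the ecem_system_step sketch from section 3.1 (our own illustration; the Jacobian below is worked out by hand from the right-hand side):

```python
import numpy as np

F = lambda t, Y: np.array([-1002.0 * Y[0] + 1000.0 * Y[1] ** 2,
                           Y[0] - Y[1] * (1.0 + Y[1])])
Jac = lambda t, Y: np.array([[-1002.0, 2000.0 * Y[1]],
                             [1.0, -1.0 - 2.0 * Y[1]]])
h, n = 2.0 ** -7, 2          # degree-2 polynomial, i.e., the third-order variant
t, Y = 0.0, np.array([1.0, 1.0])
while t + h <= 10.0 + 1e-12:
    Y = ecem_system_step(F, t, Y, h, n, Jac)
    t += h
print(abs(Y[0] - np.exp(-2 * t)), abs(Y[1] - np.exp(-t)))   # small final errors
```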

7. Conclusion. Error correction methods for solving stiff initial value problems are developed using a Chebyshev interpolation polynomial. These methods employ either the partial derivative f_φ or its forward difference approximation, and a unified convergence analysis is provided for both. The stability properties are analyzed, and it is shown that the proposed methods are almost L-stable. It is remarkable that this good stability and convergence are obtained even though the scheme is of explicit type and does not require evaluation of the partial derivatives. One may note, however, that the solution of the linear system (3.19) has a cost similar to one Newton iteration in a standard implicit Runge–Kutta method, and that the computations in (3.9) can be regarded as counterparts of computations occurring in traditional implementations of implicit methods. Thus, to exploit the full advantages of the explicitness of (3.9), we will discuss the reduction of computational cost at the same accuracy and stability in a forthcoming paper. For example, one may consider another extension based on a Gauss–Seidel iteration technique as follows: for t ∈ [t_m, t_{m+1}], consider the componentwise formula, for i = 1, ..., d,

(7.1)        \frac{d\phi_i^{(j)}}{dt} = f_i\left(t, \phi_1^{(j)}(t), \ldots, \phi_{i-1}^{(j)}(t), \phi_i^{(j)}(t), \phi_{i+1}^{(j-1)}(t), \ldots, \phi_d^{(j-1)}(t)\right),


where φ_i^{(j)}(t) denotes the jth iterate of φ_i(t), and one can choose the initial guess φ_i^{(0)}(t), i = 2, ..., d, by the Euler polygon

    \phi_i^{(0)}(t) = Y_m^i + (t - t_m)\, f_i(t_m, Y_m), \qquad t \in [t_m, t_{m+1}].

Here, Y_m^i denotes the ith component of the vector Y_m. Note that each componentwise formula (7.1) can be regarded as a scalar equation of the type discussed in this paper. Hence, the proposed scheme can be applied directly to find an approximation of the jth iterate φ_i^{(j)}(t) for each i = 1, ..., d. In a forthcoming work, we will also deal with these topics, with applications to several time-dependent partial differential equations.

Acknowledgment. The authors would like to thank the anonymous referees, whose many helpful comments and corrections improved this paper.

REFERENCES

[1] J. Alverez and J. Rojo, An improved class of generalized Runge-Kutta methods for stiff problems. Part I: The scalar case, Appl. Math. Comput., 130 (2002), pp. 537–560.
[2] J. Alverez and J. Rojo, An improved class of generalized Runge-Kutta methods for stiff problems. Part II: The separated system case, Appl. Math. Comput., 159 (2004), pp. 717–758.
[3] P. Amodio and L. Brugnano, A note on the efficient implementation of implicit methods for ODEs, J. Comput. Appl. Math., 87 (1997), pp. 1–9.
[4] T.A. Bickart, An efficient solution process for implicit Runge-Kutta methods, SIAM J. Numer. Anal., 14 (1977), pp. 1022–1027.
[5] P.N. Brown, G.D. Byrne, and A.C. Hindmarsh, VODE: A variable-coefficient ODE solver, SIAM J. Sci. Statist. Comput., 10 (1989), pp. 1038–1051.
[6] L. Brugnano, F. Iavernaro, and D. Trigiante, Numerical solution of ODEs and the Columbus' Egg: Three simple ideas for three difficult problems, Math. Eng. Sci. Aerospace, 1 (2010), pp. 407–426.
[7] L. Brugnano and C. Magherini, Blended implementation of block implicit methods for ODEs, Appl. Numer. Math., 42 (2002), pp. 29–45.
[8] J.C. Butcher, Implicit Runge-Kutta processes, Math. Comp., 18 (1964), pp. 50–64.
[9] J.C. Butcher, Integration processes based on Radau quadrature formulas, Math. Comp., 18 (1964), pp. 233–244.
[10] J.C. Butcher, On the implementation of implicit Runge-Kutta methods, BIT, 16 (1976), pp. 237–240.
[11] J.C. Butcher and G. Wanner, Runge-Kutta methods: Some historical notes, Appl. Numer. Math., 22 (1996), pp. 113–151.
[12] J.R. Cash, On the integration of stiff systems of ODE's using extended backward differentiation formulae, Numer. Math., 34 (1980), pp. 235–246.
[13] J.R. Cash and S. Considine, An MEBDF code for stiff initial value problems, ACM Trans. Math. Software, 18 (1992), pp. 142–155.
[14] G.J. Cooper and J.C. Butcher, An iteration scheme for implicit Runge-Kutta methods, IMA J. Numer. Anal., 3 (1983), pp. 127–140.
[15] C.F. Curtiss and J.O. Hirschfelder, Integration of stiff equations, Proc. Nat. Acad. Sci. USA, 38 (1952), pp. 235–243.
[16] G. Dahlquist, A special stability problem for linear multistep methods, BIT, 3 (1963), pp. 27–43.
[17] W. Fraser and M.W. Wilson, Remarks on the Clenshaw-Curtis quadrature scheme, SIAM Rev., 8 (1966), pp. 322–327.
[18] C. Fredebeul, A-BDF: A generalization of the backward differentiation formulae, SIAM J. Numer. Anal., 35 (1998), pp. 1917–1938.
[19] N. Guzel and M. Bayram, On the numerical solution of stiff systems, Appl. Math. Comput., 170 (2005), pp. 230–236.
[20] E. Hairer and G. Wanner, Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems, Springer Ser. Comput. Math., Springer-Verlag, Berlin, 1996.


[21] E. Hairer and G. Wanner, Stiff differential equations solved by Radau methods, J. Comput. Appl. Math., 111 (1999), pp. 93–111.
[22] A.C. Hindmarsh, ODEPACK, a systematized collection of ODE solvers, in Scientific Computing, R.S. Stepleman et al., eds., IMACS, New Brunswick, NJ, 1983, pp. 55–64.
[23] F. Iavernaro and F. Mazzia, Solving ordinary differential equations by generalized Adams methods: Properties and implementation techniques, Appl. Numer. Math., 28 (1998), pp. 107–126.
[24] F. Iavernaro and F. Mazzia, Block-boundary value methods for the solution of ordinary differential equations, SIAM J. Sci. Comput., 21 (1999), pp. 323–339.
[25] L.G. Ixaru, G. Vanden Berghe, and H. De Meyer, Exponentially fitted variable two-step BDF algorithm for first order ODEs, Comput. Phys. Comm., 150 (2003), pp. 116–128.
[26] J. Kwon, S.D. Kim, X. Piao, and P. Kim, A Chebyshev collocation method for stiff initial values and its stability, Kyungpook Math. J., to appear.
[27] W. Liniger and R.A. Willoughby, Efficient integration methods for stiff systems of ordinary differential equations, SIAM J. Numer. Anal., 7 (1970), pp. 47–66.
[28] A. Prothero and A. Robinson, On the stability and accuracy of one-step methods for solving stiff systems of ordinary differential equations, Math. Comp., 28 (1974), pp. 145–162.
[29] H. Ramos and R. García-Rubio, Analysis of a Chebyshev-based backward differentiation formulae and relation with Runge-Kutta collocation methods, Int. J. Comput. Math., 88 (2011), pp. 555–577.
[30] H. Ramos and J. Vigo-Aguiar, A fourth-order Runge-Kutta method based on BDF-type Chebyshev approximations, J. Comput. Appl. Math., 204 (2007), pp. 124–136.
[31] H. Ramos, A non-standard explicit integration scheme for initial-value problems, Appl. Math. Comput., 189 (2007), pp. 710–718.
[32] R.D. Riess and L.W. Johnson, Error estimates for Clenshaw-Curtis quadrature, Numer. Math., 18 (1972), pp. 345–353.
[33] H.H. Rosenbrock, Some general implicit processes for the numerical solution of differential equations, Comput. J., 5 (1962/63), pp. 329–330.
[34] L.F. Shampine and M.W. Reichelt, The MATLAB ODE suite, SIAM J. Sci. Comput., 18 (1997), pp. 1–22.
[35] M.M. Stabrowski, An efficient algorithm for solving stiff ordinary differential equations, Simul. Pract. Theory, 5 (1997), pp. 333–344.
[36] J.G. Verwer, E.J. Spee, J.G. Blom, and W.H. Hundsdorfer, A second-order Rosenbrock method applied to photochemical dispersion problems, SIAM J. Sci. Comput., 20 (1999), pp. 1456–1480.
[37] J. Vigo-Aguiar and H. Ramos, A family of A-stable Runge-Kutta collocation methods of higher order for initial-value problems, IMA J. Numer. Anal., 27 (2007), pp. 798–817.
[38] X.Y. Wu and J.L. Xia, Two low accuracy methods for stiff systems, Appl. Math. Comput., 123 (2001), pp. 141–153.
