Applied Mathematics and Computation 268 (2015) 796–805


Solving first-order initial-value problems by using an explicit non-standard A-stable one-step method in variable step-size formulation

Higinio Ramos a,b, Gurjinder Singh c, V. Kanwar c,∗, Saurabh Bhatia c

a Scientific Computing Group, Universidad de Salamanca, Plaza de la Merced, Salamanca 37008, Spain
b Escuela Politécnica Superior de Zamora, Campus Viriato, Zamora 49022, Spain
c University Institute of Engineering and Technology, Panjab University, Chandigarh 160 014, India

Keywords: non-linear algorithms; ordinary differential equations; initial value problems; rational approximation; stability

Abstract

This paper presents the construction of a new family of explicit schemes for the numerical solution of initial-value problems of ordinary differential equations (ODEs). The one-parameter family is constructed by considering a suitable rational approximation to the theoretical solution, resulting in a family that has second-order convergence and is A-stable. Imposing that the principal term in the local truncation error vanishes, we obtain an expression for the parameter value in terms of the point (x_n, y_n) on each step. With this approach, the resulting method has third-order convergence while maintaining the characteristic of A-stability. Finally, by combining this last method with another of order two in order to get an estimate for the local truncation error, an implementation in variable step-size has been considered. The proposed method can be used in a wide range of problems, for solving numerically a scalar ODE or a system of first-order ODEs. Several numerical examples are given to illustrate the efficiency and performance of the proposed method in comparison with some existing methods in the literature. © 2015 Elsevier Inc. All rights reserved.

1. Introduction

Consider the initial value problem (IVP)

y′ = f(x, y),   y(a) = y₀,   x ∈ [a, b],   (1)

where we assume for now that y, f(x, y) ∈ R. Further, we assume that f satisfies the conditions of the Existence and Uniqueness Theorem, in order to guarantee that the problem in (1) has a unique continuously differentiable solution, which we denote by y(x). It is well known that an initial value problem of the form (1) can be solved analytically only in a few cases. When that is not possible, we are interested in an approximate discrete solution, say y_n ≈ y(x_n), at the nodal points x_n = a + nh, n = 0, 1, 2, 3, ..., N, where h is called the step size. The step size may be constant or variable along the integration interval. Much research has been carried out on solving a given ODE or a system of first-order ODEs numerically. Many linear one-step and multistep methods already exist in the literature and play a significant role in the numerical solution of ODEs. But when the given system of first-order ODEs is stiff, the methods used to cope with it are usually implicit, requiring more computational

∗ Corresponding author. Tel.: +91 9878369981.
E-mail addresses: [email protected] (H. Ramos), [email protected] (G. Singh), [email protected], [email protected] (V. Kanwar), [email protected] (S. Bhatia).
http://dx.doi.org/10.1016/j.amc.2015.06.119
0096-3003/© 2015 Elsevier Inc. All rights reserved.


work than the explicit methods. Some form of Newton's iteration scheme is then needed to solve the resulting implicit system of algebraic equations. To use Newton's iteration scheme, it is necessary to calculate inverses of the Jacobians, although this may be avoided by solving a linear system on each step, which results in a saving of computation time [28]. In any case, this is a major inconvenience of implicit schemes. Another kind of initial-value problem for which classical numerical methods usually perform poorly is that of singular or singularly-perturbed problems. The first ones are characterized by the presence of a singularity in the solution or in some of its low-order derivatives, while singularly-perturbed IVPs usually contain a small parameter multiplying the first-order derivative and are characterized by the presence of thin layers where the solution varies very fast, whereas away from the layers the solution behaves regularly and varies slowly. It is the purpose of this paper to construct new explicit schemes to cope with stiff systems and other special types of initial value problems such as those just mentioned, for which conventional numerical integrators prove inefficient. Some useful references are given in [1–30]. In the present paper we present a new family of explicit schemes for the numerical integration of the initial value problem (1). The proposed family has second-order convergence and is A-stable. The proposed scheme has been tested on a variety of IVPs of first-order ODEs. The comparison of the proposed scheme with some existing methods demonstrates that it gives more accurate results. The paper is organized as follows: in Section 2, the derivation of the family of methods is given. Error analysis is carried out in Section 3 and stability analysis is considered in Section 4. Section 5 addresses the important issue of the variable step-size formulation.
Some comments on the applicability of the method for solving systems are included in Section 6, while some implementation details that make the application of the method more effective are considered in Section 7. Finally, some numerical results are presented in Section 8, and a section with conclusions closes the article.

2. Development of new schemes

We consider the following approximation y_{n+1} to the theoretical solution y(x) of (1) at the point x_{n+1} = x_n + h:

y_{n+1} = y(x_n) + h y′(x_n) / (α + β h + γ h²),   (2)

where it is assumed that the parameter values are chosen in such a way that α + β h + γ h² ≠ 0, with α, β, γ ∈ R. Eq. (2) can be associated with the following operator

L[y(x_n); h] = [y(x_{n+1}) − y(x_n)][α + β h + γ h²] − h y′(x_n).   (3)

Expanding y(xn+1 ) in the neighborhood of xn by Taylor’s expansion, we obtain

L[y(x_n); h] = [−1 + α] y′(x_n) h + [α y″(x_n) + 2β y′(x_n)] h²/2 + O(h³).   (4)

For obtaining methods of order two, the coefficients of h and h2 in Eq. (4) must be zero simultaneously. Therefore, by equating the coefficients of h and h2 to zero, we obtain the following system of equations

−1 + α = 0,   α y″(x_n) + 2β y′(x_n) = 0.

Solving this system of equations, we obtain

α = 1   and   β = −y″(x_n) / (2 y′(x_n)).

The substitution of these values of α and β in (2) results in

y(x_{n+1}) = y(x_n) + 2h (y′(x_n))² / (2(1 + γh²) y′(x_n) − h y″(x_n)) + O(h³),   (5)

where γ is a free parameter. From Eq. (5), we obtain the numerical method given by

y_{n+1} = y_n + 2h (y′_n)² / (2(1 + γh²) y′_n − h y″_n).   (6)

Having in mind the differential equation in (1), it may also be re-written in an equivalent form as

y_{n+1} = y_n + 2h f_n² / (2(1 + γh²) f_n − h f′_n),   (7)

where y_n = y(x_n), y_{n+1} ≈ y(x_{n+1}), f_n = y′_n = f(x_n, y_n) and f′_n = y″_n = f′(x_n, y_n). This is a new one-parameter family of nonlinear schemes for the numerical solution of (1).
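As an illustration, the family (7) is straightforward to implement once the total derivative f′ = ∂f/∂x + (∂f/∂y) f is available in closed form. The following sketch is our own minimal Python transcription (not code from the paper), applied to the linear test problem y′ = −y:

```python
import math

def step_family7(f, fprime, x, y, h, gamma=0.0):
    """One step of the one-parameter family (7).

    f(x, y)      : right-hand side of y' = f(x, y)
    fprime(x, y) : total derivative f_x + f_y*f, i.e. y''
    gamma        : free parameter (gamma = 0 recovers the method (8))
    """
    fn, fpn = f(x, y), fprime(x, y)
    return y + 2.0 * h * fn**2 / (2.0 * (1.0 + gamma * h**2) * fn - h * fpn)

# Test problem y' = -y, y(0) = 1, exact solution exp(-x)
f = lambda x, y: -y
fp = lambda x, y: y          # f_x + f_y*f = 0 + (-1)*(-y) = y
x, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = step_family7(f, fp, x, y, h)
    x += h
err = abs(y - math.exp(-1.0))   # second-order accurate: err is about 3e-4 here
```

With γ = 0 the update on this problem reduces to multiplying by (1 − h/2)/(1 + h/2), the (1,1)-Padé approximant of e^{−h}, consistent with the A-stability analysis of Section 4.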


Note 1. By putting γ = 0 in (7), we can obtain the following method given in [1]

y_{n+1} = y_n + 2h f_n² / (2 f_n − h f′_n).   (8)

Note 2. By choosing any real value of γ in (7) we obtain new nonlinear schemes; for instance, letting γ = 1, we have

y_{n+1} = y_n + 2h f_n² / (2(1 + h²) f_n − h f′_n).   (9)

Note 3. In view that

f′_n = (∂f/∂x)(x_n, y_n) + (∂f/∂y)(x_n, y_n) f_n,   (10)

for an autonomous problem of the form y = f (y), the proposed scheme (7) becomes

y_{n+1} = y_n + 2h f_n / (2(1 + γh²) − h J_n),   (11)

where now f_n = f(y_n) and we have used the notation J_n = (∂f/∂y)(y_n).

3. Error analysis and derivation of a third-order method

The construction of the proposed family suggests that the new family (6) has second-order convergence. Consider the following operator associated to the proposed scheme in (6):



L[z(x); h] = z(x + h) − z(x) − 2h (z′(x))² / (2(1 + γh²) z′(x) − h z″(x)),   (12)

where z(x) is an arbitrary analytic function defined on [a, b]. Expanding the above expression by Taylor’s series about x and collecting terms in h, after substituting z(x) by the solution y(x) of (1) and x by xn we obtain the following local truncation error



LTE = (1/6) [ y⁽³⁾_n − 3(y″_n)²/(2 y′_n) + 6γ y′_n ] h³ + O(h⁴),   (13)

where y′_n, y″_n, y⁽³⁾_n denote the first, second and third order derivatives of y(x) at x_n, respectively. Hence, the proposed scheme (6) has at least second-order convergence. Now, in order to get an improved method, we consider the principal term of the local truncation error in (13) and impose that its coefficient vanishes. Solving the resulting equation for γ we obtain

γ = (3(y″_n)² − 2 y′_n y⁽³⁾_n) / (12 (y′_n)²).   (14)

Assuming that y′_n ≠ 0, after substituting this value of γ in the formula (6) we obtain the numerical method

y_{n+1} = y_n + 12h (y′_n)³ / [ 12(y′_n)² − 6 y′_n y″_n h + (3(y″_n)² − 2 y′_n y⁽³⁾_n) h² ],   (15)

where y_n = y(x_n), y_{n+1} ≈ y(x_{n+1}), and y′_n, y″_n, y⁽³⁾_n are approximations of the successive derivatives of the solution at x_n. In view of the differential equation in (1), the method in (15) may be rewritten in terms of the r.h.s. of the differential equation as

y_{n+1} = y_n + 12h f_n³ / [ 12 f_n² − 6 f_n f′_n h + (3(f′_n)² − 2 f_n f″_n) h² ],   (16)

where f_n = f(x_n, y_n), f′_n = (df/dx)(x_n, y_n), f″_n = (d²f/dx²)(x_n, y_n).
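A direct transcription of (16) only needs the first and second total derivatives of f along the solution. In the sketch below the test problem y′ = x + y is our own choice (not one of the paper's examples); for it f′ = 1 + f and f″ = f′, so all derivatives are available in closed form:

```python
import math

def step16(f, f1, f2, x, y, h):
    """One step of the third-order method (16); f1 = df/dx and
    f2 = d^2f/dx^2 are total derivatives along the solution."""
    fn, fpn, fppn = f(x, y), f1(x, y), f2(x, y)
    denom = (12.0 * fn**2 - 6.0 * fn * fpn * h
             + (3.0 * fpn**2 - 2.0 * fn * fppn) * h**2)
    return y + 12.0 * h * fn**3 / denom

# y' = x + y, y(0) = 1, exact solution y(x) = 2*exp(x) - x - 1
f  = lambda x, y: x + y
f1 = lambda x, y: 1.0 + x + y   # f' = f_x + f_y*f = 1 + f
f2 = lambda x, y: 1.0 + x + y   # f'' = d(1 + f)/dx = f' = 1 + f
x, y, h = 0.0, 1.0, 0.1
for _ in range(10):
    y = step16(f, f1, f2, x, y, h)
    x += h
err = abs(y - (2.0 * math.exp(1.0) - 2.0))  # small third-order error
```

Note that the scheme divides by f_n², so (like (15), which assumes y′_n ≠ 0) it requires the right-hand side to be nonzero along the computed trajectory.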

In particular, when the differential equation is of autonomous type, y = f (y), it is readily deduced that the method may be rewritten in simplified form as (see [3])

y_{n+1} = y_n + 12h f_n / [ 12 − 6 J_n h + (J_n² − 2 f_n H_n) h² ],   (17)

where y_n = y(x_n), y_{n+1} ≈ y(x_{n+1}), f_n = f(y_n) and

J_n = (∂f/∂y)(y_n) = y″_n / y′_n,    H_n = (∂²f/∂y²)(y_n) = (y′_n y⁽³⁾_n − (y″_n)²) / (y′_n)³.
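For instance, for the autonomous problem y′ = 1/y² (problem 1 of Section 8), J = −2/y³ and H = 6/y⁴ are available in closed form, and (17) can be coded directly. A sketch (the integration interval [1, 2] is our own test choice):

```python
def step17(f, J, H, y, h):
    """One step of the autonomous third-order method (17)."""
    fn, Jn, Hn = f(y), J(y), H(y)
    return y + 12.0 * h * fn / (12.0 - 6.0 * Jn * h
                                + (Jn**2 - 2.0 * fn * Hn) * h**2)

# y' = 1/y^2 with y(1) = 3^(1/3); exact solution y(x) = (3x)^(1/3)
f = lambda y: y**-2
J = lambda y: -2.0 * y**-3   # df/dy
H = lambda y: 6.0 * y**-4    # d2f/dy2
y, h = 3.0**(1.0 / 3.0), 0.01
for _ in range(100):         # integrate from x = 1 to x = 2
    y = step17(f, J, H, y, h)
err = abs(y - 6.0**(1.0 / 3.0))   # third-order accurate: err well below 1e-6
```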


A rediscovery of this method, intended for solving non-autonomous stiff problems, appeared in [10]. In order to obtain the local truncation error of the method in (16) we proceed as usual and consider the functional

L[z(x); h] = z(x + h) − z(x) − 12h (z′(x))³ / [ 12(z′(x))² − 6 z′(x) z″(x) h + (3(z″(x))² − 2 z′(x) z⁽³⁾(x)) h² ],   (18)

where z(x) is an arbitrary function defined on [a, b] and differentiable as often as we need, with the exception of some possible singularities. After expanding in Taylor series about x and collecting terms in h we obtain

L[z(x); h] = (1/24) [ 3(z″(x))³/(z′(x))² − 4 z⁽³⁾(x) z″(x)/z′(x) + z⁽⁴⁾(x) ] h⁴ + O(h⁵),   (19)

which means that the method in (16) has third-order accuracy. Substituting z(x) by the solution y(x) of (1) and x by x_n, the local truncation error of the method may be written as

LTE = (1/24) [ 3(y″_n)³/(y′_n)² − 4 y⁽³⁾_n y″_n/y′_n + y⁽⁴⁾_n ] h⁴ + O(h⁵),

where y′_n, y″_n, y⁽³⁾_n and y⁽⁴⁾_n denote respectively the first, second, third and fourth order derivatives of y(x) at the point x_n. In view of the differential equation in (1), the local truncation error may also be rewritten as

LTE = (1/24) [ 3(f′_n)³/f_n² − 4 f″_n f′_n/f_n + f⁽³⁾_n ] h⁴ + O(h⁵),

which is not very practical, but for the autonomous case it may be simplified yielding

LTE = (1/24) f_n³ K_n h⁴ + O(h⁵),   (20)

where K_n = (∂³f/∂y³)(y_n). The principal term in (20) could be used as an estimate of the local truncation error.

4. Stability analysis

The linear stability analysis of the above schemes is examined as usual by applying them to the Dahlquist test problem

y′ = λy,   Re(λ) < 0.   (21)

Theorem 4.1. Assuming that h and γ are chosen such that 1 + γ h2 > 0 the proposed family in (6) is A-stable. Proof. By applying the proposed family (6) to the test Eq. (21), we obtain

y_{n+1} = y_n + 2hλ² y_n² / (2(1 + γh²) λ y_n − hλ² y_n),   (22)

which may be simplified as

y_{n+1} = y_n + 2λh y_n / (2(1 + γh²) − λh).   (23)

Now, consider the stability equation given by



y_{n+1} = [ (2(1 + γh²) + z) / (2(1 + γh²) − z) ] y_n,   (24)

where z = λh and R(z) = (2(1 + γh²) + z) / (2(1 + γh²) − z) is the corresponding stability function. From Eq. (24) we have that

| y_{n+1} / y_n | = | (2(1 + γh²) + z) / (2(1 + γh²) − z) | < 1,   (25)

for Re(z) < 0 and those values of γ such that 1 + γh² > 0. Hence, the proposed family is A-stable for those values of γ ∈ R such that 1 + γh² > 0, and thus the absolute stability region of the method consists of the whole left-half complex plane. □

Theorem 4.2. The method in (15) is A-stable.

Proof. Applying the method in (15) to the test Eq. (21) yields the difference equation

y_{n+1} = [ (h²λ² + 6hλ + 12) / (h²λ² − 6hλ + 12) ] y_n.


Setting z = λ h the stability function is obtained as

R(z) = (z² + 6z + 12) / (z² − 6z + 12),   (26)

which is the (2,2)-Padé approximation to the exponential e^z, and thus the method is A-stable (see [27]). So, the absolute stability region of the method consists of the whole left-half complex plane. □

It must be mentioned here that the Dahlquist barrier states that only linear multistep methods of order less than three can be A-stable. As the methods considered in this article are not linear, they are not subject to the restrictions imposed by this barrier.

5. Formulation using variable step-size

The methods presented above have been formulated using a fixed step size h. However, as some authors have remarked, to be efficient, an integrator based on a particular formula must be suitable for a variable step-size formulation [22,26]. There are mainly two ways to do that: using a formula whose coefficients depend on the ratios of the step sizes (as in the variable-coefficient linear multistep codes), or using a second method (as in the embedded Runge–Kutta pairs). In all cases the goal is to adjust the step sizes so as to keep the estimated local errors smaller than a given tolerance and, at the same time, to solve the problem as efficiently as possible [7]. So, a reliable estimate of the local error is needed. For that purpose we consider the method of second order in (8) and the method of third order in (16). If we denote by y1_n the approximation obtained with the second-order method and by y2_n the approximation obtained with the third-order method, the difference ERR = |y1_n − y2_n| will be used as an estimate of the local error on each step. This is an embedding-like procedure in the sense that the values needed by the lower-order method are also used by the other method, and so there is no additional cost in the computation of these values.
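This embedded estimate can be sketched in a few lines: both formulas reuse f_n and f′_n, and only the third-order one additionally needs f″_n. The test problem y′ = −y below is our own illustrative choice:

```python
def pair_step(f, f1, f2, x, y, h):
    """Advance one step with the 2nd-order method (8) and the 3rd-order
    method (16); return the 3rd-order value and the estimate ERR."""
    fn, fpn, fppn = f(x, y), f1(x, y), f2(x, y)
    y_2nd = y + 2.0 * h * fn**2 / (2.0 * fn - h * fpn)
    denom = (12.0 * fn**2 - 6.0 * fn * fpn * h
             + (3.0 * fpn**2 - 2.0 * fn * fppn) * h**2)
    y_3rd = y + 12.0 * h * fn**3 / denom
    return y_3rd, abs(y_2nd - y_3rd)

# On y' = -y with h = 0.1 the estimate is about 7.6e-5, close to the true
# local error h^3/12 of the second-order formula on this problem.
f  = lambda x, y: -y
f1 = lambda x, y: y    # f' along the solution
f2 = lambda x, y: -y   # f'' along the solution
y_new, err_est = pair_step(f, f1, f2, 0.0, 1.0, 0.1)
```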
The strategy considered for changing the step size is the one used in multistep codes [8,20]: given a tolerance, TOL, for a selected norm ∥·∥, the classical step-size prediction derived from equating this tolerance to the norm of the local truncation error yields a new step size h_new given by

h_new ≈ ν h_old (TOL/ERR)^{1/(p+1)},   (27)

where p is the order of the method, and 0 < ν < 1 is a safety factor whose purpose is to avoid failed steps. Normally some restrictions must be considered in order to avoid large fluctuations in the step size: the step size is not allowed to decrease by more than a factor δ₁ or increase by more than a factor δ₂. This may be included in the implementation using an If statement:

If δ₁ h_old ≤ h_new ≤ δ₂ h_old then h_new = h_old,

where δ₁, δ₂ are two constants close to unity, with δ₁ < 1 and δ₂ > 1. This strategy is applied successively to predict the step size for the next step after a successful step, i.e., when ERR < TOL. Although there are different strategies for selecting the size of the initial step, which we call h_ini (see [9,20]), we can simply take a very small starting step value as in [11], and then the algorithm will correct this value if necessary, according to the strategy for changing the step size.

6. Considerations on the applicability of the methods to differential systems

The above methods may be applied to a system of first-order ordinary differential equations using a componentwise implementation (see [29], p. 218). Details on the application of the method in (8) to a first-order system coming from an equivalent third-order initial-value problem may be found in [2]. Consider a system of m equations, which may be written in vector form as

y′ = f(x, y),   y(a) = y₀,   a ≤ x ≤ b,

where y = (y₁, ..., y_m)ᵀ, f(x, y) = (f₁(x, y₁, ..., y_m), ..., f_m(x, y₁, ..., y_m))ᵀ and y₀ = (y₁,₀, ..., y_m,₀)ᵀ. The above methods for scalar equations, being one-step methods, may be written as

y_{n+1} = y_n + h φ_f(x_n, y_n, h), where φ_f is the incremental function, and the subscript f indicates that the dependence of φ on its variables is through the function f (and its derivatives). Applying this method to each of the scalar equations in the differential system results in

y_{n+1} = y_n + h Φ(x_n, y_n, h), where

Φ(x_n, y_n, h) = ( φ_{f₁}(x_n, y₁,ₙ, ..., y_m,ₙ), ..., φ_{f_m}(x_n, y₁,ₙ, ..., y_m,ₙ) )ᵀ.
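As a concrete sketch (our own transcription, using the stiff system of problem 4 from Section 8 and a step size h = 0.01 of our choosing), the second-order method (8) applied componentwise reads as follows; each component uses its own total derivative f′ᵢ = ∂fᵢ/∂x + Σⱼ (∂fᵢ/∂yⱼ) fⱼ:

```python
import math

def F(y1, y2):
    """Right-hand side of the stiff system of problem 4."""
    return (-1002.0 * y1 + 1000.0 * y2**2,
            y1 - y2 * (1.0 + y2))

def Fprime(y1, y2, f1, f2):
    """Total derivatives f_i' = sum_j (df_i/dy_j) f_j (autonomous system)."""
    return (-1002.0 * f1 + 2000.0 * y2 * f2,
            f1 - (1.0 + 2.0 * y2) * f2)

def step(y1, y2, h):
    f1, f2 = F(y1, y2)
    g1, g2 = Fprime(y1, y2, f1, f2)
    y1 += 2.0 * h * f1**2 / (2.0 * f1 - h * g1)   # scheme (8), component 1
    y2 += 2.0 * h * f2**2 / (2.0 * f2 - h * g2)   # scheme (8), component 2
    return y1, y2

y1, y2, h = 1.0, 1.0, 0.01
for _ in range(500):                  # integrate [0, 5]
    y1, y2 = step(y1, y2, h)
err = max(abs(y1 - math.exp(-10.0)), abs(y2 - math.exp(-5.0)))
```

Although the scheme is explicit, its A-stability allows the stiff component (eigenvalue ≈ −1002) to be integrated with hλ ≈ −10, far outside the stability interval of classical explicit methods.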


Concerning the stability of one-step methods for systems of first-order differential equations, it is well known that for linear methods the stability analysis of the single-step method applied to the differential system can be discussed by applying it to the scalar equation in (21) (see [30]). This is the case for linear multistep methods, Runge–Kutta (RK) methods or Runge–Kutta–Nyström methods, but it may not be true for nonlinear single-step methods. For example, in the case of rational RK methods (which are nonlinear one-step methods), some authors such as M. Calvo and M. Mar-Quemada [16] have analyzed the stability properties for a diagonal linear system of arbitrary dimension (y′ = Λy, Λ = diag(λ₁, λ₂, ..., λ_k)), which may differ from the scalar case (k = 1). In addition, G. Sottas [17] has shown that the step sizes which can be expected when solving a stiff differential system with a rational or with an explicit linear RK method are of the same order of magnitude. Nevertheless, for the stability behavior of the rational method in (15) for systems of first-order differential equations it is sufficient to consider the test system (see [18])

y′ = Λ y,   Λ = diag(λ₁, λ₂, ..., λ_k).

If we apply the method in (15) to this test equation we get

y_{n+1} = R y_n,   R = diag(R₁, R₂, ..., R_k),

where each R_i is a rational function of the type in (26) depending only on hλ_i. For this reason the stability considerations can be restricted to a scalar test equation. Thus, the method is A-stable in the classical sense also for systems. For a nonlinear multistep method (with more than one step) it may occur that the method is A-stable when applied to a scalar problem, but only conditionally stable when solving linear differential systems.

7. Implementation details

In order to implement the method in (15) we consider the method in (7) and we calculate at the beginning of the process, with the help of a computer algebra system, the function

γ(x, y) = (3(y″(x))² − 2 y′(x) y⁽³⁾(x)) / (12 (y′(x))²),

using that y′(x) = f(x, y), y″(x) = df(x, y)/dx, y⁽³⁾(x) = d²f(x, y)/dx². In this way we consider on each step the approximation given by

y_{n+1} = y_n + 2h f_n² / (2(1 + h² γ(x_n, y_n)) f_n − h f′_n),

and thus we do not have to evaluate the higher derivatives appearing in (14) on each step. This results in three function evaluations per step: those of f_n, f′_n and γ_n = γ(x_n, y_n). In the numerical results using the variable step-size formulation we have taken the safety factor as ν = 0.9.

8. Numerical results

To test the performance of the proposed schemes we present different problems that have appeared in the literature. For the first, second and fourth problems, all the numerical experiments were performed in double precision using Mathematica 9.0 on a personal computer with an Intel(R) Celeron(R) processor (2.40 GHz); the computational work for the third, fifth and sixth problems was done in Matlab version 7.9.0.529 (R2009b) on a personal computer (32-bit operating system). For comparison purposes the following methods have been used:

• The second-order A-stable method in (8) [1], using constant step-size, indicated as MA2.
• The third-order A-stable method in (15), using constant step-size, indicated as MA3.
• The third-order A-stable method in (15) using the above variable step-size strategy, indicated as MA3VS.
• The third-order Taylor method, using constant step-size, indicated as MTL3.
• The rational third-order method in [6], using constant step-size, indicated as MNI3.
• The well-known Dormand–Prince 5(4) pair [12], or Dopri5(4), formed by Runge–Kutta methods of orders 5 and 4. It is available as the algorithm ode45 in Matlab.
• A modified implicit Rosenbrock method (an implicit one-step Runge–Kutta method) of order 2 [13], available in Matlab as the algorithm ode23s, which was specifically designed to solve stiff systems (see [19]).

The ode45 and ode23s codes are currently used in the Matlab toolbox for integrating systems of differential equations.
At each step, the local error E(i) in the ith component of the solution is estimated and is required to be less than or equal to the acceptable error, which is a function of two user-defined tolerances RelTol and AbsTol. The following criterion for the selection of tolerance is used for all the methods

E(i) ≤ max(RelTol · |y(i)|, AbsTol), where RelTol and AbsTol are positive real numbers. For the third, fifth and sixth problems RelTol = 0.001 and AbsTol = 0.000001, which are the default values in Matlab.
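In code, the acceptance test above and the prediction (27) combine into a small controller. A sketch (p = 2 for the lower-order formula of the pair and the guard constants δ₁ = 0.9, δ₂ = 1.1 are our own illustrative choices):

```python
def accept(err_est, y_new, rel_tol=1e-3, abs_tol=1e-6):
    """Acceptance test E(i) <= max(RelTol*|y(i)|, AbsTol), componentwise."""
    return all(e <= max(rel_tol * abs(v), abs_tol)
               for e, v in zip(err_est, y_new))

def next_step(h_old, err, tol, p=2, nu=0.9, d1=0.9, d2=1.1):
    """Step-size prediction (27) with the no-small-change guard:
    keep h_old when the prediction stays within [d1*h_old, d2*h_old]."""
    h_new = nu * h_old * (tol / err) ** (1.0 / (p + 1))
    return h_old if d1 * h_old <= h_new <= d2 * h_old else h_new

# err far above tol -> the step shrinks; err far below -> it grows
assert next_step(0.1, err=1e-2, tol=1e-4) < 0.1
assert next_step(0.1, err=1e-8, tol=1e-4) > 0.1
```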

Table 1. Results for problem 1.

        Fixed step-size                 Variable step-size
N       E(xN)             Time          E(xN)              Time
182     3.90232           0.016         8.82844 × 10−6     0.015
289     2.88947 × 10−1    0.016         2.02776 × 10−6     0.015
462     9.17158 × 10−2    0.031         4.36543 × 10−7     0.032
635     1.44616 × 10−1    0.062         7.71190 × 10−8     0.063
975     9.72407 × 10−2    0.093         1.92111 × 10−8     0.094
1437    6.73331 × 10−2    0.156         4.22068 × 10−9     0.172
2119    4.63231 × 10−2    0.328         1.03579 × 10−9     0.297
3113    3.18602 × 10−2    0.641         3.77049 × 10−10    0.531

[Figure 1: two panels, "Fixed step-size" and "Variable step-size", plotting the exact and discrete solutions on [−1, 1].]

Fig. 1. Exact and discrete solutions for problem 1 with the method in (15) using fixed and variable step-size implementations.

8.1. Problem 1

Firstly we consider an autonomous nonlinear problem with a singular feature due to the presence of a pole in the derivatives of the solution. The problem is

y′(x) = 1 / y(x)²,   y(−1) = −∛3,

with exact solution given by y(x) = ∛(3x). The integration interval will be [−1, 1] and h_ini = 0.01. In Table 1 we have compared the performance of the fixed and variable step-size implementations, taking the same number of steps for both, according to the selected tolerances. We have included the absolute errors at the final point, E(x_N), with x_N = 1, and the CPU time. Fig. 1 shows the exact and discrete solutions (when N = 289) with the two implementations, where we can see how the variable step-size implementation follows the dynamics of the solution, whereas the fixed step-size mode does not. We have considered this problem in order to highlight the importance of the variable-step mode. The rest of the methods have not been considered due to their poorer results.

8.2. Problem 2

Now we consider the non-linear singularly perturbed problem of Riccati type given by

ε y′(x) = −y(x)(y(x) − 1) cos(x),   y(0) = 0.5,   (28)


Table 2. Maximum absolute errors (CPU times in parentheses) for problem 2.

TOL     N      MNI3                     MA3                      MTL3                     MA3VS                     Dopri5(4)
4.838   64     2.036 × 10−2 (−)         2.126 × 10−3 (−)         6.392 × 10−2 (−)         7.221 × 10−8 (−)          2.635 × 10−6 (0.047)
5.840   128    1.905 × 10−3 (−)         1.203 × 10−4 (−)         9.364 × 10−3 (−)         7.289 × 10−9 (−)          3.672 × 10−9 (0.094)
6.795   256    1.223 × 10−3 (0.0156)    7.343 × 10−6 (0.0156)    2.212 × 10−3 (0.0156)    8.140 × 10−10 (0.0156)    2.087 × 10−10 (0.157)
7.725   512    1.546 × 10−4 (0.0312)    4.585 × 10−7 (0.0156)    5.172 × 10−4 (0.0312)    9.616 × 10−11 (0.0312)    4.926 × 10−12 (0.266)
8.642   1024   1.938 × 10−5 (0.0468)    2.895 × 10−8 (0.0468)    1.249 × 10−4 (0.0624)    1.169 × 10−11 (0.0624)    1.218 × 10−13 (0.453)
9.553   2048   2.425 × 10−6 (0.1248)    1.851 × 10−9 (0.1092)    3.072 × 10−5 (0.1248)    1.444 × 10−12 (0.1248)    1.220 × 10−15 (0.811)

Table 3. Data for problem 3.

h_ini   Method      Results                       E(xN)
10−4    MA3VS       No. of steps (N) = 15         8.8818 × 10−16
        Dopri5(4)   Failure at x = 0.9999993      −
        ode23s      Failure at x = 0.9985486      −
10−5    MA3VS       No. of steps (N) = 18         6.2172 × 10−15
        Dopri5(4)   Failure at x = 0.9999649      −
        ode23s      Failure at x = 0.9985979      −
10−6    MA3VS       No. of steps (N) = 21         4.3077 × 10−14
        Dopri5(4)   Failure at x = 0.9999984      −
        ode23s      Failure at x = 0.9985441      −

which has the exact solution

y(x) = 1 / (1 + exp(−sin(x)/ε)).

This problem exhibits an initial layer at x = 0. We have solved the problem in the interval [0, 1] taking ε = 0.01, as in [10], in order to compare the performance of the variable-step implementation with that of the third-order methods in this article and the method Dopri5(4). The tolerances have been selected in order to have the same number of steps for all the methods considered. The initial step in the variable mode was taken as h_ini = 10−6. We observe from the data in Table 2 that the higher-order method Dopri5(4) attains better accuracies, but it is computationally more expensive.

8.3. Problem 3

The last scalar problem we present is the IVP taken from [1] given by

y′(x) = −4 + 4y(x) − y(x)²,   y(0) = 1,   x ∈ [0, 2],   (29)

which has a singularity at x = 1. The exact solution is y(x) = (2x − 1)/(x − 1). In Table 3 the results with different methods are presented; the absolute errors, E(x_N), at the final point x_N = 2 are included. We observe that the ode45/Dopri5(4) and ode23s solvers fail to solve the problem, and the computations stop just before the singularity.
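It is worth seeing why the rational scheme copes with this problem so well. Writing w = y − 2, the right-hand side of (29) is f = −w², and a short calculation shows that the autonomous third-order update (17) reduces here to w_{n+1} = w_n/(1 + w_n h), which is exactly the map satisfied by the true solution w(x) = 1/(x − 1); the method therefore reproduces the exact solution up to roundoff, provided no grid point hits the pole at x = 1. A sketch (the fixed step h = 2/21 is our own choice, taken so that no node lands on x = 1):

```python
def step17(f, J, H, y, h):
    """One step of the autonomous third-order method (17)."""
    fn, Jn, Hn = f(y), J(y), H(y)
    return y + 12.0 * h * fn / (12.0 - 6.0 * Jn * h
                                + (Jn**2 - 2.0 * fn * Hn) * h**2)

# Problem 3: y' = -4 + 4y - y^2 = -(y - 2)^2, y(0) = 1, on [0, 2]
f = lambda y: -4.0 + 4.0 * y - y**2
J = lambda y: 4.0 - 2.0 * y
H = lambda y: -2.0
y, h = 1.0, 2.0 / 21.0          # 21 steps; nodes avoid the pole at x = 1
for _ in range(21):
    y = step17(f, J, H, y, h)
err = abs(y - 3.0)              # exact y(2) = 3; the error is pure roundoff
```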

8.4. Problem 4

As we noticed before, the proposed scheme may also be applied to a system of first-order ODEs: if y, f ∈ R^m in (1), we just have to consider the formula for every component. Consider the stiff system taken from [1], which has appeared several times in the literature,

y′₁(x) = −1002 y₁(x) + 1000 y₂(x)²,   y₁(0) = 1,
y′₂(x) = y₁(x) − y₂(x)(1 + y₂(x)),   y₂(0) = 1,

which will be integrated in the interval [0, 5], the exact solution being y₁(x) = e^(−2x), y₂(x) = e^(−x).

In Table 4 we have included the absolute errors at the final point, x_N = 5, and the maximum absolute errors along the integration interval for each component of the solution, the number of function evaluations, and the CPU time. Tolerances have been

Table 4. Data for problem 4 using the same number of steps: N = 2233.

            E(xN = 5)                         Emax
Method      y₁(x)           y₂(x)             y₁(x)           y₂(x)             Feval    CPU
MA3VS       4.77 × 10−11    6.98 × 10−11      8.76 × 10−8     1.07 × 10−10      6699     0.469
Dopri5(4)   2.46 × 10−9     2.46 × 10−12      5.92 × 10−9     6.91 × 10−12      13398    0.750

Table 5. Data for problem 5.

h_ini   Method      Results                       E(y₁(xN))         E(y₂(xN))
10−3    MA3VS       No. of steps (N) = 11         7.1054 × 10−15    1.3500 × 10−12
        Dopri5(4)   Failure at t = 0.9999755      −                 −
        ode23s      Failure at t = 0.9978786      −                 −
10−4    MA3VS       No. of steps (N) = 15         2.6645 × 10−15    1.9096 × 10−14
        Dopri5(4)   Failure at t = 0.9999975      −                 −
        ode23s      Failure at t = 0.9979052      −                 −

Table 6. Data for problem 6.

h_ini   Method      N    E(y₁(xN))         E(y₂(xN))          Feval
10−3    MA3VS       10   2.4441 × 10−11    5.6507 × 10−9      30
        Dopri5(4)   15   6.7487 × 10−7     5.2529 × 10−5      90
        ode23s      13   1.0539 × 10−4     1.7045 × 10−11     39
10−4    MA3VS       14   3.0342 × 10−9     6.1414 × 10−7      42
        Dopri5(4)   17   1.7613 × 10−6     1.4092 × 10−4      102
        ode23s      15   1.0094 × 10−4     6.0743 × 10−11     45

calculated so that the methods considered used the same number of steps, N = 2233. We observe that the accuracy of the methods is similar. Although the code Dopri5(4) presents less variation along the whole integration interval, its computational time is larger.

8.5. Problem 5

Consider the system of first-order ODEs taken from [2]

y′₁(x) = y₁(x)²,   y₁(0) = 1,
y′₂(x) = y₁(x) + y₁(x) y₂(x),   y₂(0) = 1,

where x ∈ [0, 2]. The exact solution of the system is y₁(x) = 1/(1 − x), y₂(x) = (x + 1)/(1 − x), which has a singularity at x = 1. The absolute errors for each component at the final point x_N = 2, denoted E(y₁(x_N)) and E(y₂(x_N)), are included in Table 5.

8.6. Problem 6

Consider the stiff system taken from [14]

y′₁(x) = y₂(x) − y₁(x)² − (1 + x),   y₁(0) = 1,
y′₂(x) = 1 − 20(y₂(x)² − (1 + x)²),   y₂(0) = 1,

where x ∈ [0, 1]. The theoretical solution of the system is y₁(x) = 1/(1 + x), y₂(x) = 1 + x. The eigenvalues of the Jacobian matrix are −2/(1 + x) and −40(1 + x). In Table 6 we have included the absolute errors at the final point x_N = 1, denoted E(y₁(x_N)) and E(y₂(x_N)), and the number of function evaluations (Feval). It must be mentioned that the computational cost of ode23s also involves other factors, namely partial derivatives, LU decompositions and solutions of linear systems, which are not reflected in Table 6.

9. Conclusions

In this paper, a new family of explicit schemes for numerically solving initial value problems of different nature is presented. The proposed family is constructed by considering a suitable rational approximation to the theoretical solution of (1); it has second-order convergence and is A-stable. Further, a third-order method maintaining the characteristic of A-stability is obtained by imposing the condition that the coefficient of the principal term of the local truncation error of the proposed family of methods vanishes. An implementation in variable step-size mode is considered by combining the proposed second and third order


methods. The numerical results obtained by the proposed methods and the existing methods strongly support that the proposed scheme outperforms the existing classical methods in coping with initial value problems of different nature. Further, the proposed schemes can also be used to solve a wide class of initial value problems (scalar and systems).

Acknowledgments

The first author wants to dedicate this work to the memory of Jens-Peer Kuska, who provided him the Mathematica package RungeKuttaNDSolve, where implementations of different Runge–Kutta methods are considered. The authors would like to thank the anonymous reviewers for their helpful and constructive comments, which greatly contributed to improving the final version of the paper.

References

[1] H. Ramos, A non-standard explicit integration scheme for initial-value problems, Appl. Math. Comput. 189 (2007) 710–718.
[2] H. Ramos, Contributions to the development of differential systems exactly solved by multistep finite-difference schemes, Appl. Math. Comput. 217 (2010) 639–649.
[3] H. Ramos, A nonlinear explicit one-step integration scheme for singular autonomous initial value problems, in: T. Simos (Ed.), AIP Conference Proceedings 936, New York, 2007, pp. 448–451.
[4] Xin-Yuan Wu, Jian-Lin Xia, Two low accuracy methods for stiff systems, Appl. Math. Comput. 123 (2001) 141–153.
[5] R.R. Ahmad, N. Yaacob, A.H. Mohd Murid, Explicit methods in solving stiff ordinary differential equations, Int. J. Comput. Math. 81 (2004) 1407–1415.
[6] F.D. Van Niekerk, Rational one-step methods for initial value problems, Comput. Math. Appl. 16 (12) (1988) 1035–1039.
[7] L.F. Shampine, A. Witt, Control of local error stabilizes integrations, J. Comput. Appl. Math. 62 (1995) 333–351.
[8] M. Calvo, J.I. Montijano, L. Randez, On the change of step size in multistep codes, Numer. Alg. 4 (1993) 283–304.
[9] H.A. Watts, Starting step size for an ODE solver, J. Comput. Appl. Math. 9 (1983) 177–191.
[10] E.R. El-Zahar, A non-linear absolutely-stable explicit numerical integration algorithm for stiff initial-value problems, Amer. J. Appl. Sci. 10 (2013) 1363–1370.
[11] A.E. Sedgwick, An effective variable order variable step Adams method, Dept. of Computer Science, Rept. 53, University of Toronto, Toronto, Canada, 1973.
[12] J.R. Dormand, P.J. Prince, A family of embedded Runge–Kutta formulae, J. Comput. Appl. Math. 6 (1980) 19–26.
[13] H.H. Rosenbrock, Some general implicit processes for the numerical solution of differential equations, Comput. J. 5 (1963) 329–330.
[14] R. Alt, A-stable one-step methods with step-size control for stiff systems of ordinary differential equations, J. Comput. Appl. Math. 4 (1) (1978).
[15] J.D. Lambert, Nonlinear methods for stiff systems of ordinary differential equations, in: Proceedings of the Conference on Numerical Solution of Ordinary Differential Equations 363, University of Dundee, 1973, pp. 75–88.
[16] M. Calvo, M. Mar-Quemada, On the stability of rational Runge–Kutta methods, J. Comput. Appl. Math. 8 (1982) 289–293.
[17] G. Sottas, Rational Runge–Kutta methods are not suitable for stiff systems of ODEs, J. Comput. Appl. Math. 10 (1984) 169–174.
[18] E. Hairer, Unconditionally stable explicit methods for parabolic equations, Numer. Math. 35 (1980) 57–68.
[19] L.F. Shampine, M.W. Reichelt, The MATLAB ODE Suite, SIAM J. Sci. Comput. 18 (1997) 1–22.
[20] L.F. Shampine, M.K. Gordon, Computer Solution of Ordinary Differential Equations: The Initial Value Problem, Freeman, San Francisco, CA, 1975.
[21] R.E. O'Malley, Singular Perturbation Methods for Ordinary Differential Equations, Springer-Verlag, New York, 1991.
[22] J.D. Lambert, Numerical Methods for Ordinary Differential Systems: The Initial Value Problem, first ed., John Wiley, New York, 1991.
[23] M.K. Jain, S.R.K. Iyengar, R.K. Jain, Numerical Methods for Scientific and Engineering Computation, sixth ed., New Age International (P) Limited, New Delhi, 2012.
[24] A. Iserles, A First Course in the Numerical Analysis of Differential Equations, Cambridge University Press, Cambridge, UK, 2009.
[25] K.E. Atkinson, An Introduction to Numerical Analysis, John Wiley, New York, 1989.
[26] E. Hairer, S.P. Nørsett, G. Wanner, Solving Ordinary Differential Equations I, Springer, Berlin, 1993.
[27] E. Hairer, G. Wanner, Solving Ordinary Differential Equations II, Springer, Berlin, 1996.
[28] P. Deuflhard, Newton Methods for Nonlinear Problems, Springer, Berlin, 2004.
[29] J.D. Lambert, Computational Methods in Ordinary Differential Equations, Wiley, London, 1973.
[30] M.K. Jain, Numerical Solution of Differential Equations, Wiley, New Delhi, 1984.
