Numer Algor (2012) 59:333–346 DOI 10.1007/s11075-011-9492-3 ORIGINAL PAPER
An algorithm for second order initial and boundary value problems with an automatic error estimate based on a third derivative method Samuel N. Jator · Jiang Li
Received: 3 May 2011 / Accepted: 25 July 2011 / Published online: 13 August 2011 © Springer Science+Business Media, LLC 2011
Abstract A third derivative method (TDM) with continuous coefficients is derived and used to obtain main and additional methods, which are applied simultaneously to provide all approximations on the entire interval for initial and boundary value problems of the form y'' = f(x, y, y'). The convergence analysis of the method is discussed. An algorithm involving the TDMs is developed and equipped with an automatic error estimate based on the double mesh principle. Numerical experiments are performed to demonstrate its efficiency and accuracy advantages.

Keywords Algorithm · Second order · Initial value · Boundary value · Third derivative

Mathematics Subject Classification (2010) 65L05 · 65L06 · 65L10 · 65L12

1 Introduction

Second order differential equations (SODEs) are encountered in several areas of engineering and applied science, such as celestial mechanics, circuit theory, control theory, chemical kinetics, astrophysics, and biology. In practice, SODEs are frequently treated as either initial value problems (IVPs) or
S. N. Jator (B) Department of Mathematics and Statistics, Austin Peay State University, Clarksville, TN 37044, USA e-mail:
[email protected] J. Li Department of Computer Science and Information Technology, Austin Peay State University, Clarksville, TN 37044, USA
boundary value problems (BVPs). In the case of some periodic IVPs, where the main frequency may be known, the theoretical solutions are obtainable (see Franco [11]); for BVPs, some singularly perturbed problems likewise have theoretical solutions (see Brugnano and Trigiante [6]). Nevertheless, for a vast number of SODEs the exact solution cannot be found by analytical means, so the need to develop numerical techniques for solving them continues to be vital. We remark that many classical numerical methods used for solving second order IVPs cannot be applied to second order BVPs. Hence, we are motivated to propose a numerical method that can efficiently solve both IVPs and BVPs.

In the past decades, tremendous attention has been focused on developing methods for the direct solution of y'' = f(x, y) subject to initial conditions (see Lambert and Watson [19], Twizell and Khaliq [26], Coleman and Ixaru [8], Ananthakrishnaiah [2], Simos [23], Hairer et al. [13], Van der Houwen and Sommeijer [27], and Tsitouras [25]). There has also been a search for methods for the direct solution of y'' = f(x, y, y') coupled with initial conditions (see Vigo-Aguiar and Ramos [28], Chawla and Sharma [7], Jator [16], Mahmoud and Osman [20], Awoyemi [4]). Despite the successes of these methods, they are not designed to handle second order boundary value problems, which are generally solved by collocation and finite difference methods (see Conte and de Boor [9], Gladwell and Sayers [12], Keller [18], and Ascher et al. [3]). Recently, Amodio and Iavernaro [1] and Yusuph and Onumanyi [29] proposed methods for directly solving y'' = f(x, y) coupled with either initial or boundary conditions. In the spirit of Amodio and Iavernaro [1], Jator and Li [17] developed boundary value methods for y'' = f(x, y, y') coupled with either initial or boundary conditions.

In this paper, we propose a TDM for the general second-order ordinary differential equation of the form

\[ y'' = f(x, y, y'), \qquad x \in [a, b], \qquad (1) \]

subject to the given initial or boundary conditions

\[ y(a) = y_0, \; y'(a) = y'_0; \qquad \text{or} \qquad y(a) = y_0, \; y(b) = y_N; \qquad \text{or} \qquad y'(a) = y'_0, \; y'(b) = y'_N, \]

where y, f ∈ R^m, and the given boundary conditions are not a restriction on the TDM. We note that Keller [18] has given the theorem and proof of the general conditions which ensure that the solution of (1) exists and is unique.

Most existing methods for the direct solution of initial value problems (IVPs) are implemented in a step-by-step fashion in which, on the partition ℵ_h, an approximation is obtained at x_n only after an approximation at x_{n−1} has been computed, where

\[ \aleph_h : a = x_0 < x_1 < \cdots < x_N = b, \qquad x_n = x_{n-1} + h, \quad n = 1, \ldots, N, \]
h = (b − a)/N is the constant step-size of the partition ℵ_h, N is a positive integer, and n is the grid index. A different approach is offered by boundary value methods (BVMs), which discretize the problem using linear multistep methods and simultaneously solve the resulting system, as in Amodio and Iavernaro [1] and Jator and Li [17]. This approach is naturally adapted to boundary value problems (BVPs). We adopt the boundary value technique whereby all approximations (y_1, y_2, …, y_N)^T (T denotes the transpose) for the solution of (1) are generated simultaneously on the entire interval. The advantage of this approach is that the global errors at the end of the interval are smaller than those produced by step-by-step methods, since the accumulation of error at each step is inherent in step-by-step methods. It is known that BVMs can only be implemented successfully if used together with appropriate additional methods (see Brugnano and Trigiante [6]). In this light, we propose a main and additional methods obtained from the same continuous scheme derived through multistep collocation (see Onumanyi et al. [21]). The continuous representation generates the main TDM and additional methods, which are combined and used to simultaneously produce approximations y_j, j = 1, …, N for the solution of (1) at the points x_j, j = 1, …, N on ℵ_h.

The paper is organized as follows. In Section 2, we derive an approximation U(x) to y(x) which is used to obtain the main and additional TDMs; the convergence analysis of the method is also given in Section 2. Section 3 is devoted to computational aspects and an algorithm equipped with an automatic error estimate based on the double mesh principle. Numerical examples are given in Section 4 to show speed and accuracy advantages. Finally, the conclusion of the paper is given in Section 5.
2 Development of method

In this section, we develop a two-step TDM for (1) on the interval from x_n to x_{n+2} = x_n + 2h, where h is the chosen step-length. In particular, we initially assume that the solution on the interval [x_n, x_{n+2}] is locally approximated by the polynomial

\[ U(x) = \sum_{j=0}^{7} \ell_j x^j, \qquad (2) \]

where the ℓ_j are unknown coefficients. Since this polynomial must pass through the points (x_n, y_n), (x_{n+1}, y_{n+1}) and collocate the second and third derivatives at x_n, x_{n+1}, x_{n+2}, we demand that the following eight equations be satisfied:

\[ U(x_{n+j}) = y_{n+j}, \qquad j = 0, 1, \qquad (3) \]

\[ U''(x_{n+j}) = f_{n+j}, \qquad U'''(x_{n+j}) = g_{n+j}, \qquad j = 0, 1, 2. \qquad (4) \]
The eight undetermined coefficients ℓ_j obtained by solving (3) and (4) are substituted into (2); after some manipulation, we obtain the continuous representation of the TDM, given by

\[ U(x) = \alpha_0(x)\, y_n + \alpha_1(x)\, y_{n+1} + h^2 \sum_{j=0}^{2} \beta_j(x)\, f_{n+j} + h^3 \sum_{j=0}^{2} \gamma_j(x)\, g_{n+j}, \qquad (5) \]

where α_0(x), α_1(x), β_j(x), and γ_j(x), j = 0, 1, 2, are continuous coefficients. We assume that y_{n+j} = U(x_n + jh) is the numerical approximation to the analytical solution y(x_{n+j}), y'_{n+j} = U'(x_n + jh) is an approximation to y'(x_{n+j}), f_{n+j} = U''(x_n + jh) is an approximation to y''(x_{n+j}), and g_{n+j} = U'''(x_n + jh) is an approximation to y'''(x_{n+j}), where

\[ f_{n+j} = f\big(x_{n+j}, y_{n+j}, y'_{n+j}\big), \qquad g_{n+j} = \frac{d f(x, y(x), y'(x))}{dx}\bigg|_{x_{n+j}}, \quad \text{i.e. } g_{n+j} = g\big(x_{n+j}, y_{n+j}, y'_{n+j}\big), \quad j = 0, 1, 2. \]

The method (5) is used to obtain a main method and additional methods, which are combined to provide a global solution of (1) on ℵ_h.

Main method  The main method is obtained by evaluating (5) at x = x_{n+2}. Thus, y_{n+2} = U(x_n + 2h) gives the following method:

\[ y_{n+2} - 2y_{n+1} + y_n = \frac{h^2}{15}\big(2 f_n + 11 f_{n+1} + 2 f_{n+2}\big) + \frac{h^3}{40}\big(g_n - g_{n+2}\big), \qquad n = 0, \ldots, N-2. \qquad (6) \]
We note that the discretization of (1) using (6) alone yields more unknowns than equations, so the resulting system would be indeterminate. Hence, we are compelled to look for additional methods. Fortunately, (5) is continuous and provides the needed methods via its first derivative:

\[ U'(x) = \frac{1}{h}\left(\alpha'_0(x)\, y_n + \alpha'_1(x)\, y_{n+1} + h^2 \sum_{j=0}^{2} \beta'_j(x)\, f_{n+j} + h^3 \sum_{j=0}^{2} \gamma'_j(x)\, g_{n+j}\right). \qquad (7) \]

Additional methods  Noting that y'_0 = U'(a), y'_1 = U'(a + h), and y'_{n+2} = U'(x_n + 2h), we obtain the following additional methods from (7):

\[ -y_1 = -h y'_0 - y_0 + \frac{h^2}{42}\big({-13} f_0 - 7 f_1 - f_2\big) + \frac{h^3}{1680}\big({-59} g_0 + 128 g_1 + 11 g_2\big), \qquad (8) \]

\[ h y'_1 - y_1 = -y_0 + \frac{h^2}{1680}\big(187 f_0 + 616 f_1 + 37 f_2\big) + \frac{h^3}{840}\big(16 g_0 - 76 g_1 - 5 g_2\big), \qquad (9) \]

\[ h y'_{n+2} - y_{n+1} + y_n = \frac{h^2}{70}\big(11 f_n + 63 f_{n+1} + 31 f_{n+2}\big) + \frac{h^3}{1680}\big(53 g_n + 128 g_{n+1} - 101 g_{n+2}\big), \qquad n = 0, \ldots, N-2. \qquad (10) \]
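Because the main and additional methods all come from the same collocation polynomial, their coefficients can be re-derived independently. The following sketch (ours, not part of the paper) normalizes x_n = 0 and h = 1, solves the eight conditions (3)–(4) exactly over the rationals, and recovers the weights of y_{n+2} = U(2) and y'_{n+2} = U'(2); these agree with the coefficients displayed in (6) and (10).

```python
from fractions import Fraction as Fr

def cond_row(t, d):
    """Row of coefficients of a_0..a_7 for the d-th derivative of
    U(x) = sum_j a_j x^j evaluated at x = t."""
    row = []
    for j in range(8):
        if j < d:
            row.append(Fr(0))
            continue
        c = Fr(1)
        for k in range(d):
            c *= j - k                      # j (j-1) ... (j-d+1)
        row.append(c * Fr(t) ** (j - d))
    return row

def solve_exact(mat, rhs):
    """Gauss-Jordan elimination over the rationals."""
    n = len(mat)
    aug = [row[:] + [rhs[i]] for i, row in enumerate(mat)]
    for col in range(n):
        piv = next(r for r in range(col, n) if aug[r][col] != 0)
        aug[col], aug[piv] = aug[piv], aug[col]
        aug[col] = [x / aug[col][col] for x in aug[col]]
        for r in range(n):
            if r != col and aug[r][col] != 0:
                f = aug[r][col]
                aug[r] = [a - f * b for a, b in zip(aug[r], aug[col])]
    return [aug[r][n] for r in range(n)]

# Conditions (3)-(4) with x_n = 0, h = 1: U(0), U(1), U''(0), U''(1),
# U''(2), U'''(0), U'''(1), U'''(2) match the data vector
# (y_n, y_{n+1}, f_n, f_{n+1}, f_{n+2}, g_n, g_{n+1}, g_{n+2}).
M = [cond_row(t, d) for t, d in [(0, 0), (1, 0), (0, 2), (1, 2), (2, 2),
                                 (0, 3), (1, 3), (2, 3)]]
MT = [[M[i][j] for i in range(8)] for j in range(8)]

# Weights of U(2) and U'(2) on the data vector: solve M^T w = r.
w_main = solve_exact(MT, [Fr(2) ** j for j in range(8)])
w_deriv = solve_exact(MT, [j * Fr(2) ** (j - 1) if j else Fr(0)
                           for j in range(8)])
print(w_main)    # weights reproducing method (6)
print(w_deriv)   # weights reproducing method (10)
```

The recovered weights are exact rationals, so the comparison with the coefficients of (6) and (10) involves no rounding at all.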
Local truncation error and order  The local truncation errors associated with the methods (6), (8)–(10) are given by

\[ \tau_i = \frac{29}{302400}\, h^8 y^{(8)}(x_i + \theta_i h) + O(h^9), \qquad i = 2, \ldots, N, \quad |\theta_i| \le 1, \]

\[ \tau_1 = \frac{-1}{17280}\, h^8 y^{(8)}(\xi) + O(h^9), \]

\[ h\tau_i = \frac{31}{201600}\, h^8 y^{(8)}(x_i + \theta_i h) + O(h^9), \]

\[ h\tau_1 = \frac{29}{604800}\, h^8 y^{(8)}(\xi) + O(h^9), \qquad x_0 \le \xi \le x_1. \]
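As an independent check (ours, not from the paper), the leading constant 29/302400 in τ_i can be reproduced by substituting the smooth solution y(x) = e^x, for which f = y'' and g = y''' are also e^x, into the main method (6) at x = 0 and comparing the defect against (29/302400) h^8. High-precision decimal arithmetic is used because the defect is eight orders smaller than the individual terms.

```python
from decimal import Decimal, getcontext

getcontext().prec = 60   # enough digits to survive the cancellation

def defect(h):
    """Residual of method (6) on y(x) = exp(x) at x_n = 0, i.e.
    y(2h) - 2 y(h) + y(0) minus the right-hand side of (6)."""
    e1, e2 = h.exp(), (2 * h).exp()
    lhs = e2 - 2 * e1 + 1
    rhs = h ** 2 / 15 * (2 + 11 * e1 + 2 * e2) + h ** 3 / 40 * (1 - e2)
    return lhs - rhs

h = Decimal(1) / 100
r_h, r_half = defect(h), defect(h / 2)
lead = Decimal(29) / Decimal(302400)   # stated leading LTE constant
print(r_h / h ** 8)                    # approaches 29/302400 as h -> 0
print(r_h / r_half)                    # approaches 2^8 = 256
```

Halving h shrinks the defect by roughly 2^8, confirming that the defect of (6) is O(h^8) and hence that the method itself is of order six.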
Convergence analysis  The methods (6), (8), (9), and (10) can be written compactly in matrix form by introducing the following notation. Let A be the 2N × 2N matrix

\[ A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}, \]

whose blocks are the N × N matrices

\[ A_{11} = \begin{bmatrix} -1 & 0 & 0 & \cdots & 0 \\ -2 & 1 & 0 & \cdots & 0 \\ 1 & -2 & 1 & & \vdots \\ & \ddots & \ddots & \ddots & \\ 0 & \cdots & 1 & -2 & 1 \end{bmatrix}, \qquad A_{12} = O \ (\text{the } N \times N \text{ zero matrix}), \]

\[ A_{21} = \begin{bmatrix} -1 & 0 & 0 & \cdots & 0 \\ -1 & 0 & 0 & \cdots & 0 \\ 1 & -1 & 0 & & \vdots \\ & \ddots & \ddots & \ddots & \\ 0 & \cdots & 0 & 1 & -1 \end{bmatrix}, \qquad A_{22} = I_N \ (\text{the } N \times N \text{ identity matrix}). \]
Similarly, let B be the 2N × 2N matrix

\[ B = \begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix}, \]

whose blocks are the N × N matrices

\[ B_{11} = h^2 \begin{bmatrix} -\frac{7}{42} & -\frac{1}{42} & 0 & \cdots & 0 \\ \frac{11}{15} & \frac{2}{15} & 0 & \cdots & 0 \\ \frac{2}{15} & \frac{11}{15} & \frac{2}{15} & & \vdots \\ & \ddots & \ddots & \ddots & \\ 0 & \cdots & \frac{2}{15} & \frac{11}{15} & \frac{2}{15} \end{bmatrix}, \qquad B_{12} = h^2 \begin{bmatrix} \frac{128}{1680} & \frac{11}{1680} & 0 & \cdots & 0 \\ 0 & -\frac{1}{40} & 0 & \cdots & 0 \\ \frac{1}{40} & 0 & -\frac{1}{40} & & \vdots \\ & \ddots & \ddots & \ddots & \\ 0 & \cdots & \frac{1}{40} & 0 & -\frac{1}{40} \end{bmatrix}, \]

\[ B_{21} = h^2 \begin{bmatrix} \frac{616}{1680} & \frac{37}{1680} & 0 & \cdots & 0 \\ \frac{63}{70} & \frac{31}{70} & 0 & \cdots & 0 \\ \frac{11}{70} & \frac{63}{70} & \frac{31}{70} & & \vdots \\ & \ddots & \ddots & \ddots & \\ 0 & \cdots & \frac{11}{70} & \frac{63}{70} & \frac{31}{70} \end{bmatrix}, \qquad B_{22} = h^2 \begin{bmatrix} -\frac{76}{840} & -\frac{5}{840} & 0 & \cdots & 0 \\ \frac{128}{1680} & -\frac{101}{1680} & 0 & \cdots & 0 \\ \frac{53}{1680} & \frac{128}{1680} & -\frac{101}{1680} & & \vdots \\ & \ddots & \ddots & \ddots & \\ 0 & \cdots & \frac{53}{1680} & \frac{128}{1680} & -\frac{101}{1680} \end{bmatrix}. \]
We further define the following vectors:

\[ Y = \big(y(x_1), \ldots, y(x_N), h y'(x_1), \ldots, h y'(x_N)\big)^T, \qquad F = \big(f_1, \ldots, f_N, h g_1, \ldots, h g_N\big)^T, \]

\[ C = \Big(h y'_0 + y_0 + \tfrac{13}{42} h^2 f_0 + \tfrac{59}{1680} h^3 g_0, \;\; y_0 - \tfrac{2}{15} h^2 f_0 - \tfrac{1}{40} h^3 g_0, \;\; 0, \ldots, 0, \]
\[ \qquad\quad y_0 - \tfrac{187}{1680} h^2 f_0 - \tfrac{16}{840} h^3 g_0, \;\; y_0 - \tfrac{11}{70} h^2 f_0 - \tfrac{53}{1680} h^3 g_0, \;\; 0, \ldots, 0\Big)^T, \]
and the local truncation error vector

\[ L(h) = \big(\tau_1, \ldots, \tau_N, h\tau_1, \ldots, h\tau_N\big)^T. \]

The exact form of the system is given by

\[ A Y - B F(Y) + C + L(h) = 0, \qquad (11) \]

and the approximate form of the system is given by

\[ A \bar{Y} - B F(\bar{Y}) + C = 0, \qquad (12) \]

where \bar{Y} = (y_1, …, y_N, h y'_1, …, h y'_N)^T is the approximation of the solution vector Y. Subtracting (11) from (12), we obtain

\[ A E - B F(Y) + B F(\bar{Y}) = L(h), \qquad (13) \]

where E = Y − \bar{Y} = (e_1, e_2, …, e_N, e'_1, e'_2, …, e'_N)^T. Using the mean-value theorem, we write (13) as

\[ (A - B J) E = L(h), \]

where the Jacobian matrix J and its blocks J_{11}, J_{12}, J_{21}, and J_{22} are defined as follows:

\[ J = \begin{bmatrix} J_{11} & J_{12} \\ J_{21} & J_{22} \end{bmatrix}, \qquad J_{11} = \begin{bmatrix} \frac{\partial f_1}{\partial y_1} & \cdots & \frac{\partial f_1}{\partial y_N} \\ \vdots & & \vdots \\ \frac{\partial f_N}{\partial y_1} & \cdots & \frac{\partial f_N}{\partial y_N} \end{bmatrix}, \qquad J_{12} = \begin{bmatrix} \frac{\partial f_1}{\partial y'_1} & \cdots & \frac{\partial f_1}{\partial y'_N} \\ \vdots & & \vdots \\ \frac{\partial f_N}{\partial y'_1} & \cdots & \frac{\partial f_N}{\partial y'_N} \end{bmatrix}, \]

\[ J_{21} = h \begin{bmatrix} \frac{\partial g_1}{\partial y_1} & \cdots & \frac{\partial g_1}{\partial y_N} \\ \vdots & & \vdots \\ \frac{\partial g_N}{\partial y_1} & \cdots & \frac{\partial g_N}{\partial y_N} \end{bmatrix}, \qquad J_{22} = h \begin{bmatrix} \frac{\partial g_1}{\partial y'_1} & \cdots & \frac{\partial g_1}{\partial y'_N} \\ \vdots & & \vdots \\ \frac{\partial g_N}{\partial y'_1} & \cdots & \frac{\partial g_N}{\partial y'_N} \end{bmatrix}. \]

Let M = −B J, a matrix of dimension 2N. We have

\[ (A + M) E = L(h), \qquad (14) \]

and for sufficiently small h, A + M is a monotone matrix and thus invertible (see Jain and Aziz [15]). Therefore,

\[ (A + M)^{-1} = D = (d_{i,j}) \ge 0, \qquad \text{and} \qquad \sum_{j=1}^{2N} d_{i,j} = O(h^{-2}). \qquad (15) \]

If ‖E‖ = max_i |e_i|, then from (14), E = (A + M)^{−1} L(h); using (15) and the truncation error vector L(h) = O(h^8), it follows that ‖E‖ = O(h^6). Therefore, the TDM is a sixth-order convergent method.
3 Computational aspects

Case 1  If the first derivative is absent from the right-hand side of (1), all approximations (y_1, y_2, …, y_N)^T for the solution of (1) are generated simultaneously on the entire interval: the main method (6), for n = 0, 1, …, N − 2, supplies N − 1 equations, and the additional method (8) completes the set of equations required to solve the N × N system arising from either the IVP or the BVP of the form (1).

Case 2  If the first derivative is present on the right-hand side of (1), all approximations (y_1, y_2, …, y_N)^T and (y'_1, y'_2, …, y'_N)^T of the solution of (1) are generated simultaneously on the entire interval: the main methods (6) and (10), for n = 0, 1, …, N − 2, supply 2N − 2 equations, and the additional methods (8) and (9) complete the set of equations required to solve the 2N × 2N system arising from either the IVP or the BVP of the form (1).

In summary, the main methods (6) and (10) and the additional methods (8) and (9) are combined into a single block matrix of finite difference equations, which is solved to provide simultaneously the values of the solution and of the first derivative generated by the sequences {y_n}, {y'_n}, n = 1, …, N, while adjusting for the boundary conditions. In particular, for linear problems we can solve (1) directly from the start by Gaussian elimination with partial pivoting, while for nonlinear problems we use a modified Newton–Raphson method.
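The modified Newton–Raphson strategy mentioned above can be sketched as follows. This is an illustrative stand-in (the function names and the toy 2 × 2 system are ours, not the paper's): the Jacobian is evaluated once at the starting guess and reused for every correction, each of which is obtained by Gaussian elimination with partial pivoting.

```python
def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        s = sum(M[r][k] * x[k] for k in range(r + 1, n))
        x[r] = (M[r][n] - s) / M[r][r]
    return x

def modified_newton(F, J, y0, tol=1e-12, max_iter=50):
    """Modified Newton: freeze the Jacobian at the initial guess and
    reuse it, trading quadratic for linear (but cheap) convergence."""
    A = J(y0)                      # Jacobian evaluated only once
    y = list(y0)
    for _ in range(max_iter):
        r = F(y)
        if max(abs(v) for v in r) < tol:
            break
        d = gauss_solve(A, r)
        y = [yi - di for yi, di in zip(y, d)]
    return y

# Toy system: intersection of the unit circle with the line y1 = y2.
root = modified_newton(
    lambda y: [y[0] ** 2 + y[1] ** 2 - 1.0, y[0] - y[1]],
    lambda y: [[2 * y[0], 2 * y[1]], [1.0, -1.0]],
    [1.0, 0.5])
```

In the algorithm proper, the residual function would collect the discretized equations (6) and (8)–(10) over the whole partition, and the corresponding Jacobian would be the 2N × 2N matrix A − BJ of Section 2.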
4 Numerical examples

In this section, we give five numerical examples to illustrate the accuracy of the method. We record the maximum absolute error (Err) of the approximate solution on the partition ℵ_h as Err = max_n |y(x_n) − y_n|. The rate of convergence (ROC) is calculated using the formula ROC = log_2(Err_{2h}/Err_h), where Err_h is the Err obtained using the step size h, while the error estimate (Err Est) is computed using the double mesh principle as ErrEst(2h) = max_n |y_n^{2h} − y_n^{h}|, where y^h is the approximate solution obtained for a given h. We note that the method requires only two function evaluations (FNCs) per step. All computations were carried out using our code written in Mathematica 7.0.

Example 4.1  We consider Bessel's equation

\[ x^2 y'' + x y' + (x^2 - 0.25) y = 0, \qquad y(1) = \sqrt{\tfrac{2}{\pi}} \sin 1 \approx 0.6713967071418031, \qquad y'(1) = \frac{2 \cos 1 - \sin 1}{\sqrt{2\pi}} \approx 0.0954005144474746, \]

for which the theoretical solution is given by y(x) = J_{1/2}(x) = \sqrt{2/(\pi x)} \sin x.
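The double mesh estimate and the ROC computation defined above can be illustrated with any convergent solver. The sketch below (ours; the TDM itself is not reproduced here) uses a standard second-order central-difference scheme on the model BVP y'' = −y, y(0) = 0, y(π/2) = 1, whose exact solution is sin x, solves it on a mesh and on the halved mesh, and forms Err, ROC, and Err Est exactly as defined above; for the TDM the observed ROC would be near 6 instead of 2.

```python
import math

def central_diff_bvp(N):
    """Solve y'' = -y, y(0) = 0, y(pi/2) = 1 with second-order central
    differences; the tridiagonal system is solved by the Thomas algorithm."""
    h = (math.pi / 2) / N
    n = N - 1                          # interior unknowns y_1 .. y_{N-1}
    diag = [h * h - 2.0] * n           # y_{i-1} + (h^2 - 2) y_i + y_{i+1} = 0
    rhs = [0.0] * n
    rhs[-1] = -1.0                     # right boundary value moved to the rhs
    cp, dp = [0.0] * n, [0.0] * n      # Thomas forward sweep (off-diagonals 1)
    cp[0], dp[0] = 1.0 / diag[0], rhs[0] / diag[0]
    for i in range(1, n):
        m = diag[i] - cp[i - 1]
        cp[i] = 1.0 / m
        dp[i] = (rhs[i] - dp[i - 1]) / m
    y = [0.0] * n
    y[-1] = dp[-1]
    for i in range(n - 2, -1, -1):     # back substitution
        y[i] = dp[i] - cp[i] * y[i + 1]
    return [0.0] + y + [1.0], h

y_h, h = central_diff_bvp(64)
y_2h, _ = central_diff_bvp(32)
err_h = max(abs(y_h[i] - math.sin(i * h)) for i in range(65))
err_2h = max(abs(y_2h[i] - math.sin(2 * i * h)) for i in range(33))
roc = math.log2(err_2h / err_h)                              # ~2 here
err_est = max(abs(y_2h[i] - y_h[2 * i]) for i in range(33))  # double mesh
```

Note that the double mesh estimate only needs the two computed solutions on the coarse grid, which is why it can be produced automatically without knowing the exact solution.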
Table 1  Absolute errors at x = 8 for Example 4.1

Steps   Vigo-Aguiar–Ramos (p = 8)   TDM (p = 6)
        Absolute errors             Absolute errors
67      7.1122 × 10^-7              1.2855 × 10^-9
82      9.2632 × 10^-8              3.9664 × 10^-10
97      8.7834 × 10^-9              1.4797 × 10^-10
112     1.2108 × 10^-10             6.3336 × 10^-11
125     2.7068 × 10^-11             3.3050 × 10^-11
This problem was chosen to demonstrate the performance of the TDM on a general second order IVP which includes y' on the right-hand side. The errors in the solution were obtained at x = 8 using the TDM. Results obtained for the same problem by Vigo-Aguiar and Ramos [28], using their variable-step Falkner method of order eight (p = 8) in predictor-corrector mode, are reproduced in Table 1. It is seen that although we used fixed step-sizes, the TDM generally performs better than the method in [28]. In Table 2, we show that the calculated ROC of the TDM is consistent with the theoretical order (p = 6) of the method. In particular, it is obvious from Table 2 that the TDM exhibits order-6 behavior, since on halving the step size, Err is reduced by a factor of about 65, which is approximately 2^6 = 64. It is also shown that the automatic error estimate incorporated into the method performs excellently. An error tolerance of 10^-12 was supplied to automatically generate the errors for the values of N given in Table 2.

Example 4.2  We consider the singularly perturbed BVP

\[ \varepsilon y'' + x y' = -\varepsilon \pi^2 \cos(\pi x) - \pi x \sin(\pi x), \qquad y(-1) = -2, \quad y(1) = 0, \quad \varepsilon = 10^{-4}, \]

for which the exact solution is given by y(x) = \cos(\pi x) + \operatorname{erf}(x/\sqrt{2\varepsilon})/\operatorname{erf}(1/\sqrt{2\varepsilon}).

This problem was chosen to demonstrate the performance of the TDM on a singularly perturbed BVP with Dirichlet boundary conditions. The problem was also solved by Brugnano and Trigiante [6] using the Extended Trapezoidal Rule (ETR), the Extended Trapezoidal Rule of Second kind (ETR_2), and the Top Order Method (TOM). They noted that the TOM performed better than the ETR and ETR_2; hence we have chosen the TOM for comparison with the
Table 2  Results for Example 4.1

N     FNCs   Err               ROC    Err Est
16    34     1.2324 × 10^-5           1.2013 × 10^-5
32    66     3.1336 × 10^-7    5.28   3.0727 × 10^-7
64    130    6.0936 × 10^-9    5.68   5.9909 × 10^-9
128   258    1.0275 × 10^-10   5.89   1.0111 × 10^-10
256   514    1.6365 × 10^-12   5.97   1.6127 × 10^-12
TDM, which is of order 6 like the TOM. The results given in [6] are reproduced in Table 3 and compared with the results given by the TDM. It is seen from Table 3 that the TDM performs better than the TOM. Table 3 also shows that the calculated ROCs of both methods are consistent with the theoretical order (p = 6) of the methods; in particular, the TDM exhibits order-6 behavior, since on halving the step-size, Err is reduced by a factor of about 2^6. The TDM is also equipped with an automatic error estimate that performs excellently, as shown in Table 3. An error tolerance of 10^-11 was supplied to automatically generate the errors for the five step sizes given in Table 3.

Example 4.3  We consider the BVP

\[ y'' + x y = (3 - x - x^2 + x^3) \sin x + 4x \cos x, \qquad y'(0) = -1, \quad y'(1) = 2 \sin 1, \]

for which the exact solution is given by y(x) = (x^2 − 1) \sin x.

In order to test the performance of the TDM on a BVP with Neumann boundary conditions, we consider this example, which was also solved by Ramadan et al. [22] (RLZ) using a method of order 4. Although the TDM is expected to perform better than the method in [22], since it is of order 6, it is evident from Table 4 that even for much larger step-sizes the TDM still performs better: for N = 4 the maximum absolute error for the TDM is smaller than the maximum absolute error for N = 64 for the method in [22]. It is also noticed that the calculated ROC of the TDM is more consistent with the theoretical order of the method than that of the method in [22]. In particular, it is obvious from Table 4 that the TDM exhibits order-6 behavior, since on halving the step-size, Err is reduced by a factor of about 2^6. We note that the automatic error estimate incorporated into the TDM again performs excellently. An error tolerance of 10^-15 was supplied to automatically generate the errors for the values of N given in Table 4.

Example 4.4  We consider the nonlinear Fehlberg problem

\[ y''_1 = -4x^2 y_1 - \frac{2 y_2}{\sqrt{y_1^2 + y_2^2}}, \qquad y''_2 = \frac{2 y_1}{\sqrt{y_1^2 + y_2^2}} - 4x^2 y_2, \]
Table 3  Results for Example 4.2

          TOM (p = 6)                  TDM (p = 6)
h         Err               ROC        Err               ROC    Err Est
0.01      7.078 × 10^-3                3.342 × 10^-4            3.336 × 10^-4
0.005     1.338 × 10^-4     5.73       5.552 × 10^-6     5.91   5.471 × 10^-6
0.0025    1.600 × 10^-6     6.39       8.206 × 10^-8     6.08   8.077 × 10^-8
0.00125   1.016 × 10^-8     7.30       1.293 × 10^-9     5.99   1.273 × 10^-9
0.000625  1.593 × 10^-10    6.00       2.027 × 10^-11    6.00   1.995 × 10^-11
Table 4  Results for Example 4.3

       RLZ (p = 4)              TDM (p = 6)
N      Err            ROC       Err             ROC    Err Est
4      2.24 × 10^-2             1.62 × 10^-6           1.60 × 10^-6
8      2.67 × 10^-3    3.07     2.43 × 10^-8    6.06   2.39 × 10^-8
16     3.24 × 10^-4    3.04     3.73 × 10^-10   6.02   3.67 × 10^-10
32     3.99 × 10^-5    3.02     5.79 × 10^-12   6.01   5.70 × 10^-12
64     4.94 × 10^-6    3.01     8.98 × 10^-14   6.01   8.88 × 10^-14
128    6.16 × 10^-7    3.00     1.11 × 10^-15   6.34   1.55 × 10^-15

\[ y_1(x_0) = 0, \qquad y'_1(x_0) = -2\sqrt{\tfrac{\pi}{2}}, \qquad y_2(x_0) = 1, \qquad y'_2(x_0) = 0, \qquad x_0 = \sqrt{\tfrac{\pi}{2}}, \]
for which the exact solution is given by y_1(x) = \cos(x^2), y_2(x) = \sin(x^2). This problem was chosen to demonstrate the performance of the TDM on a nonlinear system with variable coefficients. The problem was also solved in [24] using the eighth-order, eight-stage Runge–Kutta–Nyström method (H8) constructed by Hairer [14] and the eighth-order, nine-stage method (BG8) constructed by Beentjes and Gerritsen [5]. The maximum norm of the global error for the y-component is given in the form 10^-CD, where CD denotes the number of correct decimal digits at the endpoint (see [24]). We have chosen to compare these methods of order 8 with the TDM of order 6 because the orders of the methods are moderately close. The results obtained using the H8 and BG8 methods are reproduced in Table 5 and compared to the results given by the TDM. It is seen from Table 5 that for approximately the same number of FNCs our method performs better than those in [24] in terms of accuracy (larger CD values), despite the fact that those methods are of higher order. In Table 6, we show that the calculated ROC of the TDM is consistent with the theoretical order (p = 6) of the method, since on halving the step size, Err is reduced by a factor of about 2^6. It is also shown that the automatic error estimate incorporated into the method performs excellently. An error tolerance of 10^-12 was supplied to automatically generate the errors for the values of N given in Table 6.
Table 5  A comparison of the correct decimal digits at the endpoint for Example 4.4

H8/BG8                       TDM
FNCs     H8 CD    BG8 CD     FNCs     CD
400      0.3      0.9        402      3.27
800      2.6      3.1        802      5.09
1,600    5.2      5.6        1,602    6.90
3,200    7.6      8.0        3,202    8.70
6,400    10.0     10.4       6,402    10.51
Table 6  Results for Example 4.4

N       Err                ROC    Err Est
200     5.37027 × 10^-4           5.28842 × 10^-4
400     8.18468 × 10^-6    5.88   8.05748 × 10^-6
800     1.27207 × 10^-7    5.97   1.25222 × 10^-7
1,600   1.98508 × 10^-9    5.99   1.95414 × 10^-9
3,200   3.09498 × 10^-11   6.07   3.05351 × 10^-11
Example 4.5  We consider the nonlinear perturbed system, also solved by Fang et al. [10], on the range [0, 10] with ε = 10^-3:

\[ y''_1 + 25 y_1 + \varepsilon (y_1^2 + y_2^2) = \varepsilon \varphi_1(x), \qquad y_1(0) = 1, \quad y'_1(0) = 0, \]
\[ y''_2 + 25 y_2 + \varepsilon (y_1^2 + y_2^2) = \varepsilon \varphi_2(x), \qquad y_2(0) = \varepsilon, \quad y'_2(0) = 5, \]

where

\[ \varphi_1(x) = 1 + \varepsilon^2 + 2\varepsilon \sin(5x + x^2) + 2 \cos(x^2) + (25 - 4x^2) \sin(x^2), \]
\[ \varphi_2(x) = 1 + \varepsilon^2 + 2\varepsilon \sin(5x + x^2) - 2 \sin(x^2) + (25 - 4x^2) \cos(x^2), \]

and the exact solution, given by y_1(x) = \cos(5x) + \varepsilon \sin(x^2), y_2(x) = \sin(5x) + \varepsilon \cos(x^2), represents a periodic motion of constant frequency with a small perturbation of variable frequency.

This problem was chosen to demonstrate the performance of the TDM on a nonlinear perturbed system. The problem was solved by Fang et al. [10] using a variable step-size fifth-order trigonometrically fitted Runge–Kutta–Nyström method and a fifth-order Runge–Kutta–Nyström method (ARKN5(3)) constructed by Franco [11]. In Table 7, we show that the calculated ROC of the TDM is consistent with the theoretical order (p = 6) of the method, since on halving the step size, Err is reduced by a factor of about 2^6. An error tolerance of 10^-11 was supplied to automatically generate the errors for the values of N given in Table 7. In Table 8, the results obtained in [10] for the ARKN5(3) are reproduced and compared to those of the TDM, since the orders of the two methods are very close. We remark that although the ARKN5(3) is expected to perform better, because it was constructed using trigonometric basis functions and implemented as a variable-step method, the TDM is moderately competitive with the ARKN5(3), especially as the step-size is decreased.
Table 7  Results for Example 4.5

N       Err                ROC    Err Est
100     4.23708 × 10^-5           4.17276 × 10^-5
200     6.43196 × 10^-7    6.04   6.33214 × 10^-7
400     1.01396 × 10^-8    5.99   9.9814 × 10^-9
800     1.58173 × 10^-10   6.00   1.55697 × 10^-10
1,600   2.47746 × 10^-12   6.00   2.4383 × 10^-12
Table 8  A comparison of methods for Example 4.5

ARKN5(3)                         TDM
N (rejected)   -log10(Err)       N      -log10(Err)
42 (15)        2.82              48     2.35
86 (7)         4.96              88     4.03
260 (5)        7.16              280    7.10
812 (3)        9.37              800    9.80
5 Conclusions

We have proposed an algorithm with an automatic error estimate, based on a TDM, for solving general second-order IVPs and BVPs directly, without first reducing the second order ordinary differential equation to an equivalent first order system. We are convinced that a method that can solve both IVPs and BVPs merely by adjusting for the boundary conditions is very competitive. The accuracy of the method is demonstrated via several numerical examples, as reported in Tables 1–8. We remark that the main drawback of the TDM is the need to obtain the third derivatives by differentiating the right-hand side of (1); however, third derivatives can easily be generated for a wide range of problems. It is clear from the results given in the tables that the TDM can be adapted to cope with a variety of challenging problems, such as periodic IVPs and singularly perturbed BVPs.

Acknowledgements  The first author is grateful to Austin Peay State University, Clarksville, Tennessee, USA for granting him the Faculty Development Leave in spring 2010 to do this work.
References

1. Amodio, P., Iavernaro, F.: Symmetric boundary value methods for second order initial and boundary value problems. Mediterr. J. Math. 3, 383–398 (2006)
2. Ananthakrishnaiah, U.: P-stable Obrechkoff methods with minimal phase-lag for periodic initial value problems. Math. Comput. 49, 553–559 (1987)
3. Ascher, U.M., Mattheij, R.M.M., Russell, R.D.: Numerical Solution of Boundary Value Problems for Ordinary Differential Equations, 2nd edn. SIAM, Philadelphia, PA (1995)
4. Awoyemi, D.O.: A new sixth-order algorithm for general second order ordinary differential equations. Int. J. Comput. Math. 77, 117–124 (2001)
5. Beentjes, P.A., Gerritsen, W.J.: High Order Runge–Kutta Methods for the Numerical Solution of Second Order Differential Equations without First Derivative. Report NW 34/76, Center for Mathematics and Computer Science, Amsterdam (1976)
6. Brugnano, L., Trigiante, D.: Solving Differential Problems by Multistep Initial and Boundary Value Methods. Gordon and Breach Science Publishers, Amsterdam (1998)
7. Chawla, M.M., Sharma, S.R.: Families of three-stage third order Runge–Kutta–Nyström methods for y'' = f(x, y, y'). J. Aust. Math. Soc. 26, 375–386 (1985)
8. Coleman, J.P., Ixaru, L.Gr.: P-stability and exponential-fitting methods for y'' = f(x, y). IMA J. Numer. Anal. 16, 179–199 (1996)
9. Conte, S.D., de Boor, C.: Elementary Numerical Analysis: An Algorithmic Approach, 3rd edn. McGraw-Hill, Tokyo (1981)
10. Fang, Y., Song, Y., Wu, X.: A robust trigonometrically fitted embedded pair for perturbed oscillators. J. Comput. Appl. Math. 225, 347–355 (2009)
11. Franco, J.M.: Runge–Kutta–Nyström methods adapted to the numerical integration of perturbed oscillators. Comput. Phys. Commun. 147, 770–787 (2002)
12. Gladwell, I., Sayers, D.K.: Computational Techniques for Ordinary Differential Equations. Academic Press, London, New York (1976)
13. Hairer, E., Nørsett, S.P., Wanner, G.: Solving Ordinary Differential Equations I. Springer, Berlin (1987)
14. Hairer, E.: Méthodes de Nyström pour l'équation différentielle y'' = f(x, y). Numer. Math. 25, 283–300 (1977)
15. Jain, M.K., Aziz, T.: Cubic spline solution of two-point boundary value problems with significant first derivatives. Comput. Methods Appl. Mech. Eng. 39, 83–91 (1983)
16. Jator, S.N.: A sixth order linear multistep method for the direct solution of y'' = f(x, y, y'). Int. J. Pure Appl. Math. 40, 457–472 (2007)
17. Jator, S.N., Li, J.: Boundary value methods via a multistep method with variable coefficients for second order initial and boundary value problems. Int. J. Pure Appl. Math. 50, 403–420 (2009)
18. Keller, H.B.: Numerical Methods for Two-Point Boundary Value Problems. Dover, New York (1992)
19. Lambert, J.D., Watson, A.: Symmetric multistep methods for periodic initial value problems. J. Inst. Math. Appl. 18, 189–202 (1976)
20. Mahmoud, S.M., Osman, M.S.: On a class of spline-collocation methods for solving second-order initial-value problems. Int. J. Comput. Math. 86, 613–630 (2009)
21. Onumanyi, P., Sirisena, U.W., Jator, S.N.: Continuous finite difference approximations for solving differential equations. Int. J. Comput. Math. 72, 15–27 (1999)
22. Ramadan, M.A., Lashien, I.F., Zahra, W.K.: Polynomial and nonpolynomial spline approaches to the numerical solution of second order boundary value problems. Appl. Math. Comput. 184, 476–484 (2007)
23. Simos, T.E.: Dissipative trigonometrically-fitted methods for second order IVPs with oscillating solution. Int. J. Mod. Phys. 13, 1333–1345 (2002)
24. Sommeijer, B.P.: Explicit, high-order Runge–Kutta–Nyström methods for parallel computers. Appl. Numer. Math. 13, 221–240 (1993)
25. Tsitouras, Ch.: Explicit eighth order two-step methods with nine stages for integrating oscillatory problems. Int. J. Mod. Phys. 17, 861–876 (2006)
26. Twizell, E.H., Khaliq, A.Q.M.: Multiderivative methods for periodic IVPs. SIAM J. Numer. Anal. 21, 111–121 (1984)
27. Van der Houwen, P.J., Sommeijer, B.P.: Predictor-corrector methods for periodic second-order initial value problems. IMA J. Numer. Anal. 7, 407–422 (1987)
28. Vigo-Aguiar, J., Ramos, H.: Variable stepsize implementation of multistep methods for y'' = f(x, y, y'). J. Comput. Appl. Math. 192, 114–131 (2006)
29. Yusuph, Y., Onumanyi, P.: New multiple FDMs through multistep collocation for y'' = f(x, y). In: Proceedings of the National Mathematical Center, Abuja, Nigeria (2005)