International Journal of Modeling, Simulation, and Scientific Computing
Vol. 1, No. 4 (2010) 509–522
© World Scientific Publishing Company
DOI: 10.1142/S1793962310000286
SIMPLE YET EFFICIENT ALGORITHM FOR SOLVING NONLINEAR EQUATIONS
SANJAY KUMAR KHATTRI∗
Department of Engineering, Stord Haugesund University College, Norway
[email protected]

MUHAMMAD ASLAM NOOR
Mathematics Department, COMSATS Institute of Information Technology, Chak Shahzad, Park Road, Islamabad, Pakistan
and
Mathematics Department, College of Science, King Saud University, Riyadh, Saudi Arabia
[email protected]
Received 26 May 2010
Revised 7 September 2010
Accepted 22 September 2010

In this work, we develop a simple yet robust and highly practical algorithm for constructing iterative methods of higher convergence orders. The algorithm can easily be implemented in software packages to achieve a desired convergence order. Convergence analysis shows that the algorithm can generate methods of various convergence orders, which is also supported by the numerical work. The algorithm converges even if the derivative of the function vanishes during the iterative process. Computational results confirm that the developed algorithm is efficient, and demonstrate performance equal to or better than that of other well-known methods and the classical Newton method.

Keywords: Iterative methods; higher order; Newton; derivative vanish; convergence; algorithm; software; global.

Mathematics Subject Classification 2000: 65H05, 65D05, 41A25, 65Bxx
∗Corresponding author.
1. Introduction

Various problems arising in diverse disciplines of engineering, science and nature can be described by a nonlinear equation of the following form1–4:

f(x) = 0.    (1)
Therefore, solving the preceding equation is a very important task, and numerous methods exist for solving it (see Refs. 1–20 and the references therein). Among these many iterative methods, Newton's method, one of the best known and probably the oldest, is still extensively used for solving the nonlinear equation (1). In classical form, Newton's method for solving Eq. (1) is expressed as follows (NM):

x_{n+1} = x_n − f(x_n)/f'(x_n),  n = 0, 1, 2, 3, . . . , with f'(x_n) ≠ 0.    (2)
Newton's method converges quadratically.1 Many methods have been developed that improve on this convergence rate (see Refs. 1–17 and the references therein). For fourth-order methods we refer to Refs. 1 and 5–7, for sixth-order convergent methods to Refs. 8–12, for eighth-order convergent methods to Ref. 13 and the references therein, and for a 16th-order iterative method to Ref. 14. One practical drawback of this multitude of methods is their independent nature. For example, suppose one has a software package which solves nonlinear equations by the well-known fourth-order convergent Jarratt method.7 One may find it difficult to modify the package to implement the sixth-order convergent method of Neta8 or that of Sharma and Guha,11 and harder still to reach even higher convergence rates by implementing the eighth-order method of Bi et al.13 Consequently, we are interested in an algorithm which can generate higher- or lower-order methods through a simple modification.
Furthermore, although Newton's method converges quadratically and the methods5–14 offer higher convergence orders, if the derivative of the function vanishes during the iterative process, that is f'(x_n) = 0, then the sequence generated by the Newton iteration (2) or by the methods5–14 is not defined, because division by zero causes a breakdown. This is our second motivation: we are interested in higher-order methods which may converge even if the derivative vanishes during the iterative process. As a consequence, this work contributes an algorithm for solving nonlinear scalar equations which possesses the following three very important features:
(1) The algorithm is simple to program and implement.
(2) The algorithm can generate iterative methods of desired convergence orders.
(3) The methods generated by the algorithm may converge even if the derivative vanishes.
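For concreteness, the classical Newton iteration (2) can be sketched in a few lines of double-precision Python; the helper name `newton`, the tolerance and the iteration cap below are our illustrative choices, not taken from the paper:

```python
import math

# Sketch of the classical Newton iteration (2); tolerance and iteration
# cap are illustrative choices.
def newton(f, df, x0, tol=1e-12, maxitr=100):
    """Return an approximate root of f, or None when f'(x_n) vanishes."""
    x = x0
    for _ in range(maxitr):
        d = df(x)
        if d == 0.0:            # the division-by-zero breakdown discussed above
            return None
        x_new = x - f(x) / d
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x
```

For f(x) = x^2 − 2 with x_0 = 1 the iteration converges rapidly to √2, whereas for f(x) = cos(x) with x_0 = 0 it stops immediately because f'(x_0) = −sin(0) = 0 — precisely the breakdown that the algorithm of this paper is designed to avoid.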
2. Algorithm and Convergence Analysis of Its Various Methods

Consider the following m-step scheme

y_1 = x_n − f(x_n)/(f'(x_n) + α(n) f(x_n)),
y_2 = y_1 − f(y_1) Φ(x_n, y_1),
y_3 = y_2 − f(y_2) Φ(x_n, y_1),
  ⋮
y_{m−1} = y_{m−2} − f(y_{m−2}) Φ(x_n, y_1),
y_m = y_{m−1} − f(y_{m−1}) Φ(x_n, y_1).    (3)

Here,

Φ(x_n, y_1) = [1 + 2 f(y_1)/f(x_n) − α(n) f(x_n)/(f'(x_n) + α(n) f(x_n))] / (f'(x_n) + α(n) f(x_n)),

and x_{n+1} := y_m, where α(n) is a free real parameter. To ensure that the denominator in Eq. (3) is never zero (through vanishing of the derivative) during an iterative process, α(n) may be given the same sign as f'(x_n) f(x_n). We will see that the m steps of the above scheme generate an iterative method of order 2m. For example, the choice m = 1 generates a second-order iterative method, while the choice m = 3 generates a sixth-order iterative method. We may notice that the choice m = 1 with α = 0 produces the classical Newton method (2). The choice m = 1, the first step of Eq. (3), gives the quadratically convergent method developed by Wu et al.15–19 and Kanwar et al.20 Equation (3) has two advantages over Newton's method and the methods.5–14,21 First, the sequence generated by Eq. (3) is well defined even if f'(x_n) ≈ 0. Second, Eq. (3) offers more numerical stability, which is also observed during numerical experimentation. It may be noticed that a method of order 2m formed by Eq. (3) requires m function evaluations and one derivative evaluation per iterative step. We prove the convergence of the various methods of Eq. (3) through the following theorem.

Theorem 1. Let the function f : D ⊂ R → R have a simple root γ in the open interval D, and let the first, second and third derivatives of f exist on D. Then the m steps of Eq. (3) define an iterative method of order 2m. Here, m = 1, 2, 3, . . . .

Proof. The Taylor series expansion of the function f(x) around the solution γ is given as

f(x_n) = c_1 e_n + c_2 e_n^2 + c_3 e_n^3 + c_4 e_n^4 + O(e_n^5),    (4)
where e_n = x_n − γ, c_k = f^{(k)}(γ)/k!, and we have taken into account f(γ) = 0. From the preceding equation, we write

f'(x_n) = c_1 + 2 c_2 e_n + 3 c_3 e_n^2 + 4 c_4 e_n^3 + O(e_n^4);    (5)

from Eqs. (4) and (5), we have

f'(x_n) + α f(x_n) = c_1 + (2 c_2 + α c_1) e_n + (α c_2 + 3 c_3) e_n^2 + (α c_3 + 4 c_4) e_n^3 + O(e_n^4);    (6)
furthermore, dividing Eq. (4) by Eq. (6), we obtain

f(x_n)/(f'(x_n) + α f(x_n)) = e_n − ((c_2 + α c_1)/c_1) e_n^2 + ((−2 c_3 c_1 + 2 α c_2 c_1 + 2 c_2^2 + α^2 c_1^2)/c_1^2) e_n^3 + O(e_n^4).    (7)

From the first step of our scheme (3),

y_1 := x_n − f(x_n)/(f'(x_n) + α f(x_n)),

substituting from Eq. (7) into the above equation gives the following error equation

y_1 = γ + ((c_2 + α c_1)/c_1) e_n^2 − ((−2 c_3 c_1 + 2 α c_2 c_1 + 2 c_2^2 + α^2 c_1^2)/c_1^2) e_n^3 + O(e_n^4).    (8)
This proves that for m = 1, the scheme defines a second-order iterative method. A Taylor expansion of f(y_1) around the solution γ is given as

f(y_1) = Σ_{m=1}^{∞} c_m (y_1 − γ)^m;

substituting from Eq. (8) into the above equation,

f(y_1) = (c_2 + α c_1) e_n^2 + (1/(6 c_1)) (12 c_3 c_1 − 12 α c_2 c_1 − 12 c_2^2 − 6 α^2 c_1^2) e_n^3 + O(e_n^4);    (9)

dividing Eq. (9) by Eq. (4) yields

f(y_1)/f(x_n) = ((2 c_2 + 2 α c_1)/(2 c_1)) e_n + (1/(12 c_1^2)) (24 c_3 c_1 − 36 α c_2 c_1 − 36 c_2^2 − 12 α^2 c_1^2) e_n^2 + ((3 c_4 c_1^2 − 5 α c_3 c_1^2 − 10 c_1 c_2 c_3 + 10 c_1 α c_2^2 + 5 α^2 c_1^2 c_2 + 8 c_2^3 + α^3 c_1^3)/c_1^3) e_n^3 + O(e_n^4).    (10)

From Eq. (3), we write

Φ(x_n, y_1) := [1 + 2 f(y_1)/f(x_n) − α(n) f(x_n)/(f'(x_n) + α(n) f(x_n))] / (f'(x_n) + α(n) f(x_n));
substituting from Eqs. (6), (7) and (10) into the preceding equation, we obtain

Φ(x_n, y_1) = 1/c_1 − ((−c_3 c_1 + 6 α c_2 c_1 + 6 c_2^2 + α^2 c_1^2)/c_1^3) e_n^2 + (2/c_1^4) (c_4 c_1^2 − 5 α c_3 c_1^2 + 18 c_1 α c_2^2 − 11 c_1 c_2 c_3 + 8 α^2 c_1^2 c_2 + 14 c_2^3 + α^3 c_1^3) e_n^3 + O(e_n^4).    (11)
From Eq. (3), we have y_2 := y_1 − f(y_1) Φ(x_n, y_1); substituting from Eqs. (8), (9) and (11) into the preceding equation,

y_2 = γ + ((c_2 + α c_1)(α^2 c_1^2 + 5 α c_2 c_1 − c_3 c_1 + 5 c_2^2)/c_1^3) e_n^4 + O(e_n^5).    (12)
This proves that for m = 2, the scheme defines a fourth-order convergent iterative method. The Taylor series expansion of f(y_2) around the solution γ is given as

f(y_2) := Σ_{m=1}^{∞} c_m (y_2 − γ)^m;

substituting from Eq. (12) into the preceding equation,

f(y_2) = (1/c_1^2) (−α c_3 c_1^2 + 10 c_1 α c_2^2 − c_1 c_2 c_3 + 6 α^2 c_1^2 c_2 + 5 c_2^3 + α^3 c_1^3) e_n^4 − (1/c_1^3) (2 α c_4 c_1^3 + 2 c_1^2 c_2 c_4 − 13 c_1^3 α^2 c_3 + 66 α^2 c_2^2 c_1^2 + 80 c_1 c_2^3 α − 32 c_1 c_2^2 c_3 + 24 α^3 c_1^3 c_2 + 2 c_3^2 c_1^2 + 3 α^4 c_1^4 + 36 c_2^4 − 42 c_1^2 c_2 α c_3) e_n^5 + O(e_n^6);    (13)
from the third step of Eq. (3), we may write y_3 := y_2 − Φ(x_n, y_1) f(y_2); substituting from Eqs. (11)–(13) into the preceding equation, we obtain

y_3 = γ + (1/c_1^5) (c_2 + α c_1)(α^2 c_1^2 + 5 α c_2 c_1 − c_3 c_1 + 5 c_2^2)(−c_3 c_1 + 6 α c_2 c_1 + 6 c_2^2 + α^2 c_1^2) e_n^6 + O(e_n^7).    (14)
This proves that for m = 3, Eq. (3) defines a sixth-order convergent iterative method. The Taylor series expansion of f(y_3) around the solution γ is given as

f(y_3) := Σ_{m=1}^{∞} c_m (y_3 − γ)^m;
substituting from Eq. (14) into the preceding equation,

f(y_3) = (1/c_1^4) (−c_3 c_1 + 6 α c_2 c_1 + 6 c_2^2 + α^2 c_1^2)(−α c_3 c_1^2 + 10 c_1 α c_2^2 − c_1 c_2 c_3 + 6 α^2 c_1^2 c_2 + 5 c_2^3 + α^3 c_1^3) e_n^6 − (1/c_1^5) (−2 c_3^3 c_1^3 + 5 α^6 c_1^6 + 4 c_1^5 α^3 c_4 + 25 c_1^4 α^2 c_3^2 + 22 c_1^2 c_2^3 c_4 − 28 c_1^5 α^4 c_3 + 1034 α^3 c_2^3 c_1^3 + 1520 c_1^2 c_2^4 α^2 + 66 c_1^2 c_2^2 c_3^2 + 380 α^4 c_1^4 c_2^2 + 1156 c_1 c_2^5 α − 366 c_1 c_2^4 c_3 + 70 α^5 c_1^5 c_2 + 356 c_2^6 − 4 c_1^4 c_3 α c_4 + 26 c_1^4 α^2 c_2 c_4 + 44 c_1^3 c_2^2 α c_4 − 712 c_1^3 α^2 c_2^2 c_3 + 86 c_1^3 c_2 c_3^2 α − 4 c_1^3 c_2 c_3 c_4 − 244 c_1^4 α^3 c_2 c_3 − 858 c_1^2 c_2^3 α c_3) e_n^7 + O(e_n^8);    (15)
from the fourth step of Eq. (3), we may write y_4 := y_3 − Φ(x_n, y_1) f(y_3); substituting from Eqs. (11), (14) and (15) into the preceding equation, we obtain

y_4 = γ + (1/c_1^7) (c_2 + α c_1)(α^2 c_1^2 + 5 α c_2 c_1 − c_3 c_1 + 5 c_2^2)(−c_3 c_1 + 6 α c_2 c_1 + 6 c_2^2 + α^2 c_1^2)^2 e_n^8 + O(e_n^9).    (16)
This proves that for m = 4, Eq. (3) defines an eighth-order convergent iterative method. The Taylor series expansion of f(y_4) around the solution γ is given as

f(y_4) := Σ_{m=1}^{∞} c_m (y_4 − γ)^m;
substituting Eq. (16) into the preceding equation,

f(y_4) = (1/c_1^6) (180 c_2^7 + 20 c_1^4 c_2 α^2 c_3^2 − 164 c_1^4 α^3 c_2^2 c_3 − 322 c_1^3 c_2^3 α^2 c_3 + 34 c_1^3 c_2^2 c_3^2 α − 37 c_1^5 α^4 c_2 c_3 − 288 c_1^2 c_2^4 α c_3 + α^7 c_1^7 + 3 c_1^5 α^3 c_3^2 − c_1^4 c_3^3 α − 3 c_1^6 α^5 c_3 + 720 c_1 c_2^6 α − 96 c_1 c_2^5 c_3 + 130 α^5 c_1^5 c_2^2 + 1176 c_1^2 c_2^5 α^2 + 17 c_1^2 c_2^3 c_3^2 − c_1^3 c_2 c_3^3 + 485 α^4 c_1^4 c_2^3 + 1008 c_1^3 α^3 c_2^4 + 18 α^6 c_1^6 c_2) e_n^8 − (1/c_1^7) (7 α^8 c_1^8 + 12672 c_1 c_2^7 α + 5228 α^5 c_1^5 c_2^3 + 22960 c_1^2 c_2^6 α^2 + 1162 α^6 c_1^6 c_2^2 + 140 α^7 c_1^7 c_2 + 14008 α^4 c_2^4 c_1^4 + 23072 c_1^3 c_2^5 α^3 + 2976 c_2^8 − 37 c_1^5 α^2 c_3^3 + 75 c_1^6 α^4 c_3^2 − 47 c_1^7 α^6 c_3 − 3520 c_1 c_2^6 c_3 − 100 c_1^3 c_2^2 c_3^3 + 1032 c_1^2 c_2^4 c_3^2 + 2 c_3^4 c_1^4 − 11792 c_1^2 c_2^5 α c_3 − 682 c_1^6 α^5 c_2 c_3 + 672 c_1^5 α^3 c_3^2 c_2 + 2002 c_1^4 α^2 c_2^2 c_3^2 − 10680 c_1^4 α^3 c_2^3 c_3 − 130 c_1^4 c_2 c_3^3 α − 3832 c_1^5 α^4 c_2^2 c_3 − 15760 c_1^3 c_2^4 α^2 c_3 + 2428 c_1^3 c_2^3 c_3^2 α + 328 c_1^5 α^3 c_2^2 c_4 + 644 c_1^4 c_2^3 α^2 c_4 + 74 c_1^6 α^4 c_2 c_4 + 576 c_1^3 c_2^4 α c_4 − 12 c_1^6 α^3 c_3 c_4 + 6 c_1^5 c_3^2 α c_4 + 6 c_1^4 c_2 c_3^2 c_4 − 68 c_1^3 c_2^3 c_3 c_4 − 136 c_1^4 c_2^2 c_3 α c_4 − 80 c_1^5 c_2 α^2 c_3 c_4 + 6 c_1^7 α^5 c_4 + 192 c_1^2 c_2^5 c_4) e_n^9 + O(e_n^{10});    (17)
from the fifth step of Eq. (3), we may write y_5 := y_4 − Φ(x_n, y_1) f(y_4); substituting Eqs. (11), (16) and (17) into the preceding equation, we obtain

y_5 = γ + (1/c_1^9) (c_2 + α c_1)(α^2 c_1^2 + 5 α c_2 c_1 − c_3 c_1 + 5 c_2^2)(−c_3 c_1 + 6 α c_2 c_1 + 6 c_2^2 + α^2 c_1^2)^3 e_n^{10} + O(e_n^{11}).    (18)
This proves that for m = 5, Eq. (3) defines a 10th-order convergent iterative method. Analogously, we may show that for m = 6, Eq. (3) defines a 12th-order convergent iterative method, and so on. This completes our proof.

A pseudo-code for Eq. (3) is presented in Algorithm 1, which shows that the algorithm is easy to implement. Another positive point of the algorithm is that it allows easy control over the convergence order of the generated iterative methods. For example, choosing m = 3 in Algorithm 1 generates a sixth-order convergent iterative method, while choosing m = 4 gives an eighth-order iterative method. This feature of the algorithm is useful in many situations, for example, when one wishes to solve a problem or many problems with iterative methods of different convergence orders.

Algorithm 1. Algorithm defining an iterative method of convergence order 2m for solving the nonlinear equation f(x) = 0.
1: while |f(x_n)| > ε or |x_{n+1} − x_n| > ε do
2:   x_{n+1} := x_n − f(x_n)/(f'(x_n) + α f(x_n))
3:   Φ := (1 + 2 f(x_{n+1})/f(x_n) − α f(x_n)/(f'(x_n) + α f(x_n)))/(f'(x_n) + α f(x_n))
4:   for i = 1 to m − 1 do
5:     x_{n+1} := x_{n+1} − f(x_{n+1}) Φ
6:   end for
7: end while

We notice that a method of order 2m formed by Algorithm 1 requires m function evaluations f(x_n) and one derivative evaluation f'(x_n) during each iterative step. The efficiency index of an iterative method is given as ξ^{1/k}.8–12 Here, ξ is the
convergence order of the method and k is the number of function plus derivative evaluations per iteration. The sixth-order methods8,11,12 require evaluations of three functions and one derivative, while the sixth-order methods9,10 require evaluations of two functions and two derivatives during each iteration. Consequently, the efficiency index of the sixth-order methods8–12 is 6^{1/4}. A sixth-order iterative method formed through our algorithm (selecting m = 3 in Algorithm 1) likewise requires evaluation of three functions and one derivative. Therefore, the efficiency index of a sixth-order method generated through Algorithm 1 is also 6^{1/4}, which is better than the efficiency index 2^{1/2} of Newton's method.1,6,8–12

3. Numerical Work

If the convergence order ξ of an iterative method is defined through the equation

lim_{n→∞} |e_{n+1}|/|e_n|^ξ = c ≠ 0,

then the computational order of convergence (COC) may be approximated as follows22:

ρ ≈ ln |(x_{n+1} − γ)/(x_n − γ)| / ln |(x_n − γ)/(x_{n−1} − γ)|.
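Algorithm 1 and the COC estimate above can be sketched in Python as follows; the function names and the tolerance are our choices, and ordinary double precision stands in for the 2005-digit ARPREC arithmetic used in the paper, which sharply limits the attainable residuals and COC readings:

```python
import math

def mstep_solve(f, df, x0, m=3, alpha=0.0, eps=1e-14, maxitr=100):
    """Sketch of Algorithm 1: the m-step scheme (3), of order 2m in exact
    arithmetic.  Returns the final iterate and the list of outer iterates."""
    xs = [x0]
    x = x0
    for _ in range(maxitr):
        fx = f(x)
        if abs(fx) < eps:
            break
        denom = df(x) + alpha * fx   # alpha sign-matched with f'(x)f(x) keeps this nonzero
        y = x - fx / denom
        phi = (1.0 + 2.0 * f(y) / fx - alpha * fx / denom) / denom
        for _ in range(m - 1):       # the m-1 corrector steps all reuse phi
            y = y - f(y) * phi
        xs.append(y)
        if abs(y - x) < eps:
            break
        x = y
    return xs[-1], xs

def coc(triple, gamma):
    """COC estimate from three consecutive iterates and the known root gamma."""
    a, b, c = triple
    return (math.log(abs((c - gamma) / (b - gamma)))
            / math.log(abs((b - gamma) / (a - gamma))))
```

With m = 1 and alpha = 0 this reduces to the classical Newton method; for f(x) = x^3 + 4x^2 − 10 from x_0 = 1.5 the Newton iterates give a COC close to 2, while m = 3 reaches the root in very few outer iterations. In doubles the COC of the high-order variants is floored by round-off, which is why the paper works with ARPREC.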
All the computations reported here were done in the programming language C++. Scientific computations in many areas of science and engineering demand a very high degree of numerical precision.14,23 For this purpose, we use the high-precision C++ library ARPREC,23 which supports an arbitrarily high level of numeric precision. In our program, the precision in decimal digits is set with the command "mp::mp_init(2005)".23 For convergence, it is required that the distance between two consecutive approximations (|x_{n+1} − x_n| with n ≥ 0) be less than ε, and that the absolute value of the function (|f(x_n)|), also referred to as the residual, be less than ε. Apart from these convergence criteria, our Algorithm 1 also uses a maximum number of allowed iterations as a stopping criterion. Thus our algorithm stops if

(i) |x_{n+1} − x_n| < ε;  (ii) |f(x_n)| < ε;  (iii) itr > maxitr.

Here, ε = 10^{−320}, itr is the iteration counter for the algorithm, and the maximum number of allowed iterations is maxitr = 100.

3.1. Comparison with the classical Newton method

In Algorithm 1, we choose α = 0. The algorithm is tested for the following functions, which are taken from Ref. 13:

f_1(x) = x^5 + x^4 + 4x^2 − 15,  γ ≈ 1.347,
f_2(x) = sin(x) − x/3,  γ ≈ 2.278,
f_3(x) = 10 x e^{−x^2} − 1,  γ ≈ 1.679,
f_4(x) = cos(x) − x,  γ ≈ 0.739,
f_5(x) = e^{−x^2+x+2} − 1,  γ ≈ −1.000,
f_6(x) = e^{−x} + cos(x),  γ ≈ 1.746,
f_7(x) = ln(x^2 + x + 2) − x + 1,  γ ≈ 4.152,
f_8(x) = sin^{−1}(x^2 − 1) − x/2 + 1,  γ ≈ 0.5948.
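As a quick double-precision sanity check (ours, not part of the paper), one can verify that the quoted approximate roots are indeed near-zeros of the eight test functions:

```python
import math

# The eight test functions of Sec. 3.1 with their quoted approximate roots.
cases = [
    (lambda x: x**5 + x**4 + 4.0 * x**2 - 15.0,          1.347),
    (lambda x: math.sin(x) - x / 3.0,                    2.278),
    (lambda x: 10.0 * x * math.exp(-x**2) - 1.0,         1.679),
    (lambda x: math.cos(x) - x,                          0.739),
    (lambda x: math.exp(-x**2 + x + 2.0) - 1.0,         -1.000),
    (lambda x: math.exp(-x) + math.cos(x),               1.746),
    (lambda x: math.log(x**2 + x + 2.0) - x + 1.0,       4.152),
    (lambda x: math.asin(x**2 - 1.0) - x / 2.0 + 1.0,    0.5948),
]
residuals = [abs(f(g)) for f, g in cases]
```

Every residual is small (consistent with roots quoted to three or four digits), in agreement with the listed values of γ.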
We run Algorithm 1 for four values of m: m = 1, 2, 3, 4. Here, m = 1 corresponds to the classical Newton method (2). We choose the same initial guesses as found in Ref. 13, so the reader may find it easier to compare the performance of the methods presented in this work with those reported in Ref. 13. Table 1 reports the outcome of our numerical work: for each function, it lists (iterations required, number of function evaluations needed, COC during the second-last iteration) for the Newton method (m = 1) and the fourth-order (m = 2), sixth-order (m = 3) and eighth-order (m = 4) iterative methods. The following important observations were made during the numerical experimentation:

(1) Comparing in Table 1 the number of function evaluations required for convergence, we see that for five of the eight functions the choice m = 3 is optimal, while for the functions f_4(x), f_5(x) and f_6(x) the choice m = 4 is optimal. We also observe that the choice m = 1 (the Newton method (2)) is not optimal for any of the functions.
(2) From Table 1, we notice that for the functions f_3(x), f_7(x) and f_8(x) the sixth-order (m = 3) and eighth-order (m = 4) methods require the same number of iterative steps. Table 2 reports the residual (|f(x_n)|) during the last iterative step for all the methods on these functions; we see that the eighth-order method (m = 4) produces the least residual.

To find the optimum value of the free parameter α, we optimize the absolute value of the asymptotic constant in the error equations (8), (12), (14), (16) and
Table 1. Iterations, number of function evaluations and COC for the Newton method (m = 1), fourth-order method (m = 2), sixth-order method (m = 3) and eighth-order method (m = 4).

f(x)     x0     NM (m = 1)   m = 2         m = 3         m = 4
f_1(x)   1.6    (9,18,2)     (4,12,4)      (2,8,5.66)    (2,10,7.6)
f_2(x)   2.0    (23,46,2)    (10,30,4)     (7,21,6)      (6,30,8)
f_3(x)   1.8    (10,20,2)    (4,12,3.99)   (3,12,6.21)   (3,15,8.22)
f_4(x)   1.0    (9,18,2)     (4,12,3.99)   (3,12,5.90)   (2,10,8.10)
f_5(x)   −0.5   (11,22,2)    (5,15,3.99)   (4,16,5.99)   (3,15,6.75)
f_6(x)   2.0    (9,18,2)     (4,12,3.99)   (3,12,5.99)   (2,10,8.10)
f_7(x)   3.2    (10,20,2)    (4,12,3.99)   (3,12,6.19)   (3,15,8.19)
f_8(x)   1.0    (10,20,2)    (4,12,4.01)   (3,12,6.35)   (3,15,8.36)
Table 2. Residual (|f(x_n)|) during the last iterative step.

f(x)     m = 1       m = 2       m = 3       m = 4
f_3(x)   10^{−918}   10^{−637}   10^{−715}   10^{−1067}
f_7(x)   10^{−435}   10^{−872}   10^{−671}   10^{−1522}
f_8(x)   10^{−347}   10^{−744}   10^{−800}   10^{−1302}
(18). We found that for

α := −c_2/c_1 = −f''(γ)/(2 f'(γ)),

the asymptotic error constant vanishes, which results in a method of even higher order. Unfortunately, we cannot evaluate f''(γ) and f'(γ) a priori. Accordingly, the parameter α may be defined adaptively as follows:

α_{n+1} = −(f'(x_n) − f'(x_{n−1}))/(2 (x_n − x_{n−1}) f'(x_n)),  with n ≥ 1.

Here, the second derivative is replaced by a finite difference approximation.

3.2. Comparison with other popular methods

Let us review various iterative methods for computational comparison. Based upon the well-known Jarratt method,7 Ren et al.9 recently proposed the following sixth-order convergent family of methods consisting of three steps and three parameters (RWB):

y_n = x_n − (2/3) f(x_n)/f'(x_n),
z_n = x_n − [(3 f'(y_n) + f'(x_n))/(6 f'(y_n) − 2 f'(x_n))] f(x_n)/f'(x_n),    (19)
x_{n+1} = z_n − [((2a − b) f'(x_n) + b f'(y_n) + c f(x_n))/((−a − b) f'(x_n) + (3a + b) f'(y_n) + c f(x_n))] f(z_n)/f'(x_n),

where a, b, c ∈ R and a ≠ 0. Wang et al.10 also developed a sixth-order convergent variant of the Jarratt method. Their method consists of three steps and two parameters (WKL):

y_n = x_n − (2/3) f(x_n)/f'(x_n),
z_n = x_n − [(3 f'(y_n) + f'(x_n))/(6 f'(y_n) − 2 f'(x_n))] f(x_n)/f'(x_n),    (20)
x_{n+1} = z_n − [((5α + 3β) f'(x_n) − (3α + β) f'(y_n))/(2α f'(x_n) + 2β f'(y_n))] f(z_n)/f'(x_n),
where α, β ∈ R with α + β ≠ 0. Lately, Sharma and Guha11 modified the Ostrowski method24 and developed the following sixth-order convergent method consisting of three steps and one parameter (SG):

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − [f(x_n)/(f(x_n) − 2 f(y_n))] f(y_n)/f'(x_n),    (21)
x_{n+1} = z_n − [(f(x_n) + a f(y_n))/(f(x_n) + (a − 2) f(y_n))] f(z_n)/f'(x_n),

where a ∈ R. Earlier, Neta8 developed the following sixth-order convergent family of methods consisting of three steps and one parameter (NETA):

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − [(f(x_n) + a f(y_n))/(f(x_n) + (a − 2) f(y_n))] f(y_n)/f'(x_n),    (22)
x_{n+1} = z_n − [(f(x_n) − f(y_n))/(f(x_n) − 3 f(y_n))] f(z_n)/f'(x_n).

We may notice that, in the preceding method, the choice a = −1 produces the same correcting factor in the last two steps. Chun and Ham12 also developed a sixth-order modification of the Ostrowski method. Their family of methods consists of the following three steps (CH):

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − [f(x_n)/(f(x_n) − 2 f(y_n))] f(y_n)/f'(x_n),    (23)
x_{n+1} = z_n − H(u_n) f(z_n)/f'(x_n),

where u_n = f(y_n)/f(x_n) and H(t) represents a real-valued function satisfying H(0) = 1, H'(0) = 2. In the case

H(t) = (1 + (β + 2) t)/(1 + β t),    (24)
the third substep is similar to that of the method of Sharma and Guha.11 Based upon the well-known King method5 and Newton's method (2), recently Li et al.
constructed a three-step, 16th-order iterative method (LMMW)14:

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − [(2 f(x_n) − f(y_n))/(2 f(x_n) − 5 f(y_n))] f(y_n)/f'(x_n),
x_{n+1} = z_n − f(z_n)/f'(z_n) − [(2 f(z_n) − f(z_n − f(z_n)/f'(z_n)))/(2 f(z_n) − 5 f(z_n − f(z_n)/f'(z_n)))] f(z_n − f(z_n)/f'(z_n))/f'(z_n).    (25)
The first step in the preceding method is Newton's method, while the second step is referred to as the King method.5 We test the methods for the following functions:

f_1(x) = x^3 + 4x^2 − 10,  γ ≈ 1.365,
f_2(x) = x exp(x^2) − sin^2(x) + 3 cos(x) + 5,  γ ≈ −1.207,
f_3(x) = sin^2(x) − x^2 + 1,  γ ≈ ±1.404,
f_4(x) = tan^{−1}(x),  γ = 0,
f_5(x) = x^4 + sin(π/x^2) − 5,  γ = √2,
f_6(x) = e^{−x^2+x+2} − 1,  γ = −1.0,
f_7(x) = x^5 + x^4 + 4x^2 − 15,  γ ≈ 1.347,
f_8(x) = (x − 1)^3 − 1,  γ = 2.0.
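For illustration, the Sharma–Guha scheme (21) with the parameter choice a = 2 used below can be sketched in Python; the helper name, the tolerance and the benign initial guess in the usage line are our own choices (Table 3 instead uses deliberately difficult initial guesses, for which this method diverges):

```python
def sharma_guha(f, df, x0, a=2.0, tol=1e-12, maxitr=50):
    """Sketch of the sixth-order Sharma-Guha scheme (21); one derivative
    and three function evaluations per iteration."""
    x = x0
    for _ in range(maxitr):
        fx, dfx = f(x), df(x)
        if abs(fx) < tol:
            return x
        y = x - fx / dfx                                    # Newton step
        fy = f(y)
        z = y - fx / (fx - 2.0 * fy) * fy / dfx             # Ostrowski-type step
        fz = f(z)
        x = z - (fx + a * fy) / (fx + (a - 2.0) * fy) * fz / dfx
    return x
```

For example, applied to f_8(x) = (x − 1)^3 − 1 from the nearby guess x_0 = 2.5 the scheme converges quickly to the root γ = 2; from the hard initial guesses of Table 3 it diverges, as reported there.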
Free parameters in the various methods are randomly chosen as follows: a = 2 in the method of Sharma and Guha (21), a = b = c = 1 in the method of Ren et al. (19), β = 1 in the method of Chun and Ham (23), α = β = 1 in the method (20), and a = 10 in the method of Neta (22). The computational results are reported in Table 3.

Table 3. Number of function evaluations and COC for various iterative methods.

f(x)     x0          m = 3      LMMW       NM        SG    RWB   NETA   CH        WKL
f_1(x)   10^{−10}    (16,5.9)   (306,16)   (132,2)   div   div   div    (448,6)   div
f_2(x)   0.2         (24,5.9)   div        div       div   div   div    div       div
f_3(x)   0.0         (16,6.2)   div        div       div   div   div    div       div
f_4(x)   100         (92,5.1)   div        div       div   div   div    div       div
f_5(x)   0.6         (12,5.6)   div        div       div   div   div    div       div
f_6(x)   0.5         (32,6.2)   div        div       div   div   div    div       div
f_6(x)   4.0         (24,5.9)   div        div       div   div   div    div       div
f_6(x)   −4.0        (24,5.9)   div        div       div   div   div    div       div
f_7(x)   10^{−300}   (24,5.5)   div        div       div   div   div    div       div
f_8(x)   1.0         (28,5.9)   div        div       div   div   div    div       div

Table 3 presents the number of function evaluations and the COC during the second last
iterative step for the various methods. An optimal iterative method for solving nonlinear equations should require the least number of function evaluations. We observe in Table 3 that the contributed method outperforms the other popular methods for almost all of the functions.
4. Conclusions

In this work, we have contributed a simple yet pragmatic algorithm for constructing iterative methods of desired convergence order. Three very important and practical attributes of our algorithm are: (1) it is easy to program and implement; (2) it allows control over the convergence order, so that we can choose the order of convergence; (3) it converges even if the derivative vanishes during the iterative process. Computational results and comparison with existing well-known methods confirm the robust and efficient character of our algorithm.

Acknowledgments

We are grateful to the reviewers for the constructive remarks and suggestions which have enhanced our work. The research of Prof. M. Aslam Noor is supported by the Visiting Professor Program of King Saud University, Riyadh, Saudi Arabia under Grant No. KSU.VPP.108.

References
1. Argyros I. K., Computational Theory of Iterative Methods, Series: Studies in Computational Mathematics, (Eds.) Chui C. K., Wuytack L., Vol. 15 (Elsevier, New York, 2007).
2. Amri S., Hosseni S. M., Second-order method for solving 2D nonlinear parabolic differential equations based on ADI method, Int. J. Model. Simulation Sci. Comput. 1:133–146, 2010.
3. Sochi T., Computational techniques for modeling non-Newtonian flow in porous media, Int. J. Model. Simulation Sci. Comput. 1(2):239–256, 2010.
4. Howe R. M., Improving accuracy and speed in real-time simulation of electric circuits, Int. J. Model. Simulation Sci. Comput. 1(1):47–83, 2010.
5. King R. F., A family of fourth order methods for nonlinear equations, SIAM J. Numer. Anal. 10(5):876–879, 1973.
6. Traub J. F., Iterative Methods for the Solution of Equations (Chelsea Publishing Company, New York, 1977).
7. Argyros I. K., Chen D., Qian Q., The Jarratt method in Banach space setting, J. Comput. Appl. Math. 51:103–106, 1994.
8. Neta B., On a family of multipoint methods for non-linear equations, Int. J. Comput. Math. 9:353–361, 1981.
9. Ren H., Wu Q., Bi W., New variants of Jarratt's method with sixth-order convergence, Numer. Algorithms 52:585–603, 2009.
10. Wang X., Kou J., Li Y., A variant of Jarratt method with sixth-order convergence, Appl. Math. Comput. 204:14–19, 2008.
11. Sharma J. R., Guha R. K., A family of modified Ostrowski methods with accelerated sixth order convergence, Appl. Math. Comput. 190:111–115, 2007.
12. Chun C., Ham Y., Some sixth-order variants of Ostrowski root-finding methods, Appl. Math. Comput. 193:389–394, 2007.
13. Bi W., Ren H., Wu Q., Three-step iterative methods with eighth-order convergence for solving nonlinear equations, J. Comput. Appl. Math. 225:105–112, 2009.
14. Li X., Mu C., Ma J., Wang C., Sixteenth order method for nonlinear equations, Appl. Math. Comput. 215(10):3769–4054, 2009.
15. Wu X., Wu H., On a class of quadratic convergence iteration formulae without derivatives, Appl. Math. Comput. 107:77–80, 2000.
16. Wu X., Fu D., New high-order convergence iteration methods without employing derivatives for solving nonlinear equations, Comput. Math. Appl. 41:489–495, 2001.
17. Wu X., Note on the improvement of Newton's method for system of nonlinear equations, Appl. Math. Comput. 189(2):1476–1479, 2007.
18. Wu X., A new continuation Newton-like method and its deformation, Appl. Math. Comput. 112:75–78, 2000.
19. Wu X., A significant improvement on Newton's iterative method, Appl. Math. Mech. 20(8):924–927, 1999.
20. Kanwar V., Tomar S. K., Modified families of Newton, Halley and Chebyshev methods, Appl. Math. Comput. 192(1):20–26, 2007.
21. Khattri S. K., Altered Jacobian Newton iterative method for nonlinear elliptic problems, IAENG Int. J. Appl. Math. 38(3), 2008.
22. Chun C., Construction of Newton-like iteration methods for solving nonlinear equations, Numer. Math. 104(3):297–315, 2006.
23. ARPREC, C++/Fortran-90 arbitrary precision package, Available at: http://crd.lbl.gov/~dhbailey/mpdist/.
24. Ostrowski A. M., Solution of Equations and Systems of Equations (Academic Press, New York, 1960).