International Journal of Computer Mathematics Vol. 86, No. 9, September 2009, 1603–1611
Exponentially fitted variants of Newton's method with quadratic and cubic convergence

V. Kanwar^a* and S.K. Tomar^b
^a University Institute of Engineering and Technology, Panjab University, Chandigarh, India; ^b Department of Mathematics, Panjab University, Chandigarh, India
(Received 26 May 2007; revised version received 18 November 2007; accepted 28 December 2007)

In this paper, we present some new families of Newton-type iterative methods in which f'(x) = 0 is permitted at some points. The approach used to derive these iterative methods differs from the usual ones. The methods have a well-known geometric interpretation and admit a geometric derivation from an exponentially fitted osculating parabola. The cubically convergent methods require the use of the first and second derivatives of the function, as Euler's, Halley's, Chebyshev's and other classical methods do. Furthermore, new classes of third-order multipoint iterative methods free from the second derivative are derived by semi-discrete modifications of the cubically convergent methods. Finally, the approach is extended to solve a system of non-linear equations.

Keywords: Newton's method; Euler's method; Halley's method; Chebyshev's method; convergence

2000 AMS Subject Classification: 65H05
1. Introduction
The celebrated Newton's method

x_{n+1} = x_n − f(x_n)/f'(x_n),   n ≥ 0,   (1)
is probably the best known and most widely used root-finding algorithm, and many topics related to it still attract attention from researchers. It converges quadratically to simple roots and only linearly to multiple roots. Because the method is very sensitive to the initial guess, it can also suffer from poor convergence and stability problems. The classical Chebyshev–Halley methods [2], which improve Newton's method, are given by

x_{n+1} = x_n − [1 + (1/2) L_f(x_n)/(1 − λL_f(x_n))] f(x_n)/f'(x_n),   (2)

where L_f(x_n) = f''(x_n)f(x_n)/f'^2(x_n).

*Corresponding author. Email: [email protected]
ISSN 0020-7160 print/ISSN 1029-0265 online © 2009 Taylor & Francis DOI: 10.1080/00207160801950596 http://www.informaworld.com
This family is known to converge cubically and includes the classical Chebyshev method (λ = 0) [2,6,8,11,14,15] and the famous Halley method (λ = 1/2) [3,4,6,8,10–13]. These are close relatives of Newton's method and require the use of the second derivative. Halley's method does not work when f'(x) vanishes, although it is less sensitive to the initial guess (it does not work properly if f(x) and f'(x), or f'(x) and f''(x), are simultaneously near zero). Chebyshev's method converges poorly in comparison with Halley's method. Comprehensive reviews of these algorithms are available in several monographs [8,11,14]. Recently, Kanwar and Tomar [6] proposed an alternative for the failure situation of Newton's or Chebyshev's methods, i.e., when the denominators in these formulae are zero or very small. More recently, Kanwar and Tomar [7] derived modifications to cubically convergent multi-point methods that are free from second derivatives. The purpose of the present paper is to provide some alternative derivations through exponentially fitted osculating parabolas. The formulae provided here are natural extensions of Newton's, Halley's and Chebyshev's methods, respectively. Some other formulae are also presented.
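For concreteness, the following minimal C++ sketch (ours, not from the paper) implements the iterations (1) and (2) for a user-supplied f, f' and f''; the function names, tolerance and iteration cap are illustrative choices.

#include <cmath>
#include <cstdio>
#include <functional>

// Classical Newton iteration (1): x_{n+1} = x_n - f(x_n)/f'(x_n).
double newton(const std::function<double(double)>& f,
              const std::function<double(double)>& df,
              double x0, double tol = 1e-12, int maxit = 100) {
    double x = x0;
    for (int n = 0; n < maxit && std::fabs(f(x)) > tol; ++n)
        x -= f(x) / df(x);                 // breaks down when f'(x_n) = 0
    return x;
}

// Chebyshev-Halley family (2): lambda = 0 gives Chebyshev, lambda = 1/2 gives Halley.
double chebyshev_halley(const std::function<double(double)>& f,
                        const std::function<double(double)>& df,
                        const std::function<double(double)>& d2f,
                        double lambda, double x0,
                        double tol = 1e-12, int maxit = 100) {
    double x = x0;
    for (int n = 0; n < maxit && std::fabs(f(x)) > tol; ++n) {
        double L = d2f(x) * f(x) / (df(x) * df(x));   // L_f(x_n)
        x -= (1.0 + 0.5 * L / (1.0 - lambda * L)) * f(x) / df(x);
    }
    return x;
}

int main() {
    // Example 4 of Section 6: cos x - x = 0, r ~ 0.739085.
    auto f   = [](double x) { return std::cos(x) - x; };
    auto df  = [](double x) { return -std::sin(x) - 1.0; };
    auto d2f = [](double x) { return -std::cos(x); };
    std::printf("Newton: %.15f\n", newton(f, df, 1.0));
    std::printf("Halley: %.15f\n", chebyshev_halley(f, df, d2f, 0.5, 1.0));
    return 0;
}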
2. Proposed methods
Let us consider the exponentially fitted osculating parabola in the form

y = e^{p(x−x_n)} [A(x − x_n)^2 + B(x − x_n) + C],   (3)
where p ∈ ℝ with |p| < ∞, and A, B and C are arbitrary constants. These constants will be determined by using tangency conditions at the point x = x_n. If the exponentially fitted osculating parabola given by Equation (3) is tangent at x = x_n to the graph of the equation in question, i.e., f(x) = 0, then we have

y^(k)(x_n) = f^(k)(x_n),   k = 0, 1, 2.   (4)
Thus, we obtain

A = [f''(x_n) − p{2f'(x_n) − pf(x_n)}]/2,   B = f'(x_n) − pf(x_n),   C = f(x_n).   (5)
Suppose the parabola (3) meets the x-axis at x = x_{n+1}; then

y(x_{n+1}) = 0,   (6)

and it follows from Equation (3) that

A(x_{n+1} − x_n)^2 + B(x_{n+1} − x_n) + C = 0.   (7)
If the parabola (3) degenerates to an exponentially fitted straight line, then taking A = 0 in Equation (7) we have

x_{n+1} = x_n − C/B,   (8)

or

x_{n+1} = x_n − f(x_n)/{f'(x_n) − pf(x_n)},   n ≥ 0.   (9)

In order to obtain quadratic convergence, the quantity in the denominator should be as large as possible in magnitude. The formula (9) is well defined even if f'(x_n) = 0, unlike Newton's formula.
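A minimal C++ sketch of iteration (9) follows (ours, not from the paper). With |p| = 1, taking the sign of p opposite to that of f(x_n)f'(x_n) is one natural reading of "largest in magnitude", since it makes |f'(x_n) − pf(x_n)| = |f'(x_n)| + |f(x_n)|; the tolerance and iteration cap are illustrative.

#include <cmath>
#include <cstdio>
#include <functional>

// Solver based on formula (9): x_{n+1} = x_n - f(x_n)/(f'(x_n) - p f(x_n)),
// well defined even when f'(x_n) = 0.
double method9(const std::function<double(double)>& f,
               const std::function<double(double)>& df,
               double x0, double tol = 1e-12, int maxit = 100) {
    double x = x0;
    for (int n = 0; n < maxit && std::fabs(f(x)) > tol; ++n) {
        double fx = f(x), dfx = df(x);
        double p = (fx * dfx <= 0.0) ? 1.0 : -1.0;   // |p| = 1, denominator maximized
        x -= fx / (dfx - p * fx);
    }
    return x;
}

int main() {
    // Example 5 of Section 6: f(x) = (x-1)^6 - 1, root r = 2. Newton fails from
    // x0 = 1 because f'(1) = 0, whereas the step of (9) is still well defined there.
    auto f  = [](double x) { return std::pow(x - 1.0, 6) - 1.0; };
    auto df = [](double x) { return 6.0 * std::pow(x - 1.0, 5); };
    std::printf("root ~ %.15f\n", method9(f, df, 1.0));
    return 0;
}

For this starting guess the step with p = 1 lands on the root r = 2 in a single iteration, which is consistent with the single iteration reported for this case in Table 1.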
Further, solving Equation (7) for x_{n+1} and making use of the values of the constants A, B and C given in (5), one can obtain

x_{n+1} = x_n − 2f(x_n) / [ {f'(x_n) − pf(x_n)} ± ({f'(x_n) − pf(x_n)}^2 − 2f(x_n){f''(x_n) − p(2f'(x_n) − pf(x_n))})^{1/2} ],   (10)

x_{n+1} = x_n − 2f(x_n){f'(x_n) − pf(x_n)} / [ 2f'(x_n){f'(x_n) − pf(x_n)} − f(x_n)f''(x_n) + p^2 f^2(x_n) ],   (11)

x_{n+1} = x_n − f(x_n)/{f'(x_n) − pf(x_n)} − f^2(x_n){f''(x_n) − p(2f'(x_n) − pf(x_n))} / [ 2{f'(x_n) − pf(x_n)}^3 ].   (12)
The formulae given in Equations (9)–(12) are variants of Newton's formula. Note that for p = 0, the formulae (9)–(12) reduce, respectively, to Newton's, Euler's [8,10,11], Halley's and Chebyshev's methods.
3. Development of multipoint iterative methods
Constructing iterative methods with cubic or higher-order convergence that do not require the computation of the second derivative is quite interesting and important from an application point of view. Such multipoint iterative methods for single-variable nonlinear equations have been studied recently by Weerakoon and Fernando [15], Homeier [5], and Frontini and Sormani [1]. Recently, Kou et al. [9] derived new modifications of Newton's method with fifth-order convergence; each member of that family requires two evaluations of the function and two of its first derivative per iteration. At any stage of the computation, if the derivative of the function is zero or very small in the neighbourhood of the root, the variants of Newton's method derived in [9] fail. Here we are interested in developing third-order multipoint iterative methods which are free from the second derivative. The main idea of the proposed methods is the discretization of the second derivative involved in the modified Halley family (11) and the modified Chebyshev method (12), respectively. The unified family of modified Chebyshev–Halley type methods containing (11) and (12) can be written as

x_{n+1} = x_n − [1 + (1/2) L_f(x_n)/(1 − λL_f(x_n))] f(x_n)/{f'(x_n) − pf(x_n)},   (13)

where

L_f(x_n) = f(x_n)[f''(x_n) − p{2f'(x_n) − pf(x_n)}] / {f'(x_n) − pf(x_n)}^2,   (13a)

and λ is a real parameter. We note from the formula (13) that:

(1) For λ = 1/2, it corresponds to the modified Halley formula (11).
(2) For λ = 1/2 and p = 0, it corresponds to Halley's formula [3].
(3) For λ = 0, it corresponds to the modified Chebyshev formula (12).
(4) For λ = 0 and p = 0, it corresponds to Chebyshev's formula [14].
(5) For λ → ±∞, it corresponds to the modified Newton formula (9).
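A compact C++ sketch of one step of the family (13)–(13a) is given below (ours, not from the paper); here f, df and d2f are the values of f, f' and f'' at x_n, and the helper name is our own.

// One step of the unified family (13); p and lambda are the two real parameters.
double family13_step(double x, double f, double df, double d2f,
                     double p, double lambda) {
    double B  = df - p * f;                                     // f'(x_n) - p f(x_n)
    double Lf = f * (d2f - p * (2.0 * df - p * f)) / (B * B);   // (13a)
    return x - (1.0 + 0.5 * Lf / (1.0 - lambda * Lf)) * f / B;  // (13)
}

Setting lambda = 0.5 reproduces (11), lambda = 0 reproduces (12), and p = 0 recovers the classical Chebyshev–Halley family (2).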
First family: Let us take u = f(x_n)/{f'(x_n) − pf(x_n)} and assume that |u| ≪ 1. Expanding the function f(x_n − u) about the point x = x_n, with f(x_n) ≠ 0, we have

f(x_n − u) = f(x_n) − uf'(x_n) + (u^2/2) f''(x_n) + O(u^3).   (14)
Inserting the value of u defined above and solving (14) for f''(x_n), then substituting this value into the formula (13a) for L_f(x_n), we obtain after some simplification

L_f(x_n) ≈ 2[ f(x_n − u)/f(x_n) − p^2 f^2(x_n) / (2{f'(x_n) − pf(x_n)}^2) ].   (15)

If |u| is sufficiently small, then the second term within the square bracket in Equation (15) can be neglected, and from (13) we obtain

x_{n+1} = x_n − [1 + f(x_n − u)/(f(x_n) − 2λf(x_n − u))] f(x_n)/{f'(x_n) − pf(x_n)}.   (16)

Special cases

1. For λ = 0, the formula (16) gives

x_{n+1} = x_n − [f(x_n) + f(x_n − u)]/{f'(x_n) − pf(x_n)}.   (17)
2. For λ = 1/2, the formula (16) gives

x_{n+1} = x_n − f^2(x_n) / [{f'(x_n) − pf(x_n)}{f(x_n) − f(x_n − u)}].   (18)
3. For λ = 1, the formula (16) gives

x_{n+1} = x_n − [f(x_n)/{f'(x_n) − pf(x_n)}] [f(x_n) − f(x_n − u)]/[f(x_n) − 2f(x_n − u)].   (19)
It can be verified that for p = 0, the formulae (17)–(19) reduce to Traub's formula [14], the Newton–Secant formula [14] and Traub–Ostrowski's formula [14], respectively. The parameter p in all these formulae is chosen so as to give the largest value of the denominator.

Second family: We replace the second derivative in Equation (13) by a finite difference between first derivatives,

f''(x_n) ≈ [f'(x_n) − f'(x_n − βu)]/(βu),   (20)

where β ∈ ℝ, β ≠ 0, and u is defined as before. Assuming |u| ≪ 1 and adopting the same procedure as for the first family, we obtain

x_{n+1} = x_n − [ 1 + (1/2) ({f'(x_n) − pf(x_n)}{f'(x_n) − f'(x_n − βu)} − βpf(x_n){2f'(x_n) − pf(x_n)}) / (β{f'(x_n) − pf(x_n)}^2 − λ[{f'(x_n) − pf(x_n)}{f'(x_n) − f'(x_n − βu)} − βpf(x_n){2f'(x_n) − pf(x_n)}]) ] f(x_n)/{f'(x_n) − pf(x_n)}.   (21)
For particular values of λ and β, some particular cases of this family are:

1. For β = −1 and λ = 1/2, we get the following family of multipoint iterative methods:

x_{n+1} = x_n − 2f(x_n) / [ 3f'(x_n) − f'{x_n + f(x_n)/(f'(x_n) − pf(x_n))} ].   (22)
2. For β = −1/2 and λ = 1/2, we obtain another family of multipoint iterative methods:

x_{n+1} = x_n − f(x_n) / [ 2f'(x_n) − f'{x_n + f(x_n)/(2{f'(x_n) − pf(x_n)})} ].   (23)
3. For β = −1/2 and λ = 0, we obtain

x_{n+1} = x_n − f(x_n) f'{x_n + f(x_n)/(2{f'(x_n) − pf(x_n)})} / {f'(x_n) − pf(x_n)}^2.   (24)
4. For β = 1/2 and λ = 1/2, we get

x_{n+1} = x_n − f(x_n) / f'{x_n − f(x_n)/(2{f'(x_n) − pf(x_n)})}.   (25)

5. For β = 1 and λ = 1/2, we get

x_{n+1} = x_n − 2f(x_n) / [ f'(x_n) + f'{x_n − f(x_n)/(f'(x_n) − pf(x_n))} ].   (26)
It can be seen that the formulae (24)–(26) are modifications of the third-order multipoint iterative formulae given in [14] and of the Weerakoon–Fernando formula [15], respectively. Note that Kanwar and Tomar [7] previously derived some formulae similar to (17)–(19), (22), (23) and (25) by using a different approach.
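As an illustration of the second family, here is a minimal C++ sketch of one step of formula (26) (ours, not from the paper); at p = 0 it reduces to the Weerakoon–Fernando iteration [15].

#include <functional>

// One step of formula (26): u = f(x_n)/(f'(x_n) - p f(x_n)),
// x_{n+1} = x_n - 2 f(x_n) / [ f'(x_n) + f'(x_n - u) ].
double formula26_step(const std::function<double(double)>& f,
                      const std::function<double(double)>& df,
                      double x, double p) {
    double fx = f(x);
    double u  = fx / (df(x) - p * fx);
    return x - 2.0 * fx / (df(x) + df(x - u));
}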
4. Convergence
Here, we present the mathematical proof of the order of convergence of the methods (9) and (12); the order of convergence of the remaining methods can be proved along similar lines.

Theorem 1  Let f : I → ℝ be a sufficiently differentiable real-valued function defined on an interval I enclosing a simple zero x = r of f(x). Further, assume that the point x_0 is sufficiently close to r and that f'(x) − pf(x) ≠ 0 on I. Then the sequences of approximate roots generated by the formulae (9) and (12) converge quadratically and cubically, respectively, and satisfy the error equations

e_{n+1} = (C_2 − p)e_n^2 + O(e_n^3)   (27)

and

e_{n+1} = (2C_2^2 − C_3 − 3pC_2 + 1.5p^2)e_n^3 + O(e_n^4),   (28)

where e_n = x_n − r and C_k = (1/k!) f^(k)(r)/f'(r), k = 2, 3, . . . .
Proof  Expanding f(x_n), f'(x_n) and f''(x_n) about the point x = r by means of Taylor's expansion, we get

f(x_n) = f'(r)[e_n + C_2 e_n^2 + C_3 e_n^3 + O(e_n^4)],   (29)

f'(x_n) = f'(r)[1 + 2C_2 e_n + 3C_3 e_n^2 + O(e_n^3)],   (30)

and

f''(x_n) = f''(r) + f'''(r)e_n + O(e_n^2).   (31)

Using (29)–(31), we can derive

f(x_n)/{f'(x_n) − pf(x_n)} = e_n − (C_2 − p)e_n^2 − (2C_3 − 2C_2^2 + 2pC_2 − p^2)e_n^3 + O(e_n^4)   (32)

and

f''(x_n) − 2pf'(x_n) + p^2 f(x_n) = f'(r)[2C_2 − 2p + (6C_3 − 4pC_2 + p^2)e_n + O(e_n^2)].   (33)
Using (29) and (32) in formula (9), we get

e_{n+1} = (C_2 − p)e_n^2 + O(e_n^3),   (34)

which proves quadratic convergence of the method (9). Similarly, using Equations (29)–(33) in formula (12), we get

e_{n+1} = (2C_2^2 − C_3 − 3pC_2 + 1.5p^2)e_n^3 + O(e_n^4),   (35)

which shows cubic convergence of the method (12).
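The error equation (27) can also be checked numerically. The following C++ sketch (ours, not from the paper) estimates the computational order of convergence rho_n = ln|e_n/e_{n−1}| / ln|e_{n−1}/e_{n−2}| for method (9) on Example 5 of Section 6, f(x) = (x − 1)^6 − 1 with r = 2 and x_0 = 3; the ratios should approach 2, apart from round-off effects in the final iterations. The sign rule for p with |p| = 1 is the same illustrative choice used earlier.

#include <cmath>
#include <cstdio>

int main() {
    auto f  = [](double x) { return std::pow(x - 1.0, 6) - 1.0; };
    auto df = [](double x) { return 6.0 * std::pow(x - 1.0, 5); };
    const double r = 2.0;                        // exact simple root
    double x = 3.0, e1 = 0.0, e2 = 0.0;          // e1 = |e_{n-1}|, e2 = |e_{n-2}|
    for (int n = 0; n <= 12; ++n) {
        double e = std::fabs(x - r);
        if (n >= 2 && e > 0.0 && e1 > 0.0 && e2 > 0.0)
            std::printf("n = %2d  e_n = %.3e  rho = %.2f\n",
                        n, e, std::log(e / e1) / std::log(e1 / e2));
        e2 = e1; e1 = e;
        double fx = f(x), dfx = df(x);
        double p = (fx * dfx <= 0.0) ? 1.0 : -1.0;   // |p| = 1
        x -= fx / (dfx - p * fx);                    // method (9)
    }
    return 0;
}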
5. Extension of the proposed method (9) to a system of nonlinear equations
Let us consider the case of two simultaneous nonlinear equations in two unknowns:

f(x, y) = 0,   g(x, y) = 0.   (36)

Consider the auxiliary equations

e^{p(x−x_0)} f(x, y) = 0,   e^{p(y−y_0)} g(x, y) = 0,   (37)

where p is a real parameter and |p| < ∞. The roots of the equations in (36) are also the roots of the equations in (37), and vice versa. Suppose these equations have at least one real solution and let (x_0, y_0) be an initial approximation to this solution. If h and k are the improvements to the initial approximation, then from the equations in (37) we have

e^{ph} f(x_0 + h, y_0 + k) = 0,   e^{pk} g(x_0 + h, y_0 + k) = 0.   (38)

Expanding these equations about the point (x_0, y_0) by Taylor's series up to first-degree terms, we get approximately

h(f_{x0} + pf_0) + k f_{y0} + f_0 = 0,
h g_{x0} + k(g_{y0} + pg_0) + g_0 = 0,   (39)

where f_0 = f(x_0, y_0), g_0 = g(x_0, y_0), f_{x0} = (∂f/∂x)(x_0, y_0), etc.
On solving the equations in (39) for h and k, we get

h = [g_0 f_{y0} − f_0(g_{y0} + pg_0)] / [(f_{x0} + pf_0)(g_{y0} + pg_0) − f_{y0} g_{x0}],
k = [f_0 g_{x0} − g_0(f_{x0} + pf_0)] / [(f_{x0} + pf_0)(g_{y0} + pg_0) − f_{y0} g_{x0}].   (40)

Therefore, a new approximation to the solution is

x_1 = x_0 + h,   y_1 = y_0 + k.   (41)
The method works even if f_{x0} g_{y0} − f_{y0} g_{x0} = 0, unlike Newton's method. The process is repeated until two successive approximations are close enough to give the desired accuracy. The proposed method can be extended to a system of n nonlinear equations in n unknowns. The following example shows the working of the procedure.

Example
Let us consider the simultaneous non-linear equations

x^2 − y^2 = 4   and   x^2 + y^2 = 16.

The solution of this system of equations is trivial, and we observe that one root is given by x = √10 ≈ 3.162 and y = √6 ≈ 2.449. To apply the method, a starting guess is necessary. Let x_0 = y_0 = 0 be the initial guess and take |p| = 1. We found that one obtains an accurate solution after just five iterations, while Newton's method fails completely to provide the solution, since the Jacobian of the system vanishes at the initial guess.
6. Numerical examples and conclusion
To illustrate the proposed methods (9), (11) and (12), we have considered the following test equations:

1. exp(x^2 + 7x − 30) − 1 = 0, r = 3.
2. x^3 + 4x^2 − 10 = 0, r = 1.365229964256287.
3. x exp(x^2) − sin^2 x + 3 cos x + 5 = 0, r = −1.207647800445557.
4. cos x − x = 0, r = 0.739085137844086.
5. (x − 1)^6 − 1 = 0, r = 2.
6. tan^{−1} x = 0, r = 0.
7. e^{−x} − sin x = 0, r = 6.285049438476562.

We have employed methods (9), (11), (12) and method (2.19) of [6] to solve the nonlinear equations in Examples 1–7, and we have also compared them with the classical Newton, Halley and Chebyshev methods. Calculations were performed in C++ in double-precision arithmetic, and the formulae were tested for |p| = 1. The results are presented in Table 1. These numerical examples show that methods (9) and (12) appear to be more attractive than Newton's and Chebyshev's methods, respectively [14]. For example, Example 6 has a root at r = 0; for the initial guess x_0 = 2, Newton's and Chebyshev's methods diverge because their correction factors increase beyond limit, whereas their counterparts give the required root because their correction factors remain small throughout the iteration process. Example 7 causes a root-jumping problem for Newton's and Chebyshev's methods.
Table 1. Comparison of iterative methods: test functions, initial guesses and number of iterations.

Example  Initial guess  Newton          Method (9)  Halley  Method (11)  Chebyshev       Method (12)  Method (2.19)
1        1.0            Divergent       3           13      1            Divergent       13           3
1        1.5            Divergent       13          11      7            Divergent       2            8
1        2.0            15              9           8       11           Divergent       9            2
1        3.5            11              11          6       6            7               8            8
2        0.0            Fails           5           Fails   3            Fails           3            4
2        0.5            6               4           3       3            33              3            4
3        −0.5           9               4           3       3            Divergent       3            3
3        −1.5           5               5           3       3            3               4            4
4        −1.0           7               6           5       4            Divergent       4            5
4        3.0            6               8           4       5            Divergent       5            6
5        1.0            Fails           1           Fails   6            Fails           6            1
5        1.8            5               5           3       2            108             3            3
5        3.0            8               9           4       5            5               6            6
6        2.0            Divergent       8           4       5            Divergent       6            6
7        5.0            Undesired root  6           4       4            Undesired root  4            5

('Undesired root' means that the method converges to a root other than the required one.)
This equation has an infinite number of roots lying close to π, 2π, . . . . It can be seen that Newton's and Chebyshev's methods do not necessarily converge to the root nearest to the starting value. For example, Newton's method with x_0 = 5 converges to the root closest to 3π, i.e., the nearest root 6.285049438476562 is skipped. Similarly, Chebyshev's method converges to the smallest root 0.5885327458365, which is far away from the required root. This type of numerical instability has not been observed in the modified techniques. Finally, this study presents several formulae of second and third order for solving nonlinear equations numerically, all of which have a well-known geometric derivation. These methods can be viewed as natural extensions of Newton's, Euler's, Halley's and Chebyshev's methods, respectively, and can be used as alternatives to the existing techniques, or in some cases where the existing techniques are not successful. We have also presented many new families of third-order multipoint iterative methods free from the second derivative, in which f'(x) = 0 is permitted at some points. By using the same idea, one can obtain other iterative processes by considering different exponentially fitted osculating curves.
Acknowledgements

The authors wish to thank the anonymous referees and the subject editor for their valuable suggestions and comments on the manuscript, which led to the improvement of the paper.
References

[1] M. Frontini and E. Sormani, Some variants of Newton's method with third-order convergence, Appl. Math. Comput. 140 (2003), pp. 419–426.
[2] J.M. Gutiérrez and M.A. Hernández, A family of Chebyshev–Halley type methods in Banach spaces, Bull. Aust. Math. Soc. 55 (1997), pp. 113–130.
[3] E. Halley, A new, exact and easy method for finding the roots of any equations generally, without any previous reduction (Latin), Philos. Trans. Roy. Soc. London 18 (1694), pp. 136–148.
[4] M.A. Hernández, Newton–Raphson's method and convexity, Zb. Rad. Prirod.-Mat. Fak. Ser. Mat. 22(1) (1993), pp. 159–166.
[5] H.H.H. Homeier, A modified Newton method for root finding with cubic convergence, J. Comput. Appl. Math. 157 (2003), pp. 227–230.
[6] V. Kanwar and S.K. Tomar, Modified families of Newton, Halley and Chebyshev methods, Appl. Math. Comput. 192 (2007), pp. 20–26.
[7] V. Kanwar and S.K. Tomar, Modified families of multi-point iterative methods for solving nonlinear equations, Numer. Algor. 44 (2007), pp. 381–389.
[8] C.T. Kelley, Iterative Methods for Linear and Nonlinear Equations, SIAM, Philadelphia, 1995.
[9] J. Kou, Y. Li, and X. Wang, Some modifications of Newton's method with fifth-order convergence, J. Comput. Appl. Math. 209(2) (2007), pp. 146–152.
[10] A. Melman, Geometry and convergence of Euler's and Halley's methods, SIAM Rev. 39(4) (1997), pp. 728–735.
[11] A.M. Ostrowski, Solution of Equations in Euclidean and Banach Spaces, 3rd ed., Academic Press, New York, 1973.
[12] G.S. Salehov, On the convergence of the process of tangent hyperbolas (Russian), Dokl. Akad. Nauk SSSR 82 (1952), pp. 525–528.
[13] T.R. Scavo and J.B. Thoo, On the geometry of Halley's method, Am. Math. Monthly 102 (1995), pp. 417–426.
[14] J.F. Traub, Iterative Methods for the Solution of Equations, Prentice-Hall, Englewood Cliffs, NJ, 1964.
[15] S. Weerakoon and T.G. Fernando, A variant of Newton's method with accelerated third-order convergence, Appl. Math. Lett. 13 (2000), pp. 87–93.