Journal of Interpolation and Approximation in Scientific Computing, Volume 2012, Article ID jiasc-00012, 11 pages. doi:10.5899/2012/jiasc-00012. Available online at www.ispacs.com/jiasc. Research Article.
A family of optimal iterative methods with fifth and tenth order convergence for solving nonlinear equations

M. Matinfar (1*), M. Aminzadeh (2)

(1) Department of Mathematics, University of Mazandaran, P.O. Box 47415-95447, Babolsar, Iran
(2) Department of Mathematics, Faculty of Mathematical Sciences, University of Mazandaran
Copyright 2012 © M. Matinfar and M. Aminzadeh. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
Abstract

In this paper, we propose a two-step method with fifth-order convergence and a family of optimal three-step methods with tenth-order convergence for finding the simple roots of nonlinear equations. The optimal efficiency indices are found to be 5^{1/3} ≈ 1.71 and 10^{1/4} ≈ 1.78. Some numerical examples illustrate that the algorithms are more efficient and perform better than other methods.

Keywords: Nonlinear equation; Iterative method; Three-step; Convergence order; Efficiency index.
1 Introduction
In this paper, we develop iterative methods to find a simple root α of the nonlinear equation f(x) = 0, where f : D ⊂ ℝ → ℝ is a scalar function on an open interval D. It is well known that Newton's method is one of the best iterative methods for solving a single nonlinear equation, using

x_{n+1} = x_n − f(x_n)/f'(x_n),    (1.1)
which converges quadratically in some neighborhood of α. An improvement is Ostrowski's method [20], given by

y_n = x_n − f(x_n)/f'(x_n),
x_{n+1} = y_n − [f(x_n)/(f(x_n) − 2f(y_n))] · f(y_n)/f'(x_n).    (1.2)
* Corresponding author. Email address: [email protected], Tel: +989122974150.
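As a concrete illustration of the two classical schemes above, here is a minimal Python sketch (our own code; the function names, tolerances and the test equation are our choices, not taken from the paper):

```python
def newton(f, df, x0, tol=1e-12, max_iter=50):
    """Newton's method (1.1): x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:               # stop once the residual is small
            break
        x = x - fx / df(x)
    return x

def ostrowski(f, df, x0, tol=1e-12, max_iter=50):
    """Ostrowski's two-step method (1.2): Newton predictor plus corrector."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        y = x - fx / dfx                # Newton predictor
        fy = f(y)
        denom = fx - 2.0 * fy
        if denom == 0.0:                # guard against breakdown
            break
        x = y - (fx / denom) * fy / dfx
    return x

# Both locate the simple root sqrt(2) of f(x) = x^2 - 2.
r1 = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
r2 = ostrowski(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

Note that Ostrowski's corrector reuses f'(x_n), so each cycle costs two function evaluations and one derivative evaluation.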
Ostrowski's method improves on Newton's method: the order increases by at least two at the expense of one additional function evaluation at a point produced by the Newton step. To improve the local order of convergence and the efficiency index, many modified methods have been proposed in the open literature; see [2]-[21] and the references therein. Chun and Ham [7] developed a family of sixth-order variants of Ostrowski's method by the weight-function technique (see (12)-(17) therein), written as

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − [f(x_n)/(f(x_n) − 2f(y_n))] · f(y_n)/f'(x_n),    (1.3)
x_{n+1} = z_n − H(μ_n) · f(z_n)/f'(x_n),

where μ_n = f(y_n)/f(x_n) and H(t) represents a real-valued function with H(0) = 1, H'(0) = 2 and |H''(0)| < ∞. Kou et al. [18] presented a family of variants of Ostrowski's method (see (23) therein) with seventh-order convergence, given by

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − [f(x_n)/(f(x_n) − 2f(y_n))] · f(y_n)/f'(x_n),    (1.4)
x_{n+1} = z_n − [ ((f(x_n) − f(y_n))/(f(x_n) − 2f(y_n)))² + f(z_n)/(f(y_n) − αf(z_n)) ] · f(z_n)/f'(x_n),
where α is a constant. Bi et al. [1] presented a family of eighth-order methods (see (5) therein), given by

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − [(2f(x_n) − f(y_n))/(2f(x_n) − 5f(y_n))] · f(y_n)/f'(x_n),    (1.5)
x_{n+1} = z_n − H(μ_n) · f(z_n)/( f[z_n, y_n] + f[z_n, x_n, x_n](z_n − y_n) ),

where

f[z_n, y_n] = (f(z_n) − f(y_n))/(z_n − y_n),
f[z_n, x_n, x_n] = ( (f(z_n) − f(x_n))/(z_n − x_n) − f'(x_n) )/(z_n − x_n),

μ_n = f(y_n)/f(x_n), and H(t) represents a real-valued function with H(0) = 1, H'(0) = 2 and |H''(0)| < ∞. More recently, Bi et al. [2] presented a new family of eighth-order iterative methods (see (24) therein), given by

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − h(μ_n) · f(y_n)/f'(x_n),    (1.6)
x_{n+1} = z_n − [(f(x_n) + (γ + 2)f(z_n))/(f(x_n) + γf(z_n))] · f(z_n)/( f[z_n, y_n] + f[z_n, x_n, x_n](z_n − y_n) ),

where γ ∈ ℝ is a constant, μ_n = f(y_n)/f(x_n), and h(t) represents a real-valued function with h(0) = 1, h'(0) = 2, h''(0) = 10 and |h'''(0)| < ∞. Several further eighth-order methods have been proposed in [7]-[9]. Having outlined the present work and briefly surveyed the available high-order developments of the classical Newton method, we give our contribution in the following sections. Section 2 derives a general class of efficient three-step tenth-order methods that use three evaluations of the function and one evaluation of its first derivative per cycle. Section 3 presents numerical comparisons that demonstrate the accuracy of the new methods. Finally, conclusions are drawn in Section 4.
2 Development of method and convergence analysis
To develop the new method, let us consider an iteration scheme of the form

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − G(μ_n) · f(x_n)/f'(x_n),  μ_n = f(y_n)/f(x_n),    (2.7)

where G(μ_n) represents a real-valued function. We have the following convergence result.
Theorem 2.1. Assume that f ∈ C^5(D). Suppose x* ∈ D, f(x*) = 0 and f'(x*) ≠ 0. If the initial point x_0 is sufficiently close to x*, then the sequence {x_n} generated by the iteration scheme (2.7) converges to x*. If G is any function with G(0) = 0, G'(0) = 1, G''(0) = 2 and G^(n)(0) = 0 for n ≥ 3, then the convergence order of every method of the family (2.7) reaches five.
Proof 2.1. Since f is sufficiently differentiable, expanding f(x_n) and f'(x_n) about x* gives

f(x_n) = f'(x*)( e_n + Σ_{i=2}^{5} c_i e_n^i + O(e_n^6) ),    (2.8)

and

f'(x_n) = f'(x*)( 1 + Σ_{i=2}^{5} i c_i e_n^{i−1} + O(e_n^5) ),    (2.9)

where e_n = x_n − x* and c_k = (1/k!) f^(k)(x*)/f'(x*) for k = 2, 3, .... Furthermore, using the Maple software we can get
z_n = x* − a_1 e_n − c_2 a_2 e_n² + (a_3 c_2² + a_4 c_3)e_n³ + (a_5 c_2³ + a_6 c_2 c_3 + a_7 c_4)e_n^4 + O(e_n^5),    (2.10)

where

a_1 = G(0),  a_2 = −1 − G(0) + G'(0),  a_3 = −(1/2){G''(0) − 2G(0) + 3G'(0) − 2},
a_4 = 2G(0) + 2 − 2G'(0),  a_5 = −9G'(0) + (5/2)G''(0) + 4G(0) − (1/6)G'''(0) + 4,    (2.11)
a_6 = −7 − 7G(0) + 11G'(0) − 2G''(0),  a_7 = 3G(0) + 3 − 3G'(0).
Solving the system of equations {a_1 = 0, a_2 = 0, a_3 = 0, a_4 = 0, a_5 = 0, a_6 = 0, a_7 = 0}, we find that G(0) = 0, G'(0) = 1, G''(0) = 2 and G'''(0) = 0, and thereby we obtain G(μ_n) = μ_n² + μ_n, where μ_n = f(y_n)/f(x_n). Now, we consider an iteration scheme of the form

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − [ f(y_n)²/f(x_n)² + f(y_n)/f(x_n) ] · f(x_n)/f'(x_n),    (2.12)
which satisfies the following error equation:

z_n − x* = (2c_3 c_2² − 3c_2^4)e_n^5 + O(e_n^6).    (2.13)
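A minimal Python sketch of the resulting two-step scheme (2.12) (our own illustration, not code from the paper; the test equation cos(x) = x and the stopping rule are our choices):

```python
import math

def two_step(f, df, x0, tol=1e-12, max_iter=50):
    """Sketch of scheme (2.12): a Newton step, then the G(mu) = mu^2 + mu correction."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        y = x - fx / dfx
        mu = f(y) / fx                       # mu_n = f(y_n)/f(x_n)
        x = y - (mu * mu + mu) * fx / dfx    # z_n of (2.12) becomes the next iterate
    return x

# Simple root of cos(x) - x = 0 near x0 = 1.
root = two_step(lambda x: math.cos(x) - x, lambda x: -math.sin(x) - 1.0, 1.0)
```

Each cycle uses the two function values f(x_n), f(y_n) and the single derivative value f'(x_n), matching the evaluation count in Remark 2.1 below.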
Remark 2.1. The order of convergence of the iterative method (2.12) is 5. The method requires two evaluations of the function, namely f(x_n) and f(y_n), and one evaluation of the first derivative f'(x_n). We take into account the definition of efficiency index [11] as p^{1/w}, where p is the order of the method and w is the number of function evaluations per iteration required by the method. If we suppose that all the evaluations have the same cost, the efficiency index of the method (2.12) is 5^{1/3} ≈ 1.7099. Now we suggest the following iterative class by using the weight-function approach:

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − [ f(y_n)²/f(x_n)² + f(y_n)/f(x_n) ] · f(x_n)/f'(x_n),    (2.14)
x_{n+1} = z_n − K(θ_n) · f(z_n)/f'(z_n),  θ_n = 1/(z_n − f'(z_n)),

where K(θ_n) represents a real-valued function.
We can express f'(z_n) by Hermite interpolation at the three points (x_n, f(x_n), f'(x_n)), (y_n, f(y_n)) and (z_n, f(z_n), f'(z_n)): we approximate f'(z_n) by solving the equations

0 + a_1 + 2a_2 z_n + 3a_3 z_n² = a_4 f'(z_n),
a_0 + a_1 z_n + a_2 z_n² + a_3 z_n³ = a_4 f(z_n),
a_0 + a_1 y_n + a_2 y_n² + a_3 y_n³ = a_4 f(y_n),    (2.15)
0 + a_1 + 2a_2 x_n + 3a_3 x_n² = a_4 f'(x_n),
a_0 + a_1 x_n + a_2 x_n² + a_3 x_n³ = a_4 f(x_n),

for the coefficients a_i, 0 ≤ i ≤ 4, with a_4 = 1. If

      | 0   1    2z_n   3z_n²   −f'(z_n) |
      | 1   z_n  z_n²   z_n³    −f(z_n)  |
M_z = | 1   y_n  y_n²   y_n³    −f(y_n)  |
      | 0   1    2x_n   3x_n²   −f'(x_n) |
      | 1   x_n  x_n²   x_n³    −f(x_n)  |

then the corresponding homogeneous system has a nontrivial solution if and only if det(M_z) = 0 [10]. Expanding det(M_z) along the fifth column, we obtain

(A)f'(z_n) − (B)f(z_n) + (C)f(y_n) − (D)f'(x_n) + (E)f(x_n) = 0.    (2.16)
Hence we have

f'(z_n) = (B/A)f(z_n) − (C/A)f(y_n) + (D/A)f'(x_n) − (E/A)f(x_n),    (2.17)
where
A = −(x_n − y_n)²(x_n − z_n)²(y_n − z_n),
B = (x_n − y_n)²(x_n − z_n)(x_n + 2y_n − 3z_n),
C = (x_n − z_n)^4,    (2.18)
D = −(x_n − y_n)(x_n − z_n)²(y_n − z_n)²,
E = (x_n − z_n)(y_n − z_n)²(2y_n − 3x_n + z_n).
Substituting {A, B, C, D, E} of (2.18) into (2.17) and simplifying the ratios {B/A, C/A, D/A, E/A}, we obtain f'(z_n) ≈ Ψ_f(x_n, y_n, z_n), where

Ψ_f(x_n, y_n, z_n) = − [(x_n + 2y_n − 3z_n)/((x_n − z_n)(y_n − z_n))] f(z_n)
                     + [(x_n − z_n)²/((x_n − y_n)²(y_n − z_n))] f(y_n)
                     + [(y_n − z_n)(2y_n − 3x_n + z_n)/((x_n − z_n)(x_n − y_n)²)] f(x_n)
                     + [(y_n − z_n)/(x_n − y_n)] f'(x_n).    (2.19)
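To sanity-check (2.19), a short Python sketch (our own code, not from the paper) evaluates Ψ_f directly; because the underlying interpolant is a cubic Hermite polynomial, Ψ_f reproduces f'(z_n) exactly whenever f is a polynomial of degree at most three:

```python
def psi(f, df, x, y, z):
    """Evaluate Psi_f(x, y, z) of (2.19), an approximation to f'(z)."""
    fx, fy, fz, dfx = f(x), f(y), f(z), df(x)
    t1 = -(x + 2 * y - 3 * z) / ((x - z) * (y - z)) * fz
    t2 = (x - z) ** 2 / ((x - y) ** 2 * (y - z)) * fy
    t3 = (y - z) * (2 * y - 3 * x + z) / ((x - z) * (x - y) ** 2) * fx
    t4 = (y - z) / (x - y) * dfx
    return t1 + t2 + t3 + t4

# f(t) = t^3 is cubic, so Psi_f equals f'(z) = 3z^2 = 12 exactly (up to rounding).
approx = psi(lambda t: t ** 3, lambda t: 3.0 * t * t, 0.0, 1.0, 2.0)
```

The same exactness holds for quadratics, e.g. f(t) = t² gives Ψ_f = 2z at any three distinct points.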
Substituting Ψ_f(x_n, y_n, z_n) from (2.19) into (2.14), we construct a three-step iterative method:

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − [ f(y_n)²/f(x_n)² + f(y_n)/f(x_n) ] · f(x_n)/f'(x_n),    (2.20)
x_{n+1} = z_n − K(θ_n) · f(z_n)/Ψ_f(x_n, y_n, z_n),  θ_n = 1/(z_n − Ψ_f(x_n, y_n, z_n)),

where K(θ_n) represents a real-valued function. We prove the following convergence theorem for the method (2.20).
Theorem 2.2. Assume that f ∈ C^10(D). Suppose x* ∈ D, f(x*) = 0 and f'(x*) ≠ 0. If the initial point x_0 is sufficiently close to x*, then the sequence {x_n} generated by the iteration scheme (2.20) converges to x*. If K is any function with K(−1) = 1, K'(−1) = 1 and |K''(0)| < ∞, then the convergence order of every method of the family (2.20) reaches ten.
Proof 2.2. Since f is sufficiently differentiable, expanding f(x_n) and f'(x_n) about x* gives

f(x_n) = f'(x*)( e_n + Σ_{i=2}^{10} c_i e_n^i + O(e_n^{11}) ),    (2.21)

and

f'(x_n) = f'(x*)( 1 + Σ_{i=2}^{10} i c_i e_n^{i−1} + O(e_n^{10}) ),    (2.22)

where e_n = x_n − x* and c_k = (1/k!) f^(k)(x*)/f'(x*) for k = 2, 3, .... Expanding y_n in powers of e_n, we obtain
y_n = x* + c_2 e_n² + (2c_3 − 2c_2²)e_n³ + (3c_4 − 7c_2 c_3 + 4c_2³)e_n^4 + ⋯ + O(e_n^{11}).    (2.23)
Expanding f(y_n) about x*, we have

f(y_n) = f'(x*)( c_2 e_n² + (2c_3 − 2c_2²)e_n³ + (3c_4 − 7c_2 c_3 + 5c_2³)e_n^4 + ⋯ + O(e_n^{11}) ).    (2.24)
Substituting (2.21), (2.22), (2.23) and (2.24) into the second formula of (2.20), using Taylor expansion and simplifying, we have

z_n = x* + (2c_3 c_2² − 3c_2^4)e_n^5 + ⋯ + (−930c_4² c_2 c_3 + 6129c_4 c_3² c_2² − 11952c_4 c_3 c_2^4 + 1530c_4² c_2³ + 27c_4³ + 3356c_2^9 − 456c_4 c_3³ + 5539c_4 c_2^6 + 1344c_2 c_3^4 − 9828c_3³ c_2³ + 19598c_3² c_2^5 − 14237c_3 c_2^7)e_n^{10} + O(e_n^{11}).    (2.25)
Expanding f(z_n) about x*, we have

f(z_n) = f'(x*)( (2c_3 c_2² − 3c_2^4)e_n^5 + ⋯ + (−11952c_4 c_3 c_2^4 + 6129c_4 c_3² c_2² − 930c_4² c_2 c_3 + 27c_4³ + 3365c_2^9 + 1530c_4² c_2³ − 456c_4 c_3³ + 5539c_4 c_2^6 + 1344c_2 c_3^4 − 9828c_3³ c_2³ + 19602c_3² c_2^5 − 14249c_3 c_2^7)e_n^{10} + O(e_n^{11}) ).    (2.26)
Expanding Ψ_f(x_n, y_n, z_n) ≈ f'(z_n) about x*, we have

Ψ_f(x_n, y_n, z_n) = f'(x*)( 1 + c_2 c_4 e_n^4 + ⋯ + (4344c_4 c_3 c_2^4 − 1350c_4 c_3² c_2² + 176c_4² c_2 c_3 − 12c_4³ − 2180c_2^9 − 461c_4² c_2³ + 10c_4 c_3³ − 2709c_4 c_2^6 − 160c_2 c_3^4 + 2900c_3³ c_2³ − 8468c_3² c_2^5 + 7808c_3 c_2^7)e_n^9 + O(e_n^{10}) ).    (2.27)

Substituting (2.25), (2.26) and (2.27) into the third formula of (2.20), using Taylor expansion and simplifying, we have
x_{n+1} = x* − c_2²(−2c_3 + 3c_2²)(−1 + K(−1))e_n^5 − c_2(20c_2^4 + 3c_2 c_4 + 8c_3² − 30c_3 c_2²)(−1 + K(−1))e_n^6
+ (−8c_3³ + 89c_2^6 − 24c_4 c_2 c_3 + 43c_4 c_2³ + 104c_3² c_2² − 198c_3 c_2^4)(−1 + K(−1))e_n^7
− (36c_4 c_3² + 329c_2^7 − 298c_4 c_3 c_2² + 18c_2 c_4² − 958c_3 c_2^5 + 281c_4 c_2^4 − 152c_2 c_3³ + 766c_3² c_2³)(−1 + K(−1))e_n^8
+ Φ_9 e_n^9 + Φ_10 e_n^{10} + O(e_n^{11}),    (2.28)

where Φ_9 and Φ_10 are lengthy polynomials in c_2, c_3, c_4 and the values K(−1), K'(−1) and K''(−1) produced by Maple; all of their terms cancel under the substitutions K(−1) = 1 and K'(−1) = 1 except those displayed in (2.29) and (2.30).
Substituting K(−1) = 1 into (2.28), using Taylor expansion and simplifying, we have

x_{n+1} = x* − c_2³ c_4(−2c_3 + 3c_2²)(−1 + K'(−1))e_n^9 − c_2(18c_2^7 K'(−1) − 9c_2^7 − 9K'(−1)c_2^6 + 12c_3 c_2^5 − 24c_2^5 K'(−1)c_3 − 26c_4 c_2^4 + 26c_2^4 K'(−1)c_4 + 12c_2^4 K'(−1)c_3 + 8c_2² K'(−1)c_3² − 4c_3² c_2³ − 4c_2² K'(−1)c_3² − 40c_2² K'(−1)c_4 c_3 + 40c_4 c_3 c_2² + 3c_2² K'(−1)c_4² − 3c_2² c_4² + 12K'(−1)c_4 c_3² − 12c_4 c_3²)e_n^{10} + O(e_n^{11}).    (2.29)
Substituting K'(−1) = 1 into (2.29), using Taylor expansion and simplifying, we obtain

e_{n+1} = x_{n+1} − x* = −c_2²(9c_2^7 − 9c_2^6 − 12c_3 c_2^5 + 12c_3 c_2^4 + 4c_3² c_2³ − 4c_3² c_2²)e_n^{10} + O(e_n^{11}).    (2.30)
We find that K(−1) = 1, K'(−1) = 1 and |K''(0)| = α < ∞, and thereby we take

K(θ_n) = (1/2)αθ_n² + (1 + α)θ_n + 2 + (1/2)α,  θ_n = 1/(z_n − Ψ_f(x_n, y_n, z_n)).

Now, we consider an iteration scheme of the form

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − [ f(y_n)²/f(x_n)² + f(y_n)/f(x_n) ] · f(x_n)/f'(x_n),    (2.31)
x_{n+1} = z_n − [ (1/2)αθ_n² + (1 + α)θ_n + 2 + (1/2)α ] · f(z_n)/Ψ_f(x_n, y_n, z_n),  θ_n = 1/(z_n − Ψ_f(x_n, y_n, z_n)),
where α ∈ ℝ.

Remark 2.2. The order of convergence of the iterative method (2.31) is 10. The method requires three evaluations of the function, namely f(x_n), f(y_n) and f(z_n), and one evaluation of the first derivative f'(x_n). We take into account the definition of efficiency index [11] as p^{1/w}, where p is the order of the method and w is the number of function evaluations per iteration required by the method. If we suppose that all the evaluations have the same cost, the efficiency index of the method (2.31) is 10^{1/4} ≈ 1.778279.

Table 1: Comparison of different methods in terms of orders and efficiencies.

Method      Order   Total number of evaluations   Efficiency index
Newton      2       2                             2^{1/2} ≈ 1.414
Ostrowski   4       3                             4^{1/3} ≈ 1.587
(2.12)      5       3                             5^{1/3} ≈ 1.709
Chun        6       4                             6^{1/4} ≈ 1.565
Kou         7       4                             7^{1/4} ≈ 1.626
Bi          8       4                             8^{1/4} ≈ 1.682
Liu         8       4                             8^{1/4} ≈ 1.682
Sharma      8       4                             8^{1/4} ≈ 1.682
(2.31)      10      4                             10^{1/4} ≈ 1.778

3 Numerical examples
The accuracy of our contribution is tested on a number of numerical problems. Our aim in this section is to compare the new method (2.31) with other well-known optimal eighth-order methods. Bi et al. [2] developed a scheme of optimal order of convergence eight, estimating the first derivative of the function in the third step and constructing a weight function as well, in the following form (BM):

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − [f(x_n)/(f(x_n) − 3f(y_n))]^{2/3} · f(y_n)/f'(x_n),    (3.32)
x_{n+1} = z_n − [(f(x_n) + (γ + 2)f(z_n))/(f(x_n) + γf(z_n))] · f(z_n)/( f[z_n, y_n] + (z_n − y_n)f[z_n, x_n, x_n] ),
where γ ∈ ℝ. Liu and Wang [19] presented the following family of optimal order eight (LWM):

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − [f(x_n)/(f(x_n) − 2f(y_n))] · f(y_n)/f'(x_n),    (3.33)
x_{n+1} = z_n − [ ((f(x_n) − f(y_n))/(f(x_n) − 2f(y_n)))² + f(z_n)/(f(y_n) − μf(z_n)) + 4f(z_n)/(f(x_n) + βf(z_n)) ] · f(z_n)/f'(x_n),
with μ and β in ℝ. Sharma and Sharma [21] produced an optimal eighth-order method (ShM) of the following form:

y_n = x_n − f(x_n)/f'(x_n),
z_n = y_n − [f(x_n)/(f(x_n) − 2f(y_n))] · f(y_n)/f'(x_n),    (3.34)
x_{n+1} = z_n − [ 1 + f(z_n)/f(x_n) + (f(z_n)/f(x_n))² ] · f[x_n, y_n]f(z_n)/( f[x_n, z_n]f[y_n, z_n] ).
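As read above, ShM is fully explicit, so it can be sketched directly in Python (our own code; the divided-difference helper, test equation and breakdown guards are implementation choices, not from the paper):

```python
def shm(f, df, x0, tol=1e-10, max_iter=50):
    """Sketch of Sharma's scheme (3.34) with first-order divided differences."""
    dd = lambda a, b, fa, fb: (fa - fb) / (a - b)   # f[a, b]
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        y = x - fx / dfx
        fy = f(y)
        d0 = fx - 2.0 * fy
        if d0 == 0.0:
            break
        z = y - fx / d0 * fy / dfx                  # Ostrowski step
        fz = f(z)
        if x == y or y == z or x == z:              # guard the divided differences
            x = z
            break
        d1, d2 = dd(x, z, fx, fz), dd(y, z, fy, fz)
        if d1 == 0.0 or d2 == 0.0:
            x = z
            break
        w = 1.0 + fz / fx + (fz / fx) ** 2          # weight factor
        x = z - w * dd(x, y, fx, fy) * fz / (d1 * d2)
    return x

root = shm(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```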
We present some numerical results to illustrate the efficiency of the three-step iterative method proposed in this paper. We compare (2.31) with BM, LWM and ShM. All computations were done using Matlab. We use the stopping criteria |x_{n+1} − x_n| < ϵ and |f(x_{n+1})| < ϵ; when the stopping criterion is satisfied, x_{n+1} is taken as the computed value of the exact root. For the numerical illustrations in this section we used the fixed tolerance ϵ = 10^{−1000}. We present test results for the following functions:

f_1(x) = x³ + 4x² − 15,  x* ≈ 1.631981,
f_2(x) = x e^{x²} − sin²(x) + 3cos(x) + 5,  x* = −1.207647827130919,
f_3(x) = 10x e^{−x²} − 1,  x* = 1.67963061042845,
f_4(x) = sin²(x) − x² + 1,  x* = 1.4044916482153411,
f_5(x) = x^5 + x^4 + 4x² − 15,  x* ≈ 1.347,
f_6(x) = ln(x) + √x − 5,  x* = 8.3094326942315718,
f_7(x) = e^{x² + 7x − 30} − 1,  x* = 3.000000000000000,
f_8(x) = sin(x)e^x − 2x − 5,  x* = −2.5232452307325549,
f_9(x) = √x − 1/x − 3,  x* = 9.6335955628326952,
f_10(x) = √(x² + 2x + 5) − 2sin(x) − x² + 3,  x* ≈ 2.3319676558839640.    (3.35)
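Under the same stopping idea (with an ordinary double-precision tolerance rather than the ϵ = 10^{−1000} used for the multiprecision experiments), our own Python sketch of the proposed scheme (2.31), with the representative choice α = 0, is:

```python
def proposed(f, df, x0, alpha=0.0, tol=1e-10, max_iter=50):
    """Sketch of the three-step family (2.31)."""
    K = lambda t: 0.5 * alpha * t * t + (1.0 + alpha) * t + 2.0 + 0.5 * alpha
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = df(x)
        y = x - fx / dfx
        fy = f(y)
        mu = fy / fx
        z = y - (mu * mu + mu) * fx / dfx            # second step of (2.31)
        fz = f(z)
        if x == y or y == z or x == z:               # guard Psi_f's denominators
            x = z
            break
        psi = (-(x + 2 * y - 3 * z) / ((x - z) * (y - z)) * fz
               + (x - z) ** 2 / ((x - y) ** 2 * (y - z)) * fy
               + (y - z) * (2 * y - 3 * x + z) / ((x - z) * (x - y) ** 2) * fx
               + (y - z) / (x - y) * dfx)            # Psi_f of (2.19)
        theta = 1.0 / (z - psi)
        x = z - K(theta) * fz / psi                  # third step of (2.31)
    return x

root = proposed(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5)
```

Each cycle uses three function values and one derivative value, as counted in Remark 2.2.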
The computational order of convergence (COC) and the computational asymptotic convergence constant (CACC) are defined here as

COC = log( |x_n − α| / |x_{n−1} − α| ) / log( |x_{n−1} − α| / |x_{n−2} − α| ),  CACC = (x_n − α)/(x_{n−1} − α)^m,

for an m-th order method. From Table 2 we observe that the computational order of convergence COC agrees closely with the theoretical results.
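Given the stored iterates of a run and the exact root α, COC can be estimated from the last three iterates; the helper below is our own illustration:

```python
import math

def coc(iterates, alpha):
    """Computational order of convergence from the last three iterates."""
    e = [abs(x - alpha) for x in iterates[-3:]]
    return math.log(e[2] / e[1]) / math.log(e[1] / e[0])

# A sequence whose error is squared each step shows COC = 2 (Newton-like behaviour):
# errors 1e-1, 1e-2, 1e-4 give log(1e-4/1e-2)/log(1e-2/1e-1) = 2.
order = coc([1.1, 1.01, 1.0001], 1.0)
```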
Table 2: Comparison of iterative methods.

Function, x0          Method   it.  |x_k − x_{k−1}|   |f(x_k)|         COC         CACC
f_1(x), x0 = 2        (2.31)   4    3.067919e-517     0.000000e+00     10.000000   7.835140e-32
f_1(x), x0 = 2        BM       4    1.600405e-330     0.000000e+00     7.000000    8.366870e+43
f_1(x), x0 = 2        LWM      4    1.880395e-387     0.000000e+00     8.000000    1.511810e-02
f_1(x), x0 = 2        ShM      4    4.565243e-428     0.000000e+00     8.000000    3.426550e-03
f_2(x), x0 = −1.5     (2.31)   4    9.906991e-496     2.854803e-989    10.000136   6.257824e-19
f_2(x), x0 = −1.5     BM       4    4.425546e-66      2.443099e-455    7.001710    1.310553e+11
f_2(x), x0 = −1.5     LWM      4    1.630467e-234     1.200000e-998    8.000010    2.194998e+00
f_2(x), x0 = −1.5     ShM      4    2.342471e-280     1.100000e-998    8.000005    2.248948e-01
f_3(x), x0 = 1.5      (2.31)   4    4.252576e-425     5.853855e-849    10.000000   6.737125e-16
f_3(x), x0 = 1.5      BM       4    1.250541e-46      2.401965e-323    7.117036    3.700361e+04
f_3(x), x0 = 1.5      LWM      4    3.716627e-45      7.005386e-355    7.890752    6.961666e+00
f_3(x), x0 = 1.5      ShM      4    1.378981e-50      4.941091e-399    7.923618    1.367211e+00
f_4(x), x0 = 2        (2.31)   4    7.871968e-52      .296854e-409     10.003543   2.983141e-16
f_4(x), x0 = 2        BM       4    4.384097e-29      1.459166e-200    6.999961    4.307044e+26
f_4(x), x0 = 2        LWM      4    2.790447e-25      1.401650e-196    7.999899    1.535909e+00
f_4(x), x0 = 2        ShM      4    4.310256e-29      9.953621e-228    7.999973    3.365674e-01
f_5(x), x0 = 1.4      (2.31)   4    1.139567e-94      1.318939e-750    10.000000   9.876406e-15
f_5(x), x0 = 1.4      BM       4    8.621099e-62      2.391218e-426    7.000000    2.115316e+61
f_5(x), x0 = 1.4      LWM      4    8.575358e-79      3.508366e-623    8.000000    3.238501e+00
f_5(x), x0 = 1.4      ShM      4    5.463271e-85      1.808119e-673    8.000000    6.149815e-01
f_6(x), x0 = 8        (2.31)   4    5.947598e-233     0.000000e+00     10.004131   4.229410e-08
f_6(x), x0 = 8        BM       4    1.023748e-82      2.311776e-582    7.000000    6.521569e+74
f_6(x), x0 = 8        LWM      4    1.850555e-113     4.581926e-912    8.000000    1.133918e-09
f_6(x), x0 = 8        ShM      4    1.417600e-124     0.000000e+00     8.003890    6.521258e-11
f_7(x), x0 = 3.5      (2.31)   5    1.913851e-66      5.957524e-520    10.002929   1.212305e-1
f_7(x), x0 = 3.5      BM       5    4.455908e-08      1.253740e-52     7.000000    6.205476e+05
f_7(x), x0 = 3.5      LWM      5    2.233053e-24      4.987826e-183    7.728782    6.180026e+05
f_7(x), x0 = 3.5      ShM      5    3.060757e-39      1.124354e-302    7.940283    1.122829e+05
f_8(x), x0 = −2.4     (2.31)   3    6.673608e-292     0.000000e+00     10.005681   1.141776e-19
f_8(x), x0 = −2.4     BM       3    2.982942e-107     1.506077e-753    6.976066    4.803241e+06
f_8(x), x0 = −2.4     LWM      3    8.857841e-134     0.000000e+00     7.980231    8.721286e-09
f_8(x), x0 = −2.4     ShM      3    4.669022e-134     0.000000e+00     7.982322    7.634113e-09
f_9(x), x0 = 9        (2.31)   3    5.398511e-211     0.000000e+00     10.005393   3.825332e-07
f_9(x), x0 = 9        BM       3    2.071706e-72      4.020242e-511    7.012641    1.881641e+01
f_9(x), x0 = 9        LWM      3    7.375387e-99      3.856457e-796    8.010044    2.562805e-10
f_9(x), x0 = 9        ShM      3    1.030470e-121     1.941528e-981    8.001482    8.885207e-13
f_10(x), x0 = 2       (2.31)   3    1.281396e-107     6.550009e-862    10.000002   4.983215e-09
f_10(x), x0 = 2       BM       3    2.073513e-57      1.521116e-400    6.882628    1.545999e+04
f_10(x), x0 = 2       LWM      3    3.187186e-77      2.045299e-616    7.869719    7.914598e-05
f_10(x), x0 = 2       ShM      3    1.116946e-85      6.055121e-685    7.876306    1.029900e-05
The computational results presented in Table 2 show that in almost all cases the presented method converges more rapidly than the other methods. This means that the new method is more efficient in the computing process, and for most of the functions tested it performs at least as well as the other known methods of the same cost. As Table 2 shows, the proposed method (2.31) is preferable to Bi's, Liu's and Sharma's methods with eighth-order convergence.
4 Conclusions
In this work we presented an approach that can be used to construct tenth-order convergent iterative methods that do not require the computation of second or higher derivatives, and we proposed a new three-step iterative method for solving nonlinear equations. Numerical examples show that, in an equal number of iterations, the numerical results of the new three-step method improve on those of existing three-step methods with eighth-order convergence. Finally, it is hoped that this study makes a contribution to the solution of nonlinear equations.
References

[1] W. Bi, H. Ren, Q. Wu, Three-step iterative methods with eighth-order convergence for solving nonlinear equations, Journal of Computational and Applied Mathematics, 225 (2009) 105-112. http://dx.doi.org/10.1016/j.cam.2008.07.004
[2] W. Bi, Q. Wu, H. Ren, A new family of eighth-order iterative methods for solving nonlinear equations, Applied Mathematics and Computation, 214 (2009) 236-245. http://dx.doi.org/10.1016/j.amc.2009.03.077
[3] C. Chun, Construction of third-order modifications of Newton's method, Applied Mathematics and Computation, 189 (2007) 662-668. http://dx.doi.org/10.1016/j.amc.2006.11.127
[4] C. Chun, Some fourth-order iterative methods for solving nonlinear equations, Applied Mathematics and Computation, 195 (2008) 454-459. http://dx.doi.org/10.1016/j.amc.2007.04.105
[5] C. Chun, Some improvements of Jarratt's method with sixth-order convergence, Applied Mathematics and Computation, 190 (2007) 1432-1437. http://dx.doi.org/10.1016/j.amc.2007.02.023
[6] C. Chun, B. Neta, Some modification of Newton's method by the method of undetermined coefficients, Computers and Mathematics with Applications, 56 (2008) 2528-2538. http://dx.doi.org/10.1016/j.camwa.2008.05.005
[7] C. Chun, Y. Ham, Some sixth-order variants of Ostrowski root-finding methods, Applied Mathematics and Computation, 193 (2007) 389-394. http://dx.doi.org/10.1016/j.amc.2007.03.074
[8] C. Chun, Y. Ham, Some second-derivative-free variants of super-Halley method with fourth-order convergence, Applied Mathematics and Computation, 195 (2008) 537-541. http://dx.doi.org/10.1016/j.amc.2007.05.003
[9] A. Cordero, J.L. Hueso, E. Martinez, J.R. Torregrosa, A family of iterative methods with sixth and seventh order convergence for nonlinear equations, Mathematical and Computer Modelling, 52 (2010) 1490-1496. http://dx.doi.org/10.1016/j.mcm.2010.05.033
[10] W.F. Finden, An error term and uniqueness for Hermite-Birkhoff interpolation involving only function values and/or first derivative values, Journal of Computational and Applied Mathematics, 212 (2008) 1-15. http://dx.doi.org/10.1016/j.cam.2006.11.022
[11] W. Gautschi, Numerical Analysis: An Introduction, Birkhauser, Boston, 1997.
[12] Y. Ham, C. Chun, A fifth-order iterative method for solving nonlinear equations, Applied Mathematics and Computation, 194 (2007) 287-290. http://dx.doi.org/10.1016/j.amc.2007.04.005
[13] Y. Ham, C. Chun, S. Lee, Some higher-order modifications of Newton's method for solving nonlinear equations, Journal of Computational and Applied Mathematics, 222 (2008) 477-486. http://dx.doi.org/10.1016/j.cam.2007.11.018
[14] J. Kou, New sixth-order methods for solving non-linear equations, Applied Mathematics and Computation, 189 (2007) 647-651. http://dx.doi.org/10.1016/j.amc.2006.11.117
[15] J. Kou, Y. Li, The improvements of Chebyshev-Halley methods with fifth-order convergence, Applied Mathematics and Computation, 188 (2007) 143-147. http://dx.doi.org/10.1016/j.amc.2006.09.097
[16] J. Kou, Y. Li, Modified Chebyshev-Halley methods with sixth-order convergence, Applied Mathematics and Computation, 188 (2007) 681-685. http://dx.doi.org/10.1016/j.amc.2006.10.018
[17] J. Kou, Y. Li, An improvement of the Jarratt method, Applied Mathematics and Computation, 189 (2007) 1816-1821. http://dx.doi.org/10.1016/j.amc.2006.12.062
[18] J. Kou, Y. Li, X. Wang, Some variants of Ostrowski's method with seventh-order convergence, Journal of Computational and Applied Mathematics, 209 (2007) 153-159. http://dx.doi.org/10.1016/j.cam.2006.10.073
[19] L. Liu, X. Wang, Eighth-order methods with high efficiency index for solving nonlinear equations, Applied Mathematics and Computation, 215 (2010) 3449-3454. http://dx.doi.org/10.1016/j.amc.2009.10.040
[20] A.M. Ostrowski, Solution of Equations in Euclidean and Banach Space, third ed., Academic Press, New York, 1973.
[21] J.R. Sharma, R. Sharma, A new family of modified Ostrowski's methods with accelerated eighth order convergence, Numerical Algorithms, 54 (2010) 445-458. http://dx.doi.org/10.1007/s11075-009-9345-5