Vol. 22, No. 5; May 2015
Modified Two-step Fixed Point Iterative Method for Solving Nonlinear Functional Equations with Convergence of Order Five and Efficiency Index 2.2361

Waqas Nazeer 1, Shin Min Kang 2,*, Muhammad Tanveer 3 and Abdul Aziz Shahid 4

1 Division of Science and Technology, University of Education, Lahore 54000, Pakistan
e-mail: [email protected]
2 Department of Mathematics and RINS, Gyeongsang National University, Jinju 660-701, Korea
e-mail: [email protected]
3 Department of Mathematics, Lahore Leads University, Lahore 54810, Pakistan
e-mail: [email protected]
4 Department of Mathematics, Lahore Leads University, Lahore 54810, Pakistan
e-mail: [email protected]
* Corresponding author

Abstract
In this paper, we present and analyze a modified two-step fixed point iterative method for solving nonlinear functional equations. The modified two-step fixed point iterative method has convergence of order five and efficiency index 2.2361, which is larger than that of most existing methods, including the methods discussed in Table 1. The comparison tables demonstrate the faster convergence of the modified two-step fixed point method.

2010 Mathematics Subject Classification: 37F50.

Key words and phrases: fixed point method, modified two-step fixed point method, Newton's method
1 Introduction
The problem, to recall, is solving equations in one variable. We are given a function f and would like to find at least one solution of the equation f(x) = 0. Note that we do not put any restrictions on the function f; we only need to be able to evaluate it; otherwise, we cannot even check whether a given x = ξ is a solution, that is, whether f(ξ) = 0. In reality, the mere ability to evaluate the function does not suffice. We need to assume some kind of "good behavior". The more we assume, the more potential we have, on the one
hand, to develop a fast iteration scheme for finding the root. At the same time, the more we assume, the fewer the functions that are going to satisfy our assumptions! This is a fundamental paradigm in numerical analysis.

One of the fundamental algorithms for solving nonlinear equations is the so-called fixed point iteration method [10]. In the fixed point iteration method for solving a nonlinear equation f(x) = 0, the equation is usually rewritten as

x = g(x),    (1.1)

where (i) there exists [a, b] such that g(x) ∈ [a, b] for all x ∈ [a, b], and (ii) there exists L such that |g'(x)| ≤ L < 1 for all x ∈ [a, b]. Considering the iteration scheme

x_{n+1} = g(x_n),  n ≥ 0,    (1.2)

and starting with a suitable initial approximation x_0, we build up a sequence of approximations, say {x_n}, to the solution of the nonlinear equation, say ξ. The scheme converges to ξ provided that (i) the initial approximation x_0 is chosen in the interval [a, b], (ii) g has a continuous derivative on (a, b), (iii) |g'(x)| < 1 for all x ∈ [a, b], and (iv) a ≤ g(x) ≤ b for all x ∈ [a, b]. It is well known that the fixed point method has first order convergence.

Kang et al. [13] described a new second order iterative method for solving nonlinear equations, extracted from the fixed point method by the approach of [3], as follows. If g'(x) ≠ 1, we can modify (1.1) by adding θx (with θ ≠ −1) to both sides:

θx + x = θx + g(x),
(1 + θ)x = θx + g(x),

which implies that

x = (θx + g(x)) / (1 + θ) = g_θ(x).    (1.3)

In order for g_θ(x) to be efficient, we choose θ such that g_θ'(x) = 0, which yields

θ = −g'(x),

so that (1.3) takes the form

x = (−x g'(x) + g(x)) / (1 − g'(x)).

For a given x_0, we calculate the approximate solution x_{n+1} by the iteration scheme

x_{n+1} = (−x_n g'(x_n) + g(x_n)) / (1 − g'(x_n)),  g_θ'(x_n) = 0.

This is the so-called new second order iterative method for solving nonlinear equations, which converges quadratically.
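As an illustration, the plain fixed point iteration (1.2) and the second order scheme above can be sketched in Python. This is our own sketch, not code from [13]; the test function g(x) = 2 + e^(−x) and starting point are borrowed from Section 5, and the function names are ours.

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=500):
    """Plain fixed point iteration x_{n+1} = g(x_n); first order convergence."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

def second_order_fp(g, dg, x0, tol=1e-12, max_iter=100):
    """Second order scheme x_{n+1} = (-x_n g'(x_n) + g(x_n)) / (1 - g'(x_n))."""
    x = x0
    for _ in range(max_iter):
        d = dg(x)
        x_new = (-x * d + g(x)) / (1.0 - d)  # requires g'(x) != 1
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Example: the fixed point of g(x) = 2 + exp(-x), i.e. the root of x + ln(x - 2) = 0
g = lambda x: 2.0 + math.exp(-x)
dg = lambda x: -math.exp(-x)
root = second_order_fp(g, dg, 2.2)
```

Both iterations reach the same fixed point; the second order scheme does so in far fewer steps, at the cost of one derivative evaluation per iteration.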
By using the Taylor expansion of f(x) about the point x_n,

f(x) = f(x_n) + (x − x_n) f'(x_n) + (1/2!)(x − x_n)^2 f''(x_n) + ...    (1.4)

If f'(x_n) ≠ 0, we can truncate this expansion after the linear term:

f(x_n) + (x − x_n) f'(x_n) = 0.

If we choose x_{n+1} to be the root of this equation, then we have

x_{n+1} = x_n − f(x_n) / f'(x_n).    (1.5)

This is the so-called Newton's method [5, 15, 17] for root-finding of nonlinear functions, which converges quadratically. From (1.4) one can also derive

x_{n+1} = x_n − 2 f(x_n) f'(x_n) / (2 f'(x_n)^2 − f(x_n) f''(x_n)).    (1.6)

This is the so-called Halley's method [2, 7, 8] for root-finding of nonlinear functions, which converges cubically. Iterative methods with higher-order convergence are presented in the literature [1, 6, 9, 11, 12, 16, 18]. In [1], Abbasbandy gives an iterative method, called Abbasbandy's method here, which is expressed as

x_{n+1} = x_n − f(x_n)/f'(x_n) − f(x_n)^2 f''(x_n) / (2 f'(x_n)^3) − f(x_n)^3 f'''(x_n) / (6 f'(x_n)^4).    (1.7)

It is pointed out that Abbasbandy's method has nearly supercubic convergence. One can easily see that Abbasbandy's method requires the evaluation of the second and third derivatives of the function f(x). During the last century, numerical techniques for solving nonlinear equations have been applied successfully (see, e.g., [1–12, 14–18] and the references therein). McDougall and Wotherspoon [14] modified Newton's method, and their modified Newton's method has convergence of order 1 + √2.

Theorem 1.1. [3] Suppose that g ∈ C^p[a, b]. If g^(k)(x) = 0 for k = 1, 2, ..., p − 1 and g^(p)(x) ≠ 0 at the fixed point x, then the iteration x_{n+1} = g(x_n) is of order p.

In this paper, we present a modified two-step fixed point iterative method for solving nonlinear functional equations, having convergence of order five and efficiency index 2.2361, extracted from the new second order iterative method [13] and motivated by the technique of McDougall and Wotherspoon [14]. The proposed modified two-step fixed point iterative method is applied to some test problems in order to assess its validity and accuracy.
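For concreteness, Newton's method (1.5) and Halley's method (1.6) can be sketched as follows. The sketch is ours; the test function f(x) = x^3 + 4x^2 − 10 and starting point are borrowed from Section 5.

```python
def newton(f, df, x0, tol=1e-14, max_iter=100):
    """Newton's method (1.5): x_{n+1} = x_n - f(x_n)/f'(x_n); quadratic convergence."""
    x = x0
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x
    return x

def halley(f, df, d2f, x0, tol=1e-14, max_iter=100):
    """Halley's method (1.6); cubic convergence, needs the second derivative too."""
    x = x0
    for _ in range(max_iter):
        fx, dfx = f(x), df(x)
        x_new = x - 2.0 * fx * dfx / (2.0 * dfx**2 - fx * d2f(x))
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

f   = lambda x: x**3 + 4*x**2 - 10
df  = lambda x: 3*x**2 + 8*x
d2f = lambda x: 6*x + 8

r_newton = newton(f, df, 1.5)
r_halley = halley(f, df, d2f, 1.5)
```

Both converge to the same root; Halley gains its higher order by spending an extra (second-derivative) evaluation per iteration, which is exactly the trade-off that the efficiency index of Section 4 quantifies.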
2 New iterative method
Let f : X ⊂ R → R be a scalar function on an open interval X, and consider the nonlinear equation f(x) = 0 (or x = g(x)), where g : X ⊂ R → R. We then have the new second order iterative method [13]

x_{n+1} = (−x_n g'(x_n) + g(x_n)) / (1 − g'(x_n)),  g_θ'(x_n) = 0.    (2.1)
By following the approach of McDougall and Wotherspoon [14], we develop a modified two-step fixed point iterative method as follows. Initially, we choose a starting point x_0 and set x*_0 = x_0. The modified two-step fixed point iterative method that we examine herein is given by

x*_0 = x_0,
x_1 = (−x_0 g'((1/2)[x_0 + x*_0]) + g(x_0)) / (1 − g'((1/2)[x_0 + x*_0])),    (2.2)

and, for n ≥ 1,

x*_n = (−x_n g'((1/2)[x_{n−1} + x*_{n−1}]) + g(x_n)) / (1 − g'((1/2)[x_{n−1} + x*_{n−1}])),    (2.3)

x_{n+1} = (−x_n g'((1/2)[x_n + x*_n]) + g(x_n)) / (1 − g'((1/2)[x_n + x*_n])),  1 − g'((1/2)[x_n + x*_n]) ≠ 0.    (2.4)
These are the main steps of our modified two-step fixed point iterative method. The value of x_2 is calculated from x_1 using g(x_1) and the value of the first derivative of g evaluated at (1/2)(x_1 + x*_1) (a more appropriate value of the derivative to use than the one at x_1), and this same value of the derivative is re-used in the next predictor step to obtain x*_2. This re-use of the derivative means that the evaluations of the starred values of x in (2.3) essentially come for free, which then enables the more appropriate value of the derivative to be used in the corrector step (2.4).
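A direct transcription of the scheme (2.2)–(2.4) might look as follows. This is our own sketch (names and stopping rule are ours); the derivative g' is supplied analytically, and the example problem is the first one from Section 5.

```python
import math

def mfpim(g, dg, x0, tol=1e-14, max_iter=100):
    """Modified two-step fixed point iterative method (2.2)-(2.4).

    Each pass of the loop costs one evaluation of g and one of g':
    the predictor (2.3) re-uses the derivative from the previous
    midpoint, and the predictor and corrector share g(x_n).
    """
    d = dg(x0)                              # g' at the first midpoint, which is x0
    x = (-x0 * d + g(x0)) / (1.0 - d)       # first step (2.2); requires g' != 1
    for _ in range(max_iter):
        gx = g(x)                           # shared by (2.3) and (2.4)
        x_star = (-x * d + gx) / (1.0 - d)  # predictor (2.3): derivative re-used
        d = dg(0.5 * (x + x_star))          # the only new derivative this step
        x_new = (-x * d + gx) / (1.0 - d)   # corrector (2.4)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# First test problem of Section 5: g(x) = 2 + exp(-x), x0 = 2.2
g = lambda x: 2.0 + math.exp(-x)
dg = lambda x: -math.exp(-x)
root = mfpim(g, dg, 2.2)
```

Note that only one g and one g' evaluation are spent per loop pass, which is the Nf = 2 per iteration accounting used in Table 1.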
3 Convergence analysis
Theorem 3.1. Let f : X ⊂ R → R for an open interval X, and suppose that the nonlinear equation f(x) = 0 (or x = g(x)) has a simple root α ∈ X, where g : X ⊂ R → R is sufficiently smooth in a neighborhood of α. Then the convergence order of the modified two-step fixed point iterative method given in (2.4) is at least five.

Proof. To analyze the convergence of the modified two-step fixed point iterative method (2.4), let

H(x) = (−x g'((1/2)[x + x*]) + g(x)) / (1 − g'((1/2)[x + x*])),  1 − g'((1/2)[x + x*]) ≠ 0.
Let α be a simple zero of f, so that f(α) = 0 (or g(α) = α). Then we can deduce, using the software Maple, that

H(α) = α,  H'(α) = 0,  H''(α) = 0,  H'''(α) = 0,  H^(4)(α) = 0,
H^(5)(α) = 15 g''(α)^4 / (−1 + g'(α))^4.    (3.1)

Now, from (3.1) it can be easily seen that H^(5)(α) ≠ 0 in general; hence, according to Theorem 1.1, the modified two-step fixed point iterative method (2.4) has fifth order convergence.
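The order of convergence can also be probed numerically. The sketch below, which is our own check rather than part of the derivation, runs the scheme (2.2)–(2.4) on the first test problem of Section 5 in high-precision decimal arithmetic and estimates the observed order from successive errors via p ≈ log(e_{n+1}/e_n) / log(e_n/e_{n−1}); the precision, iteration counts, and names are our own choices, and the estimates can be compared against the theoretical value.

```python
from decimal import Decimal, getcontext

getcontext().prec = 300                 # enough digits to resolve several steps

def g(x):  return Decimal(2) + (-x).exp()
def dg(x): return -(-x).exp()

def corrector(x, d):
    """One step x -> (-x g'(m) + g(x)) / (1 - g'(m)) given d = g'(m)."""
    return (-x * d + g(x)) / (1 - d)

x = Decimal("2.2")
d = dg(x)                               # g' at the first midpoint (= x0)
iterates = [x]
x = corrector(x, d)                     # first step (2.2)
iterates.append(x)
for _ in range(4):
    x_star = corrector(x, d)            # predictor (2.3): derivative re-used
    d = dg((x + x_star) / 2)
    x = corrector(x, d)                 # corrector (2.4)
    iterates.append(x)

root = iterates[-1]                     # treat the last iterate as "exact"
errors = [abs(t - root) for t in iterates[:-1]]
# observed order estimates p ~ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1})
orders = [
    float((errors[i + 1].ln() - errors[i].ln())
          / (errors[i].ln() - errors[i - 1].ln()))
    for i in range(1, len(errors) - 1)
]
print(orders)
```

Keeping the working precision well above the final error is essential here: once an error underflows the precision, consecutive iterates coincide and the order estimate breaks down.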
4 Comparisons of efficiency index
Weerakoon and Fernando [18], Homeier [9], and Frontini and Sormani [6] have presented numerical methods with cubic convergence. In each iteration of these numerical methods, three evaluations of either the function or its derivative are required. The best way of comparing these numerical methods is to express the order of convergence per function or derivative evaluation, the so-called "efficiency" of the numerical method. On this basis, Newton's method has an efficiency of 2^(1/2) ≈ 1.4142, while the cubic convergence methods have an efficiency of 3^(1/3) ≈ 1.4422.

Kou [12] has developed several methods that each require two function evaluations and two derivative evaluations; these methods achieve an order of convergence of either five or six, so they have efficiencies of 5^(1/4) ≈ 1.4953 and 6^(1/4) ≈ 1.5651, respectively. In these methods of Kou, the denominator is a linear combination of derivatives evaluated at different values of x, so that, when the starting value of x is not close to the root, this denominator may go to zero and the methods may not converge. In the sixth order methods suggested by Kou [12], if the function's derivatives at the two values of x differ by a factor of more than three, the method gives an infinite change in x. That is, the derivatives at the predictor and corrector stages can both be of the same sign, but if their magnitudes differ by more than a factor of three, the method does not converge.

Jarratt [11] developed a fourth order method that requires only one function evaluation and two derivative evaluations, and similar fourth order methods have been described by Soleymani et al. [16].
Jarratt's method is similar to those of Kou in that if the ratio of the derivatives at the predictor and corrector steps exceeds a factor of three, the method gives an infinite change in x. Our modified two-step fixed point iterative method (MFPIM), by contrast, has an efficiency of 5^(1/2) ≈ 2.2361, larger than the efficiencies of all the methods discussed above.
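The efficiency index used throughout is p^(1/m), where p is the order of convergence and m is the number of function or derivative evaluations per iteration; the figures quoted above follow directly, as this short check (our own sketch) shows:

```python
# Efficiency index p**(1/m): p = order of convergence,
# m = function/derivative evaluations per iteration.
methods = [
    ("Newton, quadratic",      2, 2),
    ("Cubic methods",          3, 3),
    ("Kou's fifth order",      5, 4),
    ("Kou's sixth order",      6, 4),
    ("Jarratt's fourth order", 4, 3),
    ("MFPIM",                  5, 2),
]
indices = {name: p ** (1.0 / m) for name, p, m in methods}
for name, value in indices.items():
    print(f"{name}: {value:.4f}")
```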
The efficiencies of the methods we have discussed are summarized in Table 1.

Table 1: Comparison of efficiencies of various methods

Method                    Number of function or derivative evaluations    Efficiency index
Newton, quadratic         2                                               2^(1/2) ≈ 1.4142
Cubic methods             3                                               3^(1/3) ≈ 1.4422
Kou's fifth order         4                                               5^(1/4) ≈ 1.4953
Kou's sixth order         4                                               6^(1/4) ≈ 1.5651
Jarratt's fourth order    3                                               4^(1/3) ≈ 1.5874
Secant                    1                                               0.5(1 + √5) ≈ 1.6180
MFPIM                     2                                               5^(1/2) ≈ 2.2361

5 Applications
In this section, we use the following five test functions to illustrate the efficiency of our modified two-step fixed point iterative method. We compare the modified two-step fixed point iterative method with Newton's method, Abbasbandy's method, Halley's method, the fixed point method, and the new second order iterative method [13]. Consider

f(x) = x + ln(x − 2),  g(x) = 2 + e^(−x),  x_0 = 2.2,
f(x) = x^3 + 4x^2 − 10,  g(x) = sqrt(10/(4 + x)),  x_0 = 1.5,
f(x) = x^2 − e^x − 3x + 2,  g(x) = ln(x^2 − 3x + 2),  x_0 = 0.8,
f(x) = x^3 + x^2 − 3x − 3,  g(x) = (3 + 3x − x^2)^(1/3),  x_0 = 1,
f(x) = x^3 + 4x^2 + 8x + 8,  g(x) = −(1 + (1/2)x^2 + (1/8)x^3),  x_0 = 1.

Table 2: Comparison of NM, AM, HM, FPM, NIM and MFPIM
(f(x) = x + ln(x − 2), g(x) = 2 + e^(−x), x_0 = 2.2)
Method    N     Nf    |f(x_{n+1})|
NM        5     10    7.291197e−36
AM        6     24    7.693693e−52
HM        3      9    9.455064e−34
FPM      32     32    5.410786e−30
NIM       4      8    5.895526e−46
MFPIM     2      4    7.525339e−71

x_{n+1} = 2.120028238987641229484687975272
Table 3: Comparison of NM, AM, HM, FPM, NIM and MFPIM
(f(x) = x^3 + 4x^2 − 10, g(x) = sqrt(10/(4 + x)), x_0 = 1.5)
Method    N     Nf    |f(x_{n+1})|
NM        4      8    1.357704e−40
AM        6     24    8.158206e−58
HM        3      9    3.335097e−33
FPM      34     34    6.189210e−30
NIM       4      8    3.335097e−33
MFPIM     2      4    6.382825e−63

x_{n+1} = 1.365230013414096845760806828982
Table 4: Comparison of NM, AM, HM, FPM, NIM and MFPIM
(f(x) = x^2 − e^x − 3x + 2, g(x) = ln(x^2 − 3x + 2), x_0 = 0.8)

Method    N     Nf    |f(x_{n+1})|
NM        5     10    2.850194e−34
AM        5     20    1.914886e−31
HM        4     12    5.757065e−60
FPM       –      –    Diverged
NIM       7     14    7.068808e−48
MFPIM     3      6    1.346194e−40

x_{n+1} = 0.257530285439860760455367304937
Table 5: Comparison of NM, AM, HM, FPM, NIM and MFPIM
(f(x) = x^3 + x^2 − 3x − 3, g(x) = (3 + 3x − x^2)^(1/3), x_0 = 1)

Method    N     Nf    |f(x_{n+1})|
NM        8     16    2.214414e−38
AM       13      4    1.720366e−41
HM        5     15    1.049054e−52
FPM      24     24    9.368499e−30
NIM       5     10    7.141257e−33
MFPIM     3      6    4.279250e−121

x_{n+1} = 1.732050807568877293527446341506
Table 6: Comparison of NM, AM, HM, FPM, NIM and MFPIM
(f(x) = x^3 + 4x^2 + 8x + 8, g(x) = −(1 + (1/2)x^2 + (1/8)x^3), x_0 = 1)

Method    N     Nf    |f(x_{n+1})|
NM        5     10    1.630799e−41
AM        5     20    2.888095e−32
HM        4     12    2.068718e−72
FPM      97     97    5.570340e−30
NIM       5     10    1.630799e−41
MFPIM     2      4    1.828628e−30

x_{n+1} = −2.000000000000000000000000000000
Tables 2–6 show the numerical comparisons of the modified two-step fixed point iterative method (MFPIM) with Newton's method (NM), Abbasbandy's method (AM), Halley's method (HM), the fixed point method (FPM), and the new second order iterative method (NIM). The columns give the number of iterations N and the number of function or derivative evaluations Nf required to meet the stopping criterion, together with the magnitude |f(x_{n+1})| of f at the final estimate x_{n+1}.
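As a quick consistency check on the test set, each rearrangement x = g(x) should have the corresponding root of f(x) = 0 as a fixed point. The sketch below is ours and is used only for checking: it finds each root with a plain Newton iteration using a central-difference derivative, then verifies the fixed point property of g.

```python
import math

def newton_numeric(f, x0, h=1e-7, tol=1e-12, max_iter=200):
    """Newton iteration with a central-difference derivative (for checking only)."""
    x = x0
    for _ in range(max_iter):
        d = (f(x + h) - f(x - h)) / (2.0 * h)
        x_new = x - f(x) / d
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# (f, g, x0) triples from Tables 2-6
tests = [
    (lambda x: x + math.log(x - 2),          lambda x: 2 + math.exp(-x),                2.2),
    (lambda x: x**3 + 4*x**2 - 10,           lambda x: math.sqrt(10.0 / (4.0 + x)),     1.5),
    (lambda x: x**2 - math.exp(x) - 3*x + 2, lambda x: math.log(x**2 - 3*x + 2),        0.8),
    (lambda x: x**3 + x**2 - 3*x - 3,        lambda x: (3 + 3*x - x**2) ** (1.0/3.0),   1.0),
    (lambda x: x**3 + 4*x**2 + 8*x + 8,      lambda x: -(1 + 0.5*x**2 + 0.125*x**3),    1.0),
]

roots = [newton_numeric(f, x0) for f, g, x0 in tests]
for (f, g, x0), r in zip(tests, roots):
    assert abs(g(r) - r) < 1e-6        # r is indeed a fixed point of g
```

The recovered roots agree with the x_{n+1} columns of Tables 2–6.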
6 Conclusions
A new modified two-step fixed point iterative method (MFPIM) for solving nonlinear equations has been established. From Tables 1 and 2–6 we can conclude the following.

1. The efficiency index of the MFPIM is 2.2361, which is larger than the efficiency index of most existing methods, including the methods discussed in Table 1.

2. The MFPIM has convergence of order five.

3. The performance of the MFPIM has also been discussed by means of some examples. The MFPIM performs very well in comparison to Newton's method (NM), Abbasbandy's method (AM), Halley's method (HM), the fixed point method (FPM), and the new second order iterative method (NIM), as discussed in Tables 2–6.
References

[1] S. Abbasbandy, Improving Newton-Raphson method for nonlinear equations by modified Adomian decomposition method, Appl. Math. Comput. 145 (2003), 887–893.
[2] I. K. Argyros, A note on the Halley method in Banach spaces, Appl. Math. Comput. 58 (1993), 215–224.
[3] E. Babolian and J. Biazar, On the order of convergence of Adomian method, Appl. Math. Comput. 130 (2002), 383–387.
[4] E. Babolian and J. Biazar, Solution of nonlinear equations by modified Adomian decomposition method, Appl. Math. Comput. 132 (2002), 167–172.
[5] R. L. Burden and J. D. Faires, Numerical Analysis (Second ed.), Brooks/Cole Publishing Co., California, 1998.
[6] M. Frontini and E. Sormani, Some variant of Newton's method with third-order convergence, Appl. Math. Comput. 140 (2003), 419–426.
[7] J. M. Gutiérrez and M. A. Hernández, A family of Chebyshev-Halley type methods in Banach spaces, Bull. Austral. Math. Soc. 55 (1997), 113–130.
[8] J. M. Gutiérrez and M. A. Hernández, An acceleration of Newton's method: super-Halley method, Appl. Math. Comput. 117 (2001), 223–239.
[9] H. H. H. Homeier, A modified Newton method for root finding with cubic convergence, J. Comput. Appl. Math. 157 (2003), 227–230.
[10] E. Isaacson and H. B. Keller, Analysis of Numerical Methods, John Wiley & Sons, New York, 1966.
[11] P. Jarratt, Some efficient fourth order multipoint methods for solving equations, BIT 9 (1969), 119–124.
[12] J. Kou, The improvements of modified Newton's method, Appl. Math. Comput. 189 (2007), 602–609.
[13] S. M. Kang, A. Rafiq and Y. C. Kwun, A new second-order iteration method for solving nonlinear equations, Abstr. Appl. Anal. 2013 (2013), Article ID 487062, 4 pages.
[14] T. J. McDougall and S. J. Wotherspoon, A simple modification of Newton's method to achieve convergence of order 1 + √2, Appl. Math. Lett. 29 (2014), 20–25.
[15] A. Quarteroni, R. Sacco and F. Saleri, Numerical Mathematics, Springer-Verlag, New York, 2000.
[16] F. Soleymani, S. K. Khattri and S. K. Vanani, Two new classes of optimal Jarratt-type fourth-order methods, Appl. Math. Lett. 25 (2012), 847–853.
[17] J. Stoer and R. Bulirsch, Introduction to Numerical Analysis (Third ed.), Springer-Verlag, New York, 2002.
[18] S. Weerakoon and T. G. I. Fernando, A variant of Newton's method with accelerated third-order convergence, Appl. Math. Lett. 13 (2000), 87–93.