Applied Mathematics Letters 61 (2016) 67–72
An efficient method based on progressive interpolation for solving non-linear equations

Xiao-Diao Chen, Yubao Zhang, Jiaer Shi, Yigang Wang∗

Key Laboratory of Complex Systems Modeling and Simulation, Hangzhou Dianzi University, Hangzhou, China
Article history: Received 16 April 2016; Received in revised form 12 May 2016; Accepted 12 May 2016; Available online 24 May 2016

Keywords: Root finding; Progressive interpolation method; Convergence rate; Efficiency index; Non-linear equations
Abstract: The root-finding problem for a univariate polynomial is a fundamental and long-studied problem with wide applications in mathematics, engineering, computer science, and the natural sciences. This paper presents a progressive-interpolation-based method for computing a simple root within a given interval; the method has convergence order 3 · 2^{n−3} and needs n functional evaluations of the given function f(t). The new method guarantees convergence and achieves a better efficiency index, and it requires no evaluations of the derivatives of f(t). Numerical examples confirm the convergence order of the progressive method. © 2016 Elsevier Ltd. All rights reserved.
∗ Corresponding author. E-mail address: [email protected] (Y. Wang).
http://dx.doi.org/10.1016/j.aml.2016.05.007

1. Introduction

In the past years, many modified iterative methods have been developed for finding the simple roots of a nonlinear equation f(t) = 0, a common and important problem in science and engineering [1–4]. Several classical methods, such as the Newton, Halley, and Ostrowski methods, have been refined to improve the local order of convergence [5–10]. Usually, the higher the convergence order p of an iterative method, the larger the number n of functional evaluations per step. The efficiency index, defined as p^{1/n}, measures the balance between these two quantities [11]. The efficiency indexes of the classical Newton, Newton–Secant, and Ostrowski methods are 2^{1/2} ≈ 1.414, 3^{1/3} ≈ 1.442, and 4^{1/3} ≈ 1.587, respectively. Several methods with optimal eighth-order convergence have been developed, whose efficiency index is 8^{1/4} ≈ 1.682 [2,12–14]. In [11], Li, Mu, Ma, and Wang presented a method of sixteenth convergence order, whose efficiency index is 16^{1/6} ≈ 1.587. Yun provided an iterative method for computing a simple root of
f(t) [15]. The above methods may diverge in some cases even if the initial value is close to the simple root. In this paper, we provide a progressive-interpolation-based method for computing the simple root t⋆ ∈ [a, b] of f(t). Let t1 = a, t2 = (a + b)/2, and t3 = b. Different from Newton's method, which utilizes the Taylor expansion for approximating f(t), we try to find a rational function y_i(t) = Y_i(t)/X_i(t) interpolating f(t) at the points t = t_j, j = 1, 2, . . . , 2 + i, and the root t_{3+i} of y_i(t) is taken as the approximation of t⋆, where i = 1, 2, . . . , n, Y_i(t) is a quadratic polynomial, and X_i(t) is a polynomial of degree i − 1. In principle, the method achieves order 3 · 2^{n−3} with n functional evaluations per step, where n ≥ 3. It ensures convergence even without good initial values, and it achieves a better efficiency index.

2. The progressive interpolation-based method

We progressively compute y_i(t) = Y_i(t)/X_i(t) interpolating f(t) at the points t = t_j, j = 1, 2, . . . , 2 + i, which can be obtained by solving the equation system

y_i(t_j) = f(t_j),   i = 1, 2, . . . , n, and j = 1, 2, . . . , 2 + i,   (1)
and utilize the root t_{n+3} of y_n(t) to approximate t⋆. We show an example to illustrate the details.

Example 1. We take f(t) = (t − 1)³ − 1, t ∈ [1.8, 2.4], as the given function, which has the simple root t⋆ = 2. In this case, t1 = 1.8, t2 = 2.1, and t3 = 2.4. At the beginning, i = 1, we obtain the quadratic polynomial y1(t) and its root t4 = 2.0026. We then progressively compute y_i(t) by solving Eq. (1) for i = 2, 3, 4. In this case, we have t4 − t⋆ = 2.6e−3, t5 − t⋆ = 2.1e−6, t6 − t⋆ = 2.1e−12, and t7 − t⋆ = 2.9e−24, which coincide with the results of Theorem 2.

Combining the constraints of Eq. (1) with the assumption that t⋆ is a simple root of f(t) within [a, b], we have y_i(a) · y_i(b) = f(a) · f(b) < 0, which means that y_i(t) has at least one real root t_{i+3} ∈ [a, b]. Since each iterate t_{i+3} stays inside [a, b], the algorithm cannot diverge, which ensures convergence. We introduce Theorem 3.5.1 on page 67, Chapter 3.5, of [16] as follows.

Theorem 1. Let w0, w1, . . . , wr be r + 1 distinct points in [a, b], and n0, . . . , nr be r + 1 integers ≥ 0. Let N = n0 + · · · + nr + r. Suppose that g(t) is a polynomial of degree N such that g^{(i)}(w_j) = f^{(i)}(w_j), i = 0, . . . , n_j, j = 0, . . . , r. Then there exists ξ1(t) ∈ [a, b] such that

f(t) − g(t) = (f^{(N+1)}(ξ1(t))/(N + 1)!) · ∏_{i=0}^{r} (t − w_i)^{n_i}.
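The progressive construction of Eq. (1) and the numbers of Example 1 can be reproduced with a short NumPy sketch. This is our own illustration, not the authors' code: the function and variable names are ours, and we normalize the constant term of X_i(t) to 1 (a choice the paper does not specify) so that each interpolation step becomes a square linear system in the remaining coefficients.

```python
import numpy as np

def progressive_root(f, a, b, n=6):
    """Sketch of the progressive rational-interpolation method.

    Builds y_i(t) = Y_i(t)/X_i(t) with Y_i quadratic and X_i of degree
    i - 1 (constant term normalized to 1), interpolating f at
    t_1, ..., t_{2+i}; the root of the quadratic numerator inside [a, b]
    becomes the next interpolation point.
    """
    ts = [a, (a + b) / 2.0, b]                 # t1, t2, t3
    fs = [f(t) for t in ts]                    # 3 initial evaluations
    root = None
    for i in range(1, n - 1):                  # i = 1, ..., n - 2
        m = 2 + i                              # interpolation conditions
        A = np.zeros((m, m))
        rhs = np.zeros(m)
        for r in range(m):
            t, ft = ts[r], fs[r]
            A[r, 0:3] = [1.0, t, t * t]        # coefficients of Y_i
            for k in range(1, i):              # X_i(t) = 1 + c_1 t + ...
                A[r, 2 + k] = -ft * t ** k
            rhs[r] = ft                        # from Y_i(t_j) = f(t_j) X_i(t_j)
        y0, y1, y2 = np.linalg.solve(A, rhs)[:3]
        # Real root of the quadratic numerator Y_i lying inside [a, b].
        cand = [z.real for z in np.roots([y2, y1, y0])
                if abs(z.imag) < 1e-10 and a <= z.real <= b]
        root = min(cand, key=lambda z: abs(z - ts[-1]))
        if i < n - 2:                          # keep the total at n evaluations
            ts.append(root)
            fs.append(f(root))
    return root
```

For f(t) = (t − 1)³ − 1 on [1.8, 2.4], the sketch produces the iterates t4 ≈ 2.0026, t5 ≈ 2.000002, and so on, matching the error progression reported in Example 1 until double-precision limits are reached.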
Note that t⋆ is a simple root of f(t), so f(t⋆) = 0 and f′(t⋆) ≠ 0. From the Taylor expansion, there exists ξ2(t) ∈ [a, b] such that

f(t) − f(t⋆) = f′(ξ2(t))(t − t⋆),   ∀t ∈ [a, b].   (2)

Let h = b − a. If h is small enough, combining with Eq. (2), we have

|t − t⋆| = O(|f(t) − f(t⋆)|).   (3)
We have the following theorem.

Theorem 2. Suppose that h is small enough that Eq. (3) is satisfied. Then

|t⋆ − t_{3+i}| = O(h^{3·2^{i−1}}),   i = 1, 2, . . . , n.   (4)
Table 1
Efficiency indexes (EI) and convergence orders (CO) for different n.

n     3      4      5      6      7      8      9
CO    3      6      12     24     48     96     192
EI    1.442  1.565  1.643  1.698  1.738  1.769  1.793
Proof. We prove it by induction.

(1) Firstly, we prove that the case i = 1 is true, i.e., |t⋆ − t4| = O(h³). Let H1(t) = f(t) − y1(t). Combining Eq. (1) with Theorem 1, there exists ξ3(t) ∈ [a, b] such that

|H1(t)| = |(f^{(3)}(ξ3(t))/3!) (t − t1)(t − t2)(t − t3)| = O(|(t − t1)(t − t2)(t − t3)|) = O(h³),   ∀t ∈ [a, b].   (5)

Combining Eq. (3) with Eq. (5), we have |t⋆ − t4| = O(|f(t⋆) − f(t4)|) = O(|y1(t4) − f(t4)|) = O(h³).

(2) Secondly, assume that the claim is true for all i < n, i.e., |t⋆ − t_{3+i}| = O(h^{3·2^{i−1}}), ∀i < n. We prove that it is also true for the case i = n, i.e., |t⋆ − t_{3+n}| = O(h^{3·2^{n−1}}). Note that t_{3+n} is a root of y_n(t); from Eq. (3), we have

|t⋆ − t_{3+n}| = O(|f(t⋆) − f(t_{3+n})|) = O(|f(t_{3+n}) − y_n(t_{3+n})|).   (6)

Combining Eq. (6) and Eq. (1) with Theorem 1, we obtain

|t⋆ − t_{3+n}| = O(|f(t_{3+n}) − y_n(t_{3+n})|) = O(∏_{j=1}^{2+n} |t_{3+n} − t_j|).   (7)

From Eq. (7), we have

|t_{3+n} − t_j| = |(t_{3+n} − t⋆) + (t⋆ − t_j)| = O(|t⋆ − t_j|).   (8)

Combining Eq. (7) with Eq. (8), we obtain

|t⋆ − t_{3+n}| = O(∏_{j=1}^{2+n} |t⋆ − t_j|).   (9)

Similarly, we have |t⋆ − t_{2+n}| = O(∏_{j=1}^{1+n} |t⋆ − t_j|), which leads to

|t⋆ − t_{3+n}| = O(∏_{j=1}^{1+n} |t⋆ − t_j|) · |t⋆ − t_{2+n}| = O(|t⋆ − t_{2+n}|²).   (10)

Combining Eq. (10) with the induction assumption, we have |t⋆ − t_{3+n}| = O(h^{3·2^{n−1}}). This completes the proof.
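The entries of Table 1 follow directly from the convergence order CO = 3 · 2^{n−3} and the definition EI = CO^{1/n}. A quick sketch (function names are ours) reproduces them; note the table truncates EI to three decimals:

```python
# Convergence order (CO) and efficiency index (EI) of the progressive
# method as functions of n, the number of functional evaluations.
def convergence_order(n: int) -> int:
    return 3 * 2 ** (n - 3)

def efficiency_index(n: int) -> float:
    return convergence_order(n) ** (1.0 / n)

for n in range(3, 10):
    print(n, convergence_order(n), f"{efficiency_index(n):.3f}")
```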
From Theorem 2, given a number n, we need to compute f(t_i), i = 1, 2, . . . , n, and achieve the convergence order 3 · 2^{n−3}. From Eq. (10), we have |t_{n+3} − t⋆| = O(|t_{n+3} − t_{n+2}|²), which can be used to estimate the minimum n satisfying a given error tolerance. The efficiency indexes of several cases are listed in Table 1, where CO and EI denote the convergence order and the efficiency index, respectively.

3. Numerical results

Let MLi, MSh, MYun, and Mnew denote the methods in [11,14,15] and the method in this paper, respectively, where q in [15] is set to 2. In this section, n = 6 is used and tested in Mnew, which achieves
Table 2
Comparisons of Example 2 between different methods.

x0 / interval   Method   n = 3    n = 4     n = 5     n = 6
1.9             MYun     5.5e−6   3.0e−11   8.9e−22   7.9e−43
2.11            MYun     3.2e−4   1.0e−7    1.1e−14   1.2e−28
[1.9, 2.11]     Mnew     1.8e−5   1.1e−10   6.2e−21   2.5e−41
0.81            MYun     1.1e−1   1.1e−2    1.2e−4    1.4e−8
3.2             MYun     5.3e−1   1.6e−1    2.2e−2    4.6e−4
[0.81, 3.2]     Mnew     1.6e−3   1.3e−6    1.0e−12   1.1e−24

Method   i           x0 = 1.9   x0 = 2.11   x0 = 0.81   x0 = 3.2
MLi      1 (n = 6)   4.2e−18    1.4e−19     4.7e+2      1.5e−6
MLi      2 (n = 12)  3.9e−281   9.0e−305    5.6         2.7e−96
MSh      1 (n = 4)   4.4e−9     2.8e−9      8.1         6.7e−3
MSh      2 (n = 8)   4.5e−68    1.2e−69     1.3         1.1e−18
Table 3
Comparisons of Example 3 between different methods.

         MYun   MLi        MSh    Mnew
i = 1    2.62   1.6e+147   35.9   2.3e−35
i = 2    2.57   /          /      7.5e−2186
the convergence order 24 and the efficiency index 1.698; t⋆ and x0 denote the simple root and an initial value, respectively. Unless stated otherwise, the initial interval used in Mnew is defined as [x0, 2t⋆ − x0 + ε] if x0 < t⋆, and [2t⋆ − x0 − ε, x0] otherwise, where ε = 10^{−5}. In the following tables, the two iterative processes of MLi, MYun, and Mnew, and the three of MSh, each need 12 functional evaluations in total, where n and i denote the number of functional evaluations in one process and the ith iterative process, respectively. The tables show that the results of MLi, MSh, and MYun are sensitive to the selection of initial values, and may even diverge, while the performance of Mnew is much more stable.

Example 2 ([11]). Given f1(t) = (t − 1)³ − 1 with the exact root t⋆ = 2. We choose four initial values x0 = 1.9, 2.11, 0.81, and 3.2 for MYun and Mnew. The comparison results on the errors for different n are shown in Table 2. They show that both MYun and Mnew double the convergence order each time n is increased by 1, where n ≥ 3. We run MLi and MSh twice, which shows that MLi and MSh achieve convergence orders 16 and 8, respectively. In this case, Mnew achieves the best performance among the four methods, while that of MYun is better than those of MLi and MSh.

Example 3 ([11]). Given f2(t) = 10^{150−5t²} − 1 with the exact root t⋆ = 5.4772 and the initial value x0 = 5.494. The corresponding results are shown in Table 3. As shown in Table 3, both MLi and MSh diverge (denoted by "/"), and MYun does not converge within 80 iterative processes, while Mnew converges and achieves a much better result.

Example 4 (Abstracted from [11]). Let

f3(t) = e^{−t²+t+3} − t + 2, t⋆ = 2.490, x0 = 2.3;   f4(t) = e^t − 1, t⋆ = 0, x0 = 0.14;
f5(t) = t³ − 10, t⋆ = 2.154, x0 = 2;   f6(t) = e^{t²+7t−30} − 1, t⋆ = 3, x0 = 2.97;
f7(t) = t⁵ + t − 10⁴, t⋆ = 6.308, x0 = 6;   f8(t) = √t − 1/t − 3, t⋆ = 9.633, x0 = 9.4;
f9(t) = e^t + t − 20, t⋆ = 2.842, x0 = 2.5;   f10(t) = ln(t) + √t − 5, t⋆ = 8.309, x0 = 7.9.

Table 4 shows the results, where "CO" denotes the numerical convergence order. In Table 4, the CO of MLi, MYun, and Mnew is computed as ln(E12)/ln(E6), while that of MSh is computed as (ln(E12)/ln(E8))^{3/2}, where En is the error after n functional evaluations. The results show that Mnew achieves smaller errors and better performance than MLi, MSh, and MYun.
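The eight test functions of Example 4 are easy to transcribe incorrectly from the original layout, so a small sanity check is useful. The sketch below (our own code) encodes each f_k together with its quoted root and verifies that f changes sign in a ±10^{−3} bracket around the quoted value, which is the right test since the roots are reported to only three decimal places:

```python
import math

# Test functions of Example 4 with their quoted simple roots (3 d.p.).
cases = [
    (lambda t: math.exp(-t * t + t + 3) - t + 2, 2.490),   # f3
    (lambda t: math.exp(t) - 1, 0.0),                      # f4
    (lambda t: t ** 3 - 10, 2.154),                        # f5
    (lambda t: math.exp(t * t + 7 * t - 30) - 1, 3.0),     # f6
    (lambda t: t ** 5 + t - 10 ** 4, 6.308),               # f7
    (lambda t: math.sqrt(t) - 1 / t - 3, 9.633),           # f8
    (lambda t: math.exp(t) + t - 20, 2.842),               # f9
    (lambda t: math.log(t) + math.sqrt(t) - 5, 8.309),     # f10
]

for f, r in cases:
    # A sign change on [r - 1e-3, r + 1e-3] confirms the quoted root.
    assert f(r - 1e-3) * f(r + 1e-3) < 0
```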
Table 4
Errors of Example 4 for the different methods.

Method            f3(t)     f4(t)     f5(t)     f6(t)     f7(t)     f8(t)      f9(t)     f10(t)
Mnew   n = 6      1e−55     2e−64     3e−70     6e−54     1e−68     1e−86      2e−63     2e−81
       n = 12     6e−3513   1e−4252   9e−4462   1e−3416   6e−4396   7e−5580    5e−4076   1e−5238
       CO         64.2      66.6      64.4      64.5      64.3      64.6       64.7      65.1
MYun   n = 6      6e−24     3e−39     3e−21     5e−21     4e−8      8e−64      1e−3      1e−53
       n = 12     1e−1481   6e−2485   5e−1339   1e−1246   3e−506    3e−4101    1e−206    1e−3449
       CO         63.7      64.5      65.0      61.5      68.2      65.1       71.3      65.1
MLi    n = 6      1e−12     5e−20     2e−20     6e−13     3e−17     7e−32      3e−13     3e−26
       n = 12     2e−188    1e−314    2e−322    2e−185    2e−274    4e−520     7e−208    3e−428
       CO         15.7      16.3      16.3      15.1      16.6      16.7       16.6      16.7
MSh    n = 4      2e−6      2e−10     6e−10     1e−7      1e−8      3e−16      4e−7      2e−13
       n = 8      4e−46     2e−80     4e−77     1e−51     4e−68     4e−136     5e−55     5e−113
       n = 12     5e−364    3e−641    5e−615    1e−403    2e−544    2e−1094    3e−438    3e−909
       CO         22.4      22.7      22.7      22.4      22.9      22.9       22.9      22.9
4. Conclusion

This paper presents a progressive-interpolation-based method. It achieves convergence order 3 · 2^{n−3} using n functional evaluations. In principle, it ensures convergence within an interval containing a simple root. Numerical examples show that the new method achieves better performance than the classical Newton's method and other prevailing methods. As for future work, one further challenge is to obtain the optimal convergence order. Another is to extend Mnew to multipoint cases.

Acknowledgment

This research was partially supported by the National Science Foundation of China (61370218 and 61502130).

References

[1] W. Bi, Q. Wu, H. Ren, A new family of eighth-order iterative methods for solving nonlinear equations, Appl. Math. Comput. 214 (1) (2009) 236–245.
[2] H. Kung, J. Traub, Optimal order of one-point and multi-point iteration, J. ACM 21 (4) (1974) 643–651.
[3] M. Noor, W. Khan, New iterative methods for solving nonlinear equation by using homotopy perturbation method, Appl. Math. Comput. 219 (8) (2012) 3565–3574.
[4] J. Sharma, R. Sharma, A new family of modified Ostrowski's methods with accelerated eighth order convergence, Numer. Algorithms 54 (4) (2010) 445–458.
[5] M. Grau-Sánchez, J. Díaz-Barrero, An improvement to Ostrowski root-finding method, Appl. Math. Comput. 173 (1) (2006) 450–456.
[6] M. Grau-Sánchez, J. Díaz-Barrero, Some sixth order zero-finding variants of Chebyshev–Halley methods, Appl. Math. Comput. 211 (1) (2009) 108–110.
[7] M. Grau-Sánchez, Á. Grau, M. Noguera, Ostrowski type methods for solving systems of nonlinear equations, Appl. Math. Comput. 218 (6) (2011) 2377–2385.
[8] T. Eftekhari, A new proof of interval extension of the classic Ostrowski's method and its modified method for computing the enclosure solutions of nonlinear equations, Numer. Algorithms 69 (1) (2014) 157–165.
[9] T.J. McDougall, S.J. Wotherspoon, A simple modification of Newton's method to achieve convergence of order 1 + √2, Appl. Math. Lett. 29 (1) (2014) 20–25.
[10] C. Chun, B. Neta, A new sixth-order scheme for nonlinear equations, Appl. Math. Lett. 25 (2) (2012) 185–189.
[11] X. Li, C. Mu, J. Ma, C. Wang, Sixteenth-order method for nonlinear equations, Appl. Math. Comput. 215 (10) (2010) 3754–3758.
[12] A. Cordero, J. Torregrosa, M. Vassileva, A family of modified Ostrowski's methods with optimal eighth order of convergence, Appl. Math. Lett. 24 (12) (2011) 2082–2086.
[13] J. Sharma, H. Arora, An efficient family of weighted-Newton methods with optimal eighth order convergence, Appl. Math. Lett. 29 (1) (2014) 1–6.
[14] J. Sharma, H. Arora, A new family of optimal eighth order methods with dynamics for nonlinear equations, Appl. Math. Comput. 273 (1) (2016) 924–933.
[15] B. Yun, A family of optimal multipoint root-finding methods based on the interpolating polynomials, Appl. Math. Sci. 35 (8) (2014) 1723–1730. [16] P. Davis, Interpolation and Approximation, Dover Publications, New York, 1975.