Iterative methods for simultaneous computing arbitrary number of multiple zeros of nonlinear equations

Gyurhan H. Nedzhibov*

Faculty of Mathematics and Informatics, Shumen University, Shumen 9700, Bulgaria

Abstract. In this work we present two modifications of a previously proposed iterative method for solving nonlinear equations, adapted to the case of multiple zeros. The new methods are two-point, derivative-free iterative methods for the simultaneous extraction of one or several of the multiple zeros. It is proved that the proposed methods are locally convergent with quadratic order. Numerical examples are given to illustrate the efficiency and performance of the presented methods.

Key words: Nonlinear equations, Root-finding methods, Simultaneous methods, Order of convergence

1 Introduction

We consider the problem of solving the nonlinear equation

    f(x) = 0,     (1)

where f : D ⊆ R → R is a nonlinear, continuous function on D. The problem of solving such nonlinear equations is one of the oldest problems of applied mathematics. It still remains an important research topic, arising in many areas of the engineering sciences, physics, computer science, finance, and so on. Many numerical methods have been developed for solving this problem. Basically, the methods concerning this topic may be classified into two groups: methods for finding only one root of a nonlinear equation at a time, and methods for finding all roots of algebraic polynomials simultaneously. We should note that the methods for the simultaneous finding of all roots of a polynomial have some advantages, e.g., they have a wider region of convergence, they are more stable, and they can be implemented for parallel computing. The list of publications concerning this topic is very extensive. We refer the reader to some interesting publications on the two groups, respectively, published in the last few years: [7], [8], [9], [10], [11], [12] and [13], [14], [15], [16], [17], [18], [19], [20], [21], [22], [23], [24], [25], [26]. For some classical books, monographs and summary papers covering many of the valuable results, see [27], [28], [29], [30], [31], [32]. However, in practice it is sometimes necessary to find more than one, or only a few, zeros of a given nonlinear equation. In recent years, the problem of the simultaneous extraction of only a part of all roots of a polynomial has become very popular. One of the first works on this topic

* Corresponding author. E-mail address: [email protected] (G. Nedzhibov).


dates from 1966, by S. Presic [2]. Some recent results are presented by N. Kyurkchiev and A. Iliev in [3, 4, 5]; see also [6]. In our recent publication [1] we introduced a new iterative method for the simultaneous extraction of several simple zeros of a nonlinear equation. In this study we extend the suggested method to the case of multiple zeros. The paper is organized as follows. Some preliminary results are included in Section 2. In Section 3 we generalize the theorems of Section 2 to a more general simultaneous iterative method. The new modified iterative methods are presented in Section 4, where we prove that the new modifications have second order of convergence. Numerical examples are given in Section 5 to demonstrate the convergence behavior of the methods considered.

2 Preliminary results

Let I_m = {1, ..., m} be the index set. In [1] we have presented the following two-point iterative method for the simultaneous extraction of m simple zeros of (1):

    y_i^{(k)} = x_i^{(k)} - G_i^{(k)}, \quad i \in I_m,
    x_i^{(k+1)} = x_i^{(k)} - \frac{f\big(x_i^{(k)}\big)}{f\big[x_i^{(k)}, y_i^{(k)}\big]}, \quad k = 0, 1, 2, \ldots,     (2)

where G_i^{(k)} is the quantity

    G_i^{(k)} = G_i\big(x_1^{(k)}, \ldots, x_m^{(k)}\big) = \frac{f\big(x_i^{(k)}\big)}{\prod_{j\ne i}^{m}\big(x_i^{(k)} - x_j^{(k)}\big)}, \quad i \in I_m.
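To make the computational pattern concrete, here is a minimal Python sketch of one sweep of method (2); the function name and the array layout are illustrative choices, not part of the original formulation. Only values of f are needed, no derivatives.

import numpy as np

def simultaneous_step(f, x):
    """One sweep of method (2): x holds the m current approximations."""
    x = np.asarray(x, dtype=float)
    m = len(x)
    x_new = np.empty(m)
    for i in range(m):
        # G_i = f(x_i) / prod_{j != i} (x_i - x_j)
        G = f(x[i]) / np.prod([x[i] - x[j] for j in range(m) if j != i])
        if G == 0.0:                          # x_i is already a zero of f
            x_new[i] = x[i]
            continue
        y = x[i] - G
        dd = (f(x[i]) - f(y)) / (x[i] - y)    # divided difference f[x_i, y_i]
        x_new[i] = x[i] - f(x[i]) / dd
    return x_new

For m = 1 the empty product equals one, so the sweep reduces to the single-root scheme (4) discussed below.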

The notation f[·, ·] represents the divided difference of order one. For simplicity, in what follows we omit the iteration index. The following two theorems give the convergence results for the presented method in both cases: for polynomials and for general nonlinear functions.

2.1 Application to polynomials

We restrict our consideration to f being an algebraic (monic) polynomial of degree n with simple zeros α1, α2, ..., αn. The following theorem is concerned with the convergence order of the proposed iterative method (2).

Theorem 1 If x1^(0), x2^(0), ..., xm^(0) (m ≥ 1) are sufficiently close approximations to the simple zeros α1, α2, ..., αm of a polynomial f of degree n (and m ≤ n), then the order of convergence of the iterative method (2) is two.

Remark 1. In the case of extracting all of the zeros of a polynomial f of degree n, i.e. m = n, from (2) we get the known third-order iterative method

    y_i = x_i - \frac{f(x_i)}{\prod_{j\ne i}^{n}(x_i - x_j)}, \qquad \hat{x}_i = x_i - \frac{f(x_i)}{f[x_i, y_i]}, \quad i \in I_n,     (3)

suggested and analyzed in [14, 15].

2.2 Application to nonlinear equations

The following theorem is concerned with the order of convergence of method (2) for a nonlinear, continuous and sufficiently differentiable function f : D ⊂ R → R.

Theorem 2 Let α1, α2, ..., αm be simple zeros of the nonlinear function f. If x1^(0), x2^(0), ..., xm^(0) are sufficiently close approximations to α1, α2, ..., αm, then the order of convergence of the iterative method (2) is two.

Remark 2. In the case of approximating only one root of the nonlinear equation (1), i.e. m = 1, from (2) we get the well-known second-order iterative method

    \hat{x}_i = x_i - \frac{f(x_i)}{f[x_i, y_i]} = x_i - \frac{f(x_i)^2}{f(x_i) - f(y_i)},     (4)

where y_i = x_i - f(x_i), see [27].
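A sketch of one step of (4) in Python (the function name is an illustrative choice, not taken from the paper); it uses two evaluations of f and no derivatives.

def single_root_step(f, x):
    """One step of the derivative-free second-order scheme (4) with y = x - f(x)."""
    fx = f(x)
    y = x - fx
    return x - fx * fx / (fx - f(y))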

3 Extended modifications

In this section we suggest the following generalized version of method (2):

    y_i = x_i - f(x_i)\,\varphi_i(x), \qquad \hat{x}_i = x_i - \frac{f(x_i)}{f[x_i, y_i]}, \quad i = 1, \ldots, m,     (5)

where the φi(x) are continuous functions with φi(α) ≠ 0 for i = 1, ..., m.
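Two particular choices of φi, obtained by direct substitution into (5), recover the schemes discussed above; in the first case the condition φi(α) ≠ 0 holds because the zeros α1, ..., αm are distinct:

    \varphi_i(x) = \frac{1}{\prod_{j\ne i}^{m}(x_i - x_j)} \;\Longrightarrow\; \text{(5) coincides with (2)}, \qquad \varphi_i(x) \equiv 1 \;\Longrightarrow\; \text{(5) coincides with (4)}.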

3.1 Application to polynomials

We restrict our consideration to an algebraic (monic) polynomial of degree n,

    f(x) = x^n + a_{n-1}x^{n-1} + \ldots + a_1 x + a_0,     (6)

with simple zeros α1, α2, ..., αn. In this case the following theorem shows that the proposed iterative method (5) is locally convergent with quadratic order.

Theorem 3 If x1, x2, ..., xm (m ≥ 1) are sufficiently close approximations to the simple zeros α1, α2, ..., αm of a polynomial f of degree n (where m ≤ n) defined by (6), and there exist functions φi(x) = φi(x1, ..., xm) such that φi(α1, ..., αm) ≠ 0, then the iterative method (5) generates a convergent process with second order of convergence.

Proof. Let us denote εi = xi − αi, ε̂i = x̂i − αi and vi = vi(x) = f(xi) φi(x). According to the assumption of the theorem, the errors ε1, ε2, ..., εm are sufficiently small in moduli. Let us assume that the errors ε1, ε2, ..., εm are of the same order in moduli and let |εi| = O(|ε|), where ε ∈ {ε1, ε2, ..., εm} is the error such that |ε| = max_{1≤k≤m} |εk|.

Similarly, |ε̂| = max_{1≤k≤m} |ε̂k|. Let us use the notation β = O_m(γ) for two complex numbers β and γ if their moduli are of the same order, i.e. |β| = O(|γ|). Using the definition of the divided difference,

    f[x_i, y_i] = \frac{f(x_i) - f(y_i)}{x_i - y_i} = \frac{f(x_i) - f(y_i)}{v_i(x)},

we can represent (5) in the following way:

    \hat{x}_i = x_i - v_i(x)\,\frac{f(x_i)}{f(x_i) - f(y_i)}, \quad i \in I_m,     (7)

where y_i = x_i - v_i(x). Observe that

    f(x_i) = \varepsilon_i \prod_{j\ne i}^{n}(x_i - \alpha_j) = O_m(\varepsilon) \quad \text{and} \quad v_i = O_m(f(x_i)) = O_m(\varepsilon).     (8)

Further we use the following representation:

    \frac{f(y_i)}{f(x_i)} = \prod_{j=1}^{n}\frac{y_i - \alpha_j}{x_i - \alpha_j} = \frac{y_i - \alpha_i}{x_i - \alpha_i}\prod_{j\ne i}^{n}\frac{y_i - \alpha_j}{x_i - \alpha_j} = \frac{\varepsilon_i - v_i}{\varepsilon_i}\prod_{j\ne i}^{n}\frac{y_i - \alpha_j}{x_i - \alpha_j}.     (9)

Using that

    \prod_{j\ne i}^{n}\frac{y_i - \alpha_j}{x_i - \alpha_j} = \prod_{j\ne i}^{n}\frac{x_i - v_i - \alpha_j}{x_i - \alpha_j} = \prod_{j\ne i}^{n}\left(1 - \frac{v_i}{x_i - \alpha_j}\right) = 1 - \sum_{j\ne i}^{n} P_{ij}\,\frac{v_i}{x_i - \alpha_j},

where P_{ij} is a polynomial of \frac{v_i}{x_i - \alpha_j}, it follows that

    \prod_{j\ne i}^{n}\frac{y_i - \alpha_j}{x_i - \alpha_j} = 1 - O(\varepsilon_i).

From the last estimate and (9) we find

    \frac{f(y_i)}{f(x_i)} = \frac{\varepsilon_i - v_i}{\varepsilon_i}\,(1 - O(\varepsilon_i)) = 1 - \frac{v_i}{\varepsilon_i} + \frac{v_i - \varepsilon_i}{\varepsilon_i}\,O(\varepsilon_i) = 1 - \frac{v_i}{\varepsilon_i}\,(1 - O_m(\varepsilon)).     (10)

Hence, (10) implies

    \frac{f(x_i)}{f(x_i) - f(y_i)} = \left(1 - \frac{f(y_i)}{f(x_i)}\right)^{-1} = \frac{\varepsilon_i}{v_i}\,(1 + O_m(\varepsilon)).

Further, from (7) it follows that

    |\hat{\varepsilon}_i| = |\hat{x}_i - \alpha_i| = \left|\varepsilon_i - v_i\,\frac{\varepsilon_i}{v_i}\,(1 + O_m(\varepsilon))\right| = O_m(\varepsilon^2).

Since O_m(\varepsilon^2) is the dominant term, we conclude that the order of convergence of the method (5) is two, which completes the proof of Theorem 3.

3.2 Application to nonlinear equations

In our considerations we make use of the following well-known theorem from the theory of iterative processes.

Theorem 4 (Traub [27, Theorem 2.2]) Let φ be an iterative function such that φ and its derivatives φ', ..., φ^(p) are continuous in a neighborhood of a root α of a given function f. Then φ defines an iterative method of order p if and only if

    \phi(\alpha) = \alpha, \quad \phi'(\alpha) = \ldots = \phi^{(p-1)}(\alpha) = 0, \quad \phi^{(p)}(\alpha) \ne 0.     (11)

The following theorem is concerned with the order of convergence of method (5) for a nonlinear, sufficiently differentiable function f : D ⊂ R → R.

Theorem 5 Let α1, α2, ..., αm be simple zeros of the nonlinear, sufficiently differentiable function f : D ⊂ R → R. If x1, x2, ..., xm are sufficiently close approximations to α1, α2, ..., αm, then the order of convergence of the iterative method (5) is two.

Proof. Let us fix i and put x ≡ α = (α1, ..., αm). Considering y_i = y_i(x) = x_i − v_i(x) as a function of x_i and using L'Hôpital's rule we get

    f[x_i, y_i]\big|_{x=\alpha} = \frac{f(x_i) - f(y_i)}{x_i - y_i}\bigg|_{x=\alpha} = \frac{f(\alpha_i) - f(y_i(\alpha))}{\alpha_i - y_i(\alpha)} = \frac{f'(\alpha_i) - f'(y_i(\alpha))\,y_i'(\alpha)}{1 - y_i'(\alpha)} = f'(\alpha_i).

Obviously, we have

    \hat{x}_i(\alpha) = \alpha_i - \frac{f(\alpha_i)}{f[\alpha_i, y_i(\alpha)]} = \alpha_i, \quad (i = 1, \ldots, m).     (12)

The first derivative of the function x̂_i with respect to x_i is

    \hat{x}_i' = 1 - \frac{f'(x_i)}{f[x_i, y_i]} + \frac{f(x_i)\,f[x_i, y_i]'_{x_i}}{f[x_i, y_i]^2}.

Using the Taylor series

    f(y_i) = f(x_i) - f'(x_i)(x_i - y_i) + \frac{f''(x_i)}{2!}(x_i - y_i)^2 - \frac{f'''(x_i)}{3!}(x_i - y_i)^3 + O\big((x_i - y_i)^4\big),

we get

    f[x_i, y_i] = \frac{f(x_i) - f(y_i)}{x_i - y_i} = f'(x_i) - \frac{f''(x_i)}{2!}(x_i - y_i) + \frac{f'''(x_i)}{3!}(x_i - y_i)^2 + O\big((x_i - y_i)^3\big).

Then it follows that

    f[x_i, y_i]'_{x_i}\big|_{x=\alpha} = \frac{f''(\alpha_i)}{2}\,(1 + y_i'(\alpha)) \ne 0.

Finally we obtain

    \hat{x}_i'(\alpha) = 0, \quad (i = 1, \ldots, m),     (13)

and

    \hat{x}_i''(\alpha) = \frac{f''(\alpha_i)}{f'(\alpha_i)}\,y_i'(\alpha) \ne 0, \quad (i = 1, \ldots, m).     (14)

From (12), (13), (14) and Theorem 4 it follows that the iterative function (5) has order of convergence two.

4 Generalizations for multiple zeros

In this section we suggest generalizations of the method (2) to the case of multiple zeros. Two cases arise in finding multiple zeros of nonlinear equations.

4.1 The case of preliminarily known multiplicities

We examine equation (1) where the function f has the particular form

    f(x) = (x - \alpha_i)^{s_i} \prod_{j\ne i}^{m} (x - \alpha_j)^{s_j}\, g(x),     (15)

where g : D ⊂ R → R is such a function that g(αi) ≠ 0 for i = 1, ..., m. In other words, α1, α2, ..., αm are roots of f having respective multiplicities s1, s2, ..., sm, with s1 + s2 + ... + sm = l. In the particular case where f is a polynomial of degree n, the function g(x) is a polynomial of degree n − l (where l ≤ n) and g(αi) ≠ 0 for i = 1, ..., m. We suggest the following iterative function:

    y_i = x_i - G_i, \qquad \hat{x}_i = x_i - G_i\,\frac{f(x_i)^{1/s_i}}{f(x_i)^{1/s_i} - f(y_i)^{1/s_i}}, \quad i = 1, \ldots, m,     (16)

where

    G_i = G_i(x_1, \ldots, x_m) = \frac{f(x_i)}{\prod_{j\ne i}^{m}(x_i - x_j)}.

The order of convergence of this method is analyzed in the following theorem.
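A minimal Python sketch of one sweep of (16), assuming the multiplicities s_i are supplied; the names are illustrative, and the signed real s_i-th root used below is a practical convention for the case f < 0 (the formulation above simply assumes f^{1/s_i} is well defined near the roots).

import numpy as np

def signed_root(v, s):
    """Real s-th root carrying the sign of v (a practical convention, not part of (16) itself)."""
    return np.copysign(abs(v) ** (1.0 / s), v)

def step_known_multiplicities(f, x, s):
    """One sweep of method (16); x: current approximations, s: known multiplicities s_i."""
    x = np.asarray(x, dtype=float)
    m = len(x)
    x_new = np.empty(m)
    for i in range(m):
        G = f(x[i]) / np.prod([x[i] - x[j] for j in range(m) if j != i])
        y = x[i] - G
        fx_r = signed_root(f(x[i]), s[i])
        fy_r = signed_root(f(y), s[i])
        if fx_r == fy_r:                  # degenerate step (e.g. x_i already a zero of f)
            x_new[i] = x[i]
            continue
        x_new[i] = x[i] - G * fx_r / (fx_r - fy_r)
    return x_new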

Theorem 6 Let α1, α2, ..., αm be multiple roots of the function f defined by (15). If x1, x2, ..., xm are sufficiently close approximations to the respective zeros α1, α2, ..., αm, then the iterative method (16) is convergent with order of convergence two.

Proof. Let us fix i and denote

    \psi(x_i) = \frac{f(x_i)^{1/s_i}}{f(x_i)^{1/s_i} - f(y_i)^{1/s_i}}.

Observe that

    f(x_i) = (x_i - \alpha_i)^{s_i} \prod_{j\ne i}^{m} (x_i - \alpha_j)^{s_j}\, g(x_i).

We will transform ψ(x_i) in the following way:

    \psi(x_i) = \frac{1}{1 - \sqrt[s_i]{\dfrac{f(y_i)}{f(x_i)}}} = \frac{1}{1 - \sqrt[s_i]{\dfrac{\prod_{j=1}^{m}(y_i - \alpha_j)^{s_j}\, g(y_i)}{\prod_{j=1}^{m}(x_i - \alpha_j)^{s_j}\, g(x_i)}}} = \frac{1}{1 - \sqrt[s_i]{\dfrac{(y_i - \alpha_i)^{s_i}}{\varepsilon_i^{s_i}} \prod_{j\ne i}^{m}\dfrac{(y_i - \alpha_j)^{s_j}}{(x_i - \alpha_j)^{s_j}}\,\dfrac{g(y_i)}{g(x_i)}}} = \frac{1}{1 - \dfrac{\varepsilon_i - G_i}{\varepsilon_i}\sqrt[s_i]{\prod_{j\ne i}^{m}\left(\dfrac{y_i - \alpha_j}{x_i - \alpha_j}\right)^{s_j}\dfrac{g(y_i)}{g(x_i)}}}.     (17)

Let us examine

    \sqrt[s_i]{\prod_{j\ne i}^{m}\left(\frac{y_i - \alpha_j}{x_i - \alpha_j}\right)^{s_j}} = \sqrt[s_i]{\prod_{j\ne i}^{m}\left(\frac{x_i - G_i - \alpha_j}{x_i - \alpha_j}\right)^{s_j}} = \sqrt[s_i]{\prod_{j\ne i}^{m}\left(1 - \frac{G_i}{x_i - \alpha_j}\right)^{s_j}} = \sqrt[s_i]{1 - \sum_{j\ne i}^{m} M_{ij}\,\frac{G_i}{x_i - \alpha_j}},

where the M_{ij} are polynomials of \frac{G_i}{x_i - \alpha_j}. Using that G_i = O(\varepsilon_i), it is easy to obtain that

    \sum_{j\ne i}^{m} M_{ij}\,\frac{G_i}{x_i - \alpha_j} = O(\varepsilon_i).

Thus we have demonstrated that the following representation is valid:

    \sqrt[s_i]{\prod_{j\ne i}^{m}\left(\frac{y_i - \alpha_j}{x_i - \alpha_j}\right)^{s_j}} = 1 + O(\varepsilon_i).

Using the Taylor series

    g(y_i) = g(x_i) - g'(x_i)(x_i - y_i) + \frac{g''(x_i)}{2!}(x_i - y_i)^2 + O\big((x_i - y_i)^3\big),

we get that

    \frac{g(y_i)}{g(x_i)} = 1 - \frac{g'(x_i)}{g(x_i)}\,G_i + \frac{g''(x_i)}{2!\,g(x_i)}\,G_i^2 + O\big((x_i - y_i)^3\big).

Then

    \sqrt[s_i]{\frac{g(y_i)}{g(x_i)}} = \sqrt[s_i]{1 - \frac{g'(x_i)}{g(x_i)}\,G_i + O(G_i^2)} = \sqrt[s_i]{1 + O(\varepsilon_i)} = 1 + O(\varepsilon_i).

Hence, we get

    \psi(x_i) = \frac{\varepsilon_i}{G_i}\,(1 + O(\varepsilon_i)).

Further, for the iterative process (16), we obtain

    \hat{\varepsilon}_i = x_i - \alpha_i - G_i\,\psi(x_i) = \varepsilon_i - G_i\,\frac{\varepsilon_i}{G_i}\,(1 + O(\varepsilon_i)) = \varepsilon_i - \varepsilon_i\,(1 + O(\varepsilon_i)) = O(\varepsilon_i^2).

The proof is completed. In this case we have used the acceleration technique presented in [26] to obtain the iterative function (16).

4.2 The case of preliminarily unknown multiplicities

In this section we suggest the following generalized version of method (2):

    y_i = x_i - \frac{F(x_i)}{\prod_{j\ne i}^{m}(x_i - x_j)}, \qquad \hat{x}_i = x_i - \frac{F(x_i)}{F[x_i, y_i]}, \quad i \in I_m,     (18)

where

    F(x_i) = \frac{f(x_i)}{f[x_i,\, x_i - f(x_i)]}.
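A minimal Python sketch of (18); the helper F below implements f(x)/f[x, x − f(x)] exactly as defined above, and the remaining names are illustrative. No knowledge of the multiplicities is required.

import numpy as np

def F(f, t):
    """F(t) = f(t)/f[t, t - f(t)]; it has only simple zeros at the multiple zeros of f."""
    ft = f(t)
    return ft * ft / (ft - f(t - ft))      # since f[t, t - f(t)] = (f(t) - f(t - f(t))) / f(t)

def step_unknown_multiplicities(f, x):
    """One sweep of method (18), i.e. the pattern of (2) applied to the transformed function F."""
    x = np.asarray(x, dtype=float)
    m = len(x)
    x_new = np.empty(m)
    for i in range(m):
        Fi = F(f, x[i])
        if Fi == 0.0:                       # x_i is already a zero
            x_new[i] = x[i]
            continue
        y = x[i] - Fi / np.prod([x[i] - x[j] for j in range(m) if j != i])
        dd = (Fi - F(f, y)) / (x[i] - y)    # divided difference F[x_i, y_i]
        x_new[i] = x[i] - Fi / dd
    return x_new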

Theorem 7 Let α1, α2, ..., αm be multiple zeros of the nonlinear function f defined by (15). If x1, x2, ..., xm are sufficiently close approximations to the respective zeros α1, α2, ..., αm, then the order of convergence of the iterative method (18) is two.

Proof. Let us fix i and consider the divided difference

    f[x_i, x_i - f(x_i)] = \frac{f(x_i) - f(x_i - f(x_i))}{f(x_i)}.

From the Taylor expansion

    f(x_i - f(x_i)) = f(x_i) - f'(x_i)\,f(x_i) + \frac{f''(x_i)}{2}\,f(x_i)^2 - \frac{f'''(x_i)}{3!}\,f(x_i)^3 + \ldots,

it follows that

    f[x_i, x_i - f(x_i)] = f'(x_i) - \frac{f''(x_i)}{2}\,f(x_i) + \frac{f'''(x_i)}{3!}\,f(x_i)^2 + \ldots.

Then for the function F(x_i) we have

    F(x_i) = \frac{f(x_i)}{f[x_i, x_i - f(x_i)]} = \frac{f(x_i)}{f'(x_i)\left(1 - \dfrac{f''(x_i)}{2}\,\dfrac{f(x_i)}{f'(x_i)} + \dfrac{f'''(x_i)}{3!}\,\dfrac{f(x_i)^2}{f'(x_i)} + \ldots\right)}.     (19)

Using the representation (15), let us denote f(x_i) = (x_i - α_i)^{s_i} h(x_i), where h(α_i) ≠ 0. Then

    f'(x_i) = s_i (x_i - \alpha_i)^{s_i - 1} h(x_i) + (x_i - \alpha_i)^{s_i} h'(x_i).

From the last two equations it follows that

    u(x_i) = \frac{f(x_i)}{f'(x_i)} = \frac{(x_i - \alpha_i)^{s_i} h(x_i)}{s_i (x_i - \alpha_i)^{s_i - 1} h(x_i) + (x_i - \alpha_i)^{s_i} h'(x_i)} = \frac{(x_i - \alpha_i)\, h(x_i)}{s_i h(x_i) + (x_i - \alpha_i)\, h'(x_i)}

and u(α_i) = 0.     (20)

Differentiating u(x_i) with respect to x_i, we get

    u'(x_i) = \frac{s_i h(x_i)^2 + (x_i - \alpha_i)^2\left(h'(x_i)^2 - h(x_i)\, h''(x_i)\right)}{\left(s_i h(x_i) + (x_i - \alpha_i)\, h'(x_i)\right)^2},

and then it follows that

    u'(\alpha_i) = \frac{1}{s_i} \ne 0.     (21)

From (19), (20) and (21) it follows that

    F(\alpha_i) = 0, \quad i = 1, \ldots, m, \qquad \text{and} \qquad F'(\alpha_i) = \frac{1}{s_i} \ne 0, \quad i = 1, \ldots, m,

i.e. the function F(x) has only simple zeros at α1, ..., αm. Using (5) applied to F, with φ_i(x) = 1/\prod_{j\ne i}^{m}(x_i - x_j) for i = 1, ..., m, and Theorem 5, it follows that the iterative function (18) is convergent of order two. The theorem is proved.

5 Numerical results

In this section we employ the new methods (16) and (18) to solve some nonlinear equations. All computations were done using MATLAB 7.0. We accept an approximate solution rather than the exact root, depending on the precision ε of the computer. We use the following stopping criteria for the computer programs:

(i) ‖x^(k+1) − x^(k)‖ < ε,
(ii) ‖F(x^(k+1))‖ < ε,

where we use the maximum norm ‖x‖_∞ = max{|x_1|, ..., |x_n|} and the notation F(x) = (f(x_1), ..., f(x_n))^T. When a stopping criterion is satisfied, x^(k+1) is taken as the computed root. For the numerical illustrations in this section we used the fixed stopping criterion ε = 10^{-8}.

In the tables we report the following data: the initial approximation x^(0) = (x_1^(0), ..., x_m^(0)) (where m is the number of roots we are seeking), the number of iterations needed to approximate the zero(s) (Iterations), and the roots reached, α = (α_{s_1}, ..., α_{s_m}). We tested the proposed methods (16) and (18) on several nonlinear equations to demonstrate their performance, and selected the following three examples for illustration.

Example 1. Consider the polynomial equation

    (x + 2)^4 (x - 3)^2 x^3 = 0,

with three multiple zeros α_1 = −2, α_2 = 0, α_3 = 3 and the corresponding multiplicities s_1 = 4, s_2 = 3, s_3 = 2. See the results in Table 1.


Table 1. Numerical results for Example 1.

Initial point x^(0)              Iterations        Zeros reached
                                 (16)    (18)      (16)                       (18)
x^(0) = -2.1                     4       4         α_1                        α_1
x^(0) = -1.7                     NaN     23        –                          α_1
x^(0) = -0.1                     4       4         α_2                        α_2
x^(0) = 2.9                      NaN     NaN       –                          –
x^(0) = (-2.1, 0.1)              4       5         α = (α_1, α_2)             α = (α_1, α_2)
x^(0) = (-2.2, -0.2)             20      NaN       α = (α_1, α_2)             –
x^(0) = (-1.7, 0.2)              283     12        α = (α_1, α_2)             α = (α_1, α_2)
x^(0) = (-1.8, 0.3, 3.1)         NaN     NaN       –                          –

The results presented in Table 1 show that for this example the iterative processes (16) and (18) have similar convergence behavior. For both methods there exist initial points for which the iterative process fails. For some initial approximations the presented methods converge only linearly at the beginning of the iterative procedure; this is the reason for the larger number of iterations. For initial approximations in a neighbourhood of α_3 = 3, both methods are not convergent (denoted NaN).

Example 2. Consider the nonlinear equation

    \left(\frac{\sin^2(x)}{\exp(x/2)} - 1\right)^3 = 0,

which has infinitely many multiple zeros. In the numerical experiments we approximate (and denote) only the following three zeros: α_1 = −3.564283758, α_2 = −2.590959369 and α_3 = −0.918673296, each of them with multiplicity 3. The results are given in Table 2.

Table 2. Numerical results for Example 2.

Initial point x^(0)              Iterations        Zeros reached
                                 (16)    (18)      (16)                       (18)
x^(0) = -0.6                     NaN     4         –                          α_3
x^(0) = -1.1                     3       4         α_3                        α_3
x^(0) = -2.5                     NaN     4         –                          α_2
x^(0) = -3.4                     NaN     4         –                          α_1
x^(0) = (-3.6, -1.2)             4       4         α = (α_1, α_3)             α = (α_1, α_3)
x^(0) = (-2.1, -0.1)             NaN     9         –                          α = (α_2, α_3)
x^(0) = (-3.5, -2.8)             4       4         α = (α_1, α_2)             α = (α_1, α_2)
x^(0) = (-3.5, -2.3, -0.5)       NaN     6         –                          α = (α_1, α_2, α_3)
x^(0) = (-3.8, -3, -1.2)         30      30        α = (α_2, α_1, α_3)        α = (α_1, α_2, α_3)
x^(0) = (-3.4, -2.4, -0.5)       NaN     5         –                          α = (α_1, α_2, α_3)

From Table 2 we can see that the method (18) has better convergence properties than method (16). The iterative process (16) is convergent only for a few of the initial approximations.

Example 3. Consider the nonlinear equation

    \left(e^{x(x-1)(x-2)(x-3)} - 1\right)^4 = 0,

with four multiple zeros α_1 = 0, α_2 = 1, α_3 = 2, α_4 = 3, each with multiplicity 4. The results are given in Table 3.

Table 3. Numerical results for Example 3.

Initial point x^(0)              Iterations        Zeros reached
                                 (16)    (18)      (16)                       (18)
x^(0) = 1.4                      5       5         α_1                        α_1
x^(0) = 3.1                      NaN     5         –                          α_3
x^(0) = 2.5                      7       5         α_2                        α_3
x^(0) = (1.3, 2.6)               NaN     5         –                          α = (α_2, α_3)
x^(0) = (1.1, 2.2)               4       5         α = (α_2, α_3)             α = (α_2, α_3)
x^(0) = (0.2, 2.9)               NaN     6         –                          α = (α_1, α_4)
x^(0) = (0.5, 2.5)               6       6         α = (α_3, α_2)             α = (α_1, α_3)
x^(0) = (0.4, 1.3, 2.6)          NaN     6         –                          α = (α_1, α_2, α_3)
x^(0) = (0.3, 0.8, 2.2)          NaN     6         –                          α = (α_1, α_2, α_3)
x^(0) = (0.1, 0.9, 2.9)          5       5         α = (α_1, α_2, α_4)        α = (α_1, α_2, α_4)
x^(0) = (0.1, 0.9, 1.8, 2.9)     5       5         α = (α_1, α_2, α_3, α_4)   α = (α_1, α_2, α_3, α_4)
x^(0) = (0.2, 0.8, 1.8, 2.8)     NaN     6         –                          α = (α_1, α_2, α_3, α_4)
x^(0) = (0.2, 0.8, 1.8, 3.1)     NaN     8         –                          α = (α_1, α_2, α_3, α_4)

For this example, again the method (18) has better convergence properties than method (16): it has a wider neighbourhood of convergence for each of the roots. For initial approximations where both methods are convergent, they need an equal number of iterations to reach the zeros. We have to note that the choice of the initial approximations is very important. If they are chosen sufficiently close to the sought roots, then the expected (theoretical) convergence speed is reached in practice. Otherwise the method (as many other iterative methods) shows slower convergence, especially at the beginning of the iterative process. For this reason, special attention should be paid to finding good initial approximations.
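For completeness, a minimal driver in the spirit of the experiments above, shown for Example 1 with the initial point (−2.1, 0.1) from Table 1; it reuses the step_unknown_multiplicities sketch from Section 4.2 and stopping rule (i) with ε = 10^{-8}. The sketch is illustrative only; the iteration counts reported in the tables come from the author's MATLAB implementation.

import numpy as np

def solve(step, f, x0, eps=1e-8, max_iter=500):
    """Repeat a simultaneous sweep until the maximum-norm correction drops below eps."""
    x = np.asarray(x0, dtype=float)
    for k in range(1, max_iter + 1):
        x_new = step(f, x)
        if not np.all(np.isfinite(x_new)):
            return x, k, False                  # breakdown, reported as NaN in the tables
        if np.max(np.abs(x_new - x)) < eps:     # stopping criterion (i)
            return x_new, k, True
        x = x_new
    return x, max_iter, False

# Example 1: (x + 2)^4 (x - 3)^2 x^3 = 0, starting from x^(0) = (-2.1, 0.1)
f1 = lambda x: (x + 2.0) ** 4 * (x - 3.0) ** 2 * x ** 3
roots, iterations, converged = solve(step_unknown_multiplicities, f1, [-2.1, 0.1])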

6 Conclusion

In this work we presented two iterative methods of second order for the simultaneous extraction of more than one multiple zero of nonlinear equations. The new methods do not require the computation of the first or higher derivatives of the function f. The first modification (16) is applicable in the case of preliminarily known multiplicities of the extracted roots. The main advantage of the method (18) is that it is applicable to a wide range of nonlinear functions f : D ⊆ R → R for computing an arbitrary number of multiple zeros; for this method it is not necessary to know the multiplicities of the sought roots. The numerical experiments confirm the theoretical results.

Acknowledgments: This work is supported by Shumen University.


References

[1] G. Nedzhibov, A derivative free iterative method for simultaneous computing arbitrary number of zeros of nonlinear equations, Computers and Mathematics with Applications, (2012), (in press).

[2] S. Presic, Un procede iteratif pour la factorisation des polynomes, C. R. Acad. Sci. Paris, 262, pp. 862-863, (1966).

[3] A. Iliev and N. Kyurkchiev, On a generalization of Weierstrass-Dochev method for simultaneous extraction of only a part of all roots of algebraic polynomials: part I, C. R. Acad. Bulg. Sci., 55, pp. 23-26, (2002).

[4] A. Iliev and N. Kyurkchiev, Some methods for simultaneous extraction of only a part of all roots of algebraic polynomials: part II, C. R. Acad. Bulg. Sci., 55, pp. 17-22, (2002).

[5] N. Kyurkchiev and A. Iliev, On the R-order of convergence of a family of methods for simultaneous extraction of part of all roots of algebraic polynomials, BIT, 42, pp. 879-885, (2002).

[6] G. H. Nedzhibov, On some modifications of Weierstrass-Dochev method for simultaneous extraction of only a part of all roots of polynomials, Application of Mathematics in Engineering and Economics: 36th International Conference, AIP Conference Proceedings, Volume 1293, pp. 141-148, (2010).

[7] M. S. Petkovic, L. Rancic, M. R. Milosevic, On the new fourth-order methods for the simultaneous approximation of polynomial zeros, Journal of Computational and Applied Mathematics, Volume 235, Issue 14, pp. 4059-4075, (2011).

[8] M. N. O. Ikhile, The root and Bell's disk iteration methods are of the same error propagation characteristics in the simultaneous determination of the zeros of a polynomial, Part II: Round-off error analysis by use of interval arithmetic, Computers and Mathematics with Applications, Volume 61, Issue 11, pp. 3191-3217, (2011).

[9] M. S. Petkovic, L. D. Petkovic, J. Dzunic, Accelerating generators of iterative methods for finding multiple roots of nonlinear equations, Computers and Mathematics with Applications, Volume 59, Issue 8, pp. 2784-2793, (2010).

[10] L. D. Petkovic, L. Rancic, M. S. Petkovic, An efficient higher order family of root finders, Journal of Computational and Applied Mathematics, Volume 216, Issue 1, pp. 56-72, (2008).

[11] J. M. McNamee, Numerical Methods for Roots of Polynomials, Part I, Elsevier, Amsterdam, (2007).

[12] X. Zhang, H. Peng, G. Hu, A high order iteration formula for the simultaneous inclusion of polynomial zeros, Applied Mathematics and Computation, Volume 179, Issue 2, pp. 545-552, (2006).


[13] B. Neta, M. Scott, C. Chun, Basin attractors for various methods for multiple roots, Applied Mathematics and Computation, Volume 218, Issue 9, pp. 5043-5066, (2012).

[14] J. M. McNamee, V. Y. Pan, Efficient polynomial root-refiners: A survey and new record efficiency estimates, Computers and Mathematics with Applications, Volume 63, Issue 1, pp. 239-254, (2012).

[15] V. Y. Pan, A. L. Zheng, Root-finding by expansion with independent constraints, Computers and Mathematics with Applications, Volume 62, Issue 8, pp. 3164-3182, (2011).

[16] S. K. Khattri and I. K. Argyros, Sixth order derivative free family of iterative methods, Applied Mathematics and Computation, Volume 217, Issue 12, pp. 5500-5507, (2011).

[17] S. K. Khattri and L. Torgrim, Constructing third-order derivative-free iterative methods, Int. J. Comput. Math., 88(7), pp. 1509-1518, (2011).

[18] G. Honorato, S. Plaza, N. Romero, Dynamics of a higher-order family of iterative methods, Journal of Complexity, Volume 27, Issue 2, pp. 221-229, (2011).

[19] M. Grau-Sanchez and J. L. Diaz-Barrero, Zero-finder methods derived using Runge-Kutta techniques, Applied Mathematics and Computation, 217, pp. 5366-5376, (2011).

[20] P. D. Proinov, New general convergence theory for iterative processes and its applications to Newton-Kantorovich type theorems, Journal of Complexity, Volume 26, Issue 1, pp. 3-42, (2010).

[21] M. S. Petkovic, L. D. Petkovic, D. Herceg, On Schröder's families of root-finding methods, Journal of Computational and Applied Mathematics, Volume 233, Issue 8, pp. 1755-1762, (2010).

[22] L. D. Petkovic, L. Rancic, M. S. Petkovic, An efficient higher order family of root finders, Journal of Computational and Applied Mathematics, 216, pp. 56-72, (2008).

[23] M. A. Noor, New iterative schemes for nonlinear equations, Applied Mathematics and Computation, Volume 187, Issue 2, pp. 937-943, (2007).

[24] G. H. Nedzhibov, V. I. Hasanov and M. G. Petkov, On some families of multipoint iterative methods for solving nonlinear equations, Numerical Algorithms, 42, pp. 127-136, (2006).

[25] G. H. Nedzhibov, M. G. Petkov, On a family of iterative methods for simultaneous extraction of all roots of algebraic polynomial, Applied Mathematics and Computation, Volume 162, Issue 1, pp. 427-433, (2005).

[26] G. H. Nedzhibov, An acceleration of iterative processes for solving nonlinear equations, Applied Mathematics and Computation, Volume 168, Issue 1, pp. 320-332, (2005).


[27] J. F. Traub, Iterative Methods for the Solution of Equations, Prentice Hall, Englewood Cliffs, New Jersey, (1964).

[28] J. M. Ortega and W. C. Rheinboldt, Iterative Solution of Nonlinear Equations in Several Variables, Academic Press, New York, (1970).

[29] D. K. R. Babajee, Analysis of higher order variants of Newton's method and their applications to differential and integral equations and in ocean acidification, Ph.D. Thesis, University of Mauritius, December (2010).

[30] A. Iliev and N. Kyurkchiev, Selected Topics in Numerical Analysis: Nontrivial Methods in Numerical Analysis, Lambert Academic Publishing, (2010).

[31] J. M. McNamee, Numerical Methods for Roots of Polynomials, Part I, Elsevier, Amsterdam, (2007).

[32] L. D. Petkovic and M. S. Petkovic, A note on some recent methods for solving nonlinear equations, Applied Mathematics and Computation, 185, pp. 368-374, (2007).
