Modifications of Newton’s method to extend the convergence domain

Dzmitry Budzko, Alicia Cordero & Juan R. Torregrosa

SeMA Journal, Boletín de la Sociedad Española de Matemática Aplicada. ISSN 2254-3902. DOI 10.1007/s40324-014-0020-y






Received: 30 April 2014 / Accepted: 12 September 2014 © Sociedad Española de Matemática Aplicada 2014

Abstract The paper is devoted to describing certain ways of extending the convergence domain of Newton's method. It surveys the contributions of representatives of the Soviet and Russian mathematical school, namely Kalitkin, Puzynin, Madorskij and others. They introduced different kinds of damping multipliers and showed that their use can be helpful when solving nonlinear equations and systems starting from a "bad" initial estimate. We also pay attention to the problem of a degenerate Jacobian matrix and the ways the named researchers dealt with it. Finally, we test the presented iterative schemes on several examples in order to check their effectiveness. Complete rigorous proofs of the key theorems can be found, both in Russian and in English, in the provided bibliography.

Keywords Nonlinear equations · Newton method · Damping multiplier · Convergence domain · Regularization

Mathematics Subject Classification 49M15 · 65L20

The project has been funded with support from the European Commission. This research was also supported by Ministerio de Ciencia y Tecnología MTM2011-28636-C02-02.

D. Budzko
Department of Informatics and Computer Systems, Brest State University, bul. Kosmonavtov 21, 224016 Brest, Belarus
e-mail: [email protected]

A. Cordero · J. R. Torregrosa (B)
Instituto Universitario de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
e-mail: [email protected]

A. Cordero
e-mail: [email protected]



1 Introduction

It is well known that Newton's method is simple, powerful and effective for solving both nonlinear equations and systems f(x) = 0, f : D ⊆ R^n → R^n, n ≥ 1. Many important applications from different branches of science and engineering [15] are described by essentially nonlinear systems, and very often they require a good initial estimate for Newton's method to converge. This problem can be addressed in two ways.

The first approach consists of developing hybrid methods [10]: the first iterations are carried out by some slow but "global" method, such as bisection, and the last iterations are performed by a fast "local" method, such as Newton's scheme or higher-order methods like Ostrowski's fourth-order scheme [22] or the eighth-order methods of [6,7]. Efficiency is usually the main aspect taken into account when designing an iterative method for solving nonlinear equations or systems, while the accessibility of the method, that is, the domain of starting points from which the scheme converges to the solution, is often neglected. In [14], the authors design a hybrid method combining the secant and Newton schemes. On the other hand, the accessibility regions of classical third-order methods have been improved in [12] by using the idea of hybrid methods, starting with Newton's scheme. Recently, different researchers (see for example [28] and the references therein) have designed hybrid iterative methods with regions of accessibility greater than that of Newton's scheme. Taking the proposed methods as predictors, multipoint hybrid schemes can be constructed that combine high speed with a wide convergence domain (a toy sketch of the hybrid idea is given at the end of this section).

The second approach consists of modifying Newton's method itself in order to extend its convergence domain. In this paper we discuss such modifications, based on different kinds of damping multipliers, developed by the Soviet and Russian school of computational mathematics. Researchers from other schools have also constructed damped Newton schemes with a fixed or variable damping parameter, studied from different points of view: local and semilocal convergence, dynamical behavior, etc. ([4,5,9,20,29] and references therein). The idea of a damping multiplier is to prevent the residual from growing too fast during the iterative process. One often observes the following unpleasant situation: the residual decreases monotonically, as desired, and then sharply increases by several orders of magnitude. If we could predict such a situation, we would change the size of the step in order not to "loosen" the iterative process. This is exactly the reason for introducing the damping multiplier. The general form of this kind of iterative scheme can be written as

\[
x_{n+1} = x_n - \beta_n \, \frac{f(x_n)}{f'(x_n)}, \tag{1}
\]

where β_n is a sequence of real parameters. For β_n = 1 the scheme turns into the classical Newton method. In the modifications of Newton's method proposed below, β_n changes according to some formula or algorithm. The scheme applies to both equations and systems, and in both cases β_n is a varying real scalar whose value usually ranges from zero to one. In the Soviet and Russian literature one can find the phrase "quasi-Newtonian iterative processes" to denote these "methods with β"; the phrase emphasizes that such methods are almost Newtonian, although it is not in steady use.
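To fix ideas, here is a minimal Python sketch (ours, not taken from any of the cited papers) of the damped iteration (1) for a scalar equation; the rule for choosing β_n is passed in as a callable, so the strategies discussed below can be plugged in:

```python
import math

def damped_newton(f, df, x0, beta_rule, tol=1e-12, max_iter=200):
    """Scheme (1): x_{n+1} = x_n - beta_n * f(x_n)/f'(x_n).
    beta_rule(n, x, fx) supplies the damping multiplier beta_n."""
    x = x0
    for n in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x, n                      # converged after n iterations
        x = x - beta_rule(n, x, fx) * fx / df(x)
    return x, max_iter

# beta_n = 1 recovers the classical Newton method
root, iters = damped_newton(math.atan, lambda x: 1.0 / (1.0 + x * x),
                            x0=1.0, beta_rule=lambda n, x, fx: 1.0)
print(root, iters)
```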



In general, there are two strong points in favor of methods with a damping multiplier. Firstly, they are designed mostly to extend the convergence domain: their advantage is that for some nonlinear equations and systems they converge while other known methods do not. Secondly, this kind of method can effectively act as a predictor when designing multipoint higher-order schemes. In addition, these damped Newton schemes keep the quadratic order of convergence of Newton's method in some neighborhood of the root. Note that all such modifications of Newton's method became possible due to the work of Gavurin [13], which can be regarded as the starting paper in this research field.

Sometimes, when solving nonlinear systems of equations, one faces the problem of a degenerate Jacobian matrix. When the determinant of the Jacobian is close to zero, it is expedient to use regularized iterative methods. Aleksandrov [1] was the first to apply regularization to Newton-Kantorovich processes; the concept of regularization for linear systems of equations, however, was introduced by Tikhonov in [25,26]. We recommend the classical book by Tikhonov and Arsenin [27] to all who are interested in regularization theory. This idea has also been used by different authors (see, for example, [2,8]) for derivative-free iterative methods. Recently, some researchers have studied damped Newton methods from the point of view of their dynamics, see for example [3] and the references therein.

The outline of the paper is as follows. In Sect. 2, we present an overview of some kinds of β developed by Kalitkin [11], Puzynin [23] and Madorskij [19], and give some ideas and considerations about how they regulate the size of the step. The regularization problem is discussed in Sect. 3. The results of numerical tests showing the comparative advantages of the presented modifications are provided in Sect. 4. Finally, for a detailed study of this field we recommend the two extensive reviews [24,30] and the bibliography therein.
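Before moving on, the hybrid (predictor-corrector) idea of the first approach can be illustrated with a small sketch of our own, under simplifying assumptions: a scalar equation and a bracket [a, b] on which f changes sign; the switching tolerance switch_tol is an arbitrary choice:

```python
import math

def hybrid_solve(f, df, a, b, switch_tol=1e-2, tol=1e-12):
    """Bisection until the bracket [a, b] is small, then Newton iterations.
    Assumes f(a) and f(b) have opposite signs."""
    while b - a > switch_tol:          # global (but slow) phase
        m = 0.5 * (a + b)
        if f(a) * f(m) <= 0.0:
            b = m
        else:
            a = m
    x = 0.5 * (a + b)
    while abs(f(x)) > tol:             # local (but fast) phase
        x -= f(x) / df(x)
    return x

# Newton alone diverges from starting points this far out for arctan(x) = 0;
# the hybrid scheme still finds the root x = 0.
print(hybrid_solve(math.atan, lambda x: 1.0 / (1.0 + x * x), -50.0, 49.0))
```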

2 Description of the modifications of Newton's method

Kalitkin and Ermakov [11] proposed their own way of choosing an optimal step. Note that Kalitkin's "optimal" differs from the "optimal order" introduced by Kung and Traub [18] in 1974. The modified iterative scheme has the form (1) with the following expression for β_n:

\[
\beta_n = \frac{\|f(x_n)\|^2}{\|f(x_n)\|^2 + \left\| f\!\left(x_n - \frac{f(x_n)}{f'(x_n)}\right) \right\|^2}. \tag{2}
\]

If we calculate the next functional evaluation f(x_n − f(x_n)/f'(x_n)) and it increases, then the denominator in (2) increases and β_n decreases; at the next iteration this does not allow the iterative process to "loosen". Below we interpret the results of Kalitkin and Ermakov. First, since the residual depends on the parameter β, we denote it by the function δ_n(β):

\[
\delta_n(\beta) = \left\| f\!\left(x_n - \beta\, \frac{f(x_n)}{f'(x_n)}\right) \right\|^2, \tag{3}
\]

where ‖·‖ denotes any norm. Of course, finding the β that minimizes δ_n(β) is quite a difficult problem, and it is convenient to solve it approximately. Using a Taylor series expansion, we approximate the residual (3) by the quadratic function

\[
\delta_n(\beta) \approx \delta_n(0) + \beta\, \frac{d\delta_n(0)}{d\beta} + c\beta^2. \tag{4}
\]



From (1) and (3) it follows that

\[
\frac{d\delta_n(0)}{d\beta} = 2 f(x_n)\, f'(x_n) \left( -\frac{f(x_n)}{f'(x_n)} \right) = -2 f^2(x_n) = -2\delta_n(0). \tag{5}
\]

Let us require that the approximate equality (4) become exact for β = 1. Then, using (5), we determine the constant c = δ_n(1) + δ_n(0). Substituting it into (4), we obtain the value of the step that minimizes the given quadratic function:

\[
\beta_n = \frac{\delta_n(0)}{\delta_n(0) + \delta_n(1)}. \tag{6}
\]

It is evident that 0 < β_n ≤ 1. We call this kind of step optimal; it matches (2). The obtained expression for β_n has one disadvantage: one can give examples (see Sect. 4) where the computed iterate is very far from the root and the rate of convergence becomes slow. However, in a sufficiently close neighborhood of the root the convergence rate is quadratic, as for Newton's method. So, for practical purposes it is expedient to bound the step from below by some value θ, which plays the role of a parameter of the method:

\[
\beta_n = \max\left\{ \theta,\; \frac{\delta_n(0)}{\delta_n(0) + \delta_n(1)} \right\}. \tag{7}
\]

We add that the magnitude of θ can vary significantly, from 10^{-5} to 1, in order to obtain the best results. Numerical tests show that if we decrease θ, the domain of convergence enlarges, but the rate of convergence decreases. It is evident (see [11]) that near the solution the rate of convergence is quadratic and β_n is almost one. At present, Kalitkin and his disciples continue improving Newton's method in order to extend the convergence domain. Recently [16,17], they have adapted their modification to the case of roots of high multiplicity. The results of their research show that the bisection and golden section methods are efficient for multiple roots, while the last iterations should be performed by the secant method with Aitken extrapolation to improve accuracy.

The next group of methods was developed by Puzynin and his collaborators. His PhD (1969) and Doctoral Thesis (1979) were devoted, in particular, to improving the convergence of Newton's method. We consider here one of his several modifications, described, for example, in [23]. Below we interpret the results of Puzynin and Zhanlav [23]. The proposed modification has the form

\[
\beta_n = \beta_{n-1}\, \frac{\|f(x_{n-1})\|}{\|f(x_n)\|}, \tag{8}
\]

and together with (1) it forms an efficient iterative scheme for solving an equation or a system f(x) = 0. First, let the following inequalities be fulfilled:

\[
\left\| f'(x_n)^{-1} \right\| \le B, \qquad \left\| f''(x_n) \right\| \le K. \tag{9}
\]

It can be shown [23] that

\[
\|f(x_{n+1})\| \le \varphi_{n+1}(\beta_n)\, \|f(x_n)\|, \tag{10}
\]

where the transition coefficient φ_{n+1}(β) from iteration to iteration is determined by the formula

\[
\varphi_{n+1}(\beta) = 1 - \beta + \frac{\beta^2}{2}\, K B^2\, \|f(x_n)\|. \tag{11}
\]


The problem of minimizing the function (11) is difficult because K and B are often unknown; nevertheless, it is evident that

\[
\varphi'_{n+1}(0) = \varphi'_n(0) = -1. \tag{12}
\]

Let us require that the discrete analogues of the derivatives φ'_n(0) and φ'_{n+1}(0) satisfy (12):

\[
\frac{\varphi_{n+1}(\beta_n) - \varphi_{n+1}(0)}{\beta_n} = \frac{\varphi_n(\beta_{n-1}) - \varphi_n(0)}{\beta_{n-1}}. \tag{13}
\]

Substituting the expression (11) into relation (13), we obtain

\[
\beta_n = \beta_{n-1}\, \frac{\|f(x_{n-1})\|}{\|f(x_n)\|}. \tag{14}
\]

Formula (14) matches (8) and shows that the damping multiplier β_n should decrease if the residual ‖f(x_n)‖ increases, which, as we think, is a worthwhile result. Among other modifications by I.V. Puzynin we mention the following form of the damping multiplier:

\[
\beta_{n+1} = \frac{\beta_n\, \|f(x_n)\|^2}{\|f(x_{n-1})\|^2}. \tag{15}
\]
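As a concrete illustration (ours, not from [23]), a scalar version of scheme (1) with the update (8) might look as follows; capping β_n at one is a choice of this sketch, not a prescription of the method:

```python
import math

def puzynin_newton(f, df, x0, beta0=1.0, tol=1e-12, max_iter=500):
    """Scheme (1) with rule (8): beta_n = beta_{n-1} * |f(x_{n-1})| / |f(x_n)|,
    capped at 1 so the step never exceeds the full Newton step."""
    x, beta, prev_res = x0, beta0, None
    for n in range(max_iter):
        res = abs(f(x))
        if res < tol:
            return x, n
        if prev_res is not None:
            beta = min(1.0, beta * prev_res / res)   # beta shrinks if the residual grew
        prev_res = res
        x -= beta * f(x) / df(x)
    return x, max_iter

# starting outside Newton's convergence region for arctan(x) = 0
print(puzynin_newton(math.atan, lambda x: 1.0 / (1.0 + x * x), x0=2.0, beta0=0.5))
```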

Finally, we consider the group of methods obtained by Madorskij. He patiently carried out research introducing different kinds of damping multipliers and combining them with different types of regularization (introduced below in Sect. 3). The proposed iterative schemes are both with and without memory. Madorskij also generalized these ideas to Steffensen's method, the Kantorovich-Krasnoselskij method, Traub's third-order method, descent methods and some other similar cases. The majority of his contributions can be found in the monograph [19], which, however, is written in Russian. The ideas and proofs of Madorskij's key theorems are rather similar to Puzynin's, but he generalized them to multistep processes and to processes with memory, where several previous functional evaluations are used in order to predict the further behavior of the residual. He also introduced a second damping multiplier γ_n inside the expression for β_n; however, we have not noticed significant improvement while testing. We present here only some expressions for the damping multiplier, without results of numerical tests. Following [19], we note that his methods are effective for solving the nonlinear systems arising from the discretization of second-order nonlinear differential equations by divided differences. Madorskij [19] proposed to check at each iteration whether the residual decreases, and to switch on his damping multiplier only if the condition ‖f(x_n)‖ < ‖f(x_{n+1})‖ is fulfilled. In this case, one can hope that the proposed modification is not worse than the original Newton method. One of the modifications of Newton's method proposed by Madorskij has the following form:

\[
\beta_{n+1} = \min\left\{1,\; \frac{\gamma_n \|f(x_n)\|}{\beta_n \|f(x_{n+1})\|}\right\}, \qquad
\gamma_{n+1} = \frac{\gamma_n \|f(x_n)\|\, \beta_{n+1}}{\beta_n \|f(x_{n+1})\|}, \qquad
\gamma_0 = \beta_0^2. \tag{16}
\]
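In code, one application of rule (16) reduces to a few lines; the following sketch is our own illustration, with arbitrary sample residual norms:

```python
def madorskij_update(beta, gamma, res_n, res_np1):
    """One application of rule (16): from beta_n, gamma_n and the residual
    norms ||f(x_n)||, ||f(x_{n+1})||, produce beta_{n+1} and gamma_{n+1}."""
    ratio = gamma * res_n / (beta * res_np1)
    beta_next = min(1.0, ratio)
    gamma_next = ratio * beta_next   # gamma_n*||f(x_n)||*beta_{n+1} / (beta_n*||f(x_{n+1})||)
    return beta_next, gamma_next

beta = 0.5
gamma = beta ** 2                    # gamma_0 = beta_0^2
# the residual grows fourfold, so the damping multiplier is reduced
print(madorskij_update(beta, gamma, res_n=1.0, res_np1=4.0))   # (0.125, 0.015625)
```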

Among the multistep processes developed by Madorskij, the following modification can be singled out:

\[
\beta_{n+1} = \min\left\{1,\; \frac{\gamma_n}{\beta_n}\, \frac{\|f(x_n)\|^2}{\|f(x_n)\|^2 + \|f(x_{n+1})\|^2}\right\}, \qquad
\gamma_{n+1} = \frac{\gamma_n\, \beta_{n+1}\, \|f(x_n)\|^2}{\beta_n\, \|f(x_{n+2})\|^2}\, \frac{\|f(x_{n+1})\|^2 + \|f(x_{n+2})\|^2}{\|f(x_n)\|^2 + \|f(x_{n+1})\|^2}, \tag{17}
\]
\[
\gamma_0 = \beta_0^2\, \frac{\|f(x_0)\|^2 + \|f(x_1)\|^2}{\|f(x_1)\|^2}.
\]
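The update (17) can be sketched in the same way; again this is our own illustration, with arbitrary sample residual norms:

```python
def multistep_update(beta, gamma, r0, r1, r2):
    """One application of rule (17); r0, r1, r2 stand for the residual norms
    ||f(x_n)||, ||f(x_{n+1})|| and ||f(x_{n+2})||."""
    beta_next = min(1.0, (gamma / beta) * r0**2 / (r0**2 + r1**2))
    gamma_next = ((gamma * beta_next / beta) * (r0**2 / r2**2)
                  * (r1**2 + r2**2) / (r0**2 + r1**2))
    return beta_next, gamma_next

beta0, r0, r1 = 0.5, 2.0, 1.0
gamma0 = beta0**2 * (r0**2 + r1**2) / r1**2    # starting value gamma_0 from (17)
print(multistep_update(beta0, gamma0, r0, r1, r2=0.5))
```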

The iterative scheme (1) with the following expression for β_n belongs to the group of full prediction methods, though its form is quite cumbersome:

\[
\beta_{n+1} = \min\left\{1,\; \frac{\gamma_n}{\beta_n}\, \frac{\|f(x_n)\|^2}{\|f(x_n)\|^2 + \|f(x_n + \Delta x_n)\|^2}\right\},
\]
\[
\gamma_{n+1} = \frac{\gamma_n\, \|f(x_n)\|^2}{\|f(x_{n+2})\|^2}\, \frac{\|f(x_{n+1})\|^2 + \|f(x_{n+1} + \Delta x_{n+1})\|^2}{\|f(x_n)\|^2 + \|f(x_n + \Delta x_n)\|^2}, \tag{18}
\]
\[
\gamma_0 = \beta_0^2 \left( \|f(x_0)\|^2 + \|f(x_0 + \Delta x_0)\|^2 \right),
\]

where Δx_n = x_{n+1} − x_n. Finally, we present the modification of Traub's third-order method that extends the convergence domain. Instead of the iterative scheme (1), it is recommended to use the process that can be symbolically written in the following form:

\[
x_{n+1} = x_n - \beta_n \left[ f'(x_n) \right]^{-1} \left( f(x_n) + \beta_n\, f\!\left( x_n - \left[ f'(x_n) \right]^{-1} f(x_n) \right) \right), \tag{19}
\]

where β_n can take different forms from [19].
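A scalar sketch of one step of (19), our own illustration with an arbitrary fixed β_n = 0.8, is:

```python
import math

def damped_traub_step(f, df, x, beta):
    """One step of (19) in the scalar case:
    x_{n+1} = x_n - (beta/f'(x_n)) * ( f(x_n) + beta * f(x_n - f(x_n)/f'(x_n)) )."""
    fx, dfx = f(x), df(x)
    y = x - fx / dfx                      # auxiliary Newton point
    return x - (beta / dfx) * (fx + beta * f(y))

f = math.atan
df = lambda x: 1.0 / (1.0 + x * x)
x = 1.0
for _ in range(8):
    x = damped_traub_step(f, df, x, beta=0.8)
print(x)    # approaches the root x = 0
```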

3 Regularized iterative methods

In some cases, when solving nonlinear systems, one faces the problem of a degenerate Jacobian matrix. When the determinant of the Jacobian is close to zero, it is expedient to use regularization. The concept of regularization for linear systems of equations was introduced by Tikhonov in [25,26] in the early sixties of the last century. There is also another reason for applying regularized iterative methods: our experience shows that the use of regularization helps to avoid some stability problems; the region of convergent initial points becomes more regular, with less "noise". The idea is to solve, at the first step, not the original system of linear equations

\[
f'(x_n)\, \Delta x_n = -\beta_n f(x_n) \tag{20}
\]

with respect to Δx_n = x_{n+1} − x_n, but a modified one. Here we provide three types of regularization, which can be found, for example, in [11,19]. Instead of (20), it is recommended to solve

\[
\left( \sigma_n E + f'(x_n) \right) \Delta x_n = -\beta_n f(x_n), \tag{21}
\]

where E is the identity matrix. The expression for σ_n can take different forms, but our numerical tests show better results with the following σ_n:

\[
\sigma_n = \alpha\, \beta_n\, \|f(x_n)\|^2, \tag{22}
\]

where α is the regularization parameter and should be chosen from the interval (10^{-5}, 10^{-1}). We use α = 10^{-3} for all our numerical tests. If the regularized iterative process with first step (21) is not effective, then it is recommended to use the following first step instead of (20):

\[
\left( \sigma_n E + f'(x_n)^H f'(x_n) \right) \Delta x_n = -\beta_n\, f'(x_n)^H f(x_n), \tag{23}
\]


where f'(x_n)^H is the conjugate transpose of f'(x_n), and σ_n still has the form (22). Finally, if the regularization (23) is not effective and the matrix f'(x_n)^H is ill-conditioned, then it is recommended to use the following regularization instead of formula (20):

\[
\left( \sigma_n E + \left( \gamma E + f'(x_n)^H f'(x_n) \right) \right) \Delta x_n = -\beta_n \left( \gamma E + f'(x_n)^H \right) f(x_n), \tag{24}
\]

where γ plays the role of a second regularization parameter. In Sect. 4 one can find a comparative example showing the effectiveness of using regularization.
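The effect of (21)-(22) is easy to see on a single linear solve. In the following sketch (ours; the 2×2 matrix and the residual vector are arbitrary), the unregularized Newton correction (20) explodes because the Jacobian is nearly singular, while the regularized correction remains moderate:

```python
import numpy as np

# A nearly singular Jacobian: the plain correction (20) blows up,
# while the regularized system (21) with sigma_n from (22) stays moderate.
J  = np.array([[1.0, 1.0],
               [1.0, 1.0 + 1e-12]])
Fx = np.array([1.0, -1.0])
beta, alpha = 1.0, 1e-3

dx_plain = np.linalg.solve(J, -beta * Fx)
sigma = alpha * beta * np.dot(Fx, Fx)              # sigma_n = alpha*beta_n*||F(x_n)||^2
dx_reg = np.linalg.solve(sigma * np.eye(2) + J, -beta * Fx)

print(dx_plain)   # components of order 1e12
print(dx_reg)     # components of order 5e2
```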

4 Numerical tests

The numerical tests have been performed in Mathematica 9, using variable precision arithmetic with 2000 digits of mantissa, on an Intel(R) Core(TM) i7-3537U CPU @ 2.00 GHz with 6.00 Gb of RAM. The first example has become classical, and one can easily find it in different textbooks. The equation

\[
f(x) \equiv \arctan(x) = 0 \tag{25}
\]

is used to demonstrate the weak points of the powerful Newton method. The graph of the function in (25) is shown in Fig. 1. One can easily show that Newton's convergence domain is

\[
|x| < 1.39175\ldots, \tag{26}
\]

where the number 1.39175... can be obtained as one of the roots of the equation

\[
\arctan(x) = \frac{2x}{1+x^2}. \tag{27}
\]

Thus, for starting points x not included in (26), Newton's method does not converge and we cannot obtain the unique solution x = 0. Applying Kalitkin's modification (7) with the iterative process (1), we obtain the following results with tolerance 10^{-100}. Choosing θ = 10^{-2}, we enlarge the convergence domain to |x| < 9; depending on the closeness of the initial estimate to the root, it takes from 15 to 33 iterations. If θ decreases further, the convergence domain enlarges, but the rate of convergence decreases and the number of iterations increases. The relations between θ, the number of iterations and the size of the convergence domain are shown in Table 1. Note that the calculated convergence domains are approximate, rounded to integers. The rate of convergence near the root is quadratic.

Fig. 1 The graph of arctan(x) with Newton's domain of convergence [figure omitted]
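In ordinary double precision (rather than the 2000-digit arithmetic used for the experiments above) the qualitative behavior can be reproduced with a short sketch of scheme (1) with step (7); the stopping tolerance 10^{-12} and the starting point are our own choices:

```python
import math

f = math.atan
df = lambda x: 1.0 / (1.0 + x * x)

def kalitkin_newton(x0, theta=1e-2, tol=1e-12, max_iter=1000):
    """Scheme (1) with Kalitkin's damping (7) for f(x) = arctan(x)."""
    x = x0
    for n in range(max_iter):
        if abs(f(x)) < tol:
            return x, n
        step = f(x) / df(x)
        d0 = f(x) ** 2                          # delta_n(0)
        d1 = f(x - step) ** 2                   # delta_n(1)
        x -= max(theta, d0 / (d0 + d1)) * step  # step (7)
    return x, max_iter

# x0 = 3 lies well outside Newton's convergence domain |x| < 1.39175...
print(kalitkin_newton(3.0))
```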


Table 1 Relations between convergence domain, θ and number of iterations

  Convergence domain    θ         Number of iterations N
  |x| < 9               10^{-2}   15...33
  |x| < 29              10^{-3}   ~150
  |x| < 90              10^{-4}   ~850
  |x| < 290             10^{-5}   ~4,700

Fig. 2 The graph of arctan(x) − 2x/(1+x²) [figure omitted]

We use Eq. (27) as the next example. The behavior of the function in (27) is similar to that of (25): both have horizontal asymptotes that the graphs do not cross as x tends to infinity. Although Eq. (27) has three roots, Kalitkin's modification (7) also proves effective in this case. The graph of the function in (27) is shown in Fig. 2.

The next example is the nonlinear system of algebraic equations known as the Brown almost linear function, or combined system. This system is widely used [11,19,21] for testing newly developed iterative schemes. The system has dimension N and the following form:

\[
f_i(x) = x_i + \sum_{j=1}^{N} x_j - N - 1, \qquad 1 \le i \le N-1,
\]
\[
f_N(x) = \prod_{j=1}^{N} x_j - 1. \tag{28}
\]
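For reference, a sketch of the function (28) and its Jacobian (our own implementation, used only for illustration) is:

```python
import numpy as np

def brown_F(x):
    """The Brown almost linear function (28); expects a float array."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    F = x + np.sum(x) - (N + 1.0)      # f_i = x_i + sum_j x_j - N - 1 for i < N
    F[-1] = np.prod(x) - 1.0           # f_N = prod_j x_j - 1
    return F

def brown_J(x):
    """Jacobian of (28); valid while no component of x vanishes."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    J = np.eye(N) + np.ones((N, N))    # rows 1,...,N-1: identity plus all-ones
    J[-1, :] = np.prod(x) / x          # last row: prod_{k != j} x_k
    return J

x_star = np.ones(8)                    # x = (1,...,1) is a solution
print(brown_F(x_star))                 # prints all zeros
```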

Table 2 Number of iterations from the initial estimate x_0 = (0.5, 0.5, ..., 0.5) for system (28)

  Dimension   Newton   Newton R   Kalitkin   Kalitkin R   Puzynin   Puzynin R
  N = 4       17       9          18         9            18        9
  N = 8       70       9          59         9            72        9
  N = 12      122      9          124        9            124       9
  N = 16      224      8          271        8            226       8
  N = 24      469      12         > 500      11           471       12
  N = 32      > 500    11         > 500      11           > 500     11

Table 3 Percentage of 10,000 launches from random initial estimates in which each method performs better than or equal to the others, for system (28)

  Dimension   Newton   Newton R   Kalitkin   Kalitkin R
  N = 4       8.22     40.15      7.04       54.88
  N = 8       14.13    39.98      16.02      40.09
  N = 16      13.67    34.60      16.35      40.65

First, we launch the following iterative processes: Newton's method (β_n = 1), and Kalitkin's (7) (with θ = 0.5) and Puzynin's (14) modifications, each with and without the regularization (23). The determinant of the Jacobian matrix is equal to

\[
\det f'(x) = \left( \frac{N}{x_N} - \sum_{i=1}^{N-1} \frac{1}{x_i} \right) \prod_{j=1}^{N} x_j. \tag{29}
\]

For the specific initial estimate x_0 = (0.5, 0.5, ..., 0.5), formula (29) takes the form

\[
\det f'(x_0) = 2^{1-N}. \tag{30}
\]

Thus, for large values of N the determinant (30) is close to zero and the corresponding linear systems become ill-conditioned; such an initial estimate is unpleasant for Newton's method and is well suited to demonstrating the advantage of regularization. The obtained results are presented in Table 2, where the regularization parameter in (22) is α = 10^{-3}, the tolerance is 10^{-100} and R denotes the regularized methods. Note that for system (28) the behavior of Newton's method and of Puzynin's modification is quite similar, which is why we excluded Puzynin's modification from the following comparison. Newton's and Kalitkin's (with θ = 0.5) methods, with and without the regularization (23), have been launched 10,000 times using random (but the same for all methods) initial estimates from the interval (−10, 10). The numbers in Table 3 show how many times (in percentage terms) a method is better than or equal to the rest, which allows us to determine the best method. It is evident from Table 3 that the use of regularization is preferable, and sometimes it is the only way to find the solution when the corresponding system is ill-conditioned.
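A quick double-precision check of formulas (29) and (30) (our own sketch):

```python
import numpy as np

def brown_det(x):
    """Right-hand side of (29) for the Brown system (28)."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    return (N / x[-1] - np.sum(1.0 / x[:-1])) * np.prod(x)

for N in (4, 8, 16):
    x0 = np.full(N, 0.5)
    print(N, brown_det(x0), 2.0 ** (1 - N))   # the two values coincide, cf. (30)
```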

5 Conclusions

We have presented some ways of extending the convergence domain of Newton's method by reviewing the contributions of representatives of the Soviet and Russian mathematical school, namely Kalitkin, Puzynin, Madorskij and others. Some effective kinds of damping multiplier



were demonstrated in solving different nonlinear equations and systems starting from "bad" initial estimates. The problem of a degenerate Jacobian matrix and the ways the named researchers solved it were discussed. The numerical tests agree with the presented iterative schemes and demonstrate their effectiveness. The computer algebra system Mathematica was used for all calculations and visualizations.

Acknowledgments The authors thank the anonymous referees for their suggestions to improve the readability of the paper.

References

1. Aleksandrov, L.: The Newton-Kantorovich regularized computing processes. USSR Comput. Math. Math. Phys. 11(1), 46–57 (1971)
2. Amat, S., Busquier, S.: On a higher order secant method. Appl. Math. Comput. 141(2–3), 321–329 (2003)
3. Amat, S., Busquier, S., Magreñán, Á.A.: Reducing chaos and bifurcations in Newton-type methods. Abstr. Appl. Anal. 2013, Article ID 726701, 10 pp. (2013)
4. Argyros, I.K., Hilout, S.: On the semilocal convergence of damped Newton's method. Appl. Math. Comput. 219, 2808–2824 (2012)
5. Argyros, I.K., Gutiérrez, J.M., Magreñán, Á.A., Romero, N.: Convergence of the relaxed Newton's method. J. Korean Math. Soc. 51(1), 137–162 (2014)
6. Babajee, D.K.R., Cordero, A., Soleymani, F., Torregrosa, J.R.: On improved three-step schemes with high efficiency index and their dynamics. Numer. Algorithms 65, 153–169 (2014)
7. Cordero, A., Lotfi, T., Mahdiani, K., Torregrosa, J.R.: Two optimal general classes of iterative methods with eighth order. Acta Appl. Math. doi:10.1007/s10440-014-9869-0
8. Cordero, A., Torregrosa, J.R.: A class of Steffensen type methods with optimal order of convergence. Appl. Math. Comput. 217, 7653–7659 (2011)
9. Dembo, R.S., Eisenstat, S.C., Steihaug, T.: Inexact Newton methods. SIAM J. Numer. Anal. 19, 400–408 (1982)
10. Dennis, J.E. Jr., Schnabel, R.B.: Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Classics in Applied Mathematics, vol. 16. SIAM, Philadelphia (1996)
11. Ermakov, V.V., Kalitkin, N.N.: The optimal step and regularization for Newton's method. USSR Comput. Math. Math. Phys. 21(2), 235–242 (1981)
12. Ezquerro, J.A., Hernández, M.A., Romero, N.: On some one-point hybrid iterative methods. Nonlinear Anal. Theory Methods Appl. 72(2), 587–601 (2010)
13. Gavurin, M.K.: Nonlinear functional equations and continuous analogues of iteration methods. Izvestiya Vysshikh Uchebnykh Zavedenii Matematika 5, 18–31 (1958)
14. Hernández, M.A., Romero, N.: A uniparametric family of iterative processes for solving nondifferentiable equations. J. Math. Anal. Appl. 275, 821–834 (2002)
15. Kalitkin, N.N., et al.: Mathematical Models in Nature and Science. Moscow (2005, in Russian)
16. Kalitkin, N.N., Poshivailo, I.P.: Computation of simple and multiple roots of a nonlinear equation. Math. Models Comput. Simul. 1(4), 514–520 (2009)
17. Kalitkin, N.N., Kuz'mina, L.V.: Computation of roots of an equation and determination of their multiplicity. Math. Models Comput. Simul. 3(1), 65–80 (2011)
18. Kung, H.T., Traub, J.F.: Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach. 21, 643–651 (1974)
19. Madorskij, V.M.: Quasi-Newtonian Processes for Solving Nonlinear Equations. Brest State University, Brest (2005, in Russian)
20. Magreñán, Á.A.: Estudio de la dinámica del método de Newton amortiguado. PhD Thesis, University of La Rioja, Spain. http://dialnet.unirioja.es/servlet/tesis?codigo=38821 (2013)
21. Moré, J.J., Garbow, B.S., Hillstrom, K.E.: Testing unconstrained optimization software. ACM Trans. Math. Softw. 7(1), 17–41 (1981)
22. Ostrowski, A.M.: Solution of Equations and Systems of Equations. Academic Press, New York-London (1966)
23. Puzynin, I.V., Zhanlav, T.: Convergence of iterations on the basis of a continuous analogue of the Newton method. Comput. Math. Math. Phys. 32(6), 729–737 (1992)
24. Puzynin, I.V., et al.: The generalized continuous analog of Newton's method for the numerical study of some nonlinear quantum-field models. Phys. Part. Nucl. 30, 87 (1999)
25. Tikhonov, A.N.: Regularization of incorrectly posed problems. Soviet Math. Dokl. 4, 6 (1963)
26. Tikhonov, A.N.: The stability of algorithms for the solution of degenerate systems of linear algebraic equations. USSR Comput. Math. Math. Phys. 5(4), 181–188 (1965)
27. Tikhonov, A.N., Arsenin, V.Y.: Methods for Solving Ill-Posed Problems. John Wiley and Sons, New York (1977)
28. Velasco Del Olmo, A.I.: Mejoras de los dominios de puntos de salida de métodos iterativos que no utilizan derivadas. PhD Thesis, University of La Rioja, Spain. http://dialnet.unirioja.es/servlet/tesis?codigo=38218 (2013)
29. Ypma, T.J.: Local convergence of inexact Newton methods. SIAM J. Numer. Anal. 21, 583–590 (1984)
30. Zhidkov, E.P., Makarenko, G.J., Puzynin, I.V.: Continuous analogue of Newton's method in nonlinear problems of physics. Phys. Part. Nucl. 4(1), 127–166 (1973) (in Russian)