SOME NEW HIGHER ORDER MULTI-POINT ITERATIVE METHODS AND THEIR APPLICATIONS TO DIFFERENTIAL AND INTEGRAL EQUATIONS AND GLOBAL POSITIONING SYSTEM

THESIS Submitted by Kalyanasundaram M

in partial fulfillment for the award of the degree of DOCTOR OF PHILOSOPHY in MATHEMATICS

DEPARTMENT OF MATHEMATICS PONDICHERRY ENGINEERING COLLEGE PONDICHERRY UNIVERSITY PUDUCHERRY-605014 INDIA JUNE 2016

DEPARTMENT OF MATHEMATICS PONDICHERRY ENGINEERING COLLEGE

BONAFIDE CERTIFICATE

It is certified that this thesis entitled “Some New Higher Order Multi-point Iterative Methods and their Applications to Differential and Integral Equations and Global Positioning System” is the bonafide work of Kalyanasundaram M, who carried out the research under my supervision. It is further certified that, to the best of my knowledge, the work reported herein does not form part of any other thesis or dissertation on the basis of which a degree or award was conferred on an earlier occasion on this or any other candidate.

Dr. J. JAYAKUMAR (Research Supervisor) Professor Department of Mathematics Pondicherry Engineering College Puducherry - 605 014 Date: 29.06.2016 Place: Puducherry


DEPARTMENT OF MATHEMATICS PONDICHERRY ENGINEERING COLLEGE

DECLARATION

I hereby declare that the thesis entitled “Some New Higher Order Multi-point Iterative Methods and their Applications to Differential and Integral Equations and Global Positioning System” submitted to Pondicherry University, Puducherry, India for the award of the degree of DOCTOR OF PHILOSOPHY in MATHEMATICS is a record of bonafide research work carried out under the supervision of Prof. Dr. J. Jayakumar, Department of Mathematics, Pondicherry Engineering College, Puducherry. This research work does not form part of any other thesis or dissertation on the basis of which a degree or award was conferred on an earlier occasion.

KALYANASUNDARAM M Date: 29.06.2016 Place: Puducherry


Thesis dedicated to my parents Madhu and Palaniyammal


Acknowledgements

I express my heartiest gratitude to my research supervisor, Prof. Dr. J. Jayakumar, for his guidance, motivation and constant support. He provided many valuable suggestions, reviewed my work at every stage of the research and gave me complete freedom during the entire period of my research. I have learnt many things from him, which will indeed be very useful for my future career. I am also thankful to my doctoral committee members, Dr. P. A. Padmanabham, Professor and Head, Department of Mathematics, Pondicherry Engineering College, and Dr. Rajeswari Seshadri, Associate Professor, Department of Mathematics, Pondicherry University, for their advice throughout the research period. Without their guidance and continuous help, this dissertation would not have been possible. My sincere thanks to my spiritual masters of Shri Ram Chandra Mission for their support during my difficult times. I am very grateful to our respected Principal, Prof. Dr. T. Sundararajan, for his support. I also thank the Deans of Research, Pondicherry Engineering College, for their help in completing my thesis. I would like to thank Dr. Babajee Diyashvir Kreetee Rajiv for his many suggestions and constant support as a research collaborator in carrying out my research. I would like to thank all the researchers in this field who ably supported me by sending their papers whenever I requested them. My sincere thanks to Prof. Dr. G. Sivaradje, TEQIP Coordinator, and the other members of TEQIP who helped me in getting the research fellowship for a period of three years. The TEQIP grant was crucial for the successful completion of this research. Furthermore, I would like to thank all the faculty members of the Department of Mathematics for their support during my Ph.D. studies. I would like to express my heartiest gratitude to Mr. Satbir Bakshi, Mr. S. Senthilnathan, Mr. A. Manickam, Mrs. Shantha Sheela Devi, Mrs. Anukumari Shukla and Ms. Amrita Jaiswal for supporting me morally as well as financially. I have experienced great friendship and enjoyment from the persons who made my stay at PEC an unforgettable experience: M. Karthikeyan, D. Neelamegam, G. Jegan, P. Thamizhselvi, S. Karpagam, R. Ayyappan and J. Udayageetha. I would also like to express my heartiest gratitude to my friends M. Lakhsmankumar, S. Balajee and V. Murugan.


Finally, it is my greatest honor to thank my parents, Mr. A. Madhu (father) and Mrs. M. Palaniyammal (mother), for their patience, self-sacrifice and love. Without their blessings this work would never have come into existence. I feel very privileged to have my siblings, M. Jothi, M. Jeevabalan and M. Kalpana; the encouragement and good words I have received from them were simply great. Last but not least, I thank all my relatives for their care in my development. Pondicherry Engineering College, Pondicherry, June 29, 2016


Kalyanasundaram M

Table of Contents

Acknowledgements
Table of Contents
List of Tables
List of Figures
Summary
Publications
Glossary of Symbols

1 Introduction
  1.1 Classification of iterative methods
  1.2 Order of Convergence
  1.3 Computational order of convergence
  1.4 Computational Efficiency
  1.5 Initial approximations
  1.6 One-point iterative methods for simple zeros
  1.7 Review of Traub's period to date

2 Some new variants of Newton's method of order three, four and five
  2.1 Construction of new methods
  2.2 Convergence Analysis of the methods
  2.3 Numerical Examples
  2.4 Concluding Remarks

3 Class of modified Newton's method having fifth and sixth order convergence
  3.1 Construction of new methods
    3.1.1 A three step fifth order I.F.
    3.1.2 New class of I.F. with order six
  3.2 Convergence Analysis of the methods
  3.3 Numerical Examples
  3.4 Concluding Remarks

4 Two families of Newton-type methods having fourth and sixth order convergence
  4.1 Construction of new methods
    4.1.1 Family of Optimal fourth order I.F.
    4.1.2 Family of sixth order I.F.
  4.2 Convergence Analysis of the methods
  4.3 Numerical Examples
  4.4 Concluding Remarks

5 Family of higher order multi-point iterative methods based on power mean
  5.1 Construction of new methods
    5.1.1 Family of Optimal fourth order I.F.
    5.1.2 Family of sixth order I.F.
    5.1.3 Family of twelfth order I.F.
  5.2 Convergence Analysis of the methods
  5.3 Numerical Examples
  5.4 Dynamic Behaviour in the Complex Plane
  5.5 Concluding Remarks

6 Some New Multi-point Iterative Methods and their Basins of Attraction
  6.1 Construction of new methods
    6.1.1 Convergence Analysis of the methods
  6.2 Higher Order Methods
    6.2.1 Convergence Analysis of the methods
  6.3 Numerical Examples
  6.4 Basins of attraction
    6.4.1 Polynomiographs of p1(z) = z^3 − 1
    6.4.2 Polynomiographs of p2(z) = z^4 − 1
  6.5 A study on extraneous fixed points
  6.6 An application problem
  6.7 Concluding Remarks

7 Improved Harmonic Mean Newton-type methods for system of nonlinear equations
  7.1 Construction of new methods
  7.2 Convergence Analysis of the methods
  7.3 Efficiency of the Methods
  7.4 Numerical Examples
  7.5 Application
  7.6 Concluding Remarks

8 Efficient Newton-type methods for system of nonlinear equations
  8.1 Construction of new methods
  8.2 Convergence Analysis of the methods
  8.3 Efficiency of the Methods
  8.4 Numerical examples
  8.5 Applications
    8.5.1 Chandrasekhar's equation
    8.5.2 1-D Bratu problem
  8.6 Concluding Remarks

9 An improvement to double-step Newton-type method and its multi-step version
  9.1 Construction of new methods
  9.2 Convergence Analysis of the methods
  9.3 Efficiency of the Methods
  9.4 Numerical examples
  9.5 Applications
    9.5.1 Chandrasekhar's equation
    9.5.2 2-D Bratu problem
  9.6 Concluding Remarks

10 Application in Global Positioning System
  10.1 Introduction
  10.2 Basic Equations for Finding User Position
  10.3 Measurement of Pseudorange
  10.4 Solution of User Position from Pseudoranges
  10.5 Numerical Results for the GPS Problem
  10.6 Concluding Remarks

11 Conclusion and Future Work

Bibliography

List of Tables

6.1 Comparison of convergent and divergent grids for p1(z)
6.2 Comparison of convergent and divergent grids for p2(z)
6.3 Comparison of results for Planck's radiation law problem
6.4 Comparison of results for Planck's radiation law problem
6.5 Comparison of results for Planck's radiation law problem
6.6 Results for Planck's radiation law problem in fzero
6.7 Comparison of Efficiency Index
7.1 Comparison of EI and CE
7.2 Comparison of number of λ's (out of 350 λ's) for 1-D Bratu problem
8.1 Comparison of EI and CE
8.2 Weights and knots for the Gauss-Legendre formula (m = 8)
8.3 Numerical results for Chandrasekhar's equation
8.4 Comparison of number of λ's (out of 350 λ's) for 1-D Bratu problem
9.1 Comparison of EI and CE
9.2 Comparison of iteration and errors for Chandrasekhar's equation
9.3 Comparison of number of λ's for 2-D Bratu problem for n = 10
9.4 Comparison of number of λ's for 2-D Bratu problem for n = 20
10.1 Coordinates of observed satellite and pseudorange
10.2 Comparison of the iterative methods for GPS

List of Figures

3.1 Comparison of iterations for f2(x), x^(0) = −1.7 for 3rd PM
3.2 Comparison of error for f2(x), x^(0) = −1.7 for 3rd PM
3.3 Comparison of iterations for f2(x), x^(0) = −1.7 for 6th PMM
3.4 Comparison of error for f2(x), x^(0) = −1.7 for 6th PMM
5.1 Results for f9(x)
5.2 Polynomiographs of p1(z)
6.1 Polynomiographs of p1(z)
6.2 Polynomiographs of p2(z)
7.1 Comparison of Efficiency index and Computational efficiency index
7.2 Variation of θ for different values of λ
7.3 Variation of number of iterations with λ for 1-D Bratu problem
7.4 Order of the (2r + 4)th HM family for each λ
8.1 Comparison of Efficiency index and Computational efficiency index
9.1 Comparison of Efficiency index and Computational efficiency index
9.2 Variation of θ for different values of λ
10.1 Two dimensional user position
10.2 Three dimensional user position

Summary

In this thesis, we propose and study new multi-point iterative methods together with their error analysis. Chapter 1 presents some preliminaries and a literature review leading towards this work. Chapters 2-5 consider some new families of higher order multi-point iterative methods for solving scalar nonlinear equations. We have studied the local convergence of all the proposed methods using Taylor series analysis. We have also applied the new methods to many examples and compared them with some existing equivalent methods; the results are tabulated in the respective chapters. In Chapter 5, basins of attraction for the fourth order methods have been studied for a few cases. In Chapter 6, we have presented a new family of fourth order iterative methods which uses weight functions. Further, we have extended them to sixth and twelfth order methods. The performances of the proposed methods are displayed by applying them to many examples. An application problem arising from Planck's radiation law is also verified. Basins of attraction analysis on the complex domain is carried out on certain standard complex polynomials for the proposed methods and some equivalent methods, and the results are displayed using polynomiographs. Chapter 7 considers a new fourth order Newton-like method based on the harmonic mean and its multi-step version for solving systems of nonlinear equations. The new fourth order method requires the evaluation of one function and two first order Fréchet derivatives per iteration. The multi-step version requires one more function evaluation per iteration and has convergence order 2r + 4, where r ≥ 1 is a positive integer. We have proved that the root α is a point of attraction for a general iterative function, and that the proposed new schemes also satisfy this result. Numerical experiments, including an application to the 1-D Bratu problem, are given to illustrate the efficiency of the new methods.
In Chapter 8, we have presented some efficient iterative methods of convergence order four, five and six, respectively, for solving systems of nonlinear equations. Our aim is to achieve higher order Newton-like methods with only one inverse of the Jacobian matrix per iteration. It is proved that the new iterative schemes satisfy the result that the root α is a point of attraction. The performance of the proposed methods is verified through numerical examples, and we have considered Chandrasekhar's equation and the 1-D Bratu problem as applications. In Chapter 9, we have proposed a new method with convergence order five by improving the double-step Newton method. Its multi-step version requires one more function evaluation per step and converges with order 3r + 5, where r ≥ 1 is a positive integer. Numerical experiments compare the new methods with some existing methods. Our methods are also verified on Chandrasekhar's problem and the 2-D Bratu problem to illustrate their applications. Chapter 10 considers the nonlinear system of equations arising from a GPS receiver, whose data is carried on electromagnetic signals transmitted by the earth-orbiting GPS satellite constellation, and the computation of the travel time of these received signals. The time measurements are converted to distance measurements, which can be used to compute the unknown position and time of the receiver from the known positions of the satellite transmitters and the signal transit times. A set of nonlinear navigation equations is formed. These nonlinear equations are solved using iterative techniques based on the newly developed Newton-type methods given in Chapters 7-9. The results indicate that the new Newton-type methods are simple, fast and accurate compared to Newton's method in predicting the earth coordinates in GPS.

Publications

Refereed Papers

1. Kalyanasundaram M., J. Jayakumar, “Some higher order Newton-like methods for solving system of nonlinear equations and their applications”, Int. J. Appl. Comput. Math., (2016), DOI: 10.1007/s40819-016-0234-z. (Springer)
2. Kalyanasundaram M., D.K.R. Babajee, J. Jayakumar, “An improvement to double-step Newton method and its multi-step version for solving system of nonlinear equations and its applications”, Numerical Algorithms, (2016), DOI: 10.1007/s11075-016-0163-2. (Springer, SCIE, IF = 1.42)
3. Kalyanasundaram M., J. Jayakumar, “Higher Order Methods for Nonlinear Equations and their Basins of Attraction”, Mathematics, 4 (2016), 22, DOI: 10.3390/math4020022. (Thomson Reuters - ESCI)
4. D.K.R. Babajee, Kalyanasundaram M., J. Jayakumar, “On some improved Harmonic Mean Newton-like methods for solving systems of nonlinear equations”, Algorithms, 8 (2015), pp. 895-909. (Thomson Reuters - ESCI)
5. D.K.R. Babajee, Kalyanasundaram M., J. Jayakumar, “A family of higher order multi-point iterative methods based on power mean for solving nonlinear equations”, Afrika Matematika, (2015), DOI: 10.1007/s13370-015-0380-1. (Springer, Scopus)
6. Kalyanasundaram M., J. Jayakumar, “Two new families of iterative methods for solving nonlinear equations”, Tamsui Oxf. J. Inf. Math. Sci., (2015), (Accepted). (Scopus)
7. Kalyanasundaram M., J. Jayakumar, “Class of modified Newton's method for solving nonlinear equations”, Tamsui Oxf. J. Inf. Math. Sci., 30 (2014), pp. 91-100. (Scopus)
8. Kalyanasundaram M., J. Jayakumar, “A fifth order modified Newton type method for solving nonlinear equations”, IJAER, 10(72) (2015), pp. 83-86. (Scopus)
9. J. Jayakumar, Kalyanasundaram M., “Power means based modification of Newton's method for solving nonlinear equations with cubic convergence”, IJAMC, 6(2) (2015), pp. 1-6.
10. J. Jayakumar, Kalyanasundaram M., “Generalized Power means modification of Newton's method for Simple Roots of Nonlinear Equation”, Int. J. Pure Appl. Sci. Technol., 18(2) (2013), pp. 45-51.
11. J. Jayakumar, Kalyanasundaram M., “Modified Newton's method using harmonic mean for solving nonlinear equation”, IOSR Journal of Mathematics, 7 (2013), pp. 93-97.

Refereed Conference Proceedings

1. Kalyanasundaram M., J. Jayakumar, “Higher order Multi-point Iterative methods and its Application to GPS and Nonlinear Differential Equations”, Book of Abstracts of the National Conference on Recent Developments in Mathematical Analysis and its Applications, Pondicherry University, February 25-26, 2016.
2. Kalyanasundaram M., J. Jayakumar, “A fifth order modified Newton type method for solving nonlinear equations”, Proceedings of the Interdisciplinary National Conference on Soft computing and its Applications, Anna University, India, Vol. 1, pp. 123-128, 2015.
3. J. Jayakumar, Kalyanasundaram M., “Two Class of Sixth Order Modified Newton's Method for Solving Nonlinear Equations”, Proceedings of the International Conference on Mathematics and its Applications, Anna University, India, pp. 967-975, 2014.

Glossary of Symbols

R             Set of Real numbers
C             Set of Complex numbers
x             Element of R
z             Element of C
f(x)          Nonlinear function in R
x*            Root of f(x)
x^(0)         Initial point
e             = x − x*
e^(k)         = x^(k) − x*
error         = |x^(k+1) − x^(k)|
ψ(x)          Iterative Function (I.F.)
d             Functional Evaluations (F.E.) per iteration
              Total no. of function evaluations
EI            Efficiency Index
CE            Computational Efficiency index
EI_T          = p/d
EI_O          = p^(1/d)
N             Number of iterations for the scalar case
c_j           = f^(j)(x*)/(j! f'(x*)), j = 2, 3, ...
p             Order of convergence
ACOC          Approximated computational order of convergence
ρ̃_k           ACOC for the scalar case
x             = (x_1, x_2, ..., x_n)^T
F(x)          = (f_1(x), f_2(x), ..., f_n(x))^T
x*            = (x_1*, x_2*, ..., x_n*)^T, root of F(x)
F'(x)         Fréchet derivative of F(x)
S(x*, δ)      Open ball {‖x* − x‖ < δ}, δ > 0
S̄(x*, δ)      Closed ball {‖x* − x‖ ≤ δ}, δ > 0
F(x^(k))      F(x) evaluated at x = x^(k)
e^(k)         = x^(k) − x*
C_q           = (1/q!)[F'(x*)]^(−1) F^(q)(x*), q ≥ 2
M             Number of iterations for the multivariate case
errmin        = ‖x^(k+1) − x^(k)‖_2
p_c           ACOC for systems of equations
J(R)          Julia set
F(R)          Fatou set
TP            Test Problem
λ, β          Real parameters
λ_c           Critical value of λ
f^(−1)        Inverse of f
ξ             Extraneous fixed points
1-D           One dimensional (in the Bratu problem)
2-D           Two dimensional (in the Bratu problem)
GPS           Global Positioning System

Chapter 1

Introduction

One of the popular iterative methods for finding approximate solutions of nonlinear equations is the classical Newton's method, which has quadratic convergence. Higher order methods such as Chebyshev's, Halley's and the Chebyshev-Halley methods require the evaluation of second or higher order derivatives and hence are less efficient and more costly in terms of computational time. Calculating zeros of a scalar nonlinear equation f(x) = 0 or of a system of nonlinear equations F(x) = 0 ranks among the most significant problems in the theory and practice not only of applied mathematics but also of many branches of the engineering sciences, physics, computer science and finance, to mention only some of the fields. These problems lead to a rich blend of mathematics, numerical analysis and computing science. Thus, it is important to study higher order variants of Newton's method which require only evaluations of the function and its first derivative and are more robust than Newton's method.

1.1 Classification of iterative methods

Let f be a real single-valued function of a real variable. If f(x*) = 0, then x* is said to be a zero of f or, equivalently, a root of the equation

f(x) = 0.   (1.1)

We will always assume that f has a sufficient number of continuous derivatives in a neighborhood of x*. Roots of equation (1.1) can be found analytically only in some special cases. To solve (1.1) approximately and find the root x*, it is customary to apply an iterative method of the form

x^(k+1) = ψ(x^(k)), k = 0, 1, 2, ...,   (1.2)

where x^(k) is an approximation to the root x* isolated in a real interval [a, b], x^(k+1) is the next approximation and ψ is a suitable continuous function defined on [a, b]. The iterative method starts with an initial approximation x^(0) ∈ [a, b] and converges to x*. The function ψ is called an Iteration Function (I.F.). Formula (1.2) defines the simplest iterative method, where only one previous approximation x^(k) is required for evaluating the next approximation x^(k+1). Such an iterative method is called a one-point iterative method without memory.

Let x^(k+1) be determined by new information at x^(k) and reused information at x^(k−1), ..., x^(k−n), 1 ≤ n ≤ k. Thus

x^(k+1) = ψ(x^(k); x^(k−1), ..., x^(k−n)).   (1.3)

Then ψ is called a one-point I.F. with memory. The semicolon in equation (1.3) separates the point at which new data are used from the points at which old data are reused.

Let x^(k+1) be determined by new information at x^(k), φ1(x^(k)), ..., φi(x^(k)), i ≥ 1, with no old information reused. Thus

x^(k+1) = ψ(x^(k), φ1(x^(k)), ..., φi(x^(k))).   (1.4)

Then ψ is called a multipoint I.F. without memory.

Finally, let x^(k+1) be determined by new information at x^(k), x^(k−1), ..., x^(k−n) and reused information at x^(k−n−1), ..., x^(k−m), m > n. Thus

x^(k+1) = ψ(x^(k), x^(k−1), ..., x^(k−n); x^(k−n−1), ..., x^(k−m)).   (1.5)

Then ψ is called a multi-point I.F. with memory. The semicolon in equation (1.5) separates the points at which new data are used from the points at which old data are reused.
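These three classes can be made concrete with simple scalar root-finders. The following sketch is illustrative only (the sample function, starting values and iteration counts are our own choices, not taken from the thesis): Newton's method is a one-point I.F. without memory, the secant method is a one-point I.F. with memory, and a two-step Traub-type scheme is a multipoint I.F. without memory.

```python
# Sample equation f(x) = 0 with simple root x* = 2**(1/3)
f = lambda x: x**3 - 2
df = lambda x: 3 * x**2

def newton(x0, k=8):
    """One-point I.F. without memory: x^(k+1) = psi(x^(k)) uses x^(k) only."""
    x = x0
    for _ in range(k):
        x = x - f(x) / df(x)
    return x

def secant(x0, x1, k=12):
    """One-point I.F. with memory: psi(x^(k); x^(k-1)) reuses the previous iterate."""
    for _ in range(k):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:          # iterates have merged to machine precision
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

def two_step(x0, k=6):
    """Multipoint I.F. without memory: new information at x^(k) and at the
    auxiliary point phi1(x^(k)) = x^(k) - f(x^(k))/f'(x^(k)); nothing is reused."""
    x = x0
    for _ in range(k):
        y = x - f(x) / df(x)      # phi1(x^(k))
        x = y - f(y) / df(x)      # psi(x^(k), phi1(x^(k)))
    return x

root = 2 ** (1 / 3)
print(abs(newton(1.5) - root), abs(secant(1.0, 1.5) - root), abs(two_step(1.5) - root))
```

All three drive f to its root; they differ only in which points each iteration consumes, which is exactly the classification above.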

1.2 Order of Convergence

The convergence rate of an iterative method is an issue of equal importance to the theory and practice of iterative methods as the convergence itself. The convergence rate is defined by the order of convergence.

Definition 1.2.1. Traub (1964) Let x^(0), x^(1), ..., x^(k), ... be a sequence converging to x* and let e^(k) = x^(k) − x*. If there exist p ∈ R and C ∈ R − {0} such that

e^(k+1)/(e^(k))^p → C,   (1.6)

then p is called the order of the sequence and C is the asymptotic error constant.

Definition 1.2.2. Wait (1979) If the sequence {x^(k)} tends to a limit x* in such a way that

lim_{k→∞} (x^(k+1) − x*)/(x^(k) − x*)^p = C

for p ≥ 1, then the order of convergence of the sequence is said to be p, and C is known as the asymptotic error constant. If p = 1, p = 2 or p = 3, the convergence is said to be linear, quadratic or cubic, respectively.

Let e^(k) = x^(k) − x*; then the relation

e^(k+1) = C (e^(k))^p + O((e^(k))^(p+1)) = O((e^(k))^p)   (1.7)

is called the error equation. The value of p is called the order of convergence of the method. In practice, the order of convergence is often determined by the following statement, known as the Schröder-Traub theorem in Traub (1964).

Theorem 1.2.1. Let ψ be an I.F. such that ψ^(p) is continuous in a neighborhood of x*. Then ψ is of order p if and only if

ψ(x*) = x*;  ψ^(j)(x*) = 0, j = 1, 2, ..., p − 1;  ψ^(p)(x*) ≠ 0.   (1.8)

The asymptotic error constant is given by

lim_{k→∞} |ψ(x^(k)) − x*|/|x^(k) − x*|^p = ψ^(p)(x*)/p! = C_p.   (1.9)

For example, consider Newton's iteration ψ(x) = x − u(x), where u(x) = f(x)/f'(x). By a direct calculation we find that

ψ(x*) = x*,  ψ'(x*) = 0,  ψ''(x*) = f''(x*)/f'(x*) ≠ 0.

We have

|ψ(x^(k)) − x*|/|x^(k) − x*|^2 → C2 = |f''(x*)/(2f'(x*))|.   (1.10)

Therefore, Newton's iteration has second order convergence. The following two theorems are concerned with the order of the composition of iteration functions.

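Before turning to these composition theorems, the quadratic behaviour (1.10) can be checked numerically. The sketch below is an illustration with f(x) = x^3 − 2 (our own choice, not an example from the thesis): the ratios |e^(k+1)|/|e^(k)|^2 settle near |f''(x*)/(2f'(x*))| = 1/x* = 2^(−1/3) ≈ 0.794.

```python
f = lambda x: x**3 - 2
df = lambda x: 3 * x**2
root = 2 ** (1 / 3)            # simple zero of f

x = 1.5
ratios = []
for _ in range(4):
    e = x - root               # e^(k)
    x = x - f(x) / df(x)       # Newton step psi(x) = x - f(x)/f'(x)
    ratios.append(abs(x - root) / e ** 2)   # |e^(k+1)| / |e^(k)|^2

# Theoretical limit: C2 = |f''(x*)/(2 f'(x*))| = 1/x* = 2**(-1/3) ≈ 0.794
print(ratios)
```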

Theorem 1.2.2. Traub (1964) Let x* be a simple zero of a function f and let ψ1(x) define an iterative method of order p1. Then the composite I.F. ψ2(x) introduced by a Newton step,

ψ2(x) = ψ1(x) − f(ψ1(x))/f'(x),   (1.11)

defines an iterative method of order p1 + 1.

Theorem 1.2.3. Traub (1964) Let ψ1(x), ψ2(x), ..., ψs(x) be iterative functions of orders p1, p2, ..., ps, respectively. Then the composition ψ(x) = ψ1(ψ2(...(ψs(x))...)) defines an iterative method of order p1 p2 ... ps.

Babajee (2012) developed a technique using weight functions to improve the order of old methods.

Theorem 1.2.4. Babajee (2012) Let a sufficiently smooth function f : D ⊂ R → R have a simple root x* in the open interval D and let ψ_old(x) be an I.F. of order p. Then the I.F. defined as ψ_new(x) = ψ_old(x) − G × f(ψ_old(x)) is of local convergence order p + q if G is a weight function satisfying

G = (1/f'(x*)) (1 + C_G e^q + O(e^(q+1))),

where C_G is a constant. Suppose that the error equation of the old I.F. is given by

e_old = ψ_old(x) − x* = C_old e^p + ... .

Then the error equation of the new I.F. is given by

e_new = ψ_new(x) − x* = −C_G C_old e^(p+q) − c2 C_old^2 e^(2p) + ...,

where

c_j = f^(j)(x*)/(j! f'(x*)), j = 2, 3, 4, ... .   (1.12)

1.3

Computational order of convergence

Together with the order of convergence, for practical purposes we describe the notion of the computational order of convergence (COC). Namely, it is of interest to check the order of convergence of an iterative method during its practical implementation and estimate how much it differs from the theoretical order.

Definition 1.3.1. Let x^(k−1), x^(k) and x^(k+1) be the last three successive approximations to the sought zero x* obtained in the iterative process x^(k+1) = ψ(x^(k)) of presumably order p. Then the computational order of convergence ρ can be approximated using the formula

ρ = ln |(x^(k+1) − x*)/(x^(k) − x*)| / ln |(x^(k) − x*)/(x^(k−1) − x*)|.   (1.13)

The COC has been used in many papers to test numerically the order of convergence of new methods whose order has been studied theoretically. Since the zero x* is unknown in practice, we use another approach that avoids the use of x*, studied by Cordero and Torregrosa (2007), by introducing a more realistic relationship called the approximated computational order of convergence (ACOC). The ACOC of a sequence {x^(k)}_(k≥0) is defined by

ρ̃_k = ln |ê^(k+1)/ê^(k)| / ln |ê^(k)/ê^(k−1)|,   (1.14)

where ê^(k) = x^(k) − x^(k−1). It was proved by Grau and Diaz-Barrero (2000) that ρ̃_k → p when ê^(k−1) → 0, which means that ρ̃_k ≈ p in the sense

lim_{k→∞} ρ̃_k/p = 1.

The use of the computational order of convergence, given by (1.13) and (1.14), serves as a practical check on the theoretical error calculations. These formulae give mainly satisfactory results in practice. Apart from estimating the real convergence rate of an iterative method in a practical realization, the computational order of convergence may also be applied in designing new root-solvers. Namely, in some complicated cases it is not easy to find the theoretical order of convergence of such a method. Test examples that include the calculation of the ACOC can help predict the convergence speed of the designed method, which makes the further convergence analysis easier.
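As a small self-contained check (illustrative; the test function and starting point are our own choices, not from the thesis), the ACOC (1.14) applied to a Newton sequence for f(x) = x^3 − 2 approaches the theoretical order p = 2 without using the exact zero x*:

```python
import math

f = lambda x: x**3 - 2
df = lambda x: 3 * x**2

# A short Newton sequence x^(0), x^(1), ... whose theoretical order is p = 2
xs = [1.5]
for _ in range(5):
    xs.append(xs[-1] - f(xs[-1]) / df(xs[-1]))

# ACOC (1.14): rho_k = ln|e^(k+1)/e^(k)| / ln|e^(k)/e^(k-1)|, e^(k) = x^(k) - x^(k-1)
ehat = [xs[i] - xs[i - 1] for i in range(1, len(xs))]
acoc = [math.log(abs(ehat[i + 1] / ehat[i])) / math.log(abs(ehat[i] / ehat[i - 1]))
        for i in range(1, len(ehat) - 1)]
print(acoc)
```

Only the computed iterates enter the formula, which is what makes the ACOC usable when x* is unknown.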

1.4

Computational Efficiency

In practice, it is important to know certain characteristics of the applied root-finding algorithm, for instance, the number of numerical operations in calculating the desired root to the wanted accuracy, convergence speed, processor running time, occupation of storage space, etc. In spite of the ever-growing speed of modern computers, these features remain important issues due to the constantly increasing complexity of the problems solved by computers. To compare various numerical algorithms, it is necessary to define computational efficiency based on the speed of convergence (order), the cost of evaluating f and its derivatives (problem cost), and the cost of constructing the iterative process (combinatory cost).

6

Obviously, a root-finding method is more efficient as its amount of computational work is smaller, keeping the remaining parameters fixed. In other words, the most efficient method is the one that satisfies the posted stopping criterion in the smallest CPU (central processing unit) time. The following definitions are used to calculate the Efficiency Index and the Informational Efficiency Index:

Definition 1.4.1. Ostrowski (1960) The Efficiency Index is defined as

EIO = p^{1/d}, (1.15)

where p is the order of convergence of the method and d is the total number of new function evaluations (the values of f and its derivatives) per iteration.

Definition 1.4.2. Traub (1964) The Informational Efficiency Index (EIT) is defined as

EIT = p/d. (1.16)

The following alternative formula, obtained by taking the logarithm of (1.15),

EI∗ = (log p)/d, (1.17)

does not essentially differ from (1.15) (see McNamee (2007)). The computational cost of any I.F. ψ constructed for solving a nonlinear equation f depends on the number of function evaluations (F.E.) per iteration. The connection between the order of convergence of ψ and the cost of evaluating f and its derivatives is given by the so-called fundamental theorem of one-point I.F., stated by Traub (1964).

Theorem 1.4.1. Traub (1964) Let ψ be any one-point iterative function of order p and let dψ be the number of new F.E. per iteration. Then for any p there exists ψ with informational efficiency EI(ψ) = p/dψ = 1, and for all ψ it holds that EI(ψ) = p/dψ ≤ 1. Moreover, ψ must depend explicitly on the first p − 1 derivatives of f.

Consequently, a one-point iteration function with sufficiently smooth f cannot attain EIT greater than 1. This means that iterative methods with EIT greater than 1 can be found only in the class of multipoint methods, which will be discussed later in detail. The main goal in the construction of new methods is to obtain a method with the best possible efficiency. This means that, according to definition (1.15) or (1.17), it is desirable to attain as high a convergence order as possible with the fixed number


of F.E. per iteration. In the case of multipoint methods without memory, this demand is closely related to the optimal order of convergence considered in the Kung-Traub conjecture.

Kung-Traub Conjecture (Kung and Traub (1974)): Let ψ be an I.F. without memory with d function evaluations. Then

p(ψ) ≤ popt = 2^{d−1}, (1.18)

where popt is the maximum order. Multipoint methods that satisfy the Kung-Traub conjecture are usually called “optimal methods”. Algorithms of optimal efficiency are of particular interest in the present trend of research, and we have strived to obtain some optimal methods in this thesis. The Kung-Traub conjecture is supported by the families of multipoint methods of arbitrary order p proposed in Kung and Traub (1974) and Zheng et al. (2011), and also by a number of particular multipoint methods, which will be discussed later in this chapter.
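For instance, the indices (1.15)-(1.16) and the bound (1.18) can be tabulated for some of the methods reviewed later in this chapter (a small sketch; the (p, d) pairs are read off the corresponding iterative formulae, as noted in the comments):

```python
# (p, d) = (order, new function evaluations per iteration) for some of the
# methods reviewed in this chapter.
methods = {
    "2nd NM (1.19)":  (2, 2),   # f and f'
    "4th OM1 (1.39)": (4, 3),   # f twice, f' once
    "4th JM (1.42)":  (4, 3),   # f once, f' twice
    "5th JM (1.41)":  (5, 4),   # f once, f' three times
}

def ei_o(p, d):              # Ostrowski's Efficiency Index (1.15)
    return p ** (1.0 / d)

def ei_t(p, d):              # Traub's Informational Efficiency Index (1.16)
    return p / d

def is_optimal(p, d):        # Kung-Traub bound (1.18): p_opt = 2^(d-1)
    return p == 2 ** (d - 1)

optimal = {name: is_optimal(p, d) for name, (p, d) in methods.items()}
```

Newton's method gives EIO = 2^{1/2} ≈ 1.414, while the optimal fourth-order methods reach EIO = 4^{1/3} ≈ 1.587; the fifth-order method (1.41), using four F.E., falls short of the bound popt = 8.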

1.5 Initial approximations

Every iterative method for solving a nonlinear equation f requires the knowledge of an initial approximation x(0) to the sought root x∗. Many one-point root-finding methods, as well as multipoint iterative methods, are based on Newton's method, which is famous for its simplicity and good local convergence properties. However, good convergence of Newton's method cannot be expected when the initial guess is not properly chosen, especially when the slope of the function f is extremely flat or very steep near the root, or when f is of oscillatory type. The choice of initial approximations becomes even more important if higher-order iterative methods are applied, owing to their sensitivity to perturbations. If the initial approximation is not close enough to the zero, these methods may converge slowly at the beginning of the iterative process, which consequently decreases their computational efficiency.


1.6 One-point iterative methods for simple zeros

In this section we give a review of the most frequently used one-point iterative methods for solving nonlinear equations. Since there is a vast literature studying these methods, including their derivation, convergence behavior and numerical experiments, we present only the basic iterative formulae which are used or cited in later chapters. Let ψ(x) be a general I.F. and let f(x) = 0 be a given nonlinear equation with a simple root x∗ located in some interval [a, b]. The best known iterative method for solving nonlinear equations is the classical Newton's method (2nd NM), having quadratic convergence:

ψ2nd NM(x) = x − u(x), where u(x) = f(x)/f′(x), f′(x) ≠ 0. (1.19)

For small values of h the approximation

f′(x) ≈ f̃′(x) = (f(x + h) − f(x))/h (1.20)

holds. Taking two consecutive approximations x(k−1) and x(k), from (1.20) we obtain the approximation to the first derivative in the form

f′(x(k)) = (f(x(k)) − f(x(k−1)))/(x(k) − x(k−1)). (1.21)

Substituting (1.21) into (1.19) yields the iterative formula

ψSec(x(k−1), x(k)) = x(k) − f(x(k)) (x(k) − x(k−1))/(f(x(k)) − f(x(k−1))), (1.22)

which defines the well-known secant method. The convergence order of this method is (1 + √5)/2 ≈ 1.618; it possesses superlinear convergence and does not require the evaluation of the derivative of f(x). Taking h = f(x) in (1.20) and substituting it in the Newton formula (1.19), we obtain the derivative-free Steffensen method

ψ2nd Ste(x) = x − f(x)² / (f(x + f(x)) − f(x)). (1.23)
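A minimal Python sketch of the three iterations (1.19), (1.22) and (1.23), applied to the standard test equation x³ + 4x² − 10 = 0 (the helper names are ours):

```python
f = lambda x: x**3 + 4.0 * x**2 - 10.0     # simple root near 1.3652
df = lambda x: 3.0 * x**2 + 8.0 * x

def newton_step(x):                        # (1.19)
    return x - f(x) / df(x)

def steffensen_step(x):                    # (1.23): derivative free, h = f(x)
    fx = f(x)
    denom = f(x + fx) - fx
    if denom == 0.0:                       # converged to machine precision
        return x
    return x - fx * fx / denom

def secant(x0, x1, n=12):                  # (1.22), two starting points
    for _ in range(n):
        f0, f1 = f(x0), f(x1)
        if f1 == f0:                       # iterates have stalled/converged
            break
        x0, x1 = x1, x1 - f1 * (x1 - x0) / (f1 - f0)
    return x1

def fixed_iter(step, x, n=8):
    for _ in range(n):
        x = step(x)
    return x

root_newton = fixed_iter(newton_step, 1.5)
root_steff = fixed_iter(steffensen_step, 1.5)
root_secant = secant(1.0, 1.5)
```

All three converge to the same simple root; the secant iteration needs a few more steps than Newton's, reflecting its lower order 1.618, while Steffensen attains order two without a derivative at the cost of a second f-evaluation.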

The iterative method (1.23) belongs to the class of multipoint methods and has order two. The Halley I.F. (3rd Hal) and Chebyshev I.F. (3rd Che), both having cubic convergence, are given by

ψ3rd Hal(x) = x − u(x)/(1 − c2(x)u(x)), (1.24)


ψ3rd Che(x) = x − u(x) − c2(x)u(x)², (1.25)

where c2(x) is defined as in (1.12). A half century ago, Traub (1964) proved that a one-point iterative method for solving a single nonlinear equation of the form f(x) = 0, which requires the evaluation of the given function f and its first p − 1 derivatives, can reach an order of convergence of at most p. For this reason, great attention has been paid to multipoint iterative methods, since they overcome the theoretical limits of one-point methods concerning convergence order and computational efficiency. Besides Traub's research presented in his fundamental book Traub (1964), this class of methods was extensively studied in papers published in the 1970s (see Jarratt (1966a, 1969), King (1973), Kung and Traub (1974)). Surprisingly, interest in multipoint methods has grown again in the first decade of this century. However, some of the newly developed methods were represented by new iterative formulae without any improvement over the existing methods, while others were merely rediscovered methods of the 1960s (see Petkovic and Petkovic (2007) for more details); only a few new methods have brought a genuine advance in the theory and practice of iterative processes. In the next section, we give a review of some of the I.F. from Traub's period to the present date.
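The cubic methods (1.24) and (1.25) can be sketched as follows (a minimal Python illustration with our own helper names, assuming the usual normalization c2(x) = f″(x)/(2f′(x)) behind (1.12), which appears earlier in the chapter):

```python
f = lambda x: x**3 - 2.0 * x - 5.0       # Wallis' classic test equation
df = lambda x: 3.0 * x**2 - 2.0
d2f = lambda x: 6.0 * x

def u(x):
    return f(x) / df(x)

def c2(x):
    # c2(x) = f''(x) / (2 f'(x)), the standard normalization (assumed here)
    return d2f(x) / (2.0 * df(x))

def halley_step(x):       # (1.24)
    return x - u(x) / (1.0 - c2(x) * u(x))

def chebyshev_step(x):    # (1.25)
    return x - u(x) - c2(x) * u(x) ** 2

def run(step, x, n=6):
    for _ in range(n):
        x = step(x)
    return x

root_halley = run(halley_step, 2.0)
root_cheb = run(chebyshev_step, 2.0)
```

Both one-point methods use f, f′ and f″ per iteration (d = 3 for order p = 3), which is exactly the EIT = 1 ceiling of Theorem 1.4.1.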

1.7 Review of Traub's period to date

In the following, we review many iterative methods proposed from the period of Traub (1964) onwards, where we specify some of the methods with their I.F. and many of the methods without. All the methods given here with I.F. are more relevant to our present work. The most important contribution which deals with many methods comprehensively is the book by Traub (1964). In this book, Traub derived a number of cubically convergent two-point methods. One of the presented approaches relies on interpolation, which will be illustrated by several examples. Let x be a fixed point and let f be a real function whose zero is sought. We construct an interpolation function Φ(t) such that

Φ(r)(t) = f(r)(t), r = 0, 1, ..., n, (1.26)


which makes use of rj values of the functions f(j) with 0 ≤ j ≤ q < n. That is, we wish to replace the dependence of the interpolating function on the higher order derivatives of f by lower derivatives evaluated at a number of points. We do not require that Φ(t) is necessarily an algebraic polynomial. A general approach to the construction of multipoint methods of interpolatory type is presented in Traub (1964). To construct two-point methods of third order free from the second derivative, we restrict ourselves to the special case

Φ(t) = f(x) + (t − x)[a1 f′(x) + a2 f′(x + b2(t − x))]. (1.27)

The condition Φ(x) = f(x) is automatically fulfilled and we only impose the additional conditions (1.26) to hold at the point t = x for r = 1, 2. Further, following the details found in Traub (1964), we obtain the system of equations a1 + a2 = 1, 2b2 a2 = 1. This system has a solution for any b2 ≠ 0. For b2 = 1/2, it follows that a1 = 0, a2 = 1, so that (1.27) becomes

Φ(t) = f(x) + (t − x) f′(x + (t − x)/2). (1.28)

For b2 = 1, it follows that a1 = a2 = 1/2, and from (1.27) we get

Φ(t) = f(x) + (t − x) · (1/2)[f′(x) + f′(t)]. (1.29)

Let t = ψ be a zero of Φ, that is, Φ(ψ) = 0. Putting t = ψ in (1.28) we get

ψ(x) = x − f(x)/f′(x + (ψ − x)/2). (1.30)

This is an implicit relation in ψ. Substituting ψ by Newton's approximation x − u(x) on the right-hand side of (1.30), we get the third order midpoint method (3rd MP)

ψ3rd MP(x) = x − f(x)/f′(x − u(x)/2). (1.31)

Note that (1.31) was rediscovered much later by Frontini and Sormani (2003), who used the midpoint quadrature rule. The same method has also been rediscovered in different ways by Homeier (2003) and Ozban (2004). Another two-point method can be obtained from (1.29) by taking t = ψ, which gives the relation

ψ(x) = x − 2f(x)/(f′(x) + f′(ψ)). (1.32)


Replacing ψ by x − u(x) on the right-hand side of (1.32), we obtain a two-point method of third order known as the arithmetic mean method (3rd AM)

ψ3rd AM(x) = x − 2f(x)/(f′(x) + f′(x − u(x))). (1.33)

Note that Traub pointed out that the 3rd AM is a generalization of Newton's I.F., in the sense that the derivative appearing in Newton's I.F. is replaced by the average of the derivatives evaluated at x and at the Newton point of x. Also note that the 3rd AM method was rediscovered many years later by Weerakoon and Fernando (2000), who derived this method by the use of numerical integration. Let us now employ the function g which is inverse to f in the interpolation formula (1.29). Then we have

Φ(t) = g(y) + (t − y) · (1/2)[g′(y) + g′(t)]. (1.34)

For t = 0 we define ψ = Φ(0). Then, due to the relations

g′(y) = dx/dy = 1/f′(x), g′(0) = g′(f(α)) = 1/f′(α),

we obtain

ψ = x − f(x) · (1/2)[1/f′(x) + 1/f′(α)]. (1.35)

Estimating f′(α) by f′(x − u(x)), from (1.35) there follows the iterative method

ψ3rd HM(x) = x − f(x) · (1/2)[1/f′(x) + 1/f′(x − u(x))]. (1.36)

Traub (1964) pointed out that this I.F. is a generalization of Newton's I.F. in the sense that the reciprocal of the derivative is replaced by the average of the reciprocals of the derivatives evaluated at x and at the Newton point of x. This method was later rediscovered by Homeier (2003) and Ozban (2004). Traub also introduced a double-step Newton method with fourth order convergence (4th NR), obtained by evaluating f and f′ at two points:

ψ4th NR(x) = x − u(x) − f(x − u(x))/f′(x − u(x)), (1.37)

which was recently rediscovered by Noor et al. (2013) using the variational iteration technique.
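The three third-order two-point variants (1.31), (1.33) and (1.36), together with the double-step Newton method (1.37), can be sketched in a few lines of Python (a minimal illustration on the fixed-point equation cos x = x; the helper names are ours):

```python
import math

f = lambda x: math.cos(x) - x            # simple root near 0.739085
df = lambda x: -math.sin(x) - 1.0

def u(x):
    return f(x) / df(x)

def midpoint_step(x):       # 3rd MP (1.31)
    return x - f(x) / df(x - 0.5 * u(x))

def arith_mean_step(x):     # 3rd AM (1.33)
    return x - 2.0 * f(x) / (df(x) + df(x - u(x)))

def harm_mean_step(x):      # 3rd HM (1.36)
    return x - 0.5 * f(x) * (1.0 / df(x) + 1.0 / df(x - u(x)))

def double_newton_step(x):  # 4th NR (1.37)
    y = x - u(x)
    return y - f(y) / df(y)

def run(step, x, n=6):
    for _ in range(n):
        x = step(x)
    return x

roots = {name: run(s, 1.0) for name, s in [
    ("3rd MP", midpoint_step), ("3rd AM", arith_mean_step),
    ("3rd HM", harm_mean_step), ("4th NR", double_newton_step)]}
```

All four methods use one value of f and two values of f′ (or, for 4th NR, two of each), and all converge to the same root from the same starting point.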


A family of multistep I.F., in which the first derivative evaluated at x is reused in every substep, was introduced by Traub (1964) and is known as pth TM:

ψpth TM(x) = Ar(x), Aj(x) = Aj−1(x) − f(Aj−1(x))/f′(x), j = 2, 3, ..., r, A1(x) = x. (1.38)

Some of the main advantages of the pth TM method are summarized below:
• An I.F. of order p is constructed from p − 1 values of f and one value of f′.
• f′(x)−1 needs to be calculated only once, even for a high order I.F.
• The form of Ar(x) suggests a generalization to systems of equations, with f′(x)−1 replaced by [F′(x)]−1, where F′(x) is the Frechet derivative of the system. Thus, an I.F. of order p may be constructed which requires only one matrix inversion.
• The recursive definition of Ar(x) permits its calculation in a simple loop on a computer.
• The asymptotic error constant of a one-point iteration function of order p generally depends on f(p)(x∗), where p is arbitrary, but the asymptotic error constant of the pth TM family depends only on f′′(x∗)/f′(x∗).
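The recursive definition in (1.38) indeed fits in a simple loop, as the last-but-one advantage states; a minimal Python sketch (our own function names), with the frozen derivative computed once per iteration:

```python
def traub_multistep(f, df, x0, r, iters=5):
    """One member of the pth TM family (1.38): r - 1 evaluations of f and a
    single evaluation of f' per iteration, reused in every substep."""
    x = x0
    for _ in range(iters):
        slope = df(x)              # f'(x) computed once per full iteration
        a = x                      # A_1 = x
        for _ in range(r - 1):     # A_j = A_{j-1} - f(A_{j-1}) / f'(x)
            a = a - f(a) / slope
        x = a
    return x

# r = 3: third order, using two f-values and one f'-value per iteration.
root = traub_multistep(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5, r=3)
```

For a system of equations, `slope` would become the Jacobian, factored once per iteration and back-substituted in every substep, which is exactly the "one matrix inversion" advantage listed above.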

The first optimal two-point I.F. was constructed by Ostrowski (1960), several years before Traub's extensive investigation in this area described in Traub (1964). It is given by

ψ4th OM1(x) = x − u(x) (f(ψ2nd NM(x)) − f(x))/(2f(ψ2nd NM(x)) − f(x)). (1.39)

Another optimal fourth order I.F. introduced by Ostrowski (1960) is given by

ψ4th OM2(x) = ψ2nd NM(x) − (f(ψ2nd NM(x))/f′(x)) · f(x)/(f(x) − 2f(ψ2nd NM(x))). (1.40)

Jarratt (1966b) gave a three-point fifth order I.F. (5th JM)

ψ5th JM(x) = x − 6f(x)/(f′(x) + f′[ψ2nd NM(x)] + 4f′[v1(x)]), (1.41)


where v1(x) = x − (1/8)u(x) − (3/8) f(x)/f′[ψ2nd NM(x)]. Jarratt (1969) also suggested an optimal two-point fourth order I.F. (4th JM), given by

ψ4th JM(x) = x − ((3f′(x − (2/3)u(x)) + f′(x)) / (6f′(x − (2/3)u(x)) − 2f′(x))) · f(x)/f′(x). (1.42)
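The two optimal fourth-order schemes (1.39) and (1.42) can be sketched as follows (a minimal Python illustration on x³ + 4x² − 10 = 0; the helper names and the small convergence guard are ours):

```python
f = lambda x: x**3 + 4.0 * x**2 - 10.0
df = lambda x: 3.0 * x**2 + 8.0 * x

def ostrowski_step(x):             # 4th OM1 (1.39): f, f(y), f' -> d = 3
    fx = f(x)
    if fx == 0.0:
        return x
    u = fx / df(x)
    fy = f(x - u)                  # f at the Newton point
    return x - u * (fy - fx) / (2.0 * fy - fx)

def jarratt_step(x):               # 4th JM (1.42): f, f', f'(x - 2u/3) -> d = 3
    fx, dfx = f(x), df(x)
    u = fx / dfx
    dfy = df(x - (2.0 / 3.0) * u)
    return x - (3.0 * dfy + dfx) / (6.0 * dfy - 2.0 * dfx) * u

def run(step, x, n=5):
    for _ in range(n):
        x = step(x)
    return x

r1, r2 = run(ostrowski_step, 1.5), run(jarratt_step, 1.5)
```

Both are optimal in the sense of (1.18): order p = 4 from d = 3 function evaluations, so EIO = 4^{1/3} ≈ 1.587.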

King (1973) developed a one-parameter family of optimal fourth-order I.F. (4th KM), given by

ψ4th KM(x) = ψ2nd NM(x) − ((f(x) + βf(ψ2nd NM(x)))/(f(x) + (β − 2)f(ψ2nd NM(x)))) · f(ψ2nd NM(x))/f′(x). (1.43)

An optimal fourth order I.F. (4th KT) was given by Kung and Traub (1974):

ψ4th KT(x) = ψ2nd NM(x) − f(ψ2nd NM(x)) / (f′(x)[1 − f(ψ2nd NM(x))/f(x)]²). (1.44)
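A minimal sketch of (1.43) and (1.44) in Python (our own helper names and convergence guards; note that the choice β = 0 in King's family reproduces Ostrowski's (1.40)):

```python
import math

f = lambda x: math.exp(-x) - x          # root: the omega constant 0.567143...
df = lambda x: -math.exp(-x) - 1.0

def king_step(x, beta):                 # 4th KM (1.43)
    fx = f(x)
    if fx == 0.0:
        return x
    y = x - fx / df(x)                  # Newton point
    fy = f(y)
    return y - (fx + beta * fy) / (fx + (beta - 2.0) * fy) * fy / df(x)

def kung_traub_step(x):                 # 4th KT (1.44)
    fx = f(x)
    if fx == 0.0:
        return x
    y = x - fx / df(x)
    fy = f(y)
    w = (1.0 - fy / fx) ** 2
    if w == 0.0:                        # converged to machine precision
        return y
    return y - fy / (df(x) * w)

def run(step, x, n=5):
    for _ in range(n):
        x = step(x)
    return x

r_king = run(lambda x: king_step(x, 0.0), 1.0)   # beta = 0: Ostrowski (1.40)
r_kt = run(kung_traub_step, 1.0)
```

Both families evaluate f twice and f′ once per iteration, so they are optimal fourth-order methods under the conjecture (1.18).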

Sharma (2005) suggested a third-order I.F. formed by the composition of the Newton and Steffensen I.F.s for finding simple roots of a nonlinear equation. Per iteration this method requires two function evaluations and one derivative evaluation. Abbasbandy (2005) presented some efficient numerical algorithms for solving a system of nonlinear equations based on the modified Adomian decomposition method. Chun (2006) constructed Newton-like iteration methods for the computation of solutions of nonlinear equations. The new scheme is based on the homotopy analysis method applied to equations in a general form equivalent to the nonlinear equations. It provides a tool to develop new Newton-like iteration methods or to improve existing iteration methods which contain the well-known Newton iteration formula. The order of convergence and the corresponding error equations are derived analytically. Kanwar (2006) suggested a new family with cubic convergence obtained by discrete modification, and his experiments show that the method is suitable in the cases where the Steffensen or Newton-Steffensen methods fail. Kou et al. (2007b) presented a family of fifth order I.F. formed by the composition of Newton and third-order I.F. for solving nonlinear equations. Per iteration the new methods require two evaluations of the function, one of the first derivative and one of the second derivative. Sharma and Guha (2007) suggested a one-parameter family of sixth order methods (6th SG) for solving equations based on 4th OM2. Each member of the family requires three evaluations of the given function and one evaluation of


its derivative per iteration. Numerical examples are presented and the performance is compared with the Ostrowski method:

ψ6th SG(x) = ψ4th OM2(x) − ((f(x) + af(ψ2nd NM(x)))/(f(x) + (a − 2)f(ψ2nd NM(x)))) · f(ψ4th OM2(x))/f′(x), (1.45)

where a ∈ R. Chun (2007) presented a new two-parameter family of iterative methods for solving nonlinear equations, which includes as a particular case the classical Potra and Ptak third-order method. Per iteration the new methods require two function evaluations and one derivative evaluation. He showed that each member of the family is cubically convergent. Salkuyeh (2007) gave a family of Newton-type methods free from second and higher order derivatives for solving nonlinear equations. The order of convergence of this family depends on a function; under one condition on this function the family converges cubically, and by imposing one more condition one can obtain methods of order four. It has been shown that this family includes many of the available iterative methods. Xiaojian (2007) developed a family of third-order I.F. (3rd PM) by using the power mean, given by

ψ3rd PM(x) = x − f(x)/D(x, β), D(x, β) = sign(f′(x)) ((f′(x)^β + f′(ψ2nd NM(x))^β)/2)^{1/β}. (1.46)

The cases β = 1, −1, 2 correspond to the arithmetic mean I.F. (3rd AM), harmonic mean I.F. (3rd HM) and square mean I.F. (3rd SM), respectively. For β = 0, he obtained the geometric mean I.F. (3rd GM) by letting β → 0. Kou et al. (2007a) presented a new modification of Newton's method, known as 5th Kou; analysis of convergence shows that the method has order of convergence five:

ψ5th Kou(x) = ψ3rd AM(x) − f(ψ3rd AM(x))/f′(ψ2nd NM(x)). (1.47)
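The power-mean family (1.46) can be sketched as follows (a minimal Python illustration with our own helper names; we take absolute values of the derivatives inside the mean so that the sign factor in (1.46) is meaningful for every β, an assumption on our part):

```python
import math

f = lambda x: x**3 + 4.0 * x**2 - 10.0
df = lambda x: 3.0 * x**2 + 8.0 * x

def power_mean_step(x, beta):
    """3rd PM (1.46). beta = 1, -1, 2 give 3rd AM, 3rd HM and 3rd SM;
    beta = 0 is taken as the geometric-mean limit (3rd GM)."""
    dfx = df(x)
    sign = 1.0 if dfx >= 0.0 else -1.0
    y = x - f(x) / dfx                   # Newton point
    a, b = abs(dfx), abs(df(y))
    if beta == 0:
        mean = math.sqrt(a * b)          # limiting case beta -> 0
    else:
        mean = ((a**beta + b**beta) / 2.0) ** (1.0 / beta)
    return x - f(x) / (sign * mean)

def run(x, beta, n=7):
    for _ in range(n):
        x = power_mean_step(x, beta)
    return x

roots = {beta: run(1.5, beta) for beta in (1, -1, 2, 0)}
```

All four mean-based variants are third-order and converge to the same root; β = 1 reproduces the 3rd AM step (1.33) and β = −1 the 3rd HM step (1.36).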

Using linear interpolation on two points, Parhi and Gupta (2008) developed a three-point sixth order I.F. based on 3rd AM, given by

ψ6th AM(x) = ψ3rd AM(x) − (f(ψ3rd AM(x))/f′(x)) · (f′(x) + f′(ψ2nd NM(x)))/(3f′(ψ2nd NM(x)) − f′(x)). (1.48)


Cordero and Torregrosa (2008) presented a family of multi-point iterative methods for solving nonlinear equations, and the order of convergence of its elements is studied. The computational efficiency of some elements of this family is also provided. Basu (2008) presented a family of cubically convergent schemes with three function evaluations per iteration based on the Newton I.F. Also, based on the inverse Newton I.F., he presented another family of cubically convergent schemes with three function evaluations per iteration. Finally, he combined a special case of the first family with a special case of the second family and proposed a composite fourth order scheme. Thukral (2008) introduced a new Newton-type method for solving a nonlinear equation; this new method was proved to have cubic convergence. The effectiveness of the rational Newton method was examined by comparing its performance with well-established methods for approximating the root of a given nonlinear equation. Noor and Waseem (2009) suggested two new two-step iterative methods for solving systems of nonlinear equations using quadrature formulae. They proved that these methods have cubic convergence. Wang and Liu (2009) developed two new families of sixth-order methods for finding simple roots of nonlinear equations. Per iteration, these methods require two function evaluations and two first derivative evaluations, which gives EIO = 6^{1/4} ≈ 1.565. Hueso et al. (2009a) presented a modification of Newton's method for nonlinear systems whose Jacobian matrix is singular. They proved that, under certain conditions, this modified Newton's method has quadratic convergence. Moreover, to confirm the theoretical results, they tested different numerical examples comparing this variant with the classical Newton's method. Hueso et al. (2009b) developed a family of predictor-corrector methods free from the second derivative for solving systems of nonlinear equations.
In general, the obtained methods have an order of convergence three, but in some particular cases the order is four. Chun and Kim (2010) presented some new third-order iterative methods for finding a simple root of a nonlinear scalar equation. A geometric approach based on the circle of curvature is used to construct the new methods. Awawdeh (2010) employs the homotopy analysis method to derive a family of iterative methods for solving systems of nonlinear algebraic equations. This approach yields second and third order iterative methods which are more efficient than their


classical counterparts such as Newton's, Chebyshev's and Halley's methods. Ezquerro et al. (2010) discuss an extension of Gander's result for quadratic equations. Gander provides a general expression for iterative methods with order of convergence at least three in the scalar case. Taking into account an extension of this result, they define a family of iterations in Banach spaces with R-order of convergence at least four for quadratic equations. Petkovic et al. (2010) derived a new class of three-point I.F. of eighth order for solving nonlinear equations. These methods have been developed by combining fourth order methods from the class of optimal two-point methods with a modified Newton's method in the third step, obtained by a suitable approximation of the first derivative based on interpolation by a nonlinear fraction. It is proved that the new three-step methods reach eighth order convergence using only four function evaluations, which supports the Kung-Traub conjecture on the optimal order of convergence. Kim (2010) developed a new two-step biparametric family of sixth-order iterative methods free from second derivatives to find a simple root of a nonlinear algebraic equation. Cordero et al. (2010a) developed a three-point sixth order I.F. by using the method of Homeier (2003) and linear interpolation on two points, given by

ψ6th CM(x) = ψ3rd HM(x) − 2f(ψ3rd HM(x))f′(ψ2nd NM(x)) / (f′(ψ2nd NM(x))² − f′(x)² + 2f′(ψ2nd NM(x))f′(x)). (1.49)
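As we read formula (1.49), one full step composes a 3rd HM (Homeier) step with a derivative-based correction; a minimal Python sketch under that reading (the helper names and the convergence guard are ours):

```python
f = lambda x: x**3 + 4.0 * x**2 - 10.0
df = lambda x: 3.0 * x**2 + 8.0 * x

def cordero6_step(x):
    """One step of 6th CM (1.49): a 3rd HM step (1.36) followed by a
    correction built from f'(x) and f' at the Newton point."""
    fx = f(x)
    if fx == 0.0:
        return x
    dfx = df(x)
    y = x - fx / dfx                              # Newton point
    dfy = df(y)
    z = x - 0.5 * fx * (1.0 / dfx + 1.0 / dfy)    # 3rd HM point (1.36)
    denom = dfy**2 - dfx**2 + 2.0 * dfy * dfx     # -> 2 f'(x*)^2 at the root
    return z - 2.0 * f(z) * dfy / denom

x = 1.5
for _ in range(4):
    x = cordero6_step(x)
```

Per step this uses two f-values and two f′-values (d = 4) for order six, the same cost profile as the Wang-Liu methods mentioned above.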

Cordero et al. (2010b) suggested a reduced composition technique which has been used on Newton's and Jarratt's methods in order to obtain an optimal relation between convergence order, functional evaluations and the number of operations. Also, Cordero et al. (2010c) presented a new family of iterative methods for solving nonlinear equations with sixth and seventh order convergence. The new methods in these two papers are obtained by composing known methods of third and fourth order with Newton's method and using an adequate approximation of the last derivative, which provides high order of convergence and reduces the required number of functional evaluations per step. Li et al. (2010) gave a modification of Newton's method with higher-order convergence. The modification of Newton's method is based on King's fourth-order

17

method. The new method requires four evaluations of the function and two evaluations of first derivatives per iteration. Analysis of convergence demonstrates that the order of convergence is sixteen. Noor (2010) gave a new modified homotopy perturbation method and analyzed a class of iterative methods for solving nonlinear equations. This modification of the homotopy method is quite flexible. These methods include the two-step Newton method as a special case, which is fourth-order convergent. Thukral and Petkovic (2010) derived a family of three-point iterative methods for solving nonlinear equations by using a suitable parametric function and two arbitrary real parameters. They proved that the methods have convergence order eight, requiring only four function evaluations per iteration. The proposed class of methods is optimal, which supports the Kung-Traub hypothesis. An efficient fourth-order technique with two first derivative evaluations and one function evaluation has been presented by Khattri and Abbasbandy (2011). Cordero et al. (2011a) derived new iterative methods with order of convergence four and higher for solving nonlinear systems, by iteratively composing golden ratio methods with a modified Newton's method. Sharma and Sharma (2011) developed a new third order method for finding multiple roots of nonlinear equations based on the scheme for simple roots developed by Kou et al. (2007b). Further investigation gave rise to new third and fourth order families of methods which do not require the second derivative. The fourth order family has optimal order, since it requires three evaluations per step. Ardelean (2011) studied the basins of attraction for some of the iterative methods for solving the equation P(z) = 0, where P : C → C is a complex polynomial. The beautiful fractal pictures generated by these methods are presented too. Grau-Sanchez et al. (2011) gave two new iterative methods and analyzed their convergence.
A generalization of the efficiency index used in the scalar case to several variables in iterative methods for solving systems of nonlinear equations is revisited. Analytic proofs of the local order of convergence based on developments of multilinear functions and numerical concepts were used to illustrate the results. An approximation of the computational order of convergence is computed independently, without knowledge of the root, and the time necessary to get one correct decimal


is studied. Arroyo et al. (2011) discussed the problem of determination of the preliminary orbit of a celestial body. They compared the results obtained by the classical Gauss method with those obtained by some higher order iterative methods for solving nonlinear equations. The original problem of determination of preliminary orbits was posed by means of a nonlinear equation. They modified this equation in order to obtain a nonlinear system which describes the mentioned problem and derived a new efficient iterative method for solving it. Scott et al. (2011) observed that iterative methods are classified by their order, informational efficiency and efficiency index. The authors considered other criteria, namely the basins of attraction of the method and their dependence on the order. Further, several methods of various orders and their basins of attraction are presented. Khattri and Argyros (2011) introduced a four-parameter family of sixth order convergent iterative methods for solving nonlinear scalar equations. Methods of this family require evaluation of four functions per iteration and are totally free of derivatives. Yun (2011) developed a new simple iteration formula which does not require any derivative evaluation; it is proved that the convergence order of the new method is quadratic. Sharma and Guha (2011) presented a one-parameter family of iterative methods for solving nonlinear equations. All the methods of the family have third-order convergence, except one which has fourth-order convergence, and the fourth-order method is found to be more efficient than the third-order methods. Based on Ostrowski's method, Cordero et al. (2011b) proposed a new family of eighth-order methods for solving nonlinear equations. In terms of computational cost, each iteration requires three evaluations of the function and one evaluation of its first derivative.
This method is optimal according to the Kung-Traub conjecture. An efficient fourth order I.F. (4th KA), presented by Khattri and Abbasbandy (2011), uses one function and two first derivative evaluations per computing step:

ψ4th KA(x) = x − (1 + (21/8)τ − (9/2)τ² + (15/8)τ³) f(x)/f′(x), (1.50)

where τ = f′(x − (2/3)u(x))/f′(x).
19

Khattri and Log (2011) developed a simple yet practical algorithm for constructing derivative-free iterative methods having higher order convergence. Soleymani (2011) gave some new sixth-order modifications of Jarratt methods for solving a single nonlinear equation. Sharifi et al. (2012) developed a general class of I.F. using two evaluations of the first order derivative and one evaluation of the function per computing step. It is proved that the new class of I.F. has fourth-order convergence and is found to be optimal. The derived class is further extended for multiple roots. Kanwar et al. (2012) developed a new cubically convergent family of super-Halley methods based on power means. Some well-known methods can be regarded as particular cases of the proposed family. New classes of higher order multipoint iterative methods free from the second order derivative are derived from semi-discrete modifications of the above mentioned methods. It is shown that the super-Halley method is the only method which produces fourth order multipoint iterative methods. Furthermore, these multipoint methods with cubic convergence have also been extended to finding the multiple zeros of nonlinear functions. Soleymani et al. (2012c) investigated the construction of some classes of two-point I.F. without memory for finding simple roots of nonlinear scalar equations. These classes are built via weight functions and reach optimal order four using three function evaluations, based on Weerakoon and Fernando (2000) and Homeier (2003). Chun et al. (2012) developed new fourth order optimal root finding methods for solving nonlinear equations. The classical Jarratt family of fourth order methods is obtained as a special case. They present results which describe the conjugacy classes and the dynamics of the presented optimal method for complex polynomials of degree two and three. Cordero et al.
(2012a) introduced a technique for solving nonlinear systems that improves the order of convergence of any given iterative method which uses the Newton iteration as a predictor. The main idea is to compose a given iterative method of order p with a modification of the Newton method that introduces just one evaluation of the function, obtaining a new method of order p + 2. Cordero et al. (2012b) presented a new technique for designing iterative methods for solving nonlinear systems. This procedure, called pseudocomposition, uses a known method as a predictor and Gaussian quadrature as a corrector. The order of convergence


of the resulting scheme depends, among other factors, on the order of the last two steps of the predictor. They also introduce a new iterative algorithm of order six and apply the mentioned technique to generate a new method of order ten. Babajee (2012) improved the order of the midpoint iterative method from three to four with the same number of function evaluations, using weight functions, to obtain a class of two-point fourth order Jarratt-type methods. He then proved a general result to further improve the order of the old methods through an additional function evaluation using weight functions. In this way, he developed two three-point sixth order midpoint methods using different weights. A family of n-point I.F. of order 2n is proposed. Using weight functions, he further improved the methods to obtain a five-point sixteenth order midpoint method with the same efficiency index as the optimal two-point fourth order Jarratt-type methods. Chun and Neta (2012) developed a sixth order I.F. which requires an additional evaluation of the function f at the point iterated by the 4th KT I.F.:

ψ6th CNM(x) = ψ4th KT(x) − f(ψ4th KT(x)) / (f′(x)[1 − f(ψ2nd NM(x))/f(x) − f(ψ4th KT(x))/f(x)]²). (1.51)

Babajee and Jaunky (2013) derived an optimal fourth-order Newton-secant method with three function evaluations using weight functions, and they show that it is a member of the King family of fourth-order methods. They also obtain an eighth order optimal Newton-secant method. The authors proved the local convergence of the methods, applied them to solve a fourth order polynomial arising in ocean acidification, and studied their dynamics. Cordero et al. (2013) suggested a new technique to obtain derivative-free methods with optimal order of convergence, in the sense of the Kung-Traub conjecture, for solving smooth nonlinear equations. Jaiswal (2013) developed a new derivative-free modification obtained by approximating the derivatives in the Newton-Steffensen third-order method by central difference quotients, and proved that the resulting method preserves the order of convergence without calculating any derivative. Herceg and Herceg (2013) derived a family of six sets of means-based modifications of Newton's method for solving nonlinear equations. Each set is a parametric class of methods. Some well-known methods of this family are 3rd AM, 3rd HM, 3rd GM and 3rd PM.


Ardelean (2013) introduced a new third-order iterative method for solving nonlinear equations. This method converges on larger intervals than some other similar known methods. A comparison between the new method and other third-order methods is presented by using the basins of attraction for the real roots of some test problems. Zhou et al. (2013) presented two families of higher order iterative methods for solving multiple roots of nonlinear equations, one of order three and the other of order four. The third order family contains several iterative methods already known, while the fourth order family has optimal order. Local convergence analysis and some special cases of the presented families are discussed. Behl and Kanwar (2013) derived a one-parameter family of Chebyshev's method for finding simple roots of nonlinear equations. They developed a new fourth-order variant of Chebyshev's method for this family without adding any functional evaluation to the previously used three functional evaluations. Chebyshev-Halley type methods are seen as special cases of the proposed family. New classes of higher order multipoint iterative methods free from the second-order derivative are also derived from semi-discrete modifications of cubically convergent methods. These fourth-order multipoint iterative methods are optimal, since they require three functional evaluations per step. Soleymani et al. (2013) suggested a general class of multi-point iteration methods with various orders. The error analysis is presented to prove the convergence order, and a thorough discussion on the computational complexity of the new iterative methods is given. Abad et al. (2013) developed two iterative methods of order four and five for solving nonlinear systems of equations; these methods are tested on the nonlinear system arising from the Global Positioning System (GPS) and on several academic nonlinear systems. Petkovic et al.
(2013, 2014) developed a family of Jarratt-type two-point methods which produces some existing and some new methods. Babajee (2014) has recently improved the 3rd HM method to get a fourth order method for solving a single nonlinear equation. This method is one of the members of the family of higher order multi-point iterative methods based on the power mean given in the present thesis and in Babajee et al. (2015a).


Jaiswal (2014) suggested a new class of third and fourth order iterative methods for solving nonlinear equations. The multivariate extension of some of these methods is also discussed. The efficiency of the new fourth order method over some existing fourth order methods is confirmed by basins of attraction analysis. Sharma and Gupta (2014) presented a three step iterative method of convergence order five for solving systems of nonlinear equations. The methodology is based on a two step method with cubic convergence given in Homeier (2004). Computational efficiency in its general form is discussed and a comparison between the efficiency of the proposed technique and existing ones is made. Sharma and Arora (2014) presented a family of three-point iterative methods based on Newton's method for solving nonlinear equations. In terms of computational cost, this family requires four function evaluations and has convergence order eight; therefore, it is optimal in the sense of the Kung-Traub hypothesis. Singh and Gupta (2014) gave two new three-step higher order iterative methods by modifying two third-order two-step methods for solving nonlinear equations. This is done by introducing Newton's method as a third step in both methods; the derivative in the third step is approximated using linear interpolation and divided differences, leading to new sixth order methods. Singh and Jaiswal (2014) proposed a family of third-order and optimal fourth-order iterative methods constructed through the weight function concept. Sharma et al. (2014) presented a derivative-free two step family of fourth order methods for solving systems of nonlinear equations using the well-known Traub-Steffensen method in the first step. In order to determine the local convergence order, they applied the first order divided difference operator for functions of several variables and direct computation by Taylor expansion. Cordero et al. (2014) presented a family of optimal iterative methods for solving nonlinear equations with eighth-order convergence. These methods are based on Chun's fourth-order method. They used Ostrowski's efficiency index and several numerical tests to compare the new methods with other known eighth-order methods, and extended this comparison to a dynamical study of the different methods. Zheng et al. (2015) introduced a modified Chebyshev-like method with order


four and studied the semilocal convergence of the method by using majorizing functions for solving nonlinear equations in Banach spaces. They proved an existence-uniqueness theorem and gave a-priori error bounds which demonstrate the R-order of the method; moreover, the local convergence of this method is also analyzed. Cordero et al. (2015) suggested a parametric family of iterative methods with third order convergence for solving nonlinear systems. The numerical section is devoted to estimating the solution of the classical Bratu problem by transforming it into a nonlinear system using finite differences and solving it with different members of the iterative family. Shah and Noor (2015) presented some new classes of iterative methods for solving nonlinear equations by using an auxiliary function together with a decomposition technique. These new methods include the Halley method and its variant forms as special cases. Ullah et al. (2015) considered a multi-step iterative method for solving systems of nonlinear equations. Since the Jacobian evaluation and its inversion are expensive, in order to achieve better computational efficiency they compute the Jacobian and its inverse only once in a single cycle of the proposed multi-step iterative method. The involved systems of linear equations are actually solved by employing LU-decomposition rather than inversion. The base iterative method has convergence order five, and a matrix polynomial of degree two is then used to design a multi-step method. Each inclusion of a single step in the base method increases the convergence order by three; the general expression for the order is 3s − 1, where s ≥ 2 is the number of steps of the multi-step iterative method. Computational efficiency is also discussed in comparison with other existing methods.
In this chapter, we have reviewed the historical development of higher order multipoint iterative methods without memory from Traub (1964) to the present date. We have mainly pointed out the order of convergence of the methods, any novelty in the ideas behind the proposed methods, and their efficiency indices. Although efforts have been made to review a large body of literature, due to the limited time available I may have missed certain important papers in this area, for which I apologise to the concerned authors for their valuable contributions to the development of this field of research.

Chapter 2
Some new variants of Newton's method of order three, four and five

In this chapter, we propose new Newton-type methods for solving the scalar nonlinear equation f(x) = 0 with convergence orders three, four and five using the ideas of Simpson's quadrature rule and the power mean. The main goal and motivation in the construction of the new methods is to obtain a better efficiency index; in other words, it is desirable to attain a convergence order as high as possible with a fixed number of function evaluations per iteration. In Section 2.1, we present the construction of the new methods. The convergence analysis, obtaining the truncation error by Taylor series, is presented in Section 2.2. In Section 2.3, numerical experiments and their results are tabulated. Comparison of the efficiency indices and concluding remarks are given in the last section, where the outcomes show that our methods are efficient.

2.1 Construction of new methods

The Newton I.F. can be constructed from its local linear model of the function f(x), which is the tangent drawn to f(x) at the current point x. The local linear model at x is
\[ L(x) = f(x) + f'(x)(\psi - x). \tag{2.1} \]
This linear model can be interpreted from the viewpoint of Newton's theorem
\[ f(\psi) = f(x) + INT, \quad \text{where } INT = \int_x^{\psi} f'(t)\,dt. \tag{2.2} \]


Weerakoon and Fernando (2000) showed that if the integral INT is approximated by the rectangle rule INT = f'(x)(\psi - x), Newton's I.F. is obtained by setting L(x) = 0. Hasanov et al. (2002) obtained a new linear model
\[ L_1(x) = f(x) + \frac{1}{6}\left[ f'(x) + 4f'\!\left(\frac{x+\psi}{2}\right) + f'(\psi) \right](\psi - x) \tag{2.3} \]
by approximating INT by Simpson's formula \( INT \approx \frac{1}{6}\left[ f'(x) + 4f'\!\left(\frac{x+\psi}{2}\right) + f'(\psi) \right](\psi - x) \). Solving the new linear model, they obtained the implicit I.F.
\[ \psi(x) = x - \frac{6f(x)}{f'(x) + 4f'\!\left(\frac{x+\psi}{2}\right) + f'(\psi)}. \tag{2.4} \]
Using the Newton I.F. to estimate \(\psi\) within f', they obtained a third order Simpson's variant method (3rd SV):
\[ \psi_{3rd\,SV}(x) = x - \frac{6f(x)}{f'(x) + 4f'\!\left(\frac{x+\psi_{2nd\,NM}(x)}{2}\right) + f'(\psi_{2nd\,NM}(x))}. \tag{2.5} \]

This 3rd SV I.F. has cubic convergence and requires one function and three first derivative evaluations, resulting in a decrease in EI; therefore, the 3rd SV I.F. is computationally more expensive. Also, the 3rd SV I.F. is a special case of the family of third order I.F. given in Frontini and Sormani (2003). Further, Cordero and Torregrosa (2007) extended (2.5) to solve systems of nonlinear equations. However, the method (2.5) is not efficient (see Babajee and Dauhoo (2006), Babajee (2015b)). Babajee (2015b) derived a new fifth order method with four function evaluations by using a weight function in (2.5), given by
\[ \psi_{5th\,DKR}(x) = x - \frac{6f(x)}{f'(x) + 4f'\!\left(\frac{x+\psi_{2nd\,NM}(x)}{2}\right) + f'(\psi_{2nd\,NM}(x))} \times H(\tau), \tag{2.6} \]
where \( H(\tau) = 1 + \frac{1}{4}(\tau-1)^2 - \frac{3}{8}(\tau-1)^3 \) and \( \tau = \frac{f'(\psi_{2nd\,NM}(x))}{f'(x)} \).

To obtain a new class of methods, we rewrite equation (2.5) as follows:
\[ \psi_{3rd\,SV}(x) = x - \frac{3f(x)}{\dfrac{f'(x)+f'(\psi_{2nd\,NM}(x))}{2} + 2f'\!\left(\dfrac{x+\psi_{2nd\,NM}(x)}{2}\right)}. \tag{2.7} \]

Replacing the arithmetic mean with the power mean in equation (2.7), we obtain a new class of third order methods 3rd SPM, given in Jayakumar and Madhu (2013):
\[ \psi_{3rd\,SPM}(x) = x - \frac{3f(x)}{D(x,\beta) + 2f'\!\left(\frac{x+\psi_{2nd\,NM}(x)}{2}\right)}, \tag{2.8} \]
where
\[ D(x,\beta) = \operatorname{sign}(f'(x)) \left( \frac{f'(x)^{\beta} + \left(f'(\psi_{2nd\,NM}(x))\right)^{\beta}}{2} \right)^{1/\beta}, \quad \beta \in \mathbb{R}. \]


Remark 2.1.1. The result provided in Jayakumar and Madhu (2013) has third order convergence, whereas for the value β = −5 we get fourth order convergence, which will be shown in the next section. The fourth order Simpson's power mean method (4th SPM) is a special case of the I.F. (2.8) and is given by
\[ \psi_{4th\,SPM}(x) = x - \frac{3f(x)}{D(x,-5) + 2f'\!\left(\frac{x+\psi_{2nd\,NM}(x)}{2}\right)}. \tag{2.9} \]
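To make the construction concrete, the following is a minimal Python sketch of the family (2.8), with β = −5 giving the 4th SPM method (2.9). The function names, the tolerance and the test function in the usage note are illustrative assumptions, not part of the thesis; the signed power mean is computed via absolute values, assuming f'(x) and f'(ψ) share one sign near the root.

```python
import math

def simpson_power_mean(f, df, x, beta, tol=1e-12, max_iter=50):
    """Sketch of the 3rd SPM family (2.8):
    x_new = x - 3 f(x) / (D(x, beta) + 2 f'((x + y)/2)),
    where y is the Newton iterate psi_2ndNM and D is the signed
    power mean of f'(x) and f'(y).  beta = -5 gives 4th SPM (2.9)."""
    for k in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                  # Newton step psi_2ndNM
        dfy = df(y)
        s = math.copysign(1.0, dfx)
        # power mean D(x, beta); abs() assumes dfx and dfy share sign s
        D = s * ((abs(dfx) ** beta + abs(dfy) ** beta) / 2.0) ** (1.0 / beta)
        x_new = x - 3.0 * fx / (D + 2.0 * df((x + y) / 2.0))
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```

For instance, on f(x) = x³ + 4x² − 10 with x⁽⁰⁾ = 1.0 and β = −5 the sketch converges to the simple root near 1.3652 in a handful of iterations.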

Fifth order Simpson's power mean method (5th SHM): improving 3rd SPM, a fifth order method is obtained by putting β = −1 in (2.8) together with a weight function:
\[ \psi_{5th\,SHM}(x) = x - \frac{3f(x)}{D(x,-1) + 2f'\!\left(\frac{x+\psi_{2nd\,NM}(x)}{2}\right)} \times H(\tau), \tag{2.10} \]
where
\[ D(x,-1) = \left( \frac{1}{2}\left( \frac{1}{f'(x)} + \frac{1}{f'(\psi_{2nd\,NM}(x))} \right) \right)^{-1} \]
represents the harmonic mean, \( H(\tau) = 1 + \frac{1}{6}(\tau-1)^2 - \frac{7}{24}(\tau-1)^3 \) and \( \tau = \frac{f'(\psi_{2nd\,NM}(x))}{f'(x)} \).

2.2 Convergence Analysis of the methods

Theorem 2.2.1. Let x* ∈ D be a simple zero of a sufficiently differentiable function f : D ⊆ ℝ → ℝ for an open interval D. If x⁽⁰⁾ is sufficiently close to x*, then the method (2.8) has cubic convergence.

Proof. Let e = x − x* and let x* be a simple zero of the function f(x) = 0. Expanding f(x) and f'(x) by Taylor series about x*, we obtain
\[ f(x) = f'(x^*)\left( e + c_2 e^2 + c_3 e^3 + c_4 e^4 + c_5 e^5 + c_6 e^6 + \cdots \right) \tag{2.11} \]
and
\[ f'(x) = f'(x^*)\left( 1 + 2c_2 e + 3c_3 e^2 + 4c_4 e^3 + 5c_5 e^4 + 6c_6 e^5 + \cdots \right), \tag{2.12} \]
where \( c_j = \frac{f^{(j)}(x^*)}{j!\, f'(x^*)} \), j = 2, 3, 4, ... We have
\[ \psi_{2nd\,NM}(x) = x^* + c_2 e^2 - 2(c_2^2 - c_3) e^3 + \left( 4c_2^3 - 7c_2 c_3 + 3c_4 \right) e^4 + \cdots \tag{2.13} \]


Expanding f'(ψ_{2nd NM}(x)) by Taylor series about x* and using (2.13), we get
\[ f'(\psi_{2nd\,NM}(x)) = f'(x^*)\left( 1 + 2c_2^2 e^2 + 4(c_2 c_3 - c_2^3) e^3 + \cdots \right). \tag{2.14} \]
Further, we have
\[ \frac{x + \psi_{2nd\,NM}(x)}{2} = x^* + \frac{1}{2} e + \frac{1}{2} c_2 e^2 - (c_2^2 - c_3) e^3 + \cdots \tag{2.15} \]
Again expanding \( f'\!\left(\frac{x+\psi_{2nd\,NM}(x)}{2}\right) \) by Taylor series about x* and using (2.15), we get
\[ f'\!\left( \frac{x+\psi_{2nd\,NM}(x)}{2} \right) = f'(x^*)\left( 1 + c_2 e + \left( c_2^2 + \frac{3}{4} c_3 \right) e^2 + \left( -2c_2^3 + \frac{7}{2} c_2 c_3 + \frac{1}{2} c_4 \right) e^3 + \cdots \right). \tag{2.16} \]
From equations (2.12) and (2.14) respectively, we have
\[ f'(x)^{\beta} = f'(x^*)^{\beta}\left( 1 + 2\beta c_2 e + \left( 3\beta c_3 + 2\beta(\beta-1)c_2^2 \right) e^2 + \cdots \right), \tag{2.17} \]
\[ f'(\psi_{2nd\,NM}(x))^{\beta} = f'(x^*)^{\beta}\left( 1 + 2\beta c_2^2 e^2 + \cdots \right). \tag{2.18} \]
From (2.17) and (2.18), we get
\[ \operatorname{sign}(f'(x)) \left( \frac{f'(x)^{\beta} + f'(\psi_{2nd\,NM}(x))^{\beta}}{2} \right)^{1/\beta} = f'(x^*)\left( 1 + c_2 e + \frac{1}{2}\left( c_2^2 + \beta c_2^2 + 3c_3 \right) e^2 + \cdots \right). \tag{2.19} \]
Combining (2.19) with twice (2.16), we get
\[ \operatorname{sign}(f'(x)) \left( \frac{f'(x)^{\beta} + f'(\psi_{2nd\,NM}(x))^{\beta}}{2} \right)^{1/\beta} + 2f'\!\left( \frac{x+\psi_{2nd\,NM}(x)}{2} \right) = f'(x^*)\Big( 3 + 3c_2 e + \frac{1}{2}\left( (5+\beta)c_2^2 + 6c_3 \right) e^2 + \frac{1}{2}\left( -3(3+\beta)c_2^3 + 3(5+\beta)c_2 c_3 + 6c_4 \right) e^3 + \cdots \Big). \tag{2.20} \]
If equations (2.11) and (2.20) are substituted in (2.8), we have
\[ \psi_{3rd\,SPM}(x) = x - \left( e - \frac{1}{6}(5+\beta)c_2^2 e^3 + \frac{1}{6} c_2 \left( 2(7+2\beta)c_2^2 - 3(5+\beta)c_3 \right) e^4 + \cdots \right). \tag{2.21} \]
Finally, we have
\[ \psi_{3rd\,SPM}(x) - x^* = \frac{1}{6}(5+\beta)c_2^2 e^3 + O(e^4). \tag{2.22} \]
Hence, the method (2.8) has cubic convergence for any β ∈ ℝ \ {−5}.


Remark 2.2.1. For β = −5, (2.22) produces the fourth order convergence of the method 4th SPM (2.9).

Theorem 2.2.2. Let x* ∈ D be a simple zero of a sufficiently differentiable function f : D ⊆ ℝ → ℝ for an open interval D. If x⁽⁰⁾ is sufficiently close to x*, then the method (2.10) has fifth order convergence.

Proof. Expanding f'(ψ_{2nd NM}(x)) by Taylor series about x* and using (2.13), we get
\[ f'(\psi_{2nd\,NM}(x)) = f'(x^*)\left( 1 + 2c_2^2 e^2 + 4(c_2 c_3 - c_2^3) e^3 + c_2\left( 8c_2^3 - 11c_2 c_3 + 6c_4 \right) e^4 + \cdots \right). \tag{2.23} \]
Further, we have
\[ \frac{x+\psi_{2nd\,NM}(x)}{2} = x^* + \frac{1}{2} e + \frac{1}{2} c_2 e^2 - (c_2^2 - c_3) e^3 + \frac{1}{2}\left( 4c_2^3 - 7c_2 c_3 + 3c_4 \right) e^4 + \cdots \tag{2.24} \]
Again expanding \( f'\!\left(\frac{x+\psi_{2nd\,NM}(x)}{2}\right) \) by Taylor series about x* and using (2.24), we get
\[ f'\!\left( \frac{x+\psi_{2nd\,NM}(x)}{2} \right) = f'(x^*)\left( 1 + c_2 e + \left( c_2^2 + \frac{3}{4}c_3 \right) e^2 + \left( -2c_2^3 + \frac{7}{2} c_2 c_3 + \frac{1}{2} c_4 \right) e^3 + \left( 4c_2^4 - \frac{37}{4} c_2^2 c_3 + 3c_3^2 + \frac{9}{2} c_2 c_4 + \frac{5}{16} c_5 \right) e^4 + \cdots \right). \tag{2.25} \]
Using equations (2.23) and (2.12), we have
\[ \tau = \frac{f'(\psi_{2nd\,NM}(x))}{f'(x)} = 1 - 2c_2 e + (6c_2^2 - 3c_3) e^2 - 4(4c_2^3 - 4c_2 c_3 + c_4) e^3 + (40c_2^4 - 61c_2^2 c_3 + 9c_3^2 + 22c_2 c_4 - 5c_5) e^4 + \cdots \tag{2.26} \]
\[ H(\tau) = 1 + \frac{2c_2^2}{3} e^2 + \left( -\frac{5c_2^3}{3} + 2c_2 c_3 \right) e^3 + \frac{1}{6}\left( -26c_2^4 - 37c_2^2 c_3 + 9c_3^2 + 16c_2 c_4 \right) e^4 + \cdots \tag{2.27} \]
We have
\[ \frac{1}{D(x,-1)} = \frac{1}{2}\left( \frac{1}{f'(x)} + \frac{1}{f'(x - u(x))} \right), \]
so that
\[ D(x,-1) = f'(x^*)\left( 1 + c_2 e + \frac{3}{2} c_3 e^2 + \left( c_2^3 - c_2 c_3 + 2c_4 \right) e^3 + \left( -3c_2^4 + 6c_2^2 c_3 - \frac{9}{4} c_3^2 - c_2 c_4 + \frac{5}{2} c_5 \right) e^4 + \cdots \right). \tag{2.28} \]
Using equations (2.28) and (2.25),
\[ D(x,-1) + 2f'\!\left( \frac{x+\psi_{2nd\,NM}(x)}{2} \right) = f'(x^*)\left( 3 + 3c_2 e + (2c_2^2 + 3c_3) e^2 + \left( -3c_2^3 + 6c_2 c_3 + 3c_4 \right) e^3 + \left( 5c_2^4 - \frac{25}{2} c_2^2 c_3 + 8c_2 c_4 + \frac{5}{8}\left( 6c_3^2 + 5c_5 \right) \right) e^4 + \cdots \right). \tag{2.29} \]
Finally, by using (2.11), (2.29) and (2.27) in (2.10), we have
\[ \psi_{5th\,SHM}(x) - x^* = \frac{1}{24}\left( 184c_2^4 - 16c_2^2 c_3 - 6c_3^2 + c_5 \right) e^5 + O(e^6). \tag{2.30} \]
Hence, the proposed new method (2.10) has fifth order convergence.

2.3 Numerical Examples

In this section, we give numerical results on some examples to compare the efficiency of the proposed methods 3rd SPM, 4th SPM and 5th SHM with the 2nd NM, 3rd AM and 5th DKR methods. Numerical computations have been carried out in MATLAB using 500 significant digits. Depending on the precision of the computer, we use the stopping criterion for the iterative process error = |x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾| < ε, where ε = 10⁻⁵⁰, and N is the number of iterations required for convergence. d_ψ represents the total number of function evaluations. The ACOC, denoted ρ̃_k, is given by
\[ \tilde{\rho}_k = \frac{\ln\left| (x^{(k+1)} - x^{(k)}) / (x^{(k)} - x^{(k-1)}) \right|}{\ln\left| (x^{(k)} - x^{(k-1)}) / (x^{(k-1)} - x^{(k-2)}) \right|}. \]
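As a concrete illustration, the ACOC above can be computed from the last four iterates. The following is a minimal Python sketch; the function name and the Newton test sequence in the usage note are illustrative assumptions, not from the thesis.

```python
import math

def acoc(xs):
    """Approximated computational order of convergence (ACOC) from the
    last four iterates x_{k-2}, x_{k-1}, x_k, x_{k+1}."""
    x0, x1, x2, x3 = xs[-4:]
    return (math.log(abs((x3 - x2) / (x2 - x1)))
            / math.log(abs((x2 - x1) / (x1 - x0))))
```

For example, feeding four Newton iterates for f(x) = x² − 2 starting from x⁽⁰⁾ = 1.5 yields an ACOC close to 2, the known order of Newton's method.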

In Tables ??-??, we have compared the number of iterations, convergence order, function evaluations, total number of function evaluations for convergence, approximated computational order of convergence, absolute error and CPU time.


Table ?? displays the results for f1(x), x⁽⁰⁾ = −0.7. It is observed from this table that the proposed and comparable methods agree with the computational order of convergence (ρ̃_k), and that the 5th SHM method converges in fewer iterations with the least error. Similarly, Tables ?? to ?? display the results for the functions f1(x) to f5(x) with different starting points; the observations in these tables show a pattern similar to that in Table ??.

2.4 Concluding Remarks

We have compared the efficiency indices of 2nd NM, 3rd AM and 5th DKR along with the proposed methods in Table ??, and it is observed that the 5th SHM I.F. has the best efficiency index. Hence, we conclude that the proposed 5th SHM I.F. performs better than 2nd NM and can be a competitor to other methods of equivalent order available in the literature. However, this method is not optimal.

Chapter 3
Class of modified Newton's methods having fifth and sixth order convergence

In this chapter, we propose new Newton-type methods for solving the scalar nonlinear equation f(x) = 0 with convergence orders five and six using the idea of the power mean. The main goal and motivation in the construction of the new methods is to obtain a better efficiency index; in other words, it is desirable to attain a convergence order as high as possible with a fixed number of function evaluations per iteration. In Section 3.1, the construction of the new methods is presented. The truncation error using Taylor series and the convergence analysis are derived in Section 3.2. In Section 3.3, numerical examples and their results are tabulated. Comparison of the efficiency indices and concluding remarks are given in the last section, where the outcomes show that our methods are efficient.

3.1 Construction of new methods

3.1.1 A three step fifth order I.F.

In continuation of equations (2.1) and (2.2) considered in Section 2.1, Weerakoon and Fernando (2000) also gave a new linear model
\[ L_2(x) = f(x) + \frac{1}{2}\left( f'(x) + f'(\psi) \right)(\psi - x). \tag{3.1} \]

By approximating the definite integral with the trapezoidal rule \( INT \approx \frac{1}{2}\left( f'(x) + f'(\psi) \right)(\psi - x) \), they obtained the implicit I.F.
\[ \psi(x) = x - \frac{2f(x)}{f'(x) + f'(\psi)}. \tag{3.2} \]

Using the Newton I.F. to compute f'(ψ) by f'(ψ_{2nd NM}(x)), they obtained
\[ \psi_{3rd\,AM}(x) = x - \frac{2f(x)}{f'(x) + f'(\psi_{2nd\,NM}(x))}. \tag{3.3} \]

Ozban (2004) used the harmonic mean instead of the arithmetic mean in (3.3) and obtained the harmonic mean Newton's method having cubic convergence:
\[ \psi_{3rd\,HM}(x) = x - \frac{f(x)\left( f'(x) + f'(\psi_{2nd\,NM}(x)) \right)}{2 f'(x)\, f'(\psi_{2nd\,NM}(x))}. \tag{3.4} \]

Lukić and Ralević (2005) used the geometric mean instead of the arithmetic mean in (3.3) and obtained the geometric mean Newton's method having cubic convergence:
\[ \psi_{3rd\,GM}(x) = x - \frac{f(x)}{\operatorname{sign}(f'(x)) \sqrt{f'(x)\, f'(\psi_{2nd\,NM}(x))}}. \tag{3.5} \]

Adding one more Newton step to the 3rd GM I.F., we obtain a new I.F. having fifth order convergence with two function and two derivative evaluations, proposed in Madhu and Jayakumar (2015a):
\[ \psi_{5th\,GM}(x) = \psi_{3rd\,GM}(x) - \frac{f(\psi_{3rd\,GM}(x))}{f'(\psi_{2nd\,NM}(x))}, \tag{3.6} \]
which may be called the 5th GM method.
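The three-step composition (3.6) can be sketched in a few lines of Python. This is a minimal illustration under stated assumptions (the names and the test function are mine, and the geometric mean is taken through absolute values assuming f'(x) and f'(ψ_{2nd NM}) share one sign near the root):

```python
import math

def gm_fifth(f, df, x, tol=1e-12, max_iter=50):
    """Sketch of the 5th GM iteration (3.6): a geometric-mean third
    order step (3.5) followed by one Newton-like step reusing f'(y)."""
    for k in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                              # psi_2ndNM
        dfy = df(y)
        s = math.copysign(1.0, dfx)
        z = x - fx / (s * math.sqrt(abs(dfx * dfy)))  # psi_3rdGM, eq. (3.5)
        x_new = z - f(z) / dfy                        # extra step, eq. (3.6)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```

Note that the extra step reuses f'(ψ_{2nd NM}(x)) instead of evaluating a new derivative, which is what keeps the count at two function and two derivative evaluations per cycle.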

3.1.2 New class of I.F. with order six

Xiaojian (2007) used the power mean instead of the arithmetic mean in (3.3) and obtained a class of Newton's methods having cubic convergence with three function evaluations:
\[ \psi_{3rd\,PM}(x) = x - \frac{f(x)}{D(x,\beta)}, \quad D(x,\beta) = \operatorname{sign}(f'(x)) \left( \frac{f'(x)^{\beta} + \left(f'(\psi_{2nd\,NM}(x))\right)^{\beta}}{2} \right)^{1/\beta}. \tag{3.7} \]
Madhu and Jayakumar (2014) suggested a new class of three step I.F. having sixth order convergence:
\[ \psi_{6th\,PMM}(x) = \psi_{3rd\,PM}(x) - \frac{f(\psi_{3rd\,PM}(x))}{f'(\psi_{3rd\,PM}(x))}. \tag{3.8} \]


This method requires five function evaluations per iterative cycle. The efficiency index of the 6th PMM method, EIO = 1.430, is better than that of Newton's method but lower than that of 3rd PM, whose EIO = 1.442. In order to improve the efficiency, we approximate f'(ψ_{3rd PM}(x)) by a combination of already computed function values. We use linear interpolation on the two points (x, f'(x)) and (ψ_{2nd NM}(x), f'(ψ_{2nd NM}(x))), which does not introduce any new function evaluation. Thus,
\[ f'(\psi_{3rd\,PM}(x)) \approx \frac{\psi_{3rd\,PM}(x) - x}{\psi_{2nd\,NM}(x) - x}\, f'(\psi_{2nd\,NM}(x)) + \frac{\psi_{3rd\,PM}(x) - \psi_{2nd\,NM}(x)}{x - \psi_{2nd\,NM}(x)}\, f'(x). \]
Then
\[ f'(\psi_{3rd\,PM}(x)) \approx f'(x) - \frac{f'(x)^2 - f'(x)\, f'(\psi_{2nd\,NM}(x))}{D(x,\beta)}. \tag{3.9} \]
By substituting this approximation for f'(ψ_{3rd PM}(x)) in equation (3.8), we obtain a new class of I.F. (6th PMM) with two function and two first derivative evaluations (see Madhu and Jayakumar (2014)):
\[ \psi_{6th\,PMM}(x) = \psi_{3rd\,PM}(x) - \frac{f(\psi_{3rd\,PM}(x))}{f'(x) - \dfrac{f'(x)^2 - f'(x)\, f'(\psi_{2nd\,NM}(x))}{D(x,\beta)}}, \tag{3.10} \]
where β ∈ ℝ. The special case β = 1 is already reported in Parhi and Gupta (2008) and the case β = −1 is found in Cordero et al. (2010a).
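The derivative-reuse trick above can be sketched directly in Python. This is an illustrative implementation under my own naming and test choices, not the thesis code; the signed power mean is again taken through absolute values, assuming both derivatives share one sign near the root.

```python
import math

def pmm_sixth(f, df, x, beta=1.0, tol=1e-12, max_iter=50):
    """Sketch of the 6th PMM class (3.10): a power-mean third order
    step (3.7), then a Newton-like correction whose derivative
    f'(psi_3rdPM) is replaced by the interpolation estimate (3.9)."""
    for k in range(max_iter):
        fx, dfx = f(x), df(x)
        y = x - fx / dfx                           # psi_2ndNM
        dfy = df(y)
        s = math.copysign(1.0, dfx)
        D = s * ((abs(dfx) ** beta + abs(dfy) ** beta) / 2.0) ** (1.0 / beta)
        z = x - fx / D                             # psi_3rdPM, eq. (3.7)
        approx = dfx - (dfx**2 - dfx * dfy) / D    # eq. (3.9), no new evaluation
        x_new = z - f(z) / approx                  # eq. (3.10)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```

Per cycle this evaluates f twice (at x and z) and f' twice (at x and y), matching the two-function, two-derivative count claimed for (3.10).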

3.2 Convergence Analysis of the methods

3.3 Numerical Examples

In this section, we give numerical results on some examples to compare the efficiency of the proposed methods 5th GM and 6th PMM with the 2nd NM, 3rd AM, 3rd HM, 3rd GM and 3rd PM methods. Numerical computations have been carried out in MATLAB using 500 significant digits. Depending on the precision of the computer, we use the stopping criterion error = |x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾| < ε, where ε = 10⁻⁵⁰, and N is the number of iterations required for convergence. d_ψ represents the total number of function evaluations. The ACOC ρ̃_k is computed as defined in Section 2.3.

In Tables ??-??, we have compared the number of iterations, convergence order, function evaluations per iteration, total number of function evaluations, computational order of convergence, error and CPU time. In all the examples, our proposed class of 6th PMM I.F. gives better results when compared with 2nd NM, 3rd AM, 3rd GM, 3rd HM and 5th GM. Table ?? displays the results for f1(x), x⁽⁰⁾ = −0.7. It is observed from this table that the proposed and comparable methods agree with the computational order of convergence (ρ̃_k), and that the 6th PMM method converges in fewer iterations with the least error. Similarly, Tables ?? to ?? display results for the functions f1(x) to f5(x) with different starting points; the observations in these tables show a pattern similar to that in Table ??. However, in Table ??, for the choice β = −1, the 6th PMM method produces a divergent result.

Next, we attempt to find the best integer value of β in [−20, 20] for the classes of 3rd PM and 6th PMM I.F., that is, the value producing the minimum number of iterations, together with the corresponding error. Table ?? displays the results for f1(x)-f5(x) with suitable initial points and the best integer value of β ∈ [−20, 20], along with the number of iterations and the error. For all the examples, the 6th PMM method requires fewer iterations and yields a smaller error than the 3rd PM method. Finding the best integer value of β in both classes of I.F. leads us to the best I.F. in each class. Next, plots of the number of iterations and the error for each integer value of β ∈ [−20, 20], that is, 41 different values of β, are given; we have taken f2(x) for verifying the methods 3rd PM and 6th PMM. Figures 3.1-3.4 display the comparison of the number of iterations and the error for different β values for f2(x), x⁽⁰⁾ = −1.7, for the 3rd PM and 6th PMM I.F. respectively. We find that all the members of the 3rd PM class are convergent, whereas all the members of the 6th PMM class are convergent except for β = −1. This is one of the advantages of finding the best integer value of β in [−20, 20].
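The search for the best integer β described above amounts to a simple sweep. The following Python sketch pairs it with a minimal 3rd PM iteration; the function names, tolerances and the convergence filter are illustrative assumptions, not the thesis's experimental setup.

```python
import math

def pm_third(f, df, x, beta, tol=1e-12, max_iter=100):
    """Minimal 3rd PM step (3.7) used as the method under test."""
    for k in range(max_iter):
        fx, dfx = f(x), df(x)
        dfy = df(x - fx / dfx)
        s = math.copysign(1.0, dfx)
        D = s * ((abs(dfx) ** beta + abs(dfy) ** beta) / 2.0) ** (1.0 / beta)
        x_new = x - fx / D
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

def best_beta(f, df, x0, betas=range(-20, 21)):
    """Sweep integer beta in [-20, 20] and return the value giving the
    fewest iterations (ties broken by residual); divergent members are
    simply skipped."""
    results = {}
    for beta in betas:
        if beta == 0:                      # power mean undefined at beta = 0
            continue
        try:
            r, n = pm_third(f, df, x0, beta)
            if abs(f(r)) < 1e-8:           # accept only converged runs
                results[beta] = (n, abs(f(r)))
        except (OverflowError, ZeroDivisionError, ValueError):
            continue
    return min(results, key=lambda b: results[b])
```

The try/except mirrors how divergent members (such as β = −1 for 6th PMM in Table ??) can be detected and excluded during the sweep.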

[Figure 3.1: Comparison of iterations for f2(x), x⁽⁰⁾ = −1.7 for 3rd PM]

[Figure 3.2: Comparison of error for f2(x), x⁽⁰⁾ = −1.7 for 3rd PM]

[Figure 3.3: Comparison of iterations for f2(x), x⁽⁰⁾ = −1.7 for 6th PMM; annotated: divergent in the 3rd iteration for β = −1]

[Figure 3.4: Comparison of error for f2(x), x⁽⁰⁾ = −1.7 for 6th PMM; annotated: divergent for β = −1]

3.4 Concluding Remarks

We have compared the efficiency indices of the 2nd NM and 3rd PM I.F. along with the proposed methods 5th GM and 6th PMM in Table ??. It is observed that the proposed 5th GM and 6th PMM I.F. have better efficiency indices than the compared methods. Hence, we conclude that the proposed 5th GM and 6th PMM I.F. perform better than 2nd NM and can be competitors to other methods of equivalent order available in the literature.

Chapter 4
Two families of Newton-type methods having fourth and sixth order convergence

In this chapter, we propose two new Newton-type methods for solving the scalar nonlinear equation f(x) = 0 with convergence orders four and six using the idea of the power mean and weight functions. The main goal and motivation in the construction of the new methods is to obtain a better efficiency index; in other words, it is desirable to attain a convergence order as high as possible with a fixed number of function evaluations per iteration. In the case of multipoint methods without memory, this demand is closely connected with the optimal order stated in the Kung-Traub conjecture. In Section 4.1, we present the construction of the new methods. The convergence analysis, obtaining the truncation error by Taylor series, is derived in Section 4.2. In Section 4.3, numerical experiments and their results are tabulated. Comparison of the efficiency indices and concluding remarks are given in the last section, where the outcomes show that our methods are efficient.

4.1 Construction of new methods

4.1.1 Family of optimal fourth order I.F.

We recall that the 3rd PM I.F. (3.7) discussed in Chapter 3 is not an optimal method. To improve it into an optimal method, we propose a new 4th MJ I.F. as follows:
\[ \psi_{4th\,MJ}(x) = x - \frac{2^{1/\beta} f(x)}{\operatorname{sign}(f'(x)) \left( f'(x)^{\beta} + f'(y)^{\beta} \right)^{1/\beta}}\, \left[ H(\tau) \times G(t) \right], \tag{4.1} \]
where
\[ y = x - \frac{2}{3} u(x), \quad \tau = \frac{f'(y)}{f'(x)}, \quad t = u(x). \]
Expanding H(τ) about 1 and G(t) about 0, we have
\[ H(\tau) \times G(t) = H(1)G(0) + (\tau-1) H'(1) G(0) + \frac{(\tau-1)^2}{2} H''(1) G(0) + \frac{(\tau-1)^3}{6} H'''(1) G(0) + \frac{t^3}{6} H(1) G'''(0) + \cdots \]
Choosing H, G and their derivatives as
\[ H(1) = 1, \quad G(0) = 1, \quad H'(1) = -\frac{1}{4}, \quad H''(1) = \frac{\beta+5}{8}, \quad G'(0) = G''(0) = 0, \quad H'''(1) = G'''(0) = -1, \]
we get the weight function
\[ H(\tau) \times G(t) = 1 - \frac{1}{4}(\tau-1) + \frac{\beta+5}{8}(\tau-1)^2 - \frac{1}{6}\left( (\tau-1)^3 + t^3 \right). \]
This new method (4.1) has one function and two derivative evaluations. Hence, this method reaches optimal fourth order convergence with high efficiency in the sense of Kung and Traub (1974).

4.1.2 Family of sixth order I.F.

In order to improve the method (4.1) by increasing the order of convergence to six (6th MJ), we define the following I.F.:
\[ \psi_{6th\,MJ}(x) = \psi_{4th\,MJ}(x) - \frac{2^{1/\beta} f(\psi_{4th\,MJ}(x))}{\operatorname{sign}(f'(x)) \left( f'(x)^{\beta} + f'(y)^{\beta} \right)^{1/\beta}}\, K(\tau), \tag{4.2} \]
where K(τ) is obtained by expanding it about τ = 1 as follows:
\[ K(\tau) = K(1) + (\tau-1) K'(1) + \frac{(\tau-1)^2}{2} K''(1) + \cdots \]
By choosing K(1) = 1, K'(1) = −1 and K''(1) = K'''(1) = ⋯ = 0, we get K(τ) = 2 − τ. This new method (4.2) has two function and two derivative evaluations; hence, in the sense of Kung and Traub (1974), this method is not optimal.
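The two-stage construction — the 4th MJ step (4.1) with its weight H(τ)×G(t), followed by the 6th MJ correction (4.2) with K(τ) = 2 − τ — can be sketched as one Python routine. The names, tolerance and test function are illustrative assumptions; the signed power mean is computed through absolute values, assuming f'(x) and f'(y) share one sign near the root.

```python
import math

def mj_sixth(f, df, x, beta=1.0, tol=1e-12, max_iter=50):
    """Sketch of the 4th MJ step (4.1) followed by the 6th MJ
    correction (4.2) with K(tau) = 2 - tau."""
    for k in range(max_iter):
        fx, dfx = f(x), df(x)
        u = fx / dfx
        y = x - (2.0 / 3.0) * u                   # Jarratt-type point
        dfy = df(y)
        tau = dfy / dfx
        s = math.copysign(1.0, dfx)
        # power mean of f'(x) and f'(y): 2^{1/beta} f / (f'^b + f'(y)^b)^{1/b}
        denom = s * ((abs(dfx) ** beta + abs(dfy) ** beta) / 2.0) ** (1.0 / beta)
        # weight H(tau) x G(t) from Section 4.1.1, with t = u(x)
        w = (1.0 - 0.25 * (tau - 1.0)
             + (beta + 5.0) / 8.0 * (tau - 1.0) ** 2
             - ((tau - 1.0) ** 3 + u ** 3) / 6.0)
        z = x - fx / denom * w                    # psi_4thMJ, eq. (4.1)
        x_new = z - f(z) / denom * (2.0 - tau)    # eq. (4.2)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter
```

Note that the sixth order correction reuses the already computed power mean and τ, so only the extra evaluation f(z) is added, giving the stated count of two function and two derivative evaluations per cycle.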


Remark 4.1.1. The cases β = 1, −1, 2 in (4.1) and (4.2) may respectively be called 4th AM and 6th AM (arithmetic mean methods), 4th HM and 6th HM (harmonic mean methods) and 4th SM and 6th SM (square mean methods). The case β → 0 produces 4th GM and 6th GM (geometric mean methods). These two new families of I.F. (4.1) and (4.2) have been developed in Madhu and Jayakumar (2015b).

4.2 Convergence Analysis of the methods

Theorem 4.2.1. Let f : D ⊂ ℝ → ℝ be a sufficiently smooth function having a simple root x* in the open interval D. If x⁽⁰⁾ is sufficiently close to x*, then the 4th MJ family of I.F. (4.1) has local fourth-order convergence.

Proof. Taylor expansion of f(x) and f'(x) about x* gives
\[ f(x) = f'(x^*)\left[ e + c_2 e^2 + c_3 e^3 + c_4 e^4 + c_5 e^5 + c_6 e^6 + \cdots \right] \tag{4.3} \]
and
\[ f'(x) = f'(x^*)\left[ 1 + 2c_2 e + 3c_3 e^2 + 4c_4 e^3 + 5c_5 e^4 + 6c_6 e^5 + \cdots \right], \tag{4.4} \]
so that
\[ t = u(x) = e - c_2 e^2 + 2\left( c_2^2 - c_3 \right) e^3 + \left( 7c_2 c_3 - 4c_2^3 - 3c_4 \right) e^4 + \left( 8c_2^4 - 20c_2^2 c_3 + 6c_3^2 + 10c_2 c_4 - 4c_5 \right) e^5 + \left( -16c_2^5 + 52c_2^3 c_3 - 33c_2 c_3^2 - 28c_2^2 c_4 + 17c_3 c_4 + 13c_2 c_5 - 5c_6 \right) e^6 + \cdots \tag{4.5} \]
and
\[ y = x^* + \frac{e}{3} + \frac{2}{3} c_2 e^2 - \frac{4}{3}\left( c_2^2 - c_3 \right) e^3 + \frac{2}{3}\left( 4c_2^3 - 7c_2 c_3 + 3c_4 \right) e^4 - \frac{4}{3}\left( 4c_2^4 - 10c_2^2 c_3 + 3c_3^2 + 5c_2 c_4 - 2c_5 \right) e^5 + \frac{2}{3}\left( 16c_2^5 - 52c_2^3 c_3 + 33c_2 c_3^2 + 28c_2^2 c_4 - 17c_3 c_4 - 13c_2 c_5 + 5c_6 \right) e^6 + \cdots \tag{4.6} \]
Again, the Taylor expansion of f'(y) about x* gives
\[ f'(y) = f'(x^*)\Big[ 1 + \frac{2}{3} c_2 e + \frac{1}{3}\left( 4c_2^2 + c_3 \right) e^2 + \frac{4}{27}\left( -18c_2^3 + 27c_2 c_3 + c_4 \right) e^3 + \frac{1}{81}\left( 432c_2^4 - 864c_2^2 c_3 + 216c_3^2 + 396c_2 c_4 + 5c_5 \right) e^4 + \frac{1}{81}\left( 432c_2^5 - 1080c_2^3 c_3 + 486c_2 c_3^2 + 540c_2^2 c_4 - 234c_3 c_4 - 236c_2 c_5 - c_6 \right) e^5 + \cdots \Big] \tag{4.7} \]


Using equations (4.4) and (4.7), we have
\[ \tau = 1 - \frac{4}{3} c_2 e + \left( 4c_2^2 - \frac{8}{3} c_3 \right) e^2 + \left( -\frac{32}{3} c_2^3 + \frac{40}{3} c_2 c_3 - \frac{104}{27} c_4 \right) e^3 + \frac{4}{81}\left( 540c_2^4 - 999c_2^2 c_3 + 216c_3^2 + 363c_2 c_4 - 100c_5 \right) e^4 - \frac{4}{81}\left( 1296c_2^5 - 3186c_2^3 c_3 + 1485c_2 c_3^2 + 1320c_2^2 c_4 - 567c_3 c_4 - 453c_2 c_5 + 121c_6 \right) e^5 + \cdots \tag{4.8} \]
Also, we have
\[ \left( \frac{f'(x)^{\beta} + f'(y)^{\beta}}{2} \right)^{1/\beta} = f'(x^*)\Big[ 1 + \frac{4}{3} c_2 e + \frac{1}{9}\left( 2(2+\beta)c_2^2 + 15c_3 \right) e^2 + \frac{1}{27}\left( -4(4+5\beta)c_2^3 + 6(5+4\beta)c_2 c_3 + 56c_4 \right) e^3 + \frac{1}{243}\Big( (168 + 478\beta + 6\beta^2 - 4\beta^3)c_2^4 - 54(7+17\beta)c_2^2 c_3 + 6(47+52\beta)c_2 c_4 + 3\big( 36(1+2\beta)c_3^2 + 205c_5 \big) \Big) e^4 + \frac{2}{243}\Big( 8(-10 - 70\beta - 3\beta^2 + 2\beta^3)c_2^5 + 2(75 + 731\beta + 12\beta^2 - 8\beta^3)c_2^3 c_3 - 6(32+103\beta)c_2^2 c_4 + 39(1+8\beta)c_3 c_4 + c_2\big( (27 - 756\beta)c_3^2 + 2(77+100\beta)c_5 \big) + 366c_6 \Big) e^5 + \cdots \Big] \tag{4.9} \]
From (4.3) and (4.9), we get
\[ \frac{2^{1/\beta} f(x)}{\operatorname{sign}(f'(x)) \left( f'(x)^{\beta} + f'(y)^{\beta} \right)^{1/\beta}} = e - \frac{1}{3} c_2 e^2 - \frac{1}{9}\left( 2\beta c_2^2 + 3c_3 \right) e^3 + \frac{1}{27}\left( 10(2+3\beta)c_2^3 + 3(3-8\beta)c_2 c_3 - 29c_4 \right) e^4 + \frac{2}{243}\Big( (-228 - 437\beta + 3\beta^2 + 2\beta^3)c_2^4 + 54(4+13\beta)c_2^2 c_3 - 39(-3+4\beta)c_2 c_4 - 3\big( 9(-3+4\beta)c_3^2 + 62c_5 \big) \Big) e^5 + \frac{1}{729}\Big( -2(-1116 - 3379\beta + 87\beta^2 + 58\beta^3)c_2^5 + 6(-717 - 2767\beta + 24\beta^2 + 16\beta^3)c_2^3 c_3 + 18(47+321\beta)c_2^2 c_4 + 3c_2\big( (393-400\beta)c_5 + 27(-7+88\beta)c_3^2 \big) - 9\big( (-231+208\beta)c_3 c_4 + 163c_6 \big) \Big) e^6 + \cdots \tag{4.10} \]


We have
\[ H(\tau) \times G(t) = 1 + \frac{1}{3} c_2 e + \frac{1}{9}\left( (1+2\beta)c_2^2 + 6c_3 \right) e^2 + \frac{1}{162}\left( -8(73+27\beta)c_2^3 + 36(5+4\beta)c_2 c_3 + 3(-9 + 52c_4) \right) e^3 + \frac{1}{162}\Big( 36(79+25\beta)c_2^4 - 6(563+192\beta)c_2^2 c_3 + c_2\big( 81 + (314+208\beta)c_4 \big) + 8\big( 18(2+\beta)c_3^2 + 25c_5 \big) \Big) e^4 + \frac{1}{486}\Big( -288(104+33\beta)c_2^5 + 36(1607+498\beta)c_2^3 c_3 - c_2^2\big( 729 + 8(1787+597\beta)c_4 \big) - 2c_2\big( 63(151+48\beta)c_3^2 - (641+400\beta)c_5 \big) + 6\big( c_3(81 + 473c_4 + 208\beta c_4) + 121c_6 \big) \Big) e^5 + \cdots \tag{4.11} \]
Using (4.10) and (4.11) in (4.1), we obtain
\[ \psi_{4th\,MJ}(x) - x^* = \frac{1}{162}\left( 10(47+6\beta)c_2^3 - 162c_2 c_3 + 9(3 + 2c_4) \right) e^4 + \frac{1}{243}\Big( -2(2081 + 334\beta - 3\beta^2 + 2\beta^3)c_2^4 + 36(131+15\beta)c_2^2 c_3 - 486c_3^2 - 135c_2(1 + 4c_4) + 72c_5 \Big) e^5 + \frac{1}{729}\Big( 2(23679 + 4464\beta - 114\beta^2 + 56\beta^3)c_2^5 - 6(15019 + 2402\beta - 24\beta^2 + 16\beta^3)c_2^3 c_3 + 3c_2^2\big( 405 + 6974c_4 + \beta(-9 + 780c_4) \big) + 54c_2\big( (551+60\beta)c_3^2 - 45c_5 \big) - 54\big( 3c_3(5 + 33c_4) - 7c_6 \big) \Big) e^6 + O(e^7). \tag{4.12} \]
Thus, it is found that the 4th MJ method has fourth order convergence.

Theorem 4.2.2. Let f : D ⊂ ℝ → ℝ be a sufficiently smooth function having a simple root x* in the open interval D. If x⁽⁰⁾ is sufficiently close to x*, then the 6th MJ family of I.F. (4.2) has sixth-order convergence.


Proof. Taylor expansion of f(ψ_{4th MJ}(x)) about x* gives
\[ f(\psi_{4th\,MJ}(x)) = f'(x^*)\Big[ \frac{1}{162}\left( 10(47+6\beta)c_2^3 - 162c_2 c_3 + 9(3+2c_4) \right) e^4 + \frac{1}{243}\Big( -2(2081 + 334\beta - 3\beta^2 + 2\beta^3)c_2^4 + 36(131+15\beta)c_2^2 c_3 - 486c_3^2 - 135c_2(1 + 4c_4) + 72c_5 \Big) e^5 + \frac{1}{729}\Big( 2(23679 + 4464\beta - 114\beta^2 + 56\beta^3)c_2^5 - 6(15019 + 2402\beta - 24\beta^2 + 16\beta^3)c_2^3 c_3 + 3c_2^2\big( 405 + 6974c_4 + \beta(-9 + 780c_4) \big) + 54c_2\big( (551+60\beta)c_3^2 - 45c_5 \big) - 54\big( 3c_3(5+33c_4) - 7c_6 \big) \Big) e^6 + \cdots \Big] \tag{4.13} \]
Then, we have
\[ K(\tau) = 2 - \tau = 1 + \frac{4}{3} c_2 e + \left( -4c_2^2 + \frac{8}{3} c_3 \right) e^2 + O(e^3). \tag{4.14} \]
Using (4.9), (4.12), (4.13) and (4.14) in (4.2), we obtain
\[ \psi_{6th\,MJ}(x) - x^* = \frac{1}{1458}\left( 2(20+\beta)c_2^2 - 9c_3 \right)\left( 10(47+6\beta)c_2^3 - 162c_2 c_3 + 9(3 + 2c_4) \right) e^6 + O(e^7). \tag{4.15} \]
Thus, it is found that the 6th MJ method has sixth order convergence.

4.3 Numerical Examples

In this section, we give numerical results on some test functions to compare the efficiency of the proposed families 4th MJ and 6th MJ with the 2nd NM, 3rd PM, 4th JM, 4th KA, 6th CNM and 6th SJ methods. Numerical computations have been carried out in MATLAB using 500 significant digits. Depending on the precision of the computer, we use the stopping criterion error = |x⁽ᵏ⁺¹⁾ − x⁽ᵏ⁾| < ε, where ε = 10⁻⁵⁰, and N is the number of iterations required for convergence. d_ψ represents the total number of function evaluations. The ACOC ρ̃_k is computed as defined in Section 2.3.

In Tables ?? - ??, we have compared the number of iterations, convergence order, function evaluations per iteration, total number of function evaluations, computational order of convergence, error and CPU time. Tables ?? and ?? display the results for f1(x) and f2(x) respectively for different starting points. The new family of methods and the other comparable methods agree with the computational order of convergence. The members of the 4th MJ family converge in fewer iterations and with smaller error than the 3rd PM family, although both use the same number of function evaluations; the 6th MJ family also converges in fewer iterations and with the least error. Tables ?? and ?? display the results for f3(x) and f4(x) respectively for different starting points. The results are compared for the best integer value of β in [−20, 20] for the families 3rd PM, 4th MJ and 6th MJ along with 2nd NM. It is observed that for f3(x) the 6th MJ method converges in the fewest iterations and with the lowest error for the case β = −11, while for f4(x) it does so for the case β = −20. Tables ?? - ?? display the results for f1(x) to f8(x) for different starting points, compared for the best integer value of β in [−20, 20] for the families 3rd PM, 4th MJ and 6th MJ along with 2nd NM, 4th JM, 4th KA, 6th CNM and 6th SJ.

4.4

Concluding Remarks

We have compared the efficiency indices of some existing I.F. along with the proposed family of methods in Table ??. It is found that the 4th MJ I.F. has a better efficiency index than the 3rd PM I.F., while both use the same number of function evaluations. Hence, we conclude that the proposed 4th MJ and 6th MJ I.F. perform better than 2nd NM and can be competitors to other methods of equivalent order available in the literature. Further, the 4th MJ family of methods has optimal order of convergence.
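The efficiency index comparison above follows the standard formula EI = p^{1/d} for a method of order p with d function evaluations per iteration; a quick sketch in Python (the evaluation counts in the comments are the usual ones for these method types, stated here as assumptions of this example):

```python
def efficiency_index(p, d):
    """Efficiency index: order p with d function evaluations per iteration."""
    return p ** (1.0 / d)

ei_newton = efficiency_index(2, 2)  # 2nd NM: 2 evaluations (f, f'), about 1.414
ei_6th    = efficiency_index(6, 4)  # a sixth-order method with 4 evaluations, about 1.565
ei_4th    = efficiency_index(4, 3)  # an optimal fourth-order method (3 evaluations), about 1.587
```

With three evaluations, order four is the Kung-Traub optimal order, which is why the 4th MJ index 4^{1/3} ≈ 1.587 beats Newton's 2^{1/2} ≈ 1.414.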

Chapter 5

Family of higher order multi-point iterative methods based on power mean

In this chapter, we propose new Jarratt-type methods for solving the scalar nonlinear equation f(x) = 0 with convergence orders four, six and twelve using the idea of the power mean and weight functions. The main goal and motivation in the construction of new methods is to obtain a better efficiency index; in other words, it is desirable to attain a convergence order as high as possible with a fixed number of function evaluations per iteration. For multipoint methods without memory, this demand is closely connected with the optimal order stated in the Kung-Traub conjecture. In Section 5.1, we present the construction of the new methods. Convergence analysis, obtaining the truncation error by Taylor series, is derived in Section 5.2. In Section 5.3, numerical experiments and their results are tabulated. In Section 5.4, we study basins of attraction for the proposed methods along with Newton's method. Comparison of the efficiency indices and concluding remarks are given in the last section.



5.1

Construction of new methods

5.1.1

Family of Optimal fourth order I.F.

Consider the I.F. (4.1) given in Section 4.1.1. By introducing a single weight function H(τ, β) in place of the product H(τ) × G(t), a new family of fourth order I.F. (4th PM) is proposed:

ψ_{4th PM}(x) = x − (f(x)/E(x, β)) H(τ, β),    (5.1)

where

E(x, β) = f'(x) sign(f'(x)) (1 + τ^β)^{1/β} / 2^{1/β},    τ = f'(x − (2/3)u(x)) / f'(x),

H(τ, β) = 1 − (1/4)(τ − 1) + ((β + 5)/8)(τ − 1)^2.

5.1.2
Family of sixth order I.F.

Improving the order of convergence to six (6th PM) by adding one more step to the method (5.1), we have

ψ_{6th PM}(x) = ψ_{4th PM}(x) − G1 · f(ψ_{4th PM}(x)),    G1 = (2 − τ)/E(x, β).    (5.2)

5.1.3
Family of twelfth order I.F.

Recently, Soleymani et al. (2012a) improved a sixth-order Jarratt method to a twelfth-order one. In a similar way, applying 2nd NM on (5.2) and using Theorem 1.2.4, we obtain a new family of twelfth order I.F.:

ψ_{12th PM}(x) = ψ_{6th PM}(x) − f(ψ_{6th PM}(x)) / f'(ψ_{6th PM}(x)).

However, this requires two more function evaluations. To save one of them, we estimate f'(ψ_{6th PM}(x)) by the cubic polynomial

q(t) = a0 + a1(t − x) + a2(t − x)^2 + a3(t − x)^3,    (5.3)

which satisfies the conditions

q(x) = f(x),  q'(x) = f'(x),  q(ψ_{4th PM}(x)) = f(ψ_{4th PM}(x)),  q(ψ_{6th PM}(x)) = f(ψ_{6th PM}(x)).


On imposing the above conditions on (5.3), we obtain four linear equations in the four unknowns a0, a1, a2 and a3. From q(x) = f(x) and q'(x) = f'(x), we get a0 = f(x) and a1 = f'(x). To find a2 and a3, we solve the equations

f(ψ_{4th PM}(x)) = f(x) + f'(x)(ψ_{4th PM}(x) − x) + a2(ψ_{4th PM}(x) − x)^2 + a3(ψ_{4th PM}(x) − x)^3,

f(ψ_{6th PM}(x)) = f(x) + f'(x)(ψ_{6th PM}(x) − x) + a2(ψ_{6th PM}(x) − x)^2 + a3(ψ_{6th PM}(x) − x)^3.

Using divided differences, these simplify to

a2 + a3(ψ_{4th PM}(x) − x) = f[ψ_{4th PM}(x), x, x],    (5.4)

a2 + a3(ψ_{6th PM}(x) − x) = f[ψ_{6th PM}(x), x, x],    (5.5)

where

f[y, x] = (f(y) − f(x)) / (y − x),    f[y, x, x] = (f[y, x] − f'(x)) / (y − x).

Solving equations (5.4) and (5.5), we have

a2 = (f[ψ_{6th PM}(x), x, x](ψ_{4th PM}(x) − x) − f[ψ_{4th PM}(x), x, x](ψ_{6th PM}(x) − x)) / (ψ_{4th PM}(x) − ψ_{6th PM}(x)),    (5.6)

a3 = (f[ψ_{4th PM}(x), x, x] − f[ψ_{6th PM}(x), x, x]) / (ψ_{4th PM}(x) − ψ_{6th PM}(x)).

Further, using equation (5.6) we obtain the estimate

f'(ψ_{6th PM}(x)) ≈ q'(ψ_{6th PM}(x)) = a1 + 2a2(ψ_{6th PM}(x) − x) + 3a3(ψ_{6th PM}(x) − x)^2
= (1 / (ψ_{4th PM}(x) − ψ_{6th PM}(x))) [ f'(x)(ψ_{4th PM}(x) − ψ_{6th PM}(x)) + 2f[ψ_{6th PM}(x), x, x](ψ_{4th PM}(x) − x)(ψ_{6th PM}(x) − x) + (f[ψ_{4th PM}(x), x, x] − 3f[ψ_{6th PM}(x), x, x])(ψ_{6th PM}(x) − x)^2 ].

Therefore, finally we propose a new family of twelfth order I.F. (12th PM):

ψ_{12th PM}(x) = ψ_{6th PM}(x) − G2 · f(ψ_{6th PM}(x)),    G2 = 1 / q'(ψ_{6th PM}(x)).    (5.7)


Remark 5.1.1. This new family of methods (5.1), (5.2) and (5.7) was recently proposed in Babajee et al. (2015a).
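To make the three-step construction concrete, the following Python sketch composes (5.1), (5.2) and (5.7) for a scalar equation. It is an illustrative double-precision translation, not the thesis's 500-digit MATLAB code; the test function f(x) = x^2 − 2 and the choice β = 1 are assumptions of this example:

```python
import math

def pm12_step(f, fp, x, beta=1.0):
    """One step of the twelfth-order composition (5.1) + (5.2) + (5.7)."""
    fx, fpx = f(x), fp(x)
    y = x - (2.0 / 3.0) * fx / fpx
    tau = fp(y) / fpx
    # E(x, beta): signed power mean of f'(x) and f'(y), the denominator of (5.1)
    E = math.copysign(abs(fpx) * ((1.0 + tau ** beta) / 2.0) ** (1.0 / beta), fpx)
    H = 1.0 - 0.25 * (tau - 1.0) + ((beta + 5.0) / 8.0) * (tau - 1.0) ** 2
    x4 = x - fx / E * H                         # fourth-order step (5.1)
    x6 = x4 - (2.0 - tau) / E * f(x4)           # sixth-order step (5.2)
    if x4 == x or x6 == x4 or x6 == x:          # already converged: avoid 0/0 below
        return x6
    # divided differences f[., x, x] used to approximate f'(x6) via the cubic q(t)
    dd4 = ((f(x4) - fx) / (x4 - x) - fpx) / (x4 - x)
    dd6 = ((f(x6) - fx) / (x6 - x) - fpx) / (x6 - x)
    qp = (fpx * (x4 - x6) + 2.0 * dd6 * (x4 - x) * (x6 - x)
          + (dd4 - 3.0 * dd6) * (x6 - x) ** 2) / (x4 - x6)
    return x6 - f(x6) / qp                      # twelfth-order step (5.7)

f = lambda x: x * x - 2.0
fp = lambda x: 2.0 * x
x = 1.5
for _ in range(4):
    x = pm12_step(f, fp, x, beta=1.0)
```

Each step costs five function evaluations (f, f', f'(y), f(ψ_{4th PM}), f(ψ_{6th PM})); the point of the estimate (5.6)-(5.7) is precisely to avoid a sixth evaluation for f'(ψ_{6th PM}).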

5.2

Convergence Analysis of the methods

5.3

Numerical Examples

In this section, we give numerical results on some test functions to compare the efficiency of the proposed families 4th PM, 6th PM and 12th PM with the 2nd NM and 3rd PM methods. Numerical computations have been carried out in the MATLAB software, rounding to 500 significant digits. We use the stopping criterion error = |x(k+1) − x(k)| < ε, where ε = 10^−50. Tables ?? and ?? show the corresponding results for f1(x) to f8(x). If the initial points are very close to the root, then we obtain the least number of iterations and the lowest error. Hence, the proposed methods 4th PM, 6th PM and 12th PM have better efficiency when compared to 2nd NM and 3rd PM. If the initial points are not close to the root, then the 4th PM I.F. may not attain the least error when compared to the 3rd PM I.F. (for example, see Table ??: f2(x) with x(0) = −1.7; f3(x) with x(0) = 0.5; and f8(x) with x(0) = 1.5). From the asymptotic error constants in equations (??), (??) and (??), it is observed that the least error is obtained mostly for non-positive values of β. We next consider a test function of simple cubic type (see Drexler (1997) and Babajee and Dauhoo (2006)):

f9(x) = x^3 + ln x,  x > 0,  x ∈ R,

for which the logarithm restricts the function to be positive and whose convexity properties are favorable for global convergence. We focus on the behaviour of the methods for starting points equally spaced with ∆x = 0.01 in the interval (0, 5] to check their robustness. The root x* = 0.704709490254913 is correct to 14 digits. A starting point is considered divergent if it does not satisfy the condition |x(k+1) − x(k)| < 10^−13 within 100 iterations, or if x ≤ 0 at any iterate. We denote by wc the mean number

of iterations from a starting point until convergence; a penalty of 100 iterations is imposed for divergent points. Let Ns denote the number of successful points out of the 500 starting points. We tested 41 values of β in the interval [−20, 20] with ∆β = 1. Figure 5.1(a) shows that all the members of the four families are globally convergent for f9(x). Figure 5.1(b) shows that the mean wc decreases as the order of the method increases. We note that the mean increases rapidly for the 3rd PM family for β > 0. This shows that the improved fourth, sixth and twelfth order families are more efficient than the third order family.

[Figure 5.1: Results for f9(x). (a) Variation of Ns with β. (b) Variation of wc with β.]


For the 3rd PM family, the most efficient member is β = −9, with the lowest wc = 4.45. For the 4th PM family, it is β = 19, with the lowest wc = 4.33. For the 6th PM family, it is β = −5, with the lowest wc = 3.94. For the 12th PM family, it is β = −6, with the lowest wc = 2.70. This is the advantage of varying β. We also note that, as the order of the methods increases, the mean iteration number of the members considered remains almost constant, indicating the stability of the methods. The next section compares the dynamic behaviour of the 2nd NM and 4th PM methods in the complex plane. A detailed study of this aspect is given in Chapter 6.
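The robustness experiment above can be reproduced in outline. The Python sketch below runs only plain Newton's method (2nd NM) over the same 500 starting points for f9 and reports the mean iteration count wc; the family members and the β sweep are omitted, and double precision replaces the thesis's high-precision arithmetic:

```python
import math

def f9(x):  return x ** 3 + math.log(x)
def f9p(x): return 3.0 * x * x + 1.0 / x

def newton_iters(x, tol=1e-13, max_iter=100):
    """Newton iteration count from x; the 100-iteration penalty marks divergence."""
    for k in range(1, max_iter + 1):
        x_new = x - f9(x) / f9p(x)
        if x_new <= 0.0:
            return max_iter            # left the domain x > 0: count as divergent
        if abs(x_new - x) < tol:
            return k
        x = x_new
    return max_iter

starts = [0.01 * i for i in range(1, 501)]     # 500 equally spaced points in (0, 5]
wc = sum(newton_iters(s) for s in starts) / len(starts)
```

Averaging the per-point counts (with the penalty for divergent points) is exactly how the wc curves in Figure 5.1(b) are built, one curve per (method, β) pair.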

5.4

Dynamic Behaviour in the Complex Plane

We consider the square region [−2, 2] × [−2, 2] containing 160000 equally spaced grid points with mesh size h = 0.01. It is composed of 400 columns and 400 rows, which can be related to the pixels of a computer display representing a region of the complex plane (Soleymani et al. (2012a)). Each grid point is used as an initial point z0, and the number of iterations until convergence is counted for each point. We draw the polynomiographs of p1(z) = z^3 − 1 with roots α1 = 1, α2 = −0.5000 − 0.8660i and α3 = −0.5000 + 0.8660i. We assign red color if a grid point converges to the root α1, green color if it converges to the root α2 and blue color if it converges to the root α3 in at most 200 iterations with |zn − αj| < 10^−4, j = 1, 2, 3. In this way, the basin of attraction of each root is assigned a characteristic color. If the iterations do not converge as per the above condition for some initial point, we assign black color. Bahman Kalantari coined the term "polynomiography" for the art and science of visualization in the approximation of roots of polynomials using I.F. (Kalantari (2009)). Figure 5.2 shows the polynomiographs of the 2nd NM and 4th PM methods and their global convergence.


[Figure 5.2: Polynomiographs of p1(z). (a) 2nd NM (1.19); (b) β = 1 in 4th PM (5.1); (c) β = −1 in 4th PM (5.1).]


5.5

Concluding Remarks

We compare the efficiency indices of some existing I.F. along with the proposed family of I.F. in Table ??. The proposed 4th PM I.F. has a better efficiency index than the 3rd PM I.F., while both use the same number of function evaluations. It is observed that the proposed 4th PM, 6th PM and 12th PM I.F. have better efficiency indices than 2nd NM and 3rd PM. Hence, we conclude that the proposed 4th PM, 6th PM and 12th PM I.F. perform better than 2nd NM and can be competitors to other methods of equivalent order available in the literature. Further, the 4th PM family has optimal order of convergence. The dynamic behaviour of the 4th PM family is displayed through polynomiographs for β = 1 and β = −1, whereas for other values of β in the interval [−20, 20] the polynomiographs are yet to be explored in detail. This is due to the fact that the polynomial p3(z) has complex roots and the MATLAB program for obtaining basins of attraction has to be developed for different β values. This particular study is open for further research.

Chapter 6

Some New Multi-point Iterative Methods and their Basins of Attraction

In this chapter, we propose some Jarratt-type methods for solving the scalar nonlinear equation f(x) = 0 with convergence orders four, six and twelve using the idea of weight functions. The main goal and motivation in the construction of new methods is to obtain a better efficiency index; in other words, it is desirable to attain a convergence order as high as possible with a fixed number of function evaluations per iteration. For multipoint methods without memory, this demand is closely connected with the optimal order stated in the Kung-Traub conjecture. Section 6.1 presents the development of the new fourth order methods and their convergence analysis. The extension of the new fourth order methods to sixth and twelfth order methods is given in Section 6.2. Section 6.3 includes some numerical examples and results for the new family of methods along with some equivalent methods, including Newton's method. In Section 6.5, we obtain all possible extraneous fixed points for these methods as a special study. In Section 6.4, we study basins of attraction for the proposed fourth order methods, Newton's method and some existing methods. Section 6.6 discusses an application to Planck's radiation law. Comparison of the efficiency indices and concluding remarks are given in the last section.



6.1

Construction of new methods

Let us consider a third order I.F. (3rd NW) for solving nonlinear equations, presented by Noor and Waseem (2009):

y = x − (2/3) f(x)/f'(x),    ψ_{3rd NW}(x) = x − 4f(x) / (f'(x) + 3f'(y)).    (6.1)

This method (6.1) is of order three with three function evaluations per full iteration. To improve the order while keeping the same number of function evaluations, leading to an optimal method, we proposed the following class of I.F. with weight functions (see Madhu and Jayakumar (2016b)):

ψ_{4th MJ}(x) = x − (4f(x) / (f'(x) + 3f'(y))) × H(τ) × G(η),    (6.2)

where H(τ) and G(η) are two weight functions with τ = f'(y)/f'(x) and η = f'(x)/f'(y).

6.1.1
Convergence Analysis of the methods

Theorem 6.1.1. Let f : D ⊂ R → R be a sufficiently smooth function having continuous derivatives up to fourth order. If f(x) has a simple root x* in the open interval D and x(0) is chosen in a sufficiently small neighborhood of x*, then the class of methods (6.2) is of local fourth-order convergence when

H(1) = G(1) = 1,  H'(1) = G'(1) = 0,  H''(1) = 5/8,  G''(1) = 1/2,  |H'''(1)| < ∞,  |G'''(1)| < ∞,    (6.3)

and it satisfies the error equation

ψ_{4th MJ}(x) − x* = (1/81) [ −81 c2 c3 + 9 c4 + c2^3 (147 + 32H'''(1) − 32G'''(1)) ] e^4 + O(e^5).

Proof. Taylor expansion of f(x) and f'(x) about x* gives

f(x) = f'(x*) [ e + c2 e^2 + c3 e^3 + c4 e^4 + ... ]    (6.4)

and

f'(x) = f'(x*) [ 1 + 2c2 e + 3c3 e^2 + 4c4 e^3 + ... ],    (6.5)

so that

y = x* + (1/3) e + (2/3) c2 e^2 − (4/3)(c2^2 − c3) e^3 + (2/3)(4c2^3 − 7c2 c3 + 3c4) e^4 + ... .    (6.6)

Again, Taylor expansion of f'(y) about x* gives

f'(y) = f'(x*) [ 1 + (2/3) c2 e + (1/3)(4c2^2 + c3) e^2 + (4/27)(−18c2^3 + 27c2 c3 + c4) e^3 + ... ].    (6.7)

Using equations (6.5) and (6.7), we have

τ = 1 − (4/3) c2 e + (4c2^2 − (8/3) c3) e^2 − (8/27)(36c2^3 − 45c2 c3 + 13c4) e^3 + ...    (6.8)

and

η = 1 + (4/3) c2 e + (4/9)(−5c2^2 + 6c3) e^2 + (8/27)(8c2^3 − 21c2 c3 + 13c4) e^3 + ... .    (6.9)

Using equations (6.4), (6.5) and (6.7), we have

4f(x) / (f'(x) + 3f'(y)) = e − c2^2 e^3 + (3c2^3 − 3c2 c3 − (1/9) c4) e^4 + ... .    (6.10)

Expanding the weight functions H(τ) and G(η) about 1 using Taylor series, we get

H(τ) = H(1) + (τ − 1) H'(1) + (1/2)(τ − 1)^2 H''(1) + (1/6)(τ − 1)^3 H'''(1) + ... ,
G(η) = G(1) + (η − 1) G'(1) + (1/2)(η − 1)^2 G''(1) + (1/6)(η − 1)^3 G'''(1) + ... .    (6.11)

Using equations (6.10) and (6.11) in equation (6.2), with the conditions (6.3) satisfied, we obtain

ψ_{4th MJ}(x) − x* = (1/81) [ −81 c2 c3 + 9 c4 + c2^3 (147 + 32H'''(1) − 32G'''(1)) ] e^4 + O(e^5).    (6.12)

Equation (6.12) shows that method (6.2) has fourth order convergence.

Remark 6.1.1. Each choice of |H'''(1)| < ∞ and |G'''(1)| < ∞ in equation (6.12) gives rise to a new optimal fourth order method. Method (6.2) has efficiency index EIO = 1.587, better than that of method (6.1).


Two members of the class of methods (6.2) satisfying condition (6.3), with the corresponding weight functions, are given below. Choosing H'''(1) = G'''(1) = 0 gives a new method called 4th MJ1:

ψ_{4th MJ1}(x) = x − (4f(x) / (f'(x) + 3f'(y))) (1 + (5/16)(τ − 1)^2) (1 + (1/4)(η − 1)^2),    (6.13)

whose error equation is

ψ_{4th MJ1}(x) − x* = ( (49/27) c2^3 − c2 c3 + (1/9) c4 ) e^4 + O(e^5).

Choosing H'''(1) = 0 and G'''(1) = 1 gives another method called 4th MJ2:

ψ_{4th MJ2}(x) = x − (4f(x) / (f'(x) + 3f'(y))) (1 + (5/16)(τ − 1)^2) × (1 + (1/4)(η − 1)^2 + (1/6)(η − 1)^3),    (6.14)

whose error equation is

ψ_{4th MJ2}(x) − x* = ( (115/81) c2^3 − c2 c3 + (1/9) c4 ) e^4 + O(e^5).

Remark 6.1.2. In this way, we can propose many such fourth order methods similar to 4th MJ1 and 4th MJ2. The methods 4th MJ1 and 4th MJ2 are equally good, since they have the same order of convergence and efficiency. Based on the analysis of basins of attraction, we find that 4th MJ1 is marginally better than 4th MJ2, and hence we use 4th MJ1 to propose the higher order methods 6th MJ3 and 12th MJ4.
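A direct Python transcription of 4th MJ1 (6.13) illustrates the scheme. This is a double-precision sketch; the test equation x^3 + 4x^2 − 10 = 0 and the starting point x(0) = 1 are choices of this example, not taken from the thesis tables:

```python
def mj1_step(f, fp, x):
    """One step of 4th MJ1 (6.13): y = x - (2/3) f(x)/f'(x), then the weighted correction."""
    fx, fpx = f(x), fp(x)
    y = x - (2.0 / 3.0) * fx / fpx
    fpy = fp(y)
    tau, eta = fpy / fpx, fpx / fpy          # tau = f'(y)/f'(x), eta = f'(x)/f'(y)
    H = 1.0 + (5.0 / 16.0) * (tau - 1.0) ** 2
    G = 1.0 + 0.25 * (eta - 1.0) ** 2
    return x - 4.0 * fx / (fpx + 3.0 * fpy) * H * G

f = lambda x: x ** 3 + 4.0 * x ** 2 - 10.0
fp = lambda x: 3.0 * x ** 2 + 8.0 * x
x = 1.0
for _ in range(5):
    x = mj1_step(f, fp, x)   # fourth-order convergence to the simple root near 1.3652
```

Each step uses three evaluations (f, f', f'(y)), so this member realizes the optimal Kung-Traub order four for three evaluations.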

6.2

Higher Order Methods

We improve the method 4th MJ1 to a new sixth order method called 6th MJ3:

ψ_{6th MJ3}(x) = ψ_{4th MJ1}(x) − (f(ψ_{4th MJ1}(x)) / f'(x)) · (1/2)(3η − 1).    (6.15)

Babajee et al. (2015a) improved a sixth order Jarratt method to a twelfth order method. Using their technique, we obtain a new twelfth order method (12th MJ4):

ψ_{12th MJ4}(x) = ψ_{6th MJ3}(x) − f(ψ_{6th MJ3}(x)) / f'(ψ_{6th MJ3}(x)),    (6.16)


where f'(ψ_{6th MJ3}(x)) is approximated to save one function evaluation:

f'(ψ_{6th MJ3}(x)) ≈ (1 / (ψ_{4th MJ1}(x) − ψ_{6th MJ3}(x))) [ f'(x)(ψ_{4th MJ1}(x) − ψ_{6th MJ3}(x)) + 2f[ψ_{6th MJ3}(x), x, x](ψ_{4th MJ1}(x) − x)(ψ_{6th MJ3}(x) − x) + (f[ψ_{4th MJ1}(x), x, x] − 3f[ψ_{6th MJ3}(x), x, x])(ψ_{6th MJ3}(x) − x)^2 ],

where

f[ψ_{4th MJ1}(x), x, x] = (f[ψ_{4th MJ1}(x), x] − f'(x)) / (ψ_{4th MJ1}(x) − x),
f[ψ_{6th MJ3}(x), x, x] = (f[ψ_{6th MJ3}(x), x] − f'(x)) / (ψ_{6th MJ3}(x) − x).

6.2.1

Convergence Analysis of the methods

6.3

Numerical Examples

In this section, we give numerical results on some test functions to compare the efficiency of the new methods with some existing methods. Numerical computations have been carried out in the MATLAB software with 500 significant digits. We use the stopping criterion error = |x(k+1) − x(k)| < ε, where ε = 10^−50. N denotes the number of iterations required for convergence, dψ the total number of function evaluations, and ρ̃k the ACOC defined earlier. Tables ?? to ?? display the results for f1(x) to f5(x) respectively. It is observed that the 4th MJ1 and 4th MJ2 methods converge in fewer iterations and with lower error when compared to the 2nd NM and 3rd NW methods.


6.4

Basins of attraction

Sections 6.1 and 6.3 discussed methods whose roots are in the real domain, that is f : D ⊂ R → R. The study can be extended to functions defined in the complex plane, f : D ⊂ C → C, having complex zeros. By the fundamental theorem of algebra, a polynomial of degree n with real or complex coefficients has n roots, which may or may not be distinct. In such a case a complex initial guess is needed for convergence to complex zeros. We give below some definitions required for our study, which can be found in Blanchard (1984), Amat et al. (2004) and Scott et al. (2011). Let R : C → C be a rational map on the Riemann sphere.

Definition 6.4.1. For z ∈ C we define its orbit as the set orb(z) = {z, R(z), R^2(z), ..., R^n(z), ...}.

Definition 6.4.2. A periodic point z0 of period m satisfies R^m(z0) = z0, where m is the smallest such integer.

Definition 6.4.3. The Julia set of a nonlinear map R(z), denoted by J(R), is the closure of the set of its repelling periodic points. The complement of J(R) is the Fatou set F(R).

Definition 6.4.4. If O is an attracting periodic orbit of period m, we define the basin of attraction to be the open set A ⊂ C consisting of all points z ∈ C for which the successive iterates R^m(z), R^{2m}(z), ... converge towards some point of O.

Lemma 6.4.1. Every attracting periodic orbit is contained in the Fatou set of R. In fact, the entire basin of attraction A of an attracting periodic orbit is contained in the Fatou set. However, every repelling periodic orbit is contained in the Julia set.

In the following subsections, we produce some beautiful graphs obtained for the proposed methods and for some existing methods using the MATLAB software (see Chicharro et al. (2013)). In fact, an iteration function is a mapping of the plane into itself.
The common boundaries of these basins of attraction constitute the Julia set of the iteration function, and its complement is the Fatou set. This section shows how the proposed methods could be considered in


polynomiography. In the following subsections, we describe the basins of attraction of Newton's method and some higher order Newton-type methods for finding the complex roots of the polynomials p1(z) = z^3 − 1 and p2(z) = z^4 − 1.

6.4.1

Polynomiographs of p1 (z) = z 3 − 1

We consider the square region [−2, 2] × [−2, 2] containing 160000 equally spaced grid points with mesh size h = 0.01. This mesh is composed of 400 columns and 400 rows, which can be related to the pixels of a computer display representing a region of the complex plane, as given in Soleymani et al. (2012b). Each grid point is used as an initial point z0, and the number of iterations until convergence is counted for each point. We draw the polynomiographs of p1(z) = z^3 − 1 with roots α1 = 1, α2 = −0.5000 − 0.8660i and α3 = −0.5000 + 0.8660i. We assign red color if a grid point converges to the root α1, green color if it converges to the root α2 and blue color if it converges to the root α3 in at most 200 iterations with |zn − αj| < 10^−4, j = 1, 2, 3. In this way, the basin of attraction of each root is assigned a characteristic color. If the iterations do not converge as per the above condition for some initial point, we assign black color. Figure 6.1(a)-(j) shows the polynomiographs of the methods for the cubic polynomial p1(z). There are diverging points for the methods 3rd NW, 4th SBS1, 4th SBS2 and 4th SKK. All the starting points are convergent for the methods 2nd NM, 4th JM, 4th SJ, 4th SKS, 4th MJ1 and 4th MJ2. In Table 6.1, we classify the number of converging and diverging grid points for each iterative method. Note that a point z0 belongs to the Julia set if and only if the dynamics in a neighborhood of z0 displays sensitive dependence on the initial conditions, so that nearby initial conditions lead to wildly different behavior after a number of iterations. For this reason some of the methods have many divergent points. The common boundaries of these basins of attraction constitute the Julia set of the iteration function.
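The grid scan just described can be sketched as follows. Only Newton's method (2nd NM) is shown, and the coarse 41 × 41 grid is an assumption of this sketch (the figures use the full 400 × 400 grid):

```python
ROOTS = [1 + 0j, -0.5 + 0.8660254037844386j, -0.5 - 0.8660254037844386j]

def basin_label(z, max_iter=200, tol=1e-4):
    """Index of the root of p1(z) = z^3 - 1 reached by Newton's method, or -1 if none."""
    for _ in range(max_iter):
        if z == 0:                            # derivative 3z^2 vanishes here
            return -1
        z = z - (z ** 3 - 1) / (3 * z ** 2)   # Newton step for p1(z)
        for j, r in enumerate(ROOTS):
            if abs(z - r) < tol:
                return j
    return -1

# Count convergent grid points per root on a 41 x 41 grid over [-2, 2] x [-2, 2].
n = 41
counts = {-1: 0, 0: 0, 1: 0, 2: 0}
for a in range(n):
    for b in range(n):
        z0 = complex(-2.0 + 4.0 * a / (n - 1), -2.0 + 4.0 * b / (n - 1))
        counts[basin_label(z0)] += 1
```

Coloring each grid point by its label reproduces the polynomiographs, and tallying the labels reproduces counting tables such as Table 6.1.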

6.4.2

Polynomiographs of p2 (z) = z 4 − 1

Next, we draw the polynomiographs of p2 (z) = z 4 − 1 with roots α1 = 1, α2 = −1, α3 = i and α4 = −i. We assign a yellow color if each grid point converges to the


[Figure 6.1: Polynomiographs of p1(z). (a) 2nd NM (1.19); (b) 3rd NW (6.1); (c) 4th JM (1.42); (d) 4th SBS1 (??); (e) 4th SBS2 (??); (f) 4th SKK (??); (g) 4th SJ (??); (h) 4th SKS (??); (i) 4th MJ1 (6.13); (j) 4th MJ2 (6.14).]


Table 6.1: Comparison of convergent and divergent grid points for p1(z)

Method            Convergent (real root)   Convergent (complex roots)   Divergent
2nd NM (1.19)     56452                    103548                       0
3rd NW (6.1)      52372                    98670                        8958
4th JM (1.42)     56474                    103526                       0
4th SBS1 (??)     23174                    44160                        92666
4th SBS2 (??)     48018                    91308                        20674
4th SKK (??)      34722                    82590                        42688
4th SJ (??)       52587                    107143                       0
4th SKS (??)      55178                    104822                       0
4th MJ1 (6.13)    55892                    104108                       0
4th MJ2 (6.14)    54622                    105378                       0

root α1, red color if it converges to the root α2, green color if it converges to the root α3 and blue color if it converges to the root α4 in at most 200 iterations with |zn − αj| < 10^−4, j = 1, 2, 3, 4. Thus the basin of attraction of each root is assigned a corresponding color. If the iterations do not converge as per the above condition for some initial point, we assign black color.

Table 6.2: Comparison of convergent and divergent grid points for p2(z)

Method            Convergent (real roots)  Convergent (complex roots)   Divergent
2nd NM (1.19)     80010                    79990                        0
3rd NW (6.1)      68133                    68120                        23747
4th JM (1.42)     80001                    79999                        0
4th SBS1 (??)     53792                    53792                        52416
4th SBS2 (??)     60098                    60466                        39436
4th SKK (??)      54584                    54584                        50832
4th SJ (??)       79427                    79427                        1146
4th SKS (??)      79961                    79959                        80
4th MJ1 (6.13)    79962                    79979                        59
4th MJ2 (6.14)    79968                    79954                        78

Figure 6.2(a)-(j) shows the polynomiographs of the methods for the quartic polynomial p2(z). There are diverging points for the methods 3rd NW, 4th SBS1, 4th SBS2, 4th SKK, 4th SJ, 4th SKS, 4th MJ1 and 4th MJ2. All the starting points are convergent for the 2nd NM and 4th JM methods. In Table 6.2, we classify the number of converging and diverging grid points for each iterative method. Also, we observe


[Figure 6.2: Polynomiographs of p2(z). (a) 2nd NM (1.19); (b) 3rd NW (6.1); (c) 4th JM (1.42); (d) 4th SBS1 (??); (e) 4th SBS2 (??); (f) 4th SKK (??); (g) 4th SJ (??); (h) 4th SKS (??); (i) 4th MJ1 (6.13); (j) 4th MJ2 (6.14).]


that the 4th SKS, 4th MJ1 and 4th MJ2 methods diverge at fewer grid points than the methods 3rd NW, 4th SBS1, 4th SBS2, 4th SKK and 4th SJ. From this comparison based on the basins of attraction for cubic and quartic polynomials, we can generally say that the 2nd NM, 4th JM, 4th SKS, 4th MJ1 and 4th MJ2 methods are more reliable in solving nonlinear equations. Also, observing the polynomiographs of p1(z) and p2(z), we find certain patterns symmetrical about the x-axis and y-axis, where the starting point z0 leads to a convergent real or complex pair of roots of the respective polynomials.

6.5

A study on extraneous fixed points

Definition 6.5.1. A point z0 is a fixed point of R if R(z0) = z0.

Definition 6.5.2. A point z0 is called attracting if |R'(z0)| < 1, repelling if |R'(z0)| > 1 and neutral if |R'(z0)| = 1. If the derivative is zero, the point is superattracting.

It is interesting to note that all these iterative methods can be written as

ψ(x) = x − Gf(x) u(x),    u(x) = f(x)/f'(x).    (6.17)

By this definition, x* is a fixed point of the method, since u(x*) = 0. However, points ξ ≠ x* at which Gf(ξ) = 0 are also fixed points of the method, since the second term on the right side of (6.17) vanishes there. These points ξ are called extraneous fixed points. Moreover, for a general iteration function

Rp(z) = z − Gf(z) u(z),  z ∈ C,    (6.18)

the nature of the extraneous fixed points can be discussed. The convergence of the iteration process depends on the nature of these points. For more details on this aspect, the paper by Vrscay and Gilbert (1988) is useful. They showed that if the extraneous fixed points are attracting, then the method may give erroneous results, and if the extraneous fixed points are repelling or neutral, then the method may not converge to a root near the initial guess.


In this section, we discuss the extraneous fixed points of each method for the polynomial z^3 − 1. Since Gf does not vanish for the methods of Theorem 6.5.1, they have no extraneous fixed points.

Theorem 6.5.1. There are no extraneous fixed points for 2nd NM (1.19) and 3rd NW (6.1).

Theorem 6.5.2. There are six extraneous fixed points for 4th JM (1.42).

Proof. For the Jarratt method,

Gf = (3f'(y(z)) + f'(z)) / (6f'(y(z)) − 2f'(z)).

Upon substituting y(z) = z − 2f(z)/(3f'(z)), the extraneous fixed points are found from the equation

(1 + 7z^3 + 19z^6) / (2 + 14z^3 + 11z^6) = 0.

The extraneous fixed points are found to be

0.411175 ± 0.453532i, −0.598358 ± 0.129321i, 0.187183 ± 0.582854i.

All these fixed points are repelling (since |R'(z0)| > 1).

Theorem 6.5.3. There are fifty two extraneous fixed points for the method (??).

Proof. For the method (??), the corresponding factor Gf(z) is computed and set to zero. The extraneous fixed points are found to be

0.385139 ± 0.301563i, −0.453731 ± 0.182759i, 0.0685914 ± 0.484322i, −0.461227, 0.690937, −1.38146 ± 1.63298i, −0.888193 ± 0.382434i, −0.626419 ± 0.447214i, −0.616918 ± 0.228042i, −0.546519 ± 0.138633i, −0.504031 ± 0.0757213i, −0.483094 ± 0.0349619i, −0.345468 ± 0.598527i, −0.0785635 ± 0.774853i, 0.0935008 ± 0.707441i, 0.140045 ± 0.571188i, 0.177037 ± 0.495057i, 0.200558 ± 0.445297i, 0.205838 ± 1.20668i, 0.229093 ± 0.403647i, 0.253758 ± 0.407757i, 0.299419 ± 0.396038i, 0.365488 ± 0.402274i, 0.461153 ± 0.411868i, 0.659879 ± 0.470766i, 0.704721 ± 0.329989i, 1.56532 ± 0.938337i.

All these fixed points are repelling (since |R'(z0)| > 1).


Theorem 6.5.4. There are thirty nine extraneous fixed points for the method (??). Proof. For the method (??),   0 (z) f (z)3 f 0 (z) f 0 (z) + + 81f Gf = 1 + 3 f 0f(y(z)) 0 (y(z))3 +

3 (f 0 (y(z))−f 0 (z))2 8 f 0 (z)



.

The extraneous fixed points are at 0.385139 ± 0.301563i, − 0.453731 ± 0.182759i, 0.0685914 ± 0.484322i 3.98917 ± 6.90945i, − 7.97834, 0.41942 ± 0.726456i, − 0.838839, 0.277253 ± 0.480215i, − 0.554505, 0.46341 ± 0.53288i, − 0.693192 ± 0.134885i, 0.229782 ± 0.667764i, 0.367096 ± 0.467142i, − 0.588105 ± 0.0843435i, 0.221009 ± 0.551486i, 0.280074 ± 0.381388i, − 0.470329 ± 0.0518574i, 0.190255 ± 0.433246i, 0.615945 ± 0.214444i, − 0.493687 ± 0.426202i, − 0.122258 ± 0.640646i. All these fixed points are repelling (since |R0 (z0 )| > 1). Theorem 6.5.5. There are twenty four extraneous fixed points for the method (??). Proof. As we found for the method (??), Gf =





1

f 0 (y(z)) 1+ f 0 (z)

4

f 0 (z) + + ff0(z) (z)3



2f 0 (z) − 47 f 0 (y(z)) +

3 f 0 (y(z))2 4 f 0 (z)



.

The extraneous fixed points are found to be 0.272187 ± 0.394392i, 0.20546 ± 0.432916i, − 0.477646 ± 0.0385246i, 0.676726 ± 0.202542i, − 0.513769 ± 0.484791i, − 0.162957 ± 0.687333i, − 2.12619 ± 2.22671i, − 0.51922 ± 0.277607i, − 0.217805 ± 0.487789i, 0.210804 ± 0.604566i, 0.524089 ± 0.172222i, 2.12832 ± 2.00454i. All these fixed points are repelling (since |R0 (z0 )| > 1). Theorem 6.5.6. There are eighteen extraneous fixed points for the method (??). Proof. For the method (??), Gf =



17 8



9 f 0 (y(z)) 4 f 0 (z)

+

9 8



f 0 (y(z)) f 0 (z)

2 

7 4



3 f 0 (y(z)) 4 f 0 (z)



.

The extraneous fixed points are at −0.333371 ± 0.577415i, 0.666742, 0.229257 ± 0.397085i, −0.458515, 0.710065 ± 0.231721i, −0.555709 ± 0.499074i, −0.154356 ± 0.730795i, 0.275117 ± 0.402579i, −0.486202 ± 0.0369693i, 0.211085 ± 0.439548i. All these fixed points are repelling (since |R'(z0)| > 1).

Theorem 6.5.7. There are twelve extraneous fixed points for the method (??).

Proof. For the method (??),

Gf = −1/2 + (9/8)(f'(z)/f'(y(z))) + (3/8)(f'(y(z))/f'(z)).

The extraneous fixed points are at 0.289483 ± 0.382811i, 0.186782 ± 0.442105i, −0.476265 ± 0.0592945i, 0.605298 ± 0.2466i, −0.516211 ± 0.400903i, −0.0890867 ± 0.647503i. All these fixed points are repelling (since |R'(z0)| > 1).

Theorem 6.5.8. There are twenty-four extraneous fixed points for the method (6.13).

Proof. For the method (6.13),

Gf = (1/(1 + 3 f'(y(z))/f'(z))) (f'(z) + (5/16)(f'(y(z)) − f'(z))^2/f'(z)) (f'(z) + (1/4) f'(z)(f'(z) − f'(y(z)))^2/f'(y(z))^2)^2.

The extraneous fixed points are at 0.622907 ± 0.52714i, −0.767969 ± 0.275883i, 0.145063 ± 0.803023i, 0.310217 ± 0.445061i, −0.540543 ± 0.0461255i, 0.230326 ± 0.491187i, 0.280277 ± 0.377418i, 0.186715 ± 0.431436i, −0.466992 ± 0.0540183i, 0.602147 ± 0.210285i, −0.483186 ± 0.416332i, −0.118961 ± 0.626617i. All these fixed points are repelling (since |R'(z0)| > 1).

Theorem 6.5.9. There are thirty extraneous fixed points for the method (6.14).

Proof. For the method (6.14),

Gf = (1/(1 + 3 f'(y(z))/f'(z))) (f'(z) + (5/16)(f'(y(z)) − f'(z))^2/f'(z)) (f'(z) + (1/4) f'(z)(f'(z) − f'(y(z)))^2/f'(y(z))^2 + (1/6) f'(z)(f'(z) − f'(y(z)))^3/f'(y(z))^3).


The extraneous fixed points are at 0.280277 ± 0.377418i, 0.186715 ± 0.431436i, −0.466992 ± 0.0540183i, 0.602147 ± 0.210285i, −0.483186 ± 0.416332i, −0.118961 ± 0.626617i, 0.701957 ± 0.574647i, −0.848638 ± 0.320589i, 0.146681 ± 0.895237i, 0.296076 ± 0.447202i, −0.535326 ± 0.0328086i, 0.23925 ± 0.48001i, 0.414766 ± 0.407081i, 0.145159 ± 0.562739i, −0.559926 ± 0.155658i. All these fixed points are repelling (since |R'(z0)| > 1).

6.6

An application problem

To test our methods, we consider the following Planck's radiation law problem found in Bradie (2006) and Jain (2013):

ϕ(λ) = 8πchλ^(−5) / (e^(ch/λkT) − 1),

(6.19)

which calculates the energy density within an isothermal blackbody. Here, λ is the wavelength of the radiation, T is the absolute temperature of the blackbody, k is Boltzmann's constant, h is Planck's constant and c is the speed of light. Suppose we would like to determine the wavelength λ which corresponds to the maximum energy density ϕ(λ). From (6.19), we get

ϕ'(λ) = (8πchλ^(−6) / (e^(ch/λkT) − 1)) ((ch/λkT) e^(ch/λkT) / (e^(ch/λkT) − 1) − 5) = A · B.

It can be checked that a maximum for ϕ occurs when B = 0, that is, when

(ch/λkT) e^(ch/λkT) / (e^(ch/λkT) − 1) = 5.

Putting x = ch/λkT, the above equation becomes

1 − x/5 = e^(−x).

(6.20)

Define

f(x) = e^(−x) − 1 + x/5.

(6.21)

The aim is to find a root of the equation f(x) = 0. Obviously, the root x = 0 is not considered for discussion. As argued in Bradie (2006), the left-hand side of


Table 6.3: Comparison of results for Planck's radiation law problem

        2nd NM (1.19)                     4th DJ (??)
x(0)    N   dψ   ρ̃k    error             N   dψ   ρ̃k    error
4.0     7   14   2.00   1.4e−101         4   16   4.00   3.3e−086
4.5     6   12   1.99   4.5e−063         4   16   4.00   9.1e−110
5.0     6   12   2.00   1.4e−101         4   16   3.99   8.5e−185
5.5     6   12   1.99   5.4e−066         4   16   4.00   9.9e−112

Table 6.4: Comparison of results for Planck's radiation law problem

        4th MJ1 (6.13)                    4th MJ2 (6.14)
x(0)    N   dψ   ρ̃k    error             N   dψ   ρ̃k    error
4.0     4   12   3.99   2.2e−069         4   12   3.99   1.4e−069
4.5     4   12   3.99   2.1e−093         4   12   3.99   1.6e−093
5.0     4   12   3.99   1.3e−168         4   12   3.99   1.1e−168
5.5     4   12   3.99   1.6e−095         4   12   3.99   1.4e−095

(6.20) is zero for x = 5, while e^(−5) ≈ 6.74 × 10^(−3). Hence, it is expected that another root of the equation f(x) = 0 might occur near x = 5. The approximate root of equation (6.21) is given by x* ≈ 4.96511423174427630369. Consequently, the wavelength of radiation (λ) at which the energy density is maximum is approximated as

λ ≈ ch / (4.96511423174427630369 kT).

We apply the methods 2nd NM, 4th DJ, 4th MJ1, 4th MJ2, 6th MJ3 and 12th MJ4 to solve (6.21) and compare the results in Tables 6.3–6.5. From these tables, we note that the root x* is reached fastest by the 12th MJ4 method. This is due to the fact that it has the highest efficiency EIO = 1.644. Table 6.6 displays the results for the fzero command in MATLAB, where N1 is the number of

Table 6.5: Comparison of results for Planck's radiation law problem

        6th MJ3 (6.15)                    12th MJ4 (6.16)
x(0)    N   dψ   ρ̃k    error             N   dψ   ρ̃k     error
4.0     4   16   5.99   3.3e−224         3   15   12.11   7.7e−144
4.5     4   16   5.99   3.3e−306         3   15   12.03   6.4e−198
5.0     3   12   5.99   3.7e−093         3   15   12.00   0
5.5     3   12   5.95   3.2e−052         3   15   11.96   3.2e−203


Table 6.6: Results for Planck's radiation law problem in fzero

x(0)    N1   N   dψ    error            x*
4.0     8    6   23    1.1102e−016     4.9651
4.5     5    5   16   −1.1102e−016     4.9651
5.0     1    4   6     1.1102e−016     4.9651
5.5     5    5   15   −1.1102e−016     4.9651

iterations needed to find an interval containing the root, and error is the error after N iterations. For the fzero command, zeros are taken to be points where the function actually crosses, not merely touches, the x-axis. It is observed that the present methods converge with a smaller total number of function evaluations than the fzero solver.
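The root x* of (6.21) can be reproduced with a few steps of the classical Newton iteration (2nd NM); a minimal sketch (the starting guess x(0) = 4.0 is taken from Table 6.3, the stopping tolerance is our own choice):

```python
import math

def f(x):
    # f(x) = e^(-x) - 1 + x/5, equation (6.21)
    return math.exp(-x) - 1.0 + x / 5.0

def fprime(x):
    return -math.exp(-x) + 0.2

def newton(x, tol=1e-15, max_iter=50):
    """Plain Newton iteration (2nd NM) for the scalar equation f(x) = 0."""
    for _ in range(max_iter):
        dx = f(x) / fprime(x)
        x -= dx
        if abs(dx) < tol:
            break
    return x

root = newton(4.0)
print(root)   # ~ 4.9651142317
# wavelength of maximum energy density: lambda = c*h/(root*k*T)
```

The higher order methods of this chapter reach the same root in fewer iterations, as Tables 6.3–6.5 report.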

6.7

Concluding Remarks

We compare the efficiency index of some I.F. along with the proposed methods in Table 6.7. The proposed 4th MJ I.F. has a better efficiency index than the 3rd NW I.F., although both use the same number of function evaluations. It is observed that the proposed methods 4th MJ, 6th MJ3 and 12th MJ4 have better efficiency indices than 2nd NM. Hence, we conclude that the methods 4th MJ, 6th MJ3 and

Table 6.7: Comparison of Efficiency Index

Methods            p    f   f'   d   EIT    EIO
2nd NM (1.19)      2    1   1    2   1      1.414
3rd PM (6.1)       3    1   2    3   1      1.442
4th MJ (6.2)       4    1   2    3   1.33   1.587
6th MJ3 (6.15)     6    2   2    4   1.5    1.565
12th MJ4 (6.16)    12   3   2    5   2.4    1.644

12th MJ4 perform better than 2nd NM and can compete with other methods of equivalent order available in the literature.

Chapter 7

Improved Harmonic Mean Newton-type methods for system of nonlinear equations

In this chapter, we propose a new fourth order Newton-like method based on the harmonic mean, and its multi-step version, for solving systems of nonlinear equations F(x) = 0, where F(x) = (f1(x), f2(x), ..., fn(x))^T, x = (x1, x2, ..., xn)^T, fi : R^n → R for all i = 1, 2, ..., n, and F : D ⊂ R^n → R^n is a smooth map on an open convex set D. We assume that x* = (x1*, x2*, ..., xn*)^T is a zero of the system and x(0) = (x1(0), x2(0), ..., xn(0))^T is an initial guess sufficiently close to x*. Problems of this type arise, for example, while solving boundary value problems for differential equations: the differential equation is reduced to a system of nonlinear equations, which is in turn solved by the familiar Newton's method of convergence order two (see Ostrowski (1960)).

In Section 7.1, we present a new algorithm that has fourth order convergence and its multi-step version with order 2r + 4, where r ≥ 1 is a positive integer. In Section 7.2, we study the convergence analysis of the new methods using the theory of point of attraction. In Section 7.3, efficiency indices and computational efficiency indices for the new methods are discussed. Section 7.4 presents numerical examples and comparisons with some known methods. Furthermore, we also study an application problem, the 1-D Bratu problem, in Section 7.5. Concluding remarks, which show that our methods are efficient, are given in the last section.


7.1

Construction of new methods

One of the basic procedures for solving systems of nonlinear equations is the classical second order Newton's method (2nd NM). It is defined by

x(k+1) = G_2ndNM(x(k)) = x(k) − u(x(k)),   u(x(k)) = [F'(x(k))]^(-1) F(x(k)),   (7.1)

where k = 0, 1, 2, ... and [F'(x(k))]^(-1) is the inverse of the first Fréchet derivative F'(x(k)) of the function F(x(k)). It is straightforward to see that this method requires the evaluation of one function, one first derivative and one matrix inversion per iteration. Homeier (2005) proposed a third order iterative method, called the Harmonic Mean Newton's method, for solving a scalar nonlinear equation. Grau-Sanchez et al. (2012) proposed the following extension to solve a system of nonlinear equations F(x) = 0, henceforth called 3rd HM:

x(k+1) = G_3rdHM(x(k)) = x(k) − (1/2)([F'(x(k))]^(-1) + [F'(G_2ndNM(x(k)))]^(-1)) F(x(k)).   (7.2)

We note that (1/2)([F'(x(k))]^(-1) + [F'(G_2ndNM(x(k)))]^(-1)) is the average of the inverses of two Jacobians. In general, third order methods free of second derivatives, like (7.2), can be used for solving systems of nonlinear equations. These methods require one function evaluation and two first order Fréchet derivative evaluations per iteration. The convergence analysis of a few such methods using point of attraction theory can be found in Babajee (2010). The 3rd HM method is more efficient than Halley's method because it does not require the evaluation of a third order tensor of n^3 values. We propose a fourth order Harmonic Mean Newton's method (4th HM) for solving systems of nonlinear equations (method proposed in Babajee et al. (2015b)):

x(k+1) = G_4thHM(x(k)) = x(k) − H1(x(k)) A(x(k)) F(x(k)),
H1(x(k)) = I − (1/4)(τ(x(k)) − I) + (1/2)(τ(x(k)) − I)^2,
A(x(k)) = (1/2)([F'(x(k))]^(-1) + [F'(y(x(k)))]^(-1)),   y(x(k)) = x(k) − (2/3) u(x(k)),
τ(x(k)) = [F'(x(k))]^(-1) F'(y(x(k))),   u(x(k)) = [F'(x(k))]^(-1) F(x(k)),   (7.3)

where I is the n × n identity matrix. The new fourth order method requires the evaluation of one function and two first order Fréchet derivatives per iteration.
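One step of the 4th HM iteration (7.3) can be sketched as follows. The 2 × 2 test system, its root (2, 1) and the starting guess are our own illustrative assumptions, not taken from the thesis; for clarity the sketch forms explicit inverses, whereas a practical implementation would solve linear systems instead:

```python
import numpy as np

def F(x):
    # illustrative test system (our own choice): exact root at (2, 1)
    return np.array([x[0]**2 + x[1] - 5.0,
                     x[0] + x[1]**2 - 3.0])

def J(x):
    # Jacobian of the test system
    return np.array([[2.0 * x[0], 1.0],
                     [1.0, 2.0 * x[1]]])

def g4th_hm(x):
    """One step of the 4th HM method (7.3)."""
    n = len(x)
    I = np.eye(n)
    Jx_inv = np.linalg.inv(J(x))
    u = Jx_inv @ F(x)
    y = x - (2.0 / 3.0) * u
    Jy_inv = np.linalg.inv(J(y))
    tau = Jx_inv @ J(y)
    H1 = I - 0.25 * (tau - I) + 0.5 * (tau - I) @ (tau - I)
    A = 0.5 * (Jx_inv + Jy_inv)          # harmonic-mean average of inverses
    return x - H1 @ (A @ F(x))

x = np.array([1.9, 1.1])
for _ in range(5):
    x = g4th_hm(x)
print(x)   # approaches the root (2, 1)
```

Only F, J(x) and J(y) are evaluated per step, matching the one-function, two-Jacobian count stated above.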


We further improve the 4th HM method by additional function evaluations to get a multi-step version, called the (2r + 4)th HM method, given by

x(k+1) = G_(2r+4)thHM(x(k)) = µ_r(x(k)),
µ_j(x(k)) = µ_(j−1)(x(k)) − H2(x(k)) A(x(k)) F(µ_(j−1)(x(k))),   j = 1, 2, ..., r,  r ≥ 1,
H2(x(k)) = 2I − τ(x(k)),   (7.4)
µ_0(x(k)) = G_4thHM(x(k)).

Remark 7.1.1. This multi-step version has order 2r + 4, where r ≥ 0 is an integer; the case r = 0 recovers the 4th HM method.

Remark 7.1.2. Each additional step requires one more function evaluation per iteration. The proposed new methods do not require the evaluation of second or higher order Fréchet derivatives and still reach higher order convergence.
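In the scalar case n = 1 the Jacobian inverses in (7.3)–(7.4) reduce to reciprocals of f'(x), which gives a compact way to sketch the multi-step scheme; the test equation x^3 − 2 = 0 and the starting point are our own illustrative choices:

```python
# scalar instance (n = 1): f(x) = x^3 - 2, root 2^(1/3)
f  = lambda x: x**3 - 2.0
df = lambda x: 3.0 * x**2

def hm_2r_plus_4(x, r):
    """One step of the (2r+4)th HM method (7.4) in the scalar case."""
    u = f(x) / df(x)
    y = x - (2.0 / 3.0) * u
    tau = df(y) / df(x)
    A = 0.5 * (1.0 / df(x) + 1.0 / df(y))          # harmonic-mean average
    H1 = 1.0 - 0.25 * (tau - 1.0) + 0.5 * (tau - 1.0)**2
    mu = x - H1 * A * f(x)                          # 4th HM predictor (7.3)
    H2 = 2.0 - tau
    for _ in range(r):                              # r corrector steps, no new derivatives
        mu = mu - H2 * A * f(mu)
    return mu

x = 1.0
for _ in range(2):
    x = hm_2r_plus_4(x, r=1)                        # two 6th HM steps
print(x)   # ~ 1.2599210 (cube root of 2)
```

Note that the corrector steps reuse H2 and A from the current iterate, so each extra step costs only one new evaluation of f, as Remark 7.1.2 states.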

7.2

Convergence Analysis of the methods

In order to prove the convergence results, we recall some important definitions and results from the theory of point of attraction. The main theorem is going to be demonstrated by means of the n-dimensional Taylor expansion of the functions involved. Let F : D ⊆ Rn −→ Rn be sufficiently Fr´echet differentiable in D. By using the notation introduced in Cordero et al. (2010b), the qth derivative of F at u ∈ Rn , q ≥ 1, is the q-linear function F (q) (u) : Rn × · · · × Rn −→ Rn such that F (q) (u)(v1 , . . . , vq ) ∈ Rn . It is easy to observe that 1. F (q) (u)(v1 , . . . , vq−1 , ·) ∈ L(Rn ) 2. F (q) (u)(vσ(1) , . . . , vσ(q) ) = F (q) (u)(v1 , . . . , vq ), for all permutation σ of {1, 2, . . . , q}. So, in the following we will denote: (a) F (q) (u)(v1 , . . . , vq ) = F (q) (u)v1 . . . vq , (b) F (q) (u)v q−1 F (p) v p = F (q) (u)F (p) (u)v q+p−1 .


It is well known that, for x* + h ∈ R^n lying in a neighborhood of a solution x* of the nonlinear system F(x) = 0, Taylor's expansion can be applied (assuming that the Jacobian matrix F'(x*) is nonsingular):

F(x* + h) = F'(x*)[h + Σ_{q=2}^{p−1} Cq h^q] + O(h^p),   (7.5)

where Cq = (1/q!)[F'(x*)]^(-1) F^(q)(x*), q ≥ 2. We observe that Cq h^q ∈ R^n since F^(q)(x*) ∈ L(R^n × ··· × R^n, R^n) and [F'(x*)]^(-1) ∈ L(R^n). In addition, we can express F' as

F'(x* + h) = F'(x*)[I + Σ_{q=2}^{p−1} q Cq h^(q−1)] + O(h^p),   (7.6)

where I is the identity matrix. Therefore, q Cq h^(q−1) ∈ L(R^n). From (7.6), we obtain

[F'(x* + h)]^(-1) = [I + X1 h + X2 h^2 + X3 h^3 + ···][F'(x*)]^(-1) + O(h^p),   (7.7)

where X1 = −2C2, X2 = 4C2^2 − 3C3, X3 = −8C2^3 + 6C2C3 + 6C3C2 − 4C4, and so on. We denote e(k) = x(k) − x* as the error in the kth iteration. The equation

e(k+1) = L e(k)^p + O(e(k)^(p+1)),

where L is a p-linear function L ∈ L(R^n × ··· × R^n, R^n), is called the error equation and p is the order of convergence. Observe that e(k)^p is (e(k), e(k), ..., e(k)).

Definition 7.2.1 (Point of Attraction). Ortega and Rheinboldt (1970) Let G : D ⊂ R^n → R^n. Then x* is a point of attraction of the iteration

x(k+1) = G(x(k)), k = 0, 1, ...

(7.8)

if there is an open neighbourhood S of x*, defined by

S(x*) = {x ∈ R^n : ||x − x*|| < δ}, δ > 0,

such that S ⊂ D and, for any x(0) ∈ S, the iterates {x(k)} defined by equation (7.8) all lie in D and converge to x*.


Theorem 7.2.1 (Ostrowski Theorem). Ortega and Rheinboldt (1970) Assume that G : D ⊂ R^n → R^n has a fixed point x* ∈ int(D) and G(x) is Fréchet differentiable at x*. If

ρ(G'(x*)) = σ < 1

(7.9)

then x* is a point of attraction for x(k+1) = G(x(k)).

We now prove a general result showing that x* is a point of attraction of a general iteration function G(x) = P(x) − Q(x)R(x).

Theorem 7.2.2. Let F : D ⊂ R^n → R^n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x* ∈ D, which is a solution of the system F(x) = 0. Suppose that P, Q, R : D ⊂ R^n → R^n are sufficiently Fréchet differentiable functionals (depending on F) at each point in D, with P(x*) = x*, Q(x*) ≠ 0 and R(x*) = 0. Then there exists a ball

S = S(x*, δ) = {x : ||x* − x|| ≤ δ} ⊂ S0, δ > 0,

on which the mapping

G : S → R^n,   G(x) = P(x) − Q(x)R(x), for all x ∈ S,

is well-defined; moreover, G is Fréchet differentiable at x*, and thus

G'(x*) = P'(x*) − Q(x*)R'(x*).

Proof. Clearly, G(x*) = x*. By the triangle inequality,

||G(x) − G(x*) − G'(x*)(x − x*)|| = ||P(x) − Q(x)R(x) − x* − (P'(x*) − Q(x*)R'(x*))(x − x*)|| ≤ ||P(x) − x* − P'(x*)(x − x*)|| + ||−Q(x)R(x) + Q(x*)R'(x*)(x − x*)||.

Since P(x) is differentiable at x* and P(x*) = x*, we can assume that δ was chosen sufficiently small such that

||P(x) − x* − P'(x*)(x − x*)|| ≤ ε ||x − x*||,


for all x ∈ S, with ε > 0 depending on δ (and ε = 0 in case P(x) = x). Since P, Q and R are continuously differentiable functions, Q', R' and R'' are bounded:

||Q'(x)|| ≤ K1,  ||R'(x)|| ≤ K2,  ||R''(x)|| ≤ K3.

Now, by the mean value theorem for integrals,

Q(x) = Q(x*) + [∫_0^1 Q'(x* + t(x − x*)) dt] (x − x*)

and

R(x) = [∫_0^1 R'(x* + s(x − x*)) ds] (x − x*),

so that

||Q(x)R(x) − Q(x*)R'(x*)(x − x*)||
= || Q(x*)[∫_0^1 (R'(x* + s(x − x*)) − R'(x*)) ds](x − x*)^2 + [∫_0^1 ∫_0^1 Q'(x* + t(x − x*)) R'(x* + s(x − x*)) dt ds](x − x*)^2 ||
≤ || Q(x*)[∫_0^1 ∫_0^1 R''(x* + sλ(x − x*)) ds dλ] s (x − x*)^2 || + || [∫_0^1 ∫_0^1 Q'(x* + t(x − x*)) R'(x* + s(x − x*)) dt ds](x − x*)^2 ||, using the triangle inequality,
≤ ||Q(x*)|| ∫_0^1 ∫_0^1 ||R''(x* + sλ(x − x*))|| ds dλ |s| ||x − x*||^2 + ∫_0^1 ∫_0^1 ||Q'(x* + t(x − x*))|| ||R'(x* + s(x − x*))|| dt ds ||x − x*||^2, using the Schwarz inequality,
≤ ((K3/2) ||Q(x*)|| + K1 K2) ||x − x*||^2, since Q', R' and R'' are bounded,
≤ δ ((K3/2) ||Q(x*)|| + K1 K2) ||x − x*||, since ||x − x*|| ≤ δ.

Combining, we have

||G(x) − G(x*) − G'(x*)(x − x*)|| ≤ δ (ε + (K3/2) ||Q(x*)|| + K1 K2) ||x − x*||,


which shows that G(x) is differentiable at x*, since δ and ε are arbitrary and ||Q(x*)||, K1, K2 and K3 are constants. Thus G'(x*) = P'(x*) − Q(x*)R'(x*).

Next, we prove the convergence order of the proposed methods (7.3) and (7.4).

Theorem 7.2.3. Let F : D ⊆ R^n → R^n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x* ∈ R^n that is a solution of the system F(x) = 0. Let us suppose that x ∈ S = S(x*, δ), F'(x) is continuous and nonsingular at x*, and x(0) is close enough to x*. Then x* is a point of attraction of the sequence {x(k)} obtained using the iterative expression (7.3). Furthermore, the sequence converges to x* with order four, where the error equation obtained is

e(k+1) = G_4thHM(x(k)) − x* = L1 e(k)^4 + O(e(k)^5),   (7.10)

L1 = (79/27) C2^3 − (8/9) C2C3 − (1/9) C3C2 + (1/9) C4.

Proof. We first show that x* is a point of attraction using Theorem 7.2.2. In this case,

P(x) = x,   Q(x) = H1(x)A(x),   R(x) = F(x).

Now, since F(x*) = 0, we have

y(x*) = x* − (2/3)[F'(x*)]^(-1) F(x*) = x*,
τ(x*) = [F'(x*)]^(-1) F'(y(x*)) = I,
A(x*) = (1/2)([F'(x*)]^(-1) + [F'(y(x*))]^(-1)) = [F'(x*)]^(-1),
H1(x*) = I,
Q(x*) = H1(x*)A(x*) = I[F'(x*)]^(-1) = [F'(x*)]^(-1) ≠ 0,
R(x*) = F(x*) = 0,   R'(x*) = F'(x*),
P(x*) = x*,   P'(x*) = I,
G'(x*) = P'(x*) − Q(x*)R'(x*) = I − [F'(x*)]^(-1) F'(x*) = 0,

so that ρ(G'(x*)) = 0 < 1 and, by Ostrowski's theorem, x* is a point of attraction of equation (7.3). Next we establish the fourth order convergence of this method. From


equations (7.5) and (7.6) we obtain

F(x(k)) = F'(x*)[e(k) + C2 e(k)^2 + C3 e(k)^3 + C4 e(k)^4] + O(e(k)^5)   (7.11)

and

F'(x(k)) = F'(x*)[I + 2C2 e(k) + 3C3 e(k)^2 + 4C4 e(k)^3 + 5C5 e(k)^4] + O(e(k)^5),

where e(k) = x(k) − x*. We have

[F'(x(k))]^(-1) = [I + X1 e(k) + X2 e(k)^2 + X3 e(k)^3][F'(x*)]^(-1) + O(e(k)^4),   (7.12)

where X1 = −2C2, X2 = 4C2^2 − 3C3 and X3 = −8C2^3 + 6C2C3 + 6C3C2 − 4C4. Then

[F'(x(k))]^(-1) F(x(k)) = e(k) − C2 e(k)^2 + 2(C2^2 − C3) e(k)^3 + O(e(k)^4),

and the expression for y(x(k)) is

y(x(k)) = x* + (1/3) e(k) + (2/3) C2 e(k)^2 − (4/3)(C2^2 − C3) e(k)^3 + (2C4 − (8/3) C2C3 − 2C3C2 + (8/3) C2^3) e(k)^4 + O(e(k)^5).

The Taylor expansion of the Jacobian matrix F'(y(x(k))) is

F'(y(x(k))) = F'(x*)[I + 2C2 (y(x(k)) − x*) + 3C3 (y(x(k)) − x*)^2 + 4C4 (y(x(k)) − x*)^3 + 5C5 (y(x(k)) − x*)^4] + O(e(k)^5)
            = F'(x*)[I + N1 e(k) + N2 e(k)^2 + N3 e(k)^3] + O(e(k)^4),

N1 = (2/3) C2,   N2 = (4/3) C2^2 + (1/3) C3,   N3 = −(8/3) C2^3 + (8/3) C2C3 + (4/3) C3C2 + (4/27) C4.

Therefore,

τ(x(k)) = [F'(x(k))]^(-1) F'(y(x(k)))
        = I + (N1 + X1) e(k) + (N2 + X1N1 + X2) e(k)^2 + (N3 + X1N2 + X2N1 + X3) e(k)^3 + O(e(k)^4)
        = I − (4/3) C2 e(k) + (4C2^2 − (8/3) C3) e(k)^2 + (−(32/3) C2^3 + 8 C2C3 + (16/3) C3C2 − (104/27) C4) e(k)^3 + O(e(k)^4)


and then

H1(x(k)) = I − (1/4)(τ(x(k)) − I) + (1/2)(τ(x(k)) − I)^2
         = I + (1/3) C2 e(k) + (−(1/9) C2^2 + (2/3) C3) e(k)^2 + (−(8/3) C2^3 + (14/9) C2C3 − (4/3) C3C2 + (26/27) C4) e(k)^3 + O(e(k)^4).   (7.13)

Also,

[F'(y(x(k)))]^(-1) = [I − N1 e(k) + (N1^2 − N2) e(k)^2 + (N1N2 + N2N1 − N1^3 − N3) e(k)^3][F'(x*)]^(-1) + O(e(k)^4)
                   = [I + Y1 e(k) + Y2 e(k)^2 + Y3 e(k)^3][F'(x*)]^(-1) + O(e(k)^4),   (7.14)

where

Y1 = −(2/3) C2,   Y2 = −(8/9) C2^2 − (1/3) C3,   Y3 = (112/27) C2^3 − (22/9) C2C3 − (10/9) C3C2 − (4/27) C4.

On the other hand, using equations (7.12) and (7.14), the harmonic mean can be expressed as

A(x(k)) = [I − (4/3) C2 e(k) + ((14/9) C2^2 − (5/3) C3) e(k)^2 + (−(52/27) C2^3 + (16/9) C2C3 + (22/9) C3C2 − (56/27) C4) e(k)^3][F'(x*)]^(-1) + O(e(k)^4).   (7.15)

+ O(e(k) ) Using equations (7.13) and (7.15), we have h  106 17 (k) 2 (k) 2 H1 (x )A(x ) = I − C2 e + (C2 − C3 ) e + − C23 + C2 C3 27 9 10  (k) 3 i 0 ∗ −1 10 4 [F (x )] + O(e(k) ) + C3 C2 − C4 e 9 9 (7.16) (k)

(k)

Finally, using equations (7.11) and (7.16) in equation (7.3), with some simplifications the error equation can be expressed as

e(k+1) = x(k) − x* − H1(x(k)) A(x(k)) F(x(k)) = ((79/27) C2^3 − (8/9) C2C3 − (1/9) C3C2 + (1/9) C4) e(k)^4 + O(e(k)^5).

(7.17)

Thus from equation (7.17), it can be concluded that the order of convergence of the 4th HM method is four.


For the case r ≥ 1 we state and prove the following theorem.

Theorem 7.2.4. Let F : D ⊆ R^n → R^n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x* ∈ R^n that is a solution of the system F(x) = 0. Let us suppose that x ∈ S = S(x*, δ), F'(x) is continuous and nonsingular at x*, and x(0) is close enough to x*. Then x* is a point of attraction of the sequence {x(k)} obtained using the iterative expression (7.4). Furthermore, the sequence converges to x* with order 2r + 4, where r ≥ 1 is a positive integer.

Proof. In this case,

P(x) = µ_(j−1)(x),   Q(x) = H2(x)A(x),   R(x) = F(µ_(j−1)(x)),   j = 1, ..., r.

We can show by induction that

µ_(j−1)(x*) = x*,   µ'_(j−1)(x*) = 0,   for all j = 1, ..., r,

so that

P(x*) = µ_(j−1)(x*) = x*,   H2(x*) = I,
Q(x*) = H2(x*)A(x*) = I[F'(x*)]^(-1) = [F'(x*)]^(-1) ≠ 0,
R(x*) = F(µ_(j−1)(x*)) = F(x*) = 0,
P'(x*) = µ'_(j−1)(x*) = 0,   R'(x*) = F'(µ_(j−1)(x*)) µ'_(j−1)(x*) = 0,
G'(x*) = P'(x*) − Q(x*)R'(x*) = 0.

So ρ(G'(x*)) = 0 < 1 and, by Ostrowski's theorem, x* is a point of attraction of equation (7.4). A Taylor expansion of F(µ_(j−1)(x(k))) about x* yields

F(µ_(j−1)(x(k))) = F'(x*)[(µ_(j−1)(x(k)) − x*) + C2 (µ_(j−1)(x(k)) − x*)^2 + ...].   (7.18)

Also,

H2(x(k)) = I + (4/3) C2 e(k) + (−4 C2^2 + (8/3) C3) e(k)^2 + ...

(7.19)


Using equations (7.15) and (7.19), we have

H2(x(k)) A(x(k)) = [I + L2 e(k)^2 + ...][F'(x*)]^(-1),   L2 = −(38/9) C2^2 + C3.   (7.20)

Using equations (7.18) and (7.20), we obtain

µ_j(x(k)) − x* = µ_(j−1)(x(k)) − x* − H2(x(k)) A(x(k)) F(µ_(j−1)(x(k)))
              = L2 e(k)^2 (µ_(j−1)(x(k)) − x*) + ...   (7.21)

Proceeding by induction on equation (7.21) and using equation (7.10), we have

µ_r(x(k)) − x* = L1 L2^r e(k)^(2r+4) + O(e(k)^(2r+5)), r ≥ 1,

which shows that the method has order of convergence 2r + 4. For the purpose of comparing results, consider the following iterative methods for solving systems of nonlinear equations. The two step fourth order Newton's method (4th NR):

x(k+1) = G_4thNR(x(k)) = G_2ndNM(x(k)) − [F'(G_2ndNM(x(k)))]^(-1) F(G_2ndNM(x(k))),

(7.22)

which was recently rediscovered by Noor et al. (2013) using the variational iteration technique. Sharma et al. (2013) developed a fourth order method, which is given by

x(k+1) = G_4thSGS(x(k)) = x(k) − W(x(k)) [F'(x(k))]^(-1) F(x(k)),
W(x(k)) = (1/2)(−I + (9/4) [F'(y(k))]^(-1) F'(x(k)) + (3/4) [F'(x(k))]^(-1) F'(y(k))),
y(k) = x(k) − (2/3) [F'(x(k))]^(-1) F(x(k)).

(7.23)

Cordero et al. (2012a) presented a sixth order method, which is given by x(k+1) = G6th CHM T (x(k) ) = z(x(k) ) − [F 0 (G2nd N M (x(k) ))]−1 F (z(x(k) )) z(x(k) ) = G2nd N M (x(k) ) − H(x(k) )[F 0 (x(k) )]−1 F (G2nd N M (x(k) )) h i H(x(k) ) = 2I − F 0 (x(k) )−1 F 0 (G2nd N M (x(k) )) .

(7.24)


7.3

Efficiency of the Methods

In this section, we consider the efficiency index to compare the proposed methods.

Definition 7.3.1. Ostrowski (1960) The Efficiency Index of an iterative method is defined as EI = p^(1/d), where p is the order of convergence and d is the total number of new function evaluations (F and F') per iteration.

Definition 7.3.2. Cordero et al. (2010b) The Computational Efficiency of an iterative method is defined as CE = p^(1/(d+op)), where p is the order of convergence, d is the total number of new function evaluations and op is the total number of operations per iteration. For calculating op, the number of products and quotients required for solving m linear systems with the same coefficient matrix by LU factorization is (1/3)n^3 + m n^2 − (1/3)n, where n is the size of each system.

Table 7.1 shows the comparison of EI and CE. From this table, we observe for n = 5 and n = 10 (n being the size of the system) that the proposed methods have larger EI and CE values and hence are more efficient.

Table 7.1: Comparison of EI and CE

Method            EI                  n=5     n=10     CE                                   n=5     n=10
2nd NM (7.1)      2^(1/(n+n^2))       1.0234  1.0063   2^(1/((1/3)n^3+2n^2+(2/3)n))         1.0073  1.0013
3rd HM (7.2)      3^(1/(n+2n^2))      1.0202  1.0052   3^(1/((2/3)n^3+4n^2+(1/3)n))         1.0060  1.0010
4th NR (7.22)     4^(1/(2n+2n^2))     1.0234  1.0063   4^(1/((2/3)n^3+4n^2+(4/3)n))         1.0073  1.0013
4th SGS (7.23)    4^(1/(n+2n^2))      1.0255  1.0066   4^(1/((2/3)n^3+5n^2+(1/3)n))         1.0066  1.0012
6th CHMT (7.24)   6^(1/(3n+2n^2))     1.0279  1.0078   6^(1/((2/3)n^3+6n^2+(7/3)n))         1.0073  1.0014
4th HM (7.3)      4^(1/(n+2n^2))      1.0255  1.0066   4^(1/((2/3)n^3+5n^2+(1/3)n))         1.0066  1.0012
6th HM (7.4)      6^(1/(2n+2n^2))     1.0303  1.0082   6^(1/((2/3)n^3+7n^2+(4/3)n))         1.0068  1.0013

CE

Figure 7.1 displays the performance of different methods with respect to EI and CE. It is observed from the figure, the new method 6th HM is better than 2nd N M and all other compared methods, for all n ≥ 2 with respect to both EI and CE.

82

Efficiency index

Computational efficiency index

1.18

1.07 nd

2nd NM

2 NM 3rd HM

1.16 1.14

3rd HM

1.06

4th NR

4th NR

4th SGS

4th SGS 1.05

6th CHMT

1.12

th

6 CHMT

4th HM

4th HM

6th HM

6th HM

1.04

EI

CE

1.1 1.08

1.03

1.06 1.02 1.04 1.01

1.02 1

2

3

4

5

6 n

7

8

9

10

1

2

3

4

5

6 n

7

8

9

10

Figure 7.1: Comparison of Efficiency index and Computational efficiency index

7.4

Numerical Examples

In this section, we compare the performance of the proposed methods (7.3) and (7.4) with some known methods. The numerical experiments have been carried out using M ATLAB software for the test problems given below. The approximate solutions are calculated correct to 1000 digits by using variable precision arithmetic. We use the following stopping criterion for the iterations: errmin = kx(k+1) − x(k) k2 < 10−100

(7.25)

We have used the approximated computational order of convergence pc given by (see Cordero and Torregrosa (2007)) pc ≈

log (kx(k+1) − x(k) k2 /kx(k) − x(k−1) k2 ) log (kx(k) − x(k−1) k2 /kx(k−1) − x(k−2) k2 )

(7.26)

Tables ??-?? show the results for the test problems (TP1, TP2, TP3), from which we conclude that the 10th HM method is the most efficient method with least number of iterations and residual error. Also, we have given CPU time for the proposed methods and some existing methods. Next, we consider the (2r + 4)th HM family of methods (7.4) for finding the least value of r and thus the value of p in order to get the number of iterations M = 2 and errmin = 0. To achieve this, TP1 requires r = 6 (p = 16), TP2 requires r = 18 (p = 40) and TP3 requires r = 8 (p = 20).

83

7.5

Application

The 1-D Bratu problem Buckmire (2003) is given by d2 U + λ exp U (x) = 0, λ > 0, 0 < x < 1, dx2

(7.27)

with the boundary conditions U (0) = U (1) = 0. The 1-D planar Bratu problem has two known, bifurcated, exact solutions for values of λ < λc , one solution for λ = λc and no solution for λ > λc . The critical value of λc is simply 8(η 2 − 1), where η is the fixed point of the hyperbolic cotangent function coth (x). The exact solution to the equation (7.27) is known and can be presented here as # cosh (x − 21 ) 2θ  , U (x) = −2 ln cosh 4θ "

(7.28)

where θ is a constant to be determined, which satisfies the boundary conditions and is carefully chosen and assumed to be the solution of the differential equation (7.27). Using a similar procedure as in Odejide and Aregbesola (2006), we show how to obtain the critical value of λ. Substitute equation (7.28) in equation (7.27), simplify 1 and collocate at the point x = because it is the midpoint of the interval. Another 2 point could be chosen, but low order approximations are likely to be better if the collocation points are distributed somewhat evenly throughout the region. Then, we have   θ θ = 2λ cosh . 4 2

2

Differentiating equation (7.29) with respect to θ and setting

(7.29) dλ = 0, the critical dθ

value λc satisfies 1 θ = λc cosh 2

    θ θ sinh . 4 4

(7.30)

By eliminating λ from equations (7.29) and (7.30), we have the value of θc for the critical λc satisfying θc = coth 4



θc 4

 (7.31)

for which θc = 4.798714560 can be obtained using an iterative method. We then get λc = 3.513830720 from equation (7.29). Figure 7.2 illustrates this critical value of λ. The finite dimensional problem using standard finite difference scheme is given

84

Variation of θ with λ 20

θ

15

λc

10

5

0 0

0.5

1

1.5

2

2.5

λ

3

3.5

4

Figure 7.2: Variation of θ for different values of λ. by Fj (Uj ) =

Uj+1 − 2Uj + Uj−1 + λ exp Uj = 0, j = 1, 2, ..., n − 1 h2

(7.32)

with discrete boundary conditions U0 = Un = 0 and the step size h = 1/n. There are n − 1 unknowns. The Jacobian is a sparse matrix and its typical number of nonzero per row is three. It is known that the finite difference scheme converges to the lower solution of the 1-D Bratu using the starting vector U (0) = (0, 0, .., 0)T . We use n = 100 and test for 350 λ’s in the interval (0, 3.5] (h = 0.01). (k+1)

For each λ, we let Mλ be the minimum number of iterations for which kUj (k)

(k)

Uj k2 < 1e − 13, where the approximation Uj



is calculated correct to 14 decimal

places. Let Mλ be the mean of iteration number for 350 λ’s. Table 7.2: Comparison of number of λ’s (out of 350 λ’s) for 1-D Bratu problem Method 2nd N M (7.1) 3rd HM (7.2) 4th SGS (7.23) th 6 CHM T (7.24) 4th HM (7.3) 6th HM (7.4)

M =2 M =3 M =4 M =5 M >5 0 12 114 143 81 0 140 206 2 2 4 237 100 8 1 3 213 124 8 2 4 234 103 7 2 35 281 32 1 1

Mλ 4.92 3.62 3.33 3.42 3.35 3.00

Figure 7.3 and Table 7.2 give the results for the 1-D Bratu problem, where M represents the number of iterations for convergence. It can be observed from figure 7.3, from four methods considered, as λ increases to its critical value the number

85

of iterations required for convergence increases. From the table 7.2, it is observed that as the order of method increases, the mean of iteration number (Mλ ) decreases. Also, 6th HM is the most efficient method among the six methods because it has the lowest Mλ and the highest number of λ converging in 2 iterations. 10 9 2ndNM 3rdHM

8

4thHM 6thHM



7 6 5 4 3 2 0

0.5

1

1.5

λ

2

2.5

3

3.5

Figure 7.3: Variation of number of iteration with λ for 1-D Bratu problem For each λ, we find the minimum order of the (2r + 4)th HM family so that we reach convergence in 2 iterations and the results are shown in Figure 7.4. It can be observed that as the value of λ increases, the value of p required for convergence in 2 iterations also increases. For λ ∈ [0.01, 0.04], we require p = 4 (4th HM ). For λ ∈ [0.05, 0.35], we require p = 6 (6th HM ). For λ ∈ [0.36, 0.83], we require p = 8 (8th HM ). For λ ∈ [0.84, 1.29], we require p = 10 (10th HM ). For λ ∈ [1.30, 1.66], we require p = 12 (12th HM ). For λ ∈ [1.66, 1.95], we require p = 14 (14th HM ). For λ ∈ [1.96, 2.19], we require p = 16 (16th HM ). For λ ∈ [2.20, 2.37], we require p = 18 (18th HM ). For λ ∈ [2.38, 2.52], we require p = 20 (20th HM ). For λ ∈ [2.53, 2.64], we require p = 22 (22th HM ) and so on. We notice that the width of the interval decreases and the order of the family is very high as λ tends to its critical value. Finally, for λ = 3.5, we require p = 260 to reach convergence in 2 iterations.

7.6

Concluding Remarks

In this chapter, a fourth order method and its multi-step version having higher order convergence using weight functions to solve systems of nonlinear equations have

86

160 140

order of method p

120 100 80 60 40 20 0 0

0.5

1

1.5

λ

2

2.5

3

3.5

Figure 7.4: Order of the (2r + 4)th HM family for each λ. been proposed. The proposed methods do not require the evaluation of second or higher order Fr´echet derivatives to reach fourth order or higher order of convergence. We have proved a general result that shows x∗ is a point of attraction of a general iteration function. Also for the proposed new methods, it is verified that x∗ is a point of attraction. A few examples have been verified using the proposed methods and compared them with some known methods, which illustrate the superiority of the new methods. The proposed new methods have been applied on a practical problem called 1-D Bratu problem. The results obtained are interesting and encouraging for the new methods. Hence, the proposed methods can be considered competent enough to Newton’s method and some of the existing methods.

Chapter 8 Efficient Newton-type methods for system of nonlinear equations In this chapter, we have presented some efficient iterative methods of convergence order four, five and six for solving system of nonlinear equations. Our aim is to achieve higher order Newton-type methods with only one inverse of Jacobian matrix. Moreover, we pay special attention to the less number of linear systems to be used in the iterative process. The fourth order method is a two step method, whereas new fifth and sixth order methods are composed of three steps, namely, Newton iteration as the first step and weighted Newton iteration as the second and third step. It is proved that the root x∗ is a point of attraction for the new iterative schemes. The performance of the new methods are verified through numerical examples. As an application, we have implemented the present methods on Chandrasekhar’s equation and 1-D Bratu problem. In Section 8.1, a fourth order iterative method for solving systems of nonlinear equation is proposed. Further, two new methods having fifth and sixth order convergence are given. Section 8.2 discusses a convergence analysis of the proposed methods. In section 8.3, efficiency indices and computational efficiency indices for the new methods are discussed. Section 8.4 verifies the new methods with numerical examples and their results are compared with some existing methods. Section 8.5 includes two application problems, namely Chandrasekhar’s equation and 1-D Bratu problem. Concluding remarks are given in the last section. Outcome of the new methods are given in concluding in remarks which shows that our methods are efficient.


8.1 Construction of new methods

Petkovic et al. (2013, 2014) recently developed a fourth order iterative method for solving a single equation, given by

ψ4thPet(x) = x − [1 − (3/4)(τ − 1) + (9/8)(τ − 1)^2] f(x)/f'(x),
y = x − (2/3) f(x)/f'(x),  τ = f'(y)/f'(x).   (8.1)

We have extended the method (8.1) to systems of nonlinear equations with fourth order convergence and a total number of function evaluations n + 2n^2. Further, new fifth and sixth order methods, each with a total number of function evaluations 2n + 2n^2, are proposed using only one inverse evaluation of the first Fréchet derivative per iteration. These three methods have been developed in Madhu and Jayakumar (2016a).
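To make the construction concrete, the following sketch (ours, not from the thesis; the test system and all names are illustrative) implements the system extension of (8.1) in the shape used in the convergence proofs below: one Jacobian factorization per iteration, reused both for the step u = [F'(x)]^{-1}F(x) and for the matrix τ = [F'(x)]^{-1}F'(y), with the weight H1(τ) = I − (3/4)(τ − I) + (9/8)(τ − I)^2.

```python
import numpy as np

def fourth_order_m1(F, J, x, tol=1e-12, max_iter=50):
    """Jarratt-type fourth order sketch of the system version of (8.1).

    Each sweep uses J(x) for two solves: u = J(x)^{-1} F(x) and
    tau = J(x)^{-1} J(y) with y = x - (2/3) u.
    """
    n = len(x)
    I = np.eye(n)
    for _ in range(max_iter):
        Jx = J(x)
        u = np.linalg.solve(Jx, F(x))           # Newton correction
        y = x - (2.0 / 3.0) * u                 # two-thirds Newton point
        tau = np.linalg.solve(Jx, J(y))         # matrix right-hand side
        H1 = I - 0.75 * (tau - I) + 1.125 * (tau - I) @ (tau - I)
        x_next = x - H1 @ u                     # weighted Newton update
        if np.linalg.norm(x_next - x) < tol:
            return x_next
        x = x_next
    return x

# Illustrative test system (ours): roots at (1, 2)
F = lambda x: np.array([x[0]**2 - 1.0, x[1]**2 - 4.0])
J = lambda x: np.array([[2.0 * x[0], 0.0], [0.0, 2.0 * x[1]]])
root = fourth_order_m1(F, J, np.array([1.5, 2.5]))
```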

8.2 Convergence Analysis of the methods

In order to prove the convergence results, we recall some important definitions and results from the theory of point of attraction given in Section 7.2.

Theorem 8.2.1. Let F : D ⊆ R^n → R^n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x* ∈ R^n, which is a solution of the system F(x) = 0. Suppose that x ∈ S = S(x*, δ), F'(x) is continuous and nonsingular at x*, and x(0) is close enough to x*. Then the sequence {x(k)}k≥0 obtained using the iterative expression of method 4th M1 converges locally to x* with order 4, where the error equation is

e(k+1) = G4thM1(x(k)) − x* = L1 (e(k))^4 + O((e(k))^5),  L1 = (1/9)C4 − 4C2C3 + 5C2^3 + 3C3C2.

Proof. We first show that x* is a point of attraction using Theorem 7.2.2. In this case,

P(x) = x,  Q(x) = H1(x)[F'(x)]^{-1},  R(x) = F(x).

Now, since F(x*) = 0, we have

y(x*) = x* − (2/3)[F'(x*)]^{-1}F(x*) = x*,
τ(x*) = [F'(x*)]^{-1}F'(y(x*)) = I,
H1(x*) = I,
Q(x*) = I[F'(x*)]^{-1} = [F'(x*)]^{-1} ≠ 0,
R(x*) = F(x*) = 0,  P(x*) = x*,
R'(x*) = F'(x*),  P'(x*) = I,

then G'(x*) = P'(x*) − Q(x*)R'(x*) = I − [F'(x*)]^{-1}F'(x*) = 0, so that ρ(G'(x*)) = 0 < 1 and, by Ostrowski's Theorem, x* is a point of attraction of the method 4th M1.

We next establish fourth order convergence of the method. From (7.5) and (7.6), we obtain

F(x(k)) = F'(x*)[e(k) + C2(e(k))^2 + C3(e(k))^3 + C4(e(k))^4 + C5(e(k))^5 + C6(e(k))^6] + O((e(k))^7),   (8.2)

and

F'(x(k)) = F'(x*)[I + 2C2 e(k) + 3C3(e(k))^2 + 4C4(e(k))^3 + 5C5(e(k))^4 + 6C6(e(k))^5] + O((e(k))^6),

where Ck = (1/k!)[F'(x*)]^{-1}F^(k)(x*), k = 2, 3, ..., and e(k) = x(k) − x*. We have

[F'(x(k))]^{-1} = [I + X2 e(k) + X3(e(k))^2 + X4(e(k))^3 + X5(e(k))^4 + X6(e(k))^5][F'(x*)]^{-1} + O((e(k))^6),   (8.3)

where

X2 = −2C2,  X3 = 4C2^2 − 3C3,  X4 = −8C2^3 + 6C2C3 + 6C3C2 − 4C4,
X5 = 16C2^4 − 12C3C2^2 − 12C2C3C2 + 8C4C2 − 12C2^2C3 + 9C3^2 + 8C2C4 − 5C5,
X6 = −32C2^5 + 24C3C2^3 + 24C2C3C2^2 − 16C4C2^2 + 24C2^2C3C2 − 18C3^2C2 − 16C2C4C2 + 10C5C2 + 24C2^3C3 − 18C3C2C3 − 18C2C3^2 + 12C4C3 − 16C2^2C4 + 12C3C4 + 10C2C4 − 6C6.

Then

[F'(x(k))]^{-1}F(x(k)) = e(k) + K0(e(k))^2 + K1(e(k))^3 + K2(e(k))^4 + K3(e(k))^5 + K4(e(k))^6 + O((e(k))^7),   (8.4)

where

K0 = −C2,  K1 = 2C2^2 − 2C3,  K2 = −4C2^3 + 4C2C3 + 3C3C2 − 3C4,
K3 = 8C2^4 − 6C3C2^2 − 6C2C3C2 + 4C4C2 − 8C2^2C3 + 6C3^2 + 6C2C4 − 4C5,
K4 = −5C6 − 2C2C5 − 14C2^2C4 + 9C3C4 + 16C2^3C3 − 12C3C2C3 − 12C2C3^2 + 8C4C3 − 16C2^5 + 12C3C2^3 + 12C2C3C2^2 − 8C4C2^2 + 12C2^2C3C2 − 9C3^2C2 − 8C2C4C2 + 5C5C2 + 10C2C4.

Also, the expression for y(x(k)) is

y(x(k)) = x* + (1/3)e(k) − (2/3)K,

where

K = K0(e(k))^2 + K1(e(k))^3 + K2(e(k))^4 + K3(e(k))^5 + K4(e(k))^6.

The Taylor expansion of the Jacobian matrix F'(y(x(k))) is

F'(y(x(k))) = F'(x*)[I + 2C2(y(x(k)) − x*) + 3C3(y(x(k)) − x*)^2 + 4C4(y(x(k)) − x*)^3 + 5C5(y(x(k)) − x*)^4 + 6C6(y(x(k)) − x*)^5 + O((e(k))^6)]
= F'(x*)[I + N1 e(k) + N2(e(k))^2 + N3(e(k))^3 + N4(e(k))^4 + N5(e(k))^5 + O((e(k))^6)],   (8.5)

where

N1 = (2/3)C2,  N2 = (4/3)C2^2 + (1/3)C3,  N3 = −(8/3)C2^3 + (8/3)C2C3 + (4/3)C3C2 + (4/27)C4,
N4 = 4C2C4 − (16/3)C2^2C3 + (16/3)C2^4 − 4C2C3C2 − (4/3)C3C2^2 + (8/3)C3^2 + (8/9)C4C2 + (5/81)C5,
N5 = (16/3)C2C5 − 8C2^2C4 + (32/3)C2^3C3 − 8C2C3^2 − (32/3)C2^5 + 8C2C3C2^2 + 8C2^2C3C2 − (16/3)C2C4C2 + 4C3C4 − (16/3)C3C2C3 + (16/3)C3C2^3 − 4C3^2C2 + (16/9)C4C3 + (40/81)C5C2 + (6/243)C6.

Therefore,

τ(x(k)) = [F'(x(k))]^{-1}F'(y(x(k)))
= I + (N1 + X2)e(k) + (N2 + X2N1 + X3)(e(k))^2 + (N3 + X2N2 + X3N1 + X4)(e(k))^3 + (N4 + X2N3 + X3N2 + X4N1 + X5)(e(k))^4 + (N5 + X2N4 + X3N3 + X4N2 + X5N1 + X6)(e(k))^5 + O((e(k))^6),   (8.6)

and then

H1(x(k)) = I − (3/4)(τ(x(k)) − I) + (9/8)(τ(x(k)) − I)^2
= I + R1 e(k) + R2(e(k))^2 + R3(e(k))^3 + R4(e(k))^4 + R5(e(k))^5 + O((e(k))^6),   (8.7)

where

R1 = C2,  R2 = 2C3 − C2^2,  R3 = −4C2^3 + 2C2C3 − 4C3C2 + (26/9)C4,
R4 = −(183/9)C2C4 − 32C2^2C3 + 30C2^4 − 5C2C3C2 + 10C3C2^2 − (14/3)C4C2 + (100/27)C5,
R5 = −(2022/108)C2C5 − (104/9)C2^2C4 + 24C2^3C3 − 10C2C3^2 − 32C2^5 − 5C5C2 + 14C2C3C2^2 + 16C2^2C3C2 − 16C2C4C2 − (105/9)C3C4 + 22C3C2C3 − 31C3C2^3 + 15C3^2C2 − (28/3)C4C3 − (10/27)C2 + (363/81)C6 + 12C4C2^2 − (30/4)C2C4.

Using (8.4) and (8.7), we have

H1(x(k))[F'(x(k))]^{-1}F(x(k)) = e(k) + S1(e(k))^4 + S2(e(k))^5 + S3(e(k))^6 + O((e(k))^7),   (8.8)

where

S1 = −(1/9)C4 + 4C2C3 − 5C2^3 − 3C3C2,
S2 = −(8/27)C5 − (156/9)C2C4 − 34C2^2C3 + 2C3^2 + 36C2^4 + 12C3C2^2 − 10C2C3C2 − (56/9)C4C2,
S3 = −(42/81)C6 − (2670/108)C2C5 − (149/9)C2^2C4 − (78/9)C3C4 + 36C2^3C3 + 51C2^2C3C2 + 26C3C2C3 − 20C2C3^2 − (64/9)C4C3 − 74C2^5 − 45C3C2^3 + 29C2C3C2^2 + (130/9)C4C2^2 + 12C3^2C2 + (1/3)C2C4C2 − (100/27)C5C2 + (10/4)C2C4 − (10/27)C2.

Next, by using (8.8) in the iterative expression of method 4th M1, we have

G4thM1(x(k)) = x(k) − H1(x(k))[F'(x(k))]^{-1}F(x(k))
= x* − S1(e(k))^4 − S2(e(k))^5 − S3(e(k))^6 + O((e(k))^7).   (8.9)

Finally, we obtain

e(k+1) = G4thM1(x(k)) − x* = ((1/9)C4 − 4C2C3 + 5C2^3 + 3C3C2)(e(k))^4 + O((e(k))^5).

Hence, we see that the method 4th M1 has fourth order convergence.

Theorem 8.2.2. Let F : D ⊆ R^n → R^n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x* ∈ R^n, which is a solution of the system F(x) = 0. Suppose that x ∈ S = S(x*, δ), F'(x) is continuous and nonsingular at x*, and x(0) is close enough to x*. Then the sequence {x(k)}k≥0 obtained using the iterative expression of method 5th M2 converges locally to x* with order 5, where the error equation is

e(k+1) = G5thM2(x(k)) − x* = L2 (e(k))^5 + O((e(k))^6),  L2 = (2/9)C2C4 − 8C2^2C3 + 10C2^4 + 6C2C3C2.

Proof. We first show that x* is a point of attraction using Theorem 7.2.2. In this case,

P(x) = G4thM1(x),  Q(x) = [F'(x)]^{-1},  R(x) = F(G4thM1(x)).

We can show by induction that G4thM1(x*) = x* and G'4thM1(x*) = 0, so that P(x*) = G4thM1(x*) = x*, Q(x*) = [F'(x*)]^{-1} ≠ 0, R(x*) = F(G4thM1(x*)) = F(x*) = 0, P'(x*) = G'4thM1(x*) = 0, R'(x*) = F'(G4thM1(x*))G'4thM1(x*) = 0, and G'(x*) = P'(x*) − Q(x*)R'(x*) = 0. So ρ(G'(x*)) = 0 < 1 and, by Ostrowski's Theorem, x* is a point of attraction of the method 5th M2.

We next establish fifth order convergence of the method. Expanding F(G4thM1(x(k))) about x*, we have

F(G4thM1(x(k))) = F'(x*)[−S1(e(k))^4 − S2(e(k))^5 − S3(e(k))^6] + O((e(k))^7).   (8.10)

Using equations (8.3) and (8.10), we get

[F'(x(k))]^{-1}F(G4thM1(x(k))) = −S1(e(k))^4 − (S2 + X2S1)(e(k))^5 − (S3 + X2S2 + X3S1)(e(k))^6 + O((e(k))^7).   (8.11)

Again, using (8.11) and (8.9) in the definition of the method 5th M2, we get

G5thM2(x(k)) = G4thM1(x(k)) − [F'(x(k))]^{-1}F(G4thM1(x(k)))
= x* − S1(e(k))^4 − S2(e(k))^5 − S3(e(k))^6 − [−S1(e(k))^4 − (S2 + X2S1)(e(k))^5 − (S3 + X2S2 + X3S1)(e(k))^6 + O((e(k))^7)].

From the above equation, we finally obtain

e(k+1) = G5thM2(x(k)) − x* = ((2/9)C2C4 − 8C2^2C3 + 10C2^4 + 6C2C3C2)(e(k))^5 + O((e(k))^6).

Hence, we see that the method 5th M2 has fifth order convergence.

Theorem 8.2.3. Let F : D ⊆ R^n → R^n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x* ∈ R^n, which is a solution of the system F(x) = 0. Suppose that x ∈ S = S(x*, δ), F'(x) is continuous and nonsingular at x*, and x(0) is close enough to x*. Then the sequence {x(k)}k≥0 obtained using the iterative expression of method 6th M3 converges locally to x* with order 6, where the error equation is

e(k+1) = G6thM3(x(k)) − x* = L3 (e(k))^6 + O((e(k))^7),
L3 = (230/9)C2^5 + (46/81)C2^2C4 + 4C3C2C3 − 3C3^2C2 − 5C3C2^3 − (184/9)C2^3C3 + (138/9)C2^2C3C2.

Proof. In this case,

P(x) = G4thM1(x),  Q(x) = H2(x)[F'(x)]^{-1},  R(x) = F(G4thM1(x)).

We can show by induction that G4thM1(x*) = x* and G'4thM1(x*) = 0, so that

P(x*) = G4thM1(x*) = x*,  H2(x*) = I,
Q(x*) = H2(x*)[F'(x*)]^{-1} = I[F'(x*)]^{-1} = [F'(x*)]^{-1} ≠ 0,
R(x*) = F(G4thM1(x*)) = F(x*) = 0,
P'(x*) = G'4thM1(x*) = 0,
R'(x*) = F'(G4thM1(x*))G'4thM1(x*) = 0,
G'(x*) = P'(x*) − Q(x*)R'(x*) = 0.

So ρ(G'(x*)) = 0 < 1 and, by Ostrowski's Theorem, x* is a point of attraction for the method 6th M3. By using (8.6) in H2(x(k)), we get

H2(x(k)) = I + 2C2 e(k) + (−(46/9)C2^2 + 4C3)(e(k))^2 + O((e(k))^3).   (8.12)

Again, using (8.11) and (8.12), we have

H2(x(k))[F'(x(k))]^{-1}F(G4thM1(x(k))) = −S1(e(k))^4 − (S2 + X2S1 + 2C2S1)(e(k))^5 − (S3 + X2S2 + X3S1 + 2C2(S2 + X2S1) + (−(46/9)C2^2 + 4C3)S1)(e(k))^6 + ...   (8.13)

Then, using (8.9) and (8.13) in the definition of the method 6th M3, we have

G6thM3(x(k)) = G4thM1(x(k)) − H2(x(k))[F'(x(k))]^{-1}F(G4thM1(x(k)))
= x* − S1(e(k))^4 − S2(e(k))^5 − S3(e(k))^6 − [−S1(e(k))^4 − (S2 + X2S1 + 2C2S1)(e(k))^5 − (S3 + X2S2 + X3S1 + 2C2(S2 + X2S1) + (−(46/9)C2^2 + 4C3)S1)(e(k))^6] + O((e(k))^7).

From the above equation, we finally obtain

e(k+1) = G6thM3(x(k)) − x* = ((230/9)C2^5 + (46/81)C2^2C4 + 4C3C2C3 − 3C3^2C2 − 5C3C2^3 − (184/9)C2^3C3 + (138/9)C2^2C3C2)(e(k))^6 + O((e(k))^7).

Hence, we see that the method 6th M3 has sixth order convergence. Further, we consider the following iterative methods for solving systems of nonlinear equations for the purpose of comparison:

Method of Babajee et al. (2012) (4th BCST):

x(k+1) = G4thBCST(x(k)) = x(k) − W(x(k))[A1(x(k))]^{-1}F(x(k)),
A1(x(k)) = (1/2)(F'(x(k)) + F'(y(x(k)))),
W(x(k)) = I − (3/4)(τ(x(k)) − I) + (1/4)(τ(x(k)) − I)^2,   (8.14)

where τ(x(k)) and y(x(k)) are as defined in 4th M1.

Method of Grau-Sanchez et al. (2011) (5th GGN):

x(k+1) = G5thGGN(x(k)) = z(x(k)) − [F'(y(x(k)))]^{-1}F(z(x(k))),
z(x(k)) = x(k) − (1/2)[[F'(x(k))]^{-1} + [F'(y(x(k)))]^{-1}]F(x(k)),   (8.15)
y(x(k)) = x(k) − [F'(x(k))]^{-1}F(x(k)).

Method of Cordero et al. (2012b) (6th CTV):

x(k+1) = G6thCTV(x(k)) = z(x(k)) + [F'(x(k)) − 2F'(y(x(k)))]^{-1}F(z(x(k))),
z(x(k)) = x(k) + [F'(x(k)) − 2F'(y(x(k)))]^{-1}[3F(x(k)) − 4F(y(x(k)))],   (8.16)
y(x(k)) = x(k) − (1/2)[F'(x(k))]^{-1}F(x(k)).

8.3 Efficiency of the Methods

In this section, we consider Definitions 7.3.1 and 7.3.2 given in Section 7.3 for the efficiency index and the computational efficiency, respectively. Table 8.1 shows the comparison of EI and CE for the methods discussed in this chapter. From this table, we observe that for n = 5 and n = 10, where n is the size of the system, the EI and CE values of the proposed methods are greater, and hence the methods are more efficient. Figure 8.1 displays the performance of the different methods with respect to EI and CE. It is observed from the figure that the EI and CE of the new method 6th M3 are better than those of 2nd NM and all other compared methods, for all n ≥ 2.
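The table entries follow directly from the definitions EI = p^(1/nf) and CE = p^(1/C), where p is the order, nf the number of scalar function evaluations and C the computational cost; a small script (ours) reproduces representative entries:

```python
# Reproduce representative entries of Table 8.1 from the definitions
# EI = p**(1/nf) and CE = p**(1/C) (Definitions 7.3.1 and 7.3.2).
def EI(p, nf):
    return p ** (1.0 / nf)

def CE(p, cost):
    return p ** (1.0 / cost)

n = 5
ei_6thM3 = EI(6, 2 * n + 2 * n**2)                  # nf = 2n + 2n^2
ce_6thM3 = CE(6, n**3 / 3 + 5 * n**2 + 5 * n / 3)   # C = n^3/3 + 5n^2 + 5n/3
ei_newton = EI(2, n + n**2)                          # Newton: nf = n + n^2
```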


Table 8.1: Comparison of EI and CE (EI = p^(1/nf), CE = p^(1/C); n is the size of the system)

Method            p   nf        EI (n=5)  EI (n=10)  C                       CE (n=5)  CE (n=10)
2nd NM (7.1)      2   n+n^2     1.0234    1.0063     (1/3)n^3+2n^2+(2/3)n    1.0073    1.0013
4th BCST (8.14)   4   n+2n^2    1.0255    1.0066     (2/3)n^3+5n^2+(1/3)n    1.0066    1.0012
5th GGN (8.15)    5   2n+2n^2   1.0272    1.0073     (2/3)n^3+5n^2+(4/3)n    1.0075    1.0014
6th CTV (8.16)    6   3n+2n^2   1.0279    1.0078     (2/3)n^3+6n^2+(7/3)n    1.0073    1.0014
4th M1            4   n+2n^2    1.0255    1.0066     (1/3)n^3+4n^2+(2/3)n    1.0096    1.0019
5th M2            5   2n+2n^2   1.0272    1.0073     (1/3)n^3+5n^2+(5/3)n    1.0092    1.0019
6th M3            6   2n+2n^2   1.0303    1.0082     (1/3)n^3+5n^2+(5/3)n    1.0103    1.0021


Figure 8.1: Comparison of Efficiency index and Computational efficiency index

8.4 Numerical examples

In this section, we compare the performance of the contributed methods with Newton's method and a few existing methods (8.14)-(8.16). The numerical experiments have been carried out using the MATLAB software for the examples given below. The approximate solutions are calculated correct to 1000 digits by using variable precision arithmetic, and a stopping criterion on successive approximations is used for the iteration scheme. It is observed from the results that the computational order of convergence (pc) overwhelmingly supports the theoretical order of convergence for all the test problems (TP1-TP6). Also, the 6th M3 iteration function requires fewer iterations than 2nd NM and a few of the other compared methods. As far as the total number of function evaluations (ntotal) and the total number of inverses of Fréchet derivatives (ninv) are concerned, the 6th M3 requires fewer than a few of the other compared methods.
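The computational order of convergence pc mentioned above can be estimated from successive corrections; a minimal sketch (ours), illustrated with plain Newton iteration on a scalar equation rather than one of the thesis's test problems:

```python
import math

def coc(x0, x1, x2, x3):
    """Estimate the computational order of convergence p_c from four
    successive iterates, using ratios of consecutive corrections."""
    d0, d1, d2 = abs(x1 - x0), abs(x2 - x1), abs(x3 - x2)
    return math.log(d2 / d1) / math.log(d1 / d0)

# Newton's method on f(x) = x^2 - 2 from x0 = 1.5 (order 2 expected)
xs = [1.5]
for _ in range(3):
    x = xs[-1]
    xs.append(x - (x * x - 2.0) / (2.0 * x))

pc = coc(*xs)
```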

8.5 Applications

8.5.1 Chandrasekhar's equation

Consider the quadratic integral equation related to Chandrasekhar's work, Chandrasekhar (1960) and Ezquerro et al. (2010),

x(s) = f(s) + λ x(s) ∫₀¹ k(s, t) x(t) dt,   (8.17)

which arises in the study of radiative transfer theory, the transport of neutrons and the kinetic theory of gases. Equation (8.17) is also studied by Argyros (1985, 1988), and with some conditions on the kernel k(s, t) in Ezquerro et al. (1999). We consider the maximum norm, with the kernel k(s, t) a continuous function in s, t ∈ [0, 1] such that 0 < k(s, t) < 1 and k(s, t) + k(t, s) = 1. Moreover, we assume that f(s) ∈ C[0, 1] is a given function and λ is a real constant. Note that finding a solution of (8.17) is equivalent to solving the equation F(x) = 0, where F : C[0, 1] → C[0, 1] and

F(x)(s) = x(s) − f(s) − λ x(s) ∫₀¹ k(s, t) x(t) dt,  x ∈ C[0, 1], s ∈ [0, 1].   (8.18)

In particular, we consider

F(x)(s) = x(s) − 1 − (x(s)/4) ∫₀¹ [s/(s + t)] x(t) dt,  x ∈ C[0, 1], s ∈ [0, 1].   (8.19)

Finally, we approximate a solution of F(x) = 0, where F(x) is given in (8.19), numerically by means of a discretization procedure. We solve the integral equation (8.19) using the Gauss-Legendre quadrature formula

∫₀¹ f(t) dt ≈ (1/2) Σ_{j=1}^{m} βj f(tj),   (8.20)

where the βj are the weights and the tj are the knots tabulated in Table 8.2 for m = 8. Writing xi for the approximation of x(ti), i = 1, 2, ..., 8, we obtain the nonlinear system

xi ≈ 1 + xi Σ_{j=1}^{8} aij xj,  where aij = ti βj / (8(ti + tj)),  i = 1, ..., 8.   (8.21)

Table 8.2: Weights and knots for the Gauss-Legendre formula (m = 8)

j    tj                    βj
1    0.0198550717512...    0.101228536290...
2    0.101666761293...     0.222381034453...
3    0.237233795041...     0.313706645877...
4    0.408282678752...     0.362683783378...
5    0.591717321247...     0.362683783378...
6    0.762766204958...     0.313706645877...
7    0.898333238706...     0.222381034453...
8    0.980144928248...     0.101228536290...
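As a check on the discretization (8.21), the following sketch (ours) assembles the 8-node system using numpy's Gauss-Legendre rule mapped to [0, 1] — equivalent to Table 8.2, since βj = 2wj — and solves it with a plain Newton iteration as a baseline:

```python
import numpy as np

# Gauss-Legendre nodes/weights on [0, 1] (m = 8), standing in for Table 8.2
nodes, weights = np.polynomial.legendre.leggauss(8)
t = 0.5 * (nodes + 1.0)      # map from [-1, 1] to [0, 1]
w = 0.5 * weights            # rescaled weights (w_j = beta_j / 2), sum to 1

# a_ij = t_i w_j / (4 (t_i + t_j)), so that x_i = 1 + x_i * sum_j a_ij x_j
A = t[:, None] * w[None, :] / (4.0 * (t[:, None] + t[None, :]))

def G(x):
    """Discretized Chandrasekhar operator, G(x) = 0 at the solution."""
    return x - 1.0 - x * (A @ x)

def JG(x):
    """Jacobian: dG_i/dx_k = delta_ik - delta_ik (A x)_i - x_i A_ik."""
    return np.eye(len(x)) - np.diag(A @ x) - x[:, None] * A

x = np.ones(8)
for _ in range(20):
    dx = np.linalg.solve(JG(x), G(x))
    x = x - dx
    if np.linalg.norm(dx) < 1e-13:
        break
```

With this setup, Newton from x(0) = (1, ..., 1) reproduces the solution vector x* quoted below.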

Table 8.3: Numerical results for Chandrasekhar's equation

Method            M    F    F'   ntotal   ninv   errmin
2nd NM (7.1)      5    5    5    360      5      3.44018e-016
4th BCST (8.14)   3    3    6    408      6      2.22045e-016
5th GGN (8.15)    3    6    6    432      6      3.14020e-016
6th CTV (8.16)    3    9    6    456      6      0
4th M1            3    3    6    408      3      3.04019e-016
5th M2            3    6    6    432      3      0
6th M3            3    6    6    432      3      3.15018e-016

The stopping criterion for this problem is errmin = ||x(k+1) − x(k)||₂ < 10⁻¹³, and the initial approximation is x(0) = (1, 1, ..., 1)ᵗ. The solution obtained is

x* = (1.02171973146..., 1.07318638173..., 1.12572489365..., 1.16975331216..., 1.20307175130..., 1.22649087463..., 1.24152460059..., 1.24944851669...)ᵗ.

Table 8.3 shows that the proposed methods require a smaller ninv than the other compared methods.

8.5.2 1-D Bratu problem

The 1-D Bratu problem (Buckmire (2003) and Babajee et al. (2015b)) is given by

d²U/dx² + λ exp(U(x)) = 0,  λ > 0,  0 < x < 1,   (8.22)

with the boundary conditions U(0) = U(1) = 0. The same problem has already been considered in Section 7.5; hence, we give only the numerical results of applying the proposed methods of this chapter.
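For reference, a minimal finite-difference treatment of (8.22) (our sketch; the thesis's discretization details are in Section 7.5) uses central differences on N interior points and a Newton iteration on the resulting nonlinear system:

```python
import numpy as np

# Central-difference discretization of U'' + lam*exp(U) = 0, U(0) = U(1) = 0.
# Grid size N and lambda are illustrative choices (lam below critical).
lam, N = 1.0, 20
h = 1.0 / (N + 1)

def F(u):
    up = np.concatenate(([0.0], u, [0.0]))   # pad with boundary values
    return (up[:-2] - 2.0 * up[1:-1] + up[2:]) / h**2 + lam * np.exp(u)

def J(u):
    A = np.zeros((N, N))
    idx = np.arange(N)
    A[idx, idx] = -2.0 / h**2 + lam * np.exp(u)   # diagonal
    A[idx[:-1], idx[:-1] + 1] = 1.0 / h**2        # super-diagonal
    A[idx[1:], idx[1:] - 1] = 1.0 / h**2          # sub-diagonal
    return A

u = np.zeros(N)                                   # lower-branch start
for _ in range(25):
    du = np.linalg.solve(J(u), F(u))
    u = u - du
    if np.linalg.norm(du) < 1e-12:
        break
```

Starting from u = 0 selects the lower solution branch; the higher-order methods of this chapter replace the inner Newton update on the same system F(u) = 0.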

Table 8.4: Comparison of number of λ's (out of 350 λ's) for the 1-D Bratu problem

Method            M=2   M=3   M=4   M=5   M>5   Mλ
2nd NM (7.1)      0     12    114   143   81    4.92
4th BCST (8.14)   4     231   104   9     2     3.37
5th GGN (8.15)    0     139   207   1     3     3.63
6th CTV (8.16)    3     238   72    25    12    3.55
4th M1            4     227   107   9     3     3.42
5th M2            4     228   106   11    1     3.36
6th M3            4     228   106   11    1     3.36

Table 8.4 shows the results for the 1-D Bratu problem, where M represents the number of iterations needed for convergence. It can be observed from Table 8.4 that the proposed methods 5th M2 and 6th M3 are the most efficient among the compared methods, because they have the lowest mean iteration number (Mλ).

8.6 Concluding Remarks

In this chapter, some efficient new iterative methods of order four, five and six, using weight functions to solve systems of nonlinear equations, have been proposed. The new methods do not require the evaluation of second or higher order Fréchet derivatives to reach fourth or higher order of convergence. Also, they require the evaluation of only one inverse of the first order Fréchet derivative and solve fewer linear systems per iteration. We have proved that x* is a point of attraction for the new methods. A few examples have been solved using the proposed methods and compared with some known methods, which illustrates the superiority of the new methods. From the figures, the efficiency index and computational efficiency index of the new methods are found to be superior to those of Newton's method and some existing equivalent methods. The proposed new methods have been applied to two application problems, Chandrasekhar's equation and the 1-D Bratu problem. The results show that the new methods can be a better alternative to Newton's method and some of the existing higher order methods.

Chapter 9

An improvement to double-step Newton-type method and its multi-step version

In this chapter, we improve the order of the double-step Newton-type method from four to five using the same number of evaluations, namely two functions and two first order Fréchet derivatives per iteration. The multi-step version requires one more function evaluation for each step and converges with order 3r + 5, r ≥ 1. Numerical experiments compare the new methods with some existing methods. Our methods are also tested on Chandrasekhar's problem and the 2-D Bratu problem to illustrate the applications. In Section 9.1, we propose a 2-step fifth order method which is an improvement over the 2-step Newton method; it uses two function and two Fréchet derivative evaluations and only one inverse. A multi-step version with order 3r + 5, r ≥ 1, for solving a system of nonlinear equations is also suggested, which uses one additional function evaluation per step. Section 9.2 derives the convergence analysis of the new methods. In Section 9.3, efficiency indices and computational efficiency indices for the new methods are discussed. In Section 9.4, numerical examples and their results are discussed in comparison with some existing methods. In Section 9.5, two application problems are solved using the present method and some existing methods. Concluding remarks, which show that our methods are efficient, are given in the last section.


9.1 Construction of new methods

One of the basic procedures for solving a system of nonlinear equations is the classical one-step second order Newton's method (2nd NM), given by

x(k+1) = G2ndNM(x(k)) = x(k) − [F'(x(k))]^{-1}F(x(k)),  k = 0, 1, 2, ...,   (9.1)

where [F'(x(k))]^{-1} is the inverse of the first Fréchet derivative F'(x(k)) of the function F(x(k)). It is straightforward to see that this method requires the evaluation of one function, one first derivative and one matrix inversion per iteration. Traub (1964) suggested that multi-step iterative methods are a better way to improve the order of convergence while remaining free from second derivatives; such modifications of Newton's method have been proposed in the literature, see for example Cordero et al. (2010b), Abad et al. (2013), Noor et al. (2013), Sharma et al. (2013), Babajee et al. (2015b) and the references therein. Traub (1964) proposed a two-step variant of Newton's method (3rd TM) having convergence order three, evaluating two functions, one Fréchet derivative and its inverse:

x(k+1) = G3rdTM(x(k)) = G2ndNM(x(k)) − [F'(x(k))]^{-1}F(G2ndNM(x(k))).   (9.2)

The double-step fourth order Newton method (4th NR) is given by

x(k+1) = G4thNR(x(k)) = G2ndNM(x(k)) − [F'(G2ndNM(x(k)))]^{-1}F(G2ndNM(x(k))),   (9.3)

which was recently rediscovered by Noor et al. (2013) using the variational iteration technique; here two functions, two Fréchet derivatives and their inverses are evaluated. Recently, Abad et al. (2013) combined the Newton and Traub methods to obtain a 3-step fourth order method (4th ACT), where two functions, two Fréchet derivatives and their inverses are evaluated:

x(k+1) = G4thACT(x(k)) = G2ndNM(x(k)) − [F'(G3rdTM(x(k)))]^{-1}F(G2ndNM(x(k))).   (9.4)

Again in Abad et al. (2013), a different combination gives a 3-step fifth order method (5th ACT), where three functions, two Fréchet derivatives and their inverses are evaluated:

x(k+1) = G5thACT(x(k)) = G3rdTM(x(k)) − [F'(G2ndNM(x(k)))]^{-1}F(G3rdTM(x(k))).   (9.5)

New double-step fifth order method (5th MBJ):

x(k+1) = G5thMBJ(x(k)) = G2ndNM(x(k)) − H1(x(k))[F'(x(k))]^{-1}F(G2ndNM(x(k))),
H1(x(k)) = 2I − τ(x(k)) + (5/4)(τ(x(k)) − I)^2,   (9.6)
τ(x(k)) = [F'(x(k))]^{-1}F'(G2ndNM(x(k))),

where I is the n × n identity matrix. This method uses two function and two Fréchet derivative evaluations and only one inverse to reach fifth order convergence.

New multi-step (3r + 5)th order method ((3r + 5)th MBJ): We improve the 5th MBJ method by one additional function evaluation per step to get the multi-step version, given by

x(k+1) = G(3r+5)thMBJ(x(k)) = µr(x(k)),
µj(x(k)) = µj−1(x(k)) − H2(x(k))[F'(x(k))]^{-1}F(µj−1(x(k))),
H2(x(k)) = 2I − τ(x(k)) + (3/2)(τ(x(k)) − I)^2,   (9.7)
µ0(x(k)) = G5thMBJ(x(k)),  j = 1, 2, ..., r,  r ≥ 1.

Remark 9.1.1. This multi-step version has order 3r + 5, r ≥ 1. The case r = 0 is the 5th M BJ method given in (9.6). These two methods (9.6) and (9.7) have been proposed in Madhu et al. (2016).
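A compact sketch (ours, not from the thesis) of (9.6)-(9.7), showing the single Jacobian inverse per iteration reused for every solve; the test system and all names are illustrative:

```python
import numpy as np

def mbj(F, J, x, r=0, tol=1e-12, max_iter=50):
    """Sketch of the 5th MBJ method (9.6) and its multi-step version (9.7).

    One Jacobian J(x) is used per iteration for every linear solve;
    r extra steps with weight H2 raise the order to 3r + 5.
    """
    n = len(x)
    I = np.eye(n)
    for _ in range(max_iter):
        Jx = J(x)
        y = x - np.linalg.solve(Jx, F(x))        # Newton step G_2ndNM
        tau = np.linalg.solve(Jx, J(y))          # tau = J(x)^{-1} J(y)
        H1 = 2 * I - tau + 1.25 * (tau - I) @ (tau - I)
        z = y - H1 @ np.linalg.solve(Jx, F(y))   # fifth order update (9.6)
        H2 = 2 * I - tau + 1.5 * (tau - I) @ (tau - I)
        for _ in range(r):                        # optional multi-steps (9.7)
            z = z - H2 @ np.linalg.solve(Jx, F(z))
        if np.linalg.norm(z - x) < tol:
            return z
        x = z
    return x

# Illustrative test system (ours): roots at (1, 2)
F = lambda x: np.array([x[0]**2 - 1.0, x[1]**2 - 4.0])
J = lambda x: np.array([[2.0 * x[0], 0.0], [0.0, 2.0 * x[1]]])
root = mbj(F, J, np.array([1.5, 2.5]), r=1)       # r = 1: order 8 variant
```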

9.2 Convergence Analysis of the methods

In order to prove the convergence results, we recall some important definitions and results from the theory of point of attraction given in Section 7.2.

Theorem 9.2.1. Let F : D ⊆ R^n → R^n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x* ∈ R^n, which is a solution of the system F(x) = 0. Suppose that F'(x) is continuous and nonsingular at x*, and x(0) is close enough to x*. Then the sequence {x(k)}k≥0 obtained using the iterative expression (9.6) converges locally to x* with order 5, where the error equation is

e(k+1) = G5thMBJ(x(k)) − x* = L1 (e(k))^5 + O((e(k))^6),
L1 = 14C2^4 + (27/2)C2C3C2 + 12C2^2C3 − (1/2)C3C2^2.   (9.8)

Proof. We first show that x* is a point of attraction using Theorem 7.2.2. In this case,

P(x) = G2ndNM(x),  Q(x) = H1(x)[F'(x)]^{-1},  R(x) = F(G2ndNM(x)).

Now, since F(x*) = 0, we have

G2ndNM(x*) = x* − [F'(x*)]^{-1}F(x*) = x*,
τ(x*) = [F'(x*)]^{-1}F'(G2ndNM(x*)) = [F'(x*)]^{-1}F'(x*) = I,
P(x*) = G2ndNM(x*) = x*,  H1(x*) = I,  P'(x*) = G'2ndNM(x*) = 0,
Q(x*) = H1(x*)[F'(x*)]^{-1} = I[F'(x*)]^{-1} = [F'(x*)]^{-1} ≠ 0,
R(x*) = F(G2ndNM(x*)) = F(x*) = 0,
R'(x*) = F'(G2ndNM(x*))G'2ndNM(x*) = 0,
G'(x*) = P'(x*) − Q(x*)R'(x*) = 0,

so that ρ(G'(x*)) = 0 < 1 and, by Ostrowski's theorem, x* is a point of attraction of equation (9.6). From (7.5) and (7.6) we obtain

F(x(k)) = F'(x*)[e(k) + C2(e(k))^2 + C3(e(k))^3 + C4(e(k))^4 + C5(e(k))^5] + O((e(k))^6),   (9.9)

and

F'(x(k)) = F'(x*)[I + 2C2 e(k) + 3C3(e(k))^2 + 4C4(e(k))^3 + 5C5(e(k))^4] + O((e(k))^5).   (9.10)

We have

[F'(x(k))]^{-1} = [I + X1 e(k) + X2(e(k))^2 + X3(e(k))^3 + X4(e(k))^4][F'(x*)]^{-1} + O((e(k))^5),   (9.11)

where X1 = −2C2, X2 = 4C2^2 − 3C3, X3 = −8C2^3 + 6C2C3 + 6C3C2 − 4C4 and X4 = −5C5 + 9C3^2 + 8C2C4 + 8C4C2 + 16C2^4 − 12C2^2C3 − 12C3C2^2 − 12C2C3C2. Then

[F'(x(k))]^{-1}F(x(k)) = e(k) − C2(e(k))^2 + 2(C2^2 − C3)(e(k))^3 + (−3C4 − 4C2^3 + 4C2C3 + 3C3C2)(e(k))^4 + (6C3^2 + 8C2^4 − 8C2^2C3 − 6C2C3C2 − 6C3C2^2 + 6C2C4 + 4C4C2 − 4C5)(e(k))^5 + O((e(k))^6).   (9.12)

Also we have

G2ndNM(x(k)) = x* + C2(e(k))^2 + 2(−C2^2 + C3)(e(k))^3 + (3C4 + 4C2^3 − 4C2C3 − 3C3C2)(e(k))^4 + (−6C3^2 − 8C2^4 + 8C2^2C3 + 6C2C3C2 + 6C3C2^2 − 6C2C4 − 4C4C2 + 4C5)(e(k))^5.   (9.13)

Expanding F(G2ndNM(x(k))) and F'(G2ndNM(x(k))) about x* in Taylor series gives, respectively,

F(G2ndNM(x(k))) = F'(x*)[(G2ndNM(x(k)) − x*) + C2(G2ndNM(x(k)) − x*)^2 + C3(G2ndNM(x(k)) − x*)^3 + ...]
= F'(x*)[C2(e(k))^2 + 2(−C2^2 + C3)(e(k))^3 + (3C4 + 5C2^3 − 4C2C3 − 3C3C2)(e(k))^4 + (−6C3^2 − 12C2^4 + 10C2^2C3 + 8C2C3C2 + 6C3C2^2 − 6C2C4 − 4C4C2 + 4C5)(e(k))^5],   (9.14)

F'(G2ndNM(x(k))) = F'(x*)[I + 2C2(G2ndNM(x(k)) − x*) + 3C3(G2ndNM(x(k)) − x*)^2 + ...]
= F'(x*)[I + P1(e(k))^2 + P2(e(k))^3 + P3(e(k))^4] + O((e(k))^5),   (9.15)

where P1 = 2C2^2, P2 = 4C2C3 − 4C2^3, P3 = 8C2^4 + 6C2C4 − 8C2^2C3 + 3C3C2^2 − 6C2C3C2.

Using equations (9.11) and (9.15), we have

τ(x(k)) = [F'(x(k))]^{-1}F'(G2ndNM(x(k))) = I − 2C2 e(k) + (6C2^2 − 3C3)(e(k))^2 + (10C2C3 + 6C3C2 − 16C2^3 − 4C4)(e(k))^3 + (−5C5 + 9C3^2 + 40C2^4 + 14C2C4 + 8C4C2 − 28C2^2C3 − 15C3C2^2 − 18C2C3C2)(e(k))^4 + O((e(k))^5).   (9.16)

Then

H1(x(k)) = 2I − τ(x(k)) + (5/4)(τ(x(k)) − I)^2
= I + 2C2 e(k) − (C2^2 − 3C3)(e(k))^2 + (−(5/2)C2C3 + (3/2)C3C2 − 14C2^3 + 4C4)(e(k))^3 + O((e(k))^4).   (9.17)

Using equations (9.11) and (9.14), we have

[F'(x(k))]^{-1}F(G2ndNM(x(k))) = C2(e(k))^2 + (2C3 − 4C2^2)(e(k))^3 + (13C2^3 − 8C2C3 − 6C3C2 + 3C4)(e(k))^4 + (−12C3^2 − 38C2^4 + 26C2^2C3 + 20C2C3C2 + 18C3C2^2 − 12C2C4 − 8C4C2 + 4C5)(e(k))^5 + O((e(k))^6).   (9.18)

Then

H1(x(k))[F'(x(k))]^{-1}F(G2ndNM(x(k))) = C2(e(k))^2 + (2C3 − 2C2^2)(e(k))^3 + (3C4 + 4C2^3 − 4C2C3 − 3C3C2)(e(k))^4 + (−6C3^2 + 4C5 − 6C2C4 − 4C4C2 − 22C2^4 + (39/2)C2C3C2 − 4C2^2C3 + (11/2)C3C2^2)(e(k))^5 + O((e(k))^6).   (9.19)

Using equations (9.13) and (9.19) in (9.6), we have

e(k+1) = (14C2^4 + (27/2)C2C3C2 + 12C2^2C3 − (1/2)C3C2^2)(e(k))^5 + O((e(k))^6),   (9.20)

which proves fifth order convergence.

Theorem 9.2.2. Let F : D ⊆ R^n → R^n be sufficiently Fréchet differentiable at each point of an open convex neighborhood D of x* ∈ R^n, which is a solution of the system F(x) = 0. Suppose that x ∈ S = S(x*, δ), F'(x) is continuous and nonsingular at x*, and x(0) is close enough to x*. Then x* is a point of attraction of the sequence {x(k)} obtained using the iterative expression (9.7). Furthermore, the sequence converges locally to x* with order 3r + 5, where r is a positive integer, r ≥ 1.

Proof. In this case,

P(x) = µj−1(x),  Q(x) = H2(x)[F'(x)]^{-1},  R(x) = F(µj−1(x)),  j = 1, ..., r.

We can show by induction that µj−1(x*) = x* and µ'j−1(x*) = 0 for all j = 1, ..., r, so that

P(x*) = µj−1(x*) = x*,  H2(x*) = I,
Q(x*) = I[F'(x*)]^{-1} = [F'(x*)]^{-1} ≠ 0,
R(x*) = F(µj−1(x*)) = F(x*) = 0,
P'(x*) = µ'j−1(x*) = 0,
R'(x*) = F'(µj−1(x*))µ'j−1(x*) = 0,
G'(x*) = P'(x*) − Q(x*)R'(x*) = 0.

So ρ(G'(x*)) = 0 < 1 and, by Ostrowski's theorem, x* is a point of attraction of equation (9.7). A Taylor expansion of F(µj−1(x(k))) about x* yields

F(µj−1(x(k))) = F'(x*)[(µj−1(x(k)) − x*) + C2(µj−1(x(k)) − x*)^2 + ...].   (9.21)

Also, let

H2(x(k)) = I + 2C2 e(k) + 3C3(e(k))^2 + (−C2C3 − 20C2^3 + 3C3C2 + 4C4)(e(k))^3 + ...   (9.22)

Using equations (9.11) and (9.22), we have

H2(x(k))[F'(x(k))]^{-1} = [I + L2(e(k))^3 + ...][F'(x*)]^{-1},   (9.23)

where L2 = −20C2^3 − C2C3 + 3C3C2. Using equations (9.23) and (9.21), we have

H2(x(k))[F'(x(k))]^{-1}F(µj−1(x(k))) = [I + L2(e(k))^3 + ...] × [(µj−1(x(k)) − x*) + C2(µj−1(x(k)) − x*)^2 + ...]
= µj−1(x(k)) − x* + L2(e(k))^3(µj−1(x(k)) − x*) + C2(µj−1(x(k)) − x*)^2 + ...   (9.24)

Using equation (9.24) in equation (9.7), we obtain

µj(x(k)) − x* = µj−1(x(k)) − x* − [µj−1(x(k)) − x* + L2(e(k))^3(µj−1(x(k)) − x*) + C2(µj−1(x(k)) − x*)^2 + ...]
= −L2(e(k))^3(µj−1(x(k)) − x*) + ...   (9.25)

Since µ0(x(k)) − x* = O((e(k))^5), equation (9.25) gives, for j = 1, 2, ...,

µ1(x(k)) − x* = −L2(e(k))^3(µ0(x(k)) − x*) + ... = −L2L1(e(k))^8 + ...,
µ2(x(k)) − x* = −L2(e(k))^3(µ1(x(k)) − x*) + ... = −L2(−L2L1)(e(k))^11 + ... = L2^2 L1(e(k))^11 + ...

Proceeding by induction, we have

µr(x(k)) − x* = (−L2)^r L1(e(k))^(3r+5) + O((e(k))^(3r+6)),  r ≥ 1,   (9.26)

which shows that the method has order of convergence 3r + 5.

Remark 9.2.1. The multi-step versions (3r + 5)th MBJ (r ≥ 0) are constructed from 4 + r evaluations of F and F' together. Only one inverse evaluation of the Fréchet derivative F' at x(k) is used in the proposed method (9.7).

9.3 Efficiency of the Methods

In this section, we consider Definitions 7.3.1 and 7.3.2 given in Section 7.3 for the efficiency index and the computational efficiency, respectively. Table 9.1 shows the comparison of EI and CE for the methods given in Section 9.1. From this table, we observe that for n = 5 and n = 10, where n is the size of the system, the EI and CE values of the proposed methods are greater, and hence the methods are more efficient. Figure 9.1 displays the performance of the different methods with respect to EI and CE.

Table 9.1: Comparison of EI and CE (EI = p^(1/nf), CE = p^(1/C); n is the size of the system)

Method           p   nf        EI (n=5)  EI (n=10)  C                       CE (n=5)  CE (n=10)
2nd NM (9.1)     2   n+n^2     1.0234    1.0063     (1/3)n^3+2n^2+(2/3)n    1.0073    1.0013
3rd TM (9.2)     3   2n+n^2    1.0319    1.0092     (1/3)n^3+3n^2+(5/3)n    1.0088    1.0017
4th NR (9.3)     4   2n+2n^2   1.0234    1.0063     (2/3)n^3+4n^2+(4/3)n    1.0073    1.0013
4th ACT (9.4)    4   2n+2n^2   1.0234    1.0063     (2/3)n^3+5n^2+(4/3)n    1.0065    1.0012
5th ACT (9.5)    5   3n+2n^2   1.0251    1.0070     (2/3)n^3+5n^2+(7/3)n    1.0073    1.0014
5th MBJ (9.6)    5   2n+2n^2   1.0272    1.0073     (1/3)n^3+5n^2+(5/3)n    1.0092    1.0019
8th MBJ (9.7)    8   3n+2n^2   1.0325    1.0091     (1/3)n^3+6n^2+(8/3)n    1.0102    1.0022


Figure 9.1: Comparison of Efficiency index and Computational efficiency index

It is observed from the figure that the EI and CE of the new method 8th MBJ are better than those of 2nd NM and all other compared methods, for all n ≥ 2, where n is the size of the system.

9.4

Numerical examples

The numerical experiments have been carried out using the M ATLAB software for the examples given below. The approximate solutions are calculated correct to 1000 digits by using variable precision arithmetic. We use the following stopping criterion for the iteration scheme: Tables ?? to ?? show the results for the test problems, from which we conclude that 8th M BJ and 11th M BJ methods are the most efficient methods out of the methods compared with the least number of iterations and residual error.


Table 9.2: Comparison of iteration and errors for Chandrasekhar's equation

M    2nd NM      3rd TM      4th NR      4th ACT     5th ACT     5th MBJ
1    4.9e−001    5.1e−001    5.1e−001    5.1e−001    5.1e−001    5.1e−001
2    1.6e−002    9.8e−004    1.5e−005    1.4e−005    1.7e−006    5.9e−006
3    1.5e−005    5.3e−012    3.8e−016    2.2e−016    -           -
4    1.2e−011    -           -           -           -           -

9.5 Applications

9.5.1 Chandrasekhar's equation

Consider the quadratic integral equation related to Chandrasekhar's work (Chandrasekhar (1960); Ezquerro et al. (2010))

    x(s) = f(s) + λ x(s) ∫₀¹ k(s, t) x(t) dt,    (9.27)

which arises in the study of radiative transfer theory, the transport of neutrons and the kinetic theory of gases (see the detailed discussion in Section 8.5). For this application, we use the stopping criterion errmin = ‖x(k+1) − x(k)‖2 < 10⁻⁵ and the initial approximation x(0) = (1, 1, ..., 1)^t. The solution of this problem is given by x* = (1.02171973146..., 1.07318638173..., 1.12572489365..., 1.16975331216..., 1.20307175130..., 1.22649087463..., 1.24152460059..., 1.24944851669...)^t. Table 9.2 compares the number of iterations and the errors for this application. The results show that the proposed method 5th MBJ is better than 2nd NM and some of the other methods.
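Discretizing (9.27) on n quadrature nodes turns it into a system of n nonlinear equations F(x) = 0 that the methods of this chapter can solve. The sketch below uses Python in place of the thesis MATLAB code; the choices f(s) = 1, kernel k(s, t) = s/(s + t), λ = 0.25 and n = 8 are illustrative assumptions (not the thesis data), and plain Newton's method (2nd NM) stands in for the higher order methods:

```python
import numpy as np

# Midpoint-rule discretization of an equation of the form (9.27) with the
# assumed data f(s) = 1, k(s, t) = s / (s + t), lambda = 0.25 on n = 8 nodes.
n, lam = 8, 0.25
s = (np.arange(n) + 0.5) / n                           # midpoint quadrature nodes
A = lam * s[:, None] / (s[:, None] + s[None, :]) / n   # lam * k(s_i, t_j) * weight

def F(x):
    # Discretized residual: x_i - 1 - x_i * sum_j A_ij x_j = 0
    return x - 1.0 - x * (A @ x)

def J(x):
    # Frechet derivative of F: I - diag(A x) - diag(x) A
    return np.eye(n) - np.diag(A @ x) - x[:, None] * A

x = np.ones(n)                                         # x(0) = (1, 1, ..., 1)^t
for _ in range(10):
    dx = np.linalg.solve(J(x), -F(x))                  # one Newton step
    x += dx
    if np.linalg.norm(dx) < 1e-5:                      # errmin criterion above
        break
```

A higher order method would replace only the correction step inside the loop; the discretization and Jacobian stay the same.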

9.5.2 2-D Bratu problem

We consider the solution of the Bratu problem in two dimensions,

    ∂²U/∂x² + ∂²U/∂y² + λ exp(U) = 0,    (x, y) ∈ D = [0, 1] × [0, 1],    (9.28)

subject to the boundary conditions

    U(x, y) = 0,    (x, y) ∈ ∂D,    (9.29)

where ∂D is the boundary of the domain D.


[Figure: θ plotted against λ for 0 ≤ λ ≤ 8, with θ ranging from 0 to 20; the two solution branches meet at the critical value λc.]

Figure 9.2: Variation of θ for different values of λ.

The 2-D planar Bratu problem has two known, bifurcated, exact solutions for values of λ < λc, one solution for λ = λc and no solution for λ > λc. The exact solution of equation (9.28) is known and can be presented as

    U(x, y) = 2 ln [ cosh(θ/4) cosh((x − 1/2)(y − 1/2)θ) / ( cosh((x − 1/2)θ/2) cosh((y − 1/2)θ/2) ) ],    (9.30)

where θ is a constant to be determined so that (9.30) satisfies the boundary conditions and the differential equation (9.28). The following procedure, found in Odejide and Aregbesola (2006), shows how to obtain the critical value of λ. Substituting equation (9.30) into (9.28), simplifying and collocating at the point x = 1/2, y = 1/2 (chosen because it is the midpoint of the region; another point could be chosen, but low-order approximations are likely to be better if the collocation points are distributed somewhat evenly throughout the region), we obtain

    θ² = λ cosh²(θ/4).    (9.31)

Differentiating equation (9.31) with respect to θ and setting dλ/dθ = 0, the critical value λc satisfies

    θ = (1/4) λc cosh(θ/4) sinh(θ/4).    (9.32)

By eliminating λ from equations (9.31) and (9.32), we find that the value θc corresponding to the critical λc satisfies

    θc/4 = coth(θc/4),    (9.33)

which gives θc = 4.798714561; we then get λc = 7.027661438 from equation (9.32). Figure 9.2 illustrates this critical value λc. The differential equation (9.28) is usually


discretized by using the finite-difference five-point formula with step size h; the resulting nonlinear equations are

    F(Ui,j) = −(4Ui,j − λh² exp(Ui,j)) + Ui+1,j + Ui−1,j + Ui,j+1 + Ui,j−1,    (9.34)

where Ui,j is U at (xi, yj), xi = ih, yj = jh, i, j = 1, 2, ..., n. Equation (9.34) represents a set of n × n nonlinear equations in Ui,j, which are then solved by using the iterative methods. We use n = 10 and n = 20 for testing 700 values of λ in the interval (0, 7] (in steps of 0.01). For each λ, let Mλ be the minimum number of iterations for which ‖U(k+1)i,j − U(k)i,j‖2 < 10⁻¹¹, where the approximation U(k)i,j is calculated correct to 14 decimal places, and let M̄λ be the mean of Mλ over the 700 values of λ.

Table 9.3: Comparison of number of λ's for 2-D Bratu problem for n = 10

Method          M=2   M=3   M=4   M=5   M̄λ
2nd NM (9.1)      0   101   520    79   3.96
3rd TM (9.2)     18   633    49     0   3.04
4th NR (9.3)    101   599     0     0   2.85
4th ACT (9.4)   101   599     0     0   2.85
5th ACT (9.5)   200   500     0     0   2.71
5th MBJ (9.6)   121   579     0     0   2.82
8th MBJ (9.7)   514   186     0     0   2.26

Table 9.4: Comparison of number of λ's for 2-D Bratu problem for n = 20

Method          M=2   M=3   M=4   M=5   M̄λ
2nd NM (9.1)      1   212   487     0   3.69
3rd TM (9.2)     39   661     0     0   2.94
4th NR (9.3)    213   487     0     0   2.69
4th ACT (9.4)   213   487     0     0   2.69
5th ACT (9.5)   419   281     0     0   2.40
5th MBJ (9.6)   217   483     0     0   2.69
8th MBJ (9.7)   700     0     0     0   2.00

Tables 9.3 and 9.4 give the results for the 2-D Bratu problem, where M represents the number of iterations required for convergence. It can be observed from Table 9.4 that the proposed method 8th MBJ converges for all the grid points in two iterations. Also, the method 8th MBJ is the most efficient among the compared methods for both n = 10 and n = 20, since it has the lowest mean iteration number.
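A minimal sketch of this experiment is given below (Python standing in for the thesis MATLAB code, with plain Newton's method, 2nd NM, in place of the higher order variants). It first recovers the critical parameters from (9.31) and (9.33) by fixed-point iteration on x = coth(x) with x = θ/4, and then solves the discretized system (9.34) for the single illustrative value λ = 1 on the n = 10 grid:

```python
import math
import numpy as np

# Critical parameters: theta_c/4 = coth(theta_c/4) from (9.33), solved by
# fixed-point iteration, then lambda_c from (9.31).
t = 1.0
for _ in range(200):
    t = 1.0 / math.tanh(t)                           # coth(t)
theta_c = 4.0 * t
lam_c = theta_c**2 / math.cosh(theta_c / 4.0) ** 2

# Newton's method on the five-point discretization (9.34) for one lambda < lambda_c.
n, lam = 10, 1.0                                     # illustrative choices
h = 1.0 / (n + 1)                                    # x_i = i*h, i = 1, ..., n

def F(u):
    U = np.zeros((n + 2, n + 2))                     # zero Dirichlet boundary (9.29)
    U[1:-1, 1:-1] = u.reshape(n, n)
    R = (U[2:, 1:-1] + U[:-2, 1:-1] + U[1:-1, 2:] + U[1:-1, :-2]
         - 4.0 * U[1:-1, 1:-1] + lam * h**2 * np.exp(U[1:-1, 1:-1]))
    return R.ravel()

def J(u):
    N = n * n
    A = np.diag(-4.0 + lam * h**2 * np.exp(u))       # diagonal of the Jacobian
    for k in range(N):
        if k >= n:          A[k, k - n] = 1.0        # neighbour U_{i-1,j}
        if k < N - n:       A[k, k + n] = 1.0        # neighbour U_{i+1,j}
        if k % n != 0:      A[k, k - 1] = 1.0        # neighbour U_{i,j-1}
        if k % n != n - 1:  A[k, k + 1] = 1.0        # neighbour U_{i,j+1}
    return A

u, M = np.zeros(n * n), 0
for M in range(1, 21):
    du = np.linalg.solve(J(u), -F(u))                # one Newton step
    u += du
    if np.linalg.norm(du) < 1e-11:                   # stopping rule of this section
        break
```

M then plays the role of the iteration count tallied in Tables 9.3 and 9.4; repeating the loop over the 700 values of λ reproduces the experiment.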

9.6 Concluding Remarks

In this chapter, a double-step fifth order method, which is an improvement to the two-step Newton's method, and its multi-step version having higher order convergence using weight functions have been proposed for solving systems of nonlinear equations. The proposed methods do not require the evaluation of second or higher order Fréchet derivatives to reach fifth or higher order of convergence, and they evaluate only one inverse of the first order Fréchet derivative. We have verified that the root x* is a point of attraction based on the theory given in Ortega and Rheinboldt (1970). A few examples have been solved using the proposed methods and compared with some known methods, which illustrates the superiority of the new methods. The proposed new methods have also been applied to two practical problems, namely Chandrasekhar's equation and the 2-D Bratu problem. The results obtained are interesting and encouraging for the new methods. Hence, the proposed methods can be considered competitive with Newton's method and some of the existing higher order methods.

Chapter 10

Application in Global Positioning System

10.1 Introduction

The Global Positioning System (GPS) is an all-weather, space-based navigation system. It is a constellation of a minimum of 24 satellites in near-circular orbits, positioned at an approximate height of 20000 km above the Earth. The satellites travel with a velocity of 3.9 km/sec with an orbital period of 11 hours 58 minutes. From the satellite constellation, the equations required for solving the user position form a nonlinear system of equations, into which some practical considerations are also included. These equations are usually solved through a linearization technique and a fixed-point iteration method. The solution is obtained in a Cartesian coordinate system and is then converted into a spherical coordinate system. However, the Earth is not a perfect sphere; therefore, once the user position is estimated, the shape of the Earth must be taken into consideration and the user position is translated into an Earth-based coordinate system. In this chapter, we focus our attention on solving the nonlinear system of equations of the GPS, giving the results in a Cartesian coordinate system. The position of a point in space can be found by using the distances measured from this point to some known positions in space. Figure 10.1 shows the two-dimensional case. In order to determine the user position, three satellites S1, S2 and S3 and three distances are required. The trace of a point with constant distance to a fixed point is a circle in the two-dimensional case. Two satellites and two distances give two possible solutions because two circles intersect at two points. One more circle is needed to uniquely


Figure 10.1: Two dimensional user position

determine the user position. For similar reasons, in a three-dimensional case, four satellites and four distances are needed. Figure 10.2, taken from Griffin (2011), shows the three-dimensional case. The equal-distance trace to a fixed point is a sphere in the three-dimensional case.

Figure 10.2: Three dimensional user position

A GPS receiver knows the location of the satellites because that information is included in the transmitted Ephemeris data. By estimating how far away a satellite is, the receiver also knows that it is located somewhere on the surface of an imaginary sphere centered at the satellite. More information about GPS can be found in Tsui (2005) and Abad et al. (2013).

10.2 Basic Equations for Finding User Position

In this section, the basic equations for determining the user position are presented. Assume that the distance measured is accurate and under this condition, three satellites should be sufficient. Let us suppose that there are three known points at locations r1 or (x1 , y1 , z1 ), r2 or (x2 , y2 , z2 ), and r3 or (x3 , y3 , z3 ) and an unknown point


at ru or (xu, yu, zu). If the distances from the three known points to the unknown point can be measured as ρ1, ρ2 and ρ3, these distances can be written as

    ρ1 = √((x1 − xu)² + (y1 − yu)² + (z1 − zu)²),
    ρ2 = √((x2 − xu)² + (y2 − yu)² + (z2 − zu)²),    (10.1)
    ρ3 = √((x3 − xu)² + (y3 − yu)² + (z3 − zu)²).

Because there are three unknowns and three equations, the values of xu, yu, zu can be determined from these equations. Theoretically, there should be two sets of solutions, as these are second order equations. The equations can be solved by linearizing them and using an iterative approach. In GPS operation, the positions of the satellites are given; this information can be obtained from the data transmitted from the satellites. The distances from the user to the satellites must be measured simultaneously at a certain time instant. Each satellite transmits a signal with a time reference associated with it. By measuring the time taken by the signal to travel from the satellite to the user, the distance between the user and the satellite can be found. The distance measurement is discussed in the next section.

10.3 Measurement of Pseudorange

Every satellite sends a signal at a certain time tsi. The receiver receives the signal at a later time tu. The distance between the user and satellite i can be determined as

    ρiT = c(tu − tsi),    (10.2)

where c is the speed of light, ρiT is often referred to as the true value of the pseudorange from the user to satellite i, tsi is the true time of transmission from satellite i and tu is the true time of reception. From a practical point of view, it is difficult, if not impossible, to obtain the correct time from the satellite or the user. The actual satellite clock time t′si and the actual user clock time t′u are related to the true times as

    t′si = tsi + Δbi,    t′u = tu + but,    (10.3)

where ∆bi is the satellite clock error and but is the user clock bias error. Besides the clock error, there are other factors affecting the pseudorange measurement. The


measured pseudorange ρi can be written as

    ρi = ρiT + ΔDi − c(Δbi − but) + c(ΔTi + ΔIi + vi + Δvi),    (10.4)

where ΔDi is the satellite position error effect on the range, ΔTi is the tropospheric delay error, ΔIi is the ionospheric delay error, vi is the receiver measurement noise error, and Δvi is the relativistic time correction. Some of these errors can be corrected; for example, the tropospheric delay can be modeled and the ionospheric error can be corrected in a two-frequency receiver. The remaining errors cause inaccuracy in the user position. However, the user clock error cannot be corrected through receiver information and thus remains as an unknown. So, the system of (10.1) must be modified as

    ρ1 = √((x1 − xu)² + (y1 − yu)² + (z1 − zu)²) + bu,
    ρ2 = √((x2 − xu)² + (y2 − yu)² + (z2 − zu)²) + bu,
    ρ3 = √((x3 − xu)² + (y3 − yu)² + (z3 − zu)²) + bu,    (10.5)
    ρ4 = √((x4 − xu)² + (y4 − yu)² + (z4 − zu)²) + bu,

where bu is the user clock bias error expressed in distance, which is related to the quantity but by bu = c·but. In the system (10.5), four equations are needed to solve for the four unknowns xu, yu, zu and bu. Thus, in a GPS receiver, a minimum of four satellites is required to solve for the user position.

10.4 Solution of User Position from Pseudoranges

One way to solve the system (10.5) is to linearize it. The system can be written in the simplified form

    ρi = √((xi − xu)² + (yi − yu)² + (zi − zu)²) + bu,    i = 1, 2, 3, 4,    (10.6)

where xu, yu, zu are unknowns. The pseudorange ρi and the positions of the satellites xi, yi, zi are known. By differentiating (10.6), we have

    δρi = [(xi − xu)δxu + (yi − yu)δyu + (zi − zu)δzu] / (ρi − bu) + δbu.    (10.7)

In (10.7), δxu , δyu , δzu , δbu can be considered as the only unknowns. The quantities xu , yu , zu , bu are treated as known values because one can assume some initial values for these quantities. From these initial values, a new set of δxu , δyu , δzu , δbu


can be calculated. These values are used to modify the original xu, yu, zu, bu to find a new set of solutions, which can again be considered as known quantities. This process continues until the absolute values of δxu, δyu, δzu, δbu are very small and within a certain predetermined limit. The final values of xu, yu, zu, bu are the desired solution. This method is often referred to as a fixed-point iteration method. With δxu, δyu, δzu, δbu as unknowns, equation (10.7) becomes a set of linear equations; this procedure is often referred to as linearization. The expression (10.7) can be written in matrix form as

    [δρ1]   [α11  α12  α13  1] [δxu]
    [δρ2] = [α21  α22  α23  1] [δyu]     (10.8)
    [δρ3]   [α31  α32  α33  1] [δzu]
    [δρ4]   [α41  α42  α43  1] [δbu]

where

    αi1 = (xi − xu)/(ρi − bu),   αi2 = (yi − yu)/(ρi − bu),   αi3 = (zi − zu)/(ρi − bu).    (10.9)

The solution of (10.8) is

    [δxu]   [α11  α12  α13  1]⁻¹ [δρ1]
    [δyu] = [α21  α22  α23  1]   [δρ2]   (10.10)
    [δzu]   [α31  α32  α33  1]   [δρ3]
    [δbu]   [α41  α42  α43  1]   [δρ4]

This process obviously does not provide the needed solution directly; however, the desired solution can be obtained from it. In order to find the desired position solution, this procedure must be used repetitively in an iterative way. A quantity often used to determine whether the desired result has been reached is

    δv = √(δxu² + δyu² + δzu² + δbu²).    (10.11)

When δv is lower than a certain predetermined threshold, the iteration stops. Sometimes, the clock bias bu is not included in (10.11). In this chapter, we use the norm ‖x(k+1) − x(k)‖2 for the stopping criterion because it is stronger than (10.11).


10.5 Numerical Results for the GPS Problem

In this section, we give numerical results for the GPS problem (10.6) to compare the efficiency of the proposed methods 6th HM, 6th M3, 5th MBJ and 8th MBJ with the 2nd NM, 4th ACT and 4th NR methods. The numerical experiments have been carried out using the MATLAB software. We use the following stopping criterion for the iteration scheme:

    errmin = ‖x(k+1) − x(k)‖2 < 10⁻¹⁰.    (10.12)

Let M be the number of iterations required for reaching the minimum residual (errmin). In order to test our methods on the problem of finding the user position of a GPS device, we use the coordinates of the observed satellites and the pseudoranges as calculated by El-naggar (2011), given in Table 10.1.

Table 10.1: Coordinates of observed satellite and pseudorange.

Sat #     X            Y            Z            ρ
SAT 1     17934700.08  -1201699.27  25412566.40  26063773.10
SAT 2     13642634.73   6241228.57  27327782.10  25880448.30
SAT 4      9078161.96  16940062.60  23258573.10  24898018.26
SAT 11    13950041.94  22815808.70   4876545.56  22162681.56
SAT 13    22247083.32  12695442.40  13764915.20  22716021.52
SAT 16    21944910.30  15502347.00   2490006.32  21452169.25

Table 10.2: Comparison of the iterative methods for GPS

Method          x(0)              M    errmin         User position
2nd NM (9.1)                      6    3.8641e−023    x*
4th ACT (9.4)                     4    9.1051e−032    x*
4th NR (9.3)                      4    1.4958e−031    x*
6th HM (7.4)    (0, 0, 0, 0)^T    4    7.5061e−015    x*
6th M3 (??)                       4    5.5526e−026    x*
5th MBJ (9.6)                     3    7.1806e−011    x*
8th MBJ (9.7)                     3    2.2429e−031    x*

For our study, we need the XYZ coordinates and the pseudorange ρ of only four satellites to find the user position. Hence, we consider the data of the four satellites SAT 1, SAT 2, SAT 4 and SAT 11. Table 10.2 compares the different iterative methods for the nonlinear system used in the GPS software, where M represents the number of iterations required for convergence. The starting value x(0) = (0, 0, 0, 0)^T, corresponding to the coordinates of the center of the Earth with bu = 0, is usually used. We denote by x* = (4732338.512, 2723851.268, 3285484.240, 0.000012635)^T the solution of the nonlinear system, which gives the user position on the Earth.

10.6 Concluding Remarks

We have applied the methods suggested in Chapters 7-9 to a practical problem of GPS. It is found that all the iterative methods converge to the user position measured from the center of the Earth. Among the methods tested, the 5th MBJ and 8th MBJ methods converge to the user position with the least number of iterations, as observed from Table 10.2.

Chapter 11

Conclusion and Future Work

In this thesis, we have given a detailed review of the historical development of multi-point iterative methods from Traub's period to the present date. It is observed that many methods which are already known were rediscovered and derived by different techniques. We have proposed some new families of higher order multi-point iterative methods for solving scalar nonlinear equations and studied the local convergence of all the proposed methods using Taylor series. We have also applied the new methods to many examples, compared them with some existing equivalent methods, and tabulated the results. We have extended some methods and their modifications to systems of nonlinear equations and proved their local convergence using the point of attraction theory. The different measures of efficiency, namely EI and CE, for scalar and systems of nonlinear equations, especially for multi-step methods, are compared. Using Variable Precision Arithmetic in MATLAB, we have illustrated the importance of these faster converging methods on many examples. Our study of the dynamic behaviour of the methods in the complex plane has helped us to understand the basins of attraction of the methods having optimal convergence, and the symmetric patterns found in the polynomiographs have produced some beautiful pictures. The proposed methods have also been applied to some application problems, such as the Chandrasekhar integral equation arising in radiative heat transfer and the 1-D and 2-D Bratu problems arising in the solid fuel ignition problem. The results obtained give evidence to support the accuracy of the analytical conclusions and to demonstrate their usefulness in practice. Further, we have studied an important application problem of the modern communication era known as the GPS problem. By applying the


proposed higher order methods, the user location can be found more accurately with a smaller number of iterations. The contributions made in this thesis are limited to suggesting new methods and their local convergence analysis. However, many other types of study can be done, which will be a promising area for future research. We list below some of the future areas of study which can be carried out:

1. Brent (1972) showed that explicit nonlinear Runge-Kutta methods for the solution of a special class of ordinary differential equations (ODEs) may be derived from methods for finding zeros of functions. The derivation of other types of Runge-Kutta methods from multi-point iterative methods can be investigated. The solution of nonlinear ODEs by implicit schemes requires the solution of systems of nonlinear equations; one application problem is to solve a stiff nonlinear ODE. Ralston and Rabinowitz (2001) pointed out that explicit numerical methods for solving stiff ODEs are inadequate because of their lack of stability, so Newton's method is usually used to solve such systems of equations. There are possibilities to develop a numerical construction of an implicit scheme for solving stiff nonlinear ODEs by combining an implicit scheme with one of the higher order variants of Newton's method and by using a piecewise uniform mesh to resolve the faster varying component in the solution, thus producing stable numerical schemes.

2. The dynamic behaviour of the higher order methods with optimal convergence (eighth and higher) for complex polynomials of degree more than four can be studied (see Kalantari (2009)).

3. Babajee (2015a) showed that Kung-Traub's conjecture fails for quadratic functions; that is, he obtained iterative methods for solving quadratic equations with three function evaluations reaching order of convergence greater than four.
Furthermore, using weight functions, he showed that it is possible to develop methods with three function evaluations of any order. In the same way, it is possible to develop iterative methods having higher order convergence with only three function evaluations, for which Kung-Traub's conjecture may fail for nonlinear equations of degree three and above.


4. A study of the semilocal convergence of multi-point higher order Newton-like methods for solving nonlinear equations in Banach spaces by using recurrence relations can be done.

5. The convergence of the methods in Banach spaces using the Newton-Kantorovich Theorem and their behaviour with respect to minimization problems, known as directional Newton-like methods, can be studied. In fact, finding a zero of a vector function of n variables is similar to minimizing the least-squares functional F = Σⱼ₌₁ⁿ fⱼ². Since the Newton-Kantorovich Theorem can provide error estimates depending on the initial point, one can conduct a spectral analysis of these errors.

Bibliography

M. F. Abad, A. Cordero, and J. R. Torregrosa. Fourth- and fifth-order methods for solving nonlinear systems of equations: An application to the global positioning system. Abstract and Applied Analysis, Volume 2013, Article ID 586708, 10 pages, 2013.
S. Abbasbandy. Extended Newton's method for a system of nonlinear equations by modified Adomian decomposition method. Appl. Math. Comp., 170:648–656, 2005.
S. Amat, S. Busquier, and S. Plaza. Review of some iterative root-finding methods from a dynamical point of view. SCIENTIA, Series A: Math. Sci., 10:3–35, 2004.
G. Ardelean. A comparison between iterative methods by using the basins of attraction. Appl. Math. Comp., 218:88–95, 2011.
G. Ardelean. A new third-order Newton-type iterative method for solving nonlinear equations. Appl. Math. Comp., 219:9856–9864, 2013.
I. K. Argyros. Quadratic equations and applications to Chandrasekhar's and related equations. Bull. Austral. Math. Soc., 32(2):275–292, 1985.
I. K. Argyros. On a class of nonlinear integral equations arising in neutron transport. Aequationes Math., 35:99–111, 1988.
V. Arroyo, A. Cordero, and J. R. Torregrosa. Approximation of artificial satellites preliminary orbits: The efficiency challenge. Mathematical and Computer Modelling, 54:1802–1807, 2011.
F. Awawdeh. On new iterative method for solving systems of nonlinear equations. Numer. Algor., 54:395–409, 2010.


D. K. R. Babajee. Analysis Of Higher Order Variants Of Newton's Method And Their Applications To Differential And Integral Equations And In Ocean Acidification. PhD thesis, University of Mauritius, 2010.
D. K. R. Babajee. Several improvements of the 2-point third order midpoint iterative method using weight functions. Appl. Math. Comp., 218(15):7958–7966, 2012.
D. K. R. Babajee. On a two-parameter Chebyshev-Halley-like family of optimal two-point fourth order methods free from second derivatives. Afr. Mat., DOI:10.1007/s13370-014-0237-z, 2014.
D. K. R. Babajee. On the Kung-Traub conjecture for iterative methods for solving quadratic equations. Algorithms, 9:1, 2015a.
D. K. R. Babajee. Some improvements to a third order variant of Newton's method from Simpson's rule. Algorithms, 8:552–561, 2015b.
D. K. R. Babajee and M. Z. Dauhoo. An analysis of the properties of the variants of Newton's method with third order convergence. Appl. Math. Comp., 183(1):659–684, 2006.
D. K. R. Babajee and V. C. Jaunky. Applications of higher-order optimal Newton Secant iterative methods in ocean acidification and investigation of long-run implications of CO2 emissions on alkalinity of seawater. ISRN Applied Mathematics, 2013: Article ID 785287, 10 pages, DOI:10.1155/2013/785287, 2013.
D. K. R. Babajee, A. Cordero, F. Soleymani, and J. R. Torregrosa. On a novel fourth-order algorithm for solving systems of nonlinear equations. Journal of Applied Mathematics, Volume 2012, Article ID 165452, 12 pages, 2012.
D. K. R. Babajee, Kalyanasundaram Madhu, and J. Jayakumar. A family of higher order multi-point iterative methods based on power mean for solving nonlinear equations. Afr. Mat., DOI:10.1007/s13370-015-0380-1, 2015a.
D. K. R. Babajee, Kalyanasundaram Madhu, and J. Jayakumar. On some improved harmonic mean Newton-like methods for solving systems of nonlinear equations. Algorithms, 8:895–909, 2015b.


D. Basu. Composite fourth order Newton type method for simple root. International Journal for Computational Methods in Engineering Science and Mechanics, 9:201–210, 2008.
R. Behl and V. Kanwar. Variants of Chebyshev's method with optimal order of convergence. Tamsui Oxf. J. Inf. Math. Sci., 29(1):39–53, 2013.
P. Blanchard. Complex analytic dynamics on the Riemann sphere. Bull. Amer. Math. Soc., 11(1):85–141, 1984.
B. Bradie. A Friendly Introduction to Numerical Analysis. Pearson Education Inc, New Delhi, 2006.
R. P. Brent. The computational complexity of iterative methods for systems of nonlinear equations. In R. E. Miller and J. W. Thatcher, editors, Complexity of Computer Computations, New York, 1972. Plenum Press.
R. Buckmire. Investigations of nonstandard Mickens-type finite-difference schemes for singular boundary value problems in cylindrical or spherical coordinates. Num. Meth. P. Diff. Eqns., 19(2):380–398, 2003.
S. Chandrasekhar. Radiative Transfer. Dover, New York, 1960.
F. I. Chicharro, A. Cordero, and J. R. Torregrosa. Drawing dynamical and parameters planes of iterative families and methods. The Scientific World Journal, Article ID 780153, 11 pages, DOI:10.1155/2013/780153, 2013.
C. Chun. Construction of Newton-like iteration methods for solving nonlinear equations. Numer. Math., 104(3):297–315, 2006.
C. Chun. A two-parameter third-order family of methods for solving nonlinear equations. Appl. Math. Comput., 189:1822–1827, 2007.
C. Chun and Y. I. Kim. Several new third-order iterative methods for solving nonlinear equations. Acta Appl. Math., 109:1053–1063, 2010.
C. Chun and B. Neta. A new sixth-order scheme for nonlinear equations. Appl. Math. Lett., 25:185–189, 2012.


C. Chun, M. Y. Lee, B. Neta, and J. Dzunic. On optimal fourth-order iterative methods free from second derivative and their dynamics. Appl. Math. Comp., 218:6427–6438, 2012.
A. Cordero and J. R. Torregrosa. Variants of Newton's method using fifth-order quadrature formulas. Appl. Math. Comp., 190:686–698, 2007.
A. Cordero and J. R. Torregrosa. A class of multi-point iterative methods for nonlinear equations. Appl. Math. Comp., 197:337–344, 2008.
A. Cordero, J. L. Hueso, E. Martinez, and J. R. Torregrosa. Efficient three-step iterative methods with sixth order convergence for nonlinear equations. Numer. Algor., 53:485–495, 2010a.
A. Cordero, J. L. Hueso, E. Martinez, and J. R. Torregrosa. A modified Newton-Jarratt's composition. Numer. Algor., 55:87–99, 2010b.
A. Cordero, J. L. Hueso, E. Martinez, and J. R. Torregrosa. A family of iterative methods with sixth and seventh order convergence for nonlinear equations. Mathematical and Computer Modelling, 52:1490–1496, 2010c.
A. Cordero, J. L. Hueso, E. Martinez, and J. R. Torregrosa. Solving nonlinear equations by a new derivative free iterative method. Appl. Math. Comp., 217:4548–4556, 2011a.
A. Cordero, J. R. Torregrosa, and M. P. Vassileva. Three-step iterative methods with optimal eighth-order convergence. J. Comp. Appl. Math., 235:3189–3194, 2011b.
A. Cordero, J. L. Hueso, E. Martinez, and J. R. Torregrosa. Increasing the convergence order of an iterative method for nonlinear systems. Appl. Math. Lett., 25:2369–2374, 2012a.
A. Cordero, J. R. Torregrosa, and M. P. Vassileva. Pseudocomposition: A technique to design predictor-corrector methods for systems of nonlinear equations. Appl. Math. Comput., 218:11496–11504, 2012b.
A. Cordero, J. L. Hueso, E. Martinez, and J. R. Torregrosa. A new technique to obtain derivative-free optimal iterative methods for solving nonlinear equations. J. Comp. Appl. Math., 252:95–102, 2013.


A. Cordero, M. Fardi, M. Ghasemi, and J. R. Torregrosa. Accelerated iterative methods for finding solutions of nonlinear equations and their dynamical behavior. Calcolo, 51:17–30, 2014.
A. Cordero, A. Franques, and J. R. Torregrosa. Multidimensional Homeier's generalized class and its application to planar 1D Bratu problem. SeMA, DOI:10.1007/s40324-015-0037-x, 2015.
M. Drexler. Newton's Method as a Global Solver for Non-Linear Problems. PhD thesis, University of Oxford, 1997.
Aly M. El-naggar. An alternative methodology for the mathematical treatment of GPS positioning. Alexandria Engineering Journal, 50:359–366, 2011.
J. A. Ezquerro, J. M. Gutierrez, M. A. Hernandez, and M. A. Salanova. Solving nonlinear integral equations arising in radiative transfer. Numer. Funct. Anal. Optim., 20(7-8):661–673, 1999.
J. A. Ezquerro, M. A. Hernandez, and N. Romero. An extension of Gander's result for quadratic equations. J. Comp. Appl. Math., 234:960–971, 2010.
M. Frontini and E. Sormani. Some variants of Newton's method with third-order convergence. Appl. Math. Comp., 140:419–426, 2003.
M. Grau and J. L. Diaz-Barrero. An improvement to Ostrowski root-finding method. Appl. Math. Comput., 173:450–456, 2000.
M. Grau-Sanchez, A. Grau, and M. Noguera. On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math., 236(6):1259–1266, 2011.
M. Grau-Sanchez, A. Grau, and M. Noguera. On the computational efficiency index and some iterative methods for solving systems of nonlinear equations. J. Comput. Appl. Math., 236(6):1259–1266, 2012.
D. Griffin. http://www.pocketgpsworld.com/howgpsworks.php, 2011.
V. I. Hasanov, I. G. Ivanov, and G. H. Nedzhibov. A new modification of Newton's method. Application of Mathematics in Engineering, 27:278–286, 2002.


D. Herceg and D. Herceg. Means based modifications of Newton's method for solving nonlinear equations. Appl. Math. Comp., 219:6126–6133, 2013.
H. H. H. Homeier. A modified Newton method for root finding with cubic convergence. J. Comput. Appl. Math., 157(1):227–230, 2003.
H. H. H. Homeier. A modified Newton method with cubic convergence: the multivariate case. Comp. Appl. Math., 169:161–169, 2004.
H. H. H. Homeier. On Newton-type methods with cubic convergence. Comp. Appl. Math., 176(2):425–432, 2005.
J. L. Hueso, E. Martinez, and J. R. Torregrosa. Modified Newton's method for systems of nonlinear equations with singular Jacobian. J. Comp. Appl. Math., 224:77–83, 2009a.
J. L. Hueso, E. Martinez, and J. R. Torregrosa. Third and fourth order iterative methods free from second derivative for nonlinear systems. Appl. Math. Comp., 211:190–197, 2009b.
D. Jain. Families of Newton-like methods with fourth-order convergence. Int. J. Comput. Math., 90(5):1072–1082, 2013.
J. P. Jaiswal. A new third-order derivative free method for solving nonlinear equations. Universal Journal of Applied Mathematics, 1(2):131–135, 2013.
J. P. Jaiswal. Some class of third- and fourth-order iterative methods for solving nonlinear equations. Journal of Applied Mathematics, Volume 2014, Article ID 817656, 17 pages, 2014.
P. Jarratt. Some fourth order multipoint iterative methods for solving equations. Math. Comp., 20:434–437, 1966a.
P. Jarratt. Multipoint iterative methods for solving certain equations. Comput. J., 8:398–400, 1966b.
P. Jarratt. Some efficient fourth order multipoint methods for solving equations. BIT, 9:119–124, 1969.


J. Jayakumar and Kalyanasundaram Madhu. Generalized power means modification of Newton's method for simple roots of nonlinear equation. Int. J. Pure Appl. Sci. Technol., 18:45–51, 2013.

B. Kalantari. Polynomial Root-Finding and Polynomiography. World Scientific Publishing Co. Pvt. Ltd, Singapore, 2009.

V. Kanwar. A family of third-order multipoint methods for solving nonlinear equations. Appl. Math. Comp., 176:409–413, 2006.

V. Kanwar, S. K. Tomar, Sukhjit Singh, and Sanjeev Kumar. Note on super-Halley method and its variants. Tamsui Oxf. J. Inf. Math. Sci., 28:191–216, 2012.

S. K. Khattri and S. Abbasbandy. Optimal fourth order family of iterative methods. Mat. Vesnik, 63:67–72, 2011.

S. K. Khattri and I. K. Argyros. Sixth order derivative free family of iterative methods. Appl. Math. Comp., 217:5500–5507, 2011.

S. K. Khattri and T. Log. Derivative free algorithm for solving nonlinear equations. Computing, 92:169–179, 2011.

Y. I. Kim. A new two-step biparametric family of sixth-order iterative methods free from second derivatives for solving nonlinear algebraic equations. Appl. Math. Comp., 215:3418–3424, 2010.

R. F. King. A family of fourth-order methods for solving nonlinear equations. SIAM J. Numer. Anal., 10:876–879, 1973.

J. Kou, Y. Li, and X. Wang. Some modifications of Newton's method with fifth-order convergence. J. Comp. Appl. Math., 209:146–152, 2007a.

J. Kou, Y. Li, and X. Wang. A family of fifth-order iterations composed of Newton and third-order methods. Appl. Math. Comp., 186:1258–1262, 2007b.

H. T. Kung and J. F. Traub. Optimal order of one-point and multipoint iteration. J. Assoc. Comput. Mach., 21:643–651, 1974.

X. Li, C. Mu, J. Ma, and C. Wang. Sixteenth-order method for nonlinear equations. Appl. Math. Comp., 215(10):3754–3758, 2010.


T. Lukić and N. M. Ralević. Newton's method with accelerated convergence modified by an aggregation operator. Proceedings of 3rd Serbian-Hungarian Joint Symposium on Intelligent Systems, SCG, Subotica, pages 121–128, 2005.

Kalyanasundaram Madhu and J. Jayakumar. Class of modified Newton's method for solving nonlinear equations. Tamsui Oxf. J. Inf. Math. Sci., 30:91–100, 2014.

Kalyanasundaram Madhu and J. Jayakumar. A fifth-order modified Newton-type method for solving nonlinear equations. IJAER, 10:83–86, 2015a.

Kalyanasundaram Madhu and J. Jayakumar. Two new families of iterative methods for solving nonlinear equations. Tamsui Oxf. J. Inf. Math. Sci. (Accepted), 2015b.

Kalyanasundaram Madhu and J. Jayakumar. Some higher order Newton-like methods for solving system of nonlinear equation and its applications. Communicated, 2016a.

Kalyanasundaram Madhu and J. Jayakumar. Higher order methods for nonlinear equations and their basins of attraction. Mathematics, 4:22, 2016b.

Kalyanasundaram Madhu, D. K. R. Babajee, and J. Jayakumar. An improvement to double-step Newton method and its multi-step version for solving system of nonlinear equations and its applications. Numer. Algor., DOI:10.1007/s11075-016-0163-2, 2016.

J. M. McNamee. Numerical Methods for Roots of Polynomials: Part 1. Elsevier, AE Amsterdam, The Netherlands, 2007.

M. A. Noor. Some iterative methods for solving nonlinear equations using homotopy perturbation method. International Journal of Computer Mathematics, 87:141–149, 2010.

M. A. Noor and M. Waseem. Some iterative methods for solving a system of nonlinear equations. Comput. Math. with Appl., 57:101–106, 2009.

M. A. Noor, M. Waseem, K. I. Noor, and E. Al-Said. Variational iteration technique for solving a system of nonlinear equations. Optim. Lett., 7:991–1007, 2013.

S. A. Odejide and Y. A. S. Aregbesola. A note on two dimensional Bratu problem. Kragujevac J. Math., 29:49–56, 2006.


J. M. Ortega and W. C. Rheinboldt. Iterative Solution of Nonlinear Equations in Several Variables. Academic Press, New York, 1970.

A. M. Ostrowski. Solution of Equations and Systems of Equations. Academic Press, New York, 1960.

A. Ozban. Some new variants of Newton's method. Appl. Math. Lett., 17:677–682, 2004.

S. K. Parhi and D. K. Gupta. A sixth order method for nonlinear equations. Appl. Math. Comp., 203:50–55, 2008.

L. D. Petkovic and M. S. Petkovic. A note on some recent methods for solving nonlinear equations. Appl. Math. Comput., 185:368–374, 2007.

L. D. Petkovic, M. S. Petkovic, and J. Dzunic. A class of three-point root-solvers of optimal order of convergence. Appl. Math. Comp., 216:671–676, 2010.

M. S. Petkovic, B. Neta, L. D. Petkovic, and J. Dzunic. Multipoint Methods for Solving Nonlinear Equations. Elsevier, Amsterdam, 2013.

M. S. Petkovic, B. Neta, L. D. Petkovic, and J. Dzunic. Multipoint methods for solving nonlinear equations: A survey. Appl. Math. Comp., 226:635–660, 2014.

A. Ralston and P. Rabinowitz. A First Course in Numerical Analysis. Dover Publications, Mineola, New York, 2001.

D. K. Salkuyeh. A family of Newton-type methods for solving nonlinear equations. International Journal of Computer Mathematics, 84(3):411–419, 2007.

M. Scott, B. Neta, and C. Chun. Basin attractors for various methods. Appl. Math. Comp., 218:2584–2599, 2011.

F. A. Shah and M. A. Noor. Some numerical methods for solving nonlinear equations by using decomposition technique. Appl. Math. Comp., 251:378–386, 2015.

M. Sharifi, D. K. R. Babajee, and F. Soleymani. Finding solution of nonlinear equations by a class of optimal methods. Comput. Math. Appl., 63:764–774, 2012.

J. R. Sharma. A composite third order Newton-Steffensen method for solving nonlinear equations. Appl. Math. Comp., 169:242–246, 2005.


J. R. Sharma and H. Arora. An efficient family of weighted-Newton methods with optimal eighth order convergence. Appl. Math. Lett., 29:1–6, 2014.

J. R. Sharma and R. K. Guha. A family of modified Ostrowski methods with accelerated sixth order convergence. Appl. Math. Comput., 190:111–115, 2007.

J. R. Sharma and R. K. Guha. Second-derivative free methods of third and fourth order for solving nonlinear equations. International Journal of Computer Mathematics, 88:163–170, 2011.

J. R. Sharma and P. Gupta. An efficient fifth order method for solving systems of nonlinear equations. Comput. Math. Appl., 67:591–601, 2014.

J. R. Sharma and R. Sharma. New third and fourth order nonlinear solvers for computing multiple roots. Appl. Math. Comp., 217:9756–9764, 2011.

J. R. Sharma, R. K. Guha, and R. Sharma. An efficient fourth order weighted-Newton method for systems of nonlinear equations. Numer. Algor., 62:307–323, 2013.

J. R. Sharma, H. Arora, and M. S. Petkovic. An efficient derivative free family of fourth order methods for solving systems of nonlinear equations. Appl. Math. Comp., 235:383–393, 2014.

A. Singh and J. P. Jaiswal. Several new third-order and fourth-order iterative methods for solving nonlinear equations. International Journal of Engineering Mathematics, Volume 2014, Article ID 828409, 11 pages, 2014.

S. Singh and D. K. Gupta. Iterative methods of higher order for nonlinear equations. Vietnam J. Math., DOI:10.1007/s10013-015-0135-1, 2014.

F. Soleymani. Revisit of Jarratt method for solving nonlinear equations. Numer. Algor., 57:377–388, 2011.

F. Soleymani, D. K. R. Babajee, and M. Sharifi. Modified Jarratt method without memory with twelfth-order convergence. Annals of the University of Craiova, Mathematics and Computer Science Series, 39:21–34, 2012a.

F. Soleymani, D. K. R. Babajee, and M. Sharifi. Modified Jarratt method without memory with twelfth-order convergence. Annals of the University of Craiova, Mathematics and Computer Science Series, 39:21–34, 2012b.


F. Soleymani, S. K. Khattri, and S. Karimi Vanani. Two new classes of optimal Jarratt-type fourth-order methods. Appl. Math. Lett., 25:847–853, 2012c.

F. Soleymani, T. Lotfi, and P. Bakhtiari. A multi-step class of iterative methods for nonlinear systems. Optim. Lett., DOI:10.1007/s11590-013-0617-6, 2013.

R. Thukral. Introduction to Newton-type method for solving nonlinear equations. Appl. Math. Comp., 195(2):663–668, 2008.

R. Thukral and M. S. Petkovic. A family of three-point methods of optimal order for solving nonlinear equations. Comp. Appl. Math., 233:2278–2284, 2010.

J. F. Traub. Iterative Methods for the Solution of Equations. Prentice-Hall, New Jersey, 1964.

J. B. Y. Tsui. Fundamentals of Global Positioning System Receivers: A Software Approach. Wiley Interscience, 2005.

M. Z. Ullah, S. Serra-Capizzano, F. Ahmad, and E. S. Al-Aidarous. Higher order multi-step iterative method for computing the numerical solution of systems of nonlinear equations: Application to nonlinear PDEs and ODEs. Appl. Math. Comput., 269:972–987, 2015.

E. R. Vrscay and W. J. Gilbert. Extraneous fixed points, basin boundaries and chaotic dynamics for Schröder and König rational iteration functions. Numer. Math., 52:1–16, 1988.

R. Wait. The Numerical Solution of Algebraic Equations. John Wiley & Sons, 1979.

X. Wang and L. Liu. Two new families of sixth-order methods for solving non-linear equations. Appl. Math. Comp., 213:73–78, 2009.

S. Weerakoon and T. G. I. Fernando. A variant of Newton's method with accelerated third order convergence. Appl. Math. Lett., 13:87–93, 2000.

Z. Xiaojian. A class of Newton's methods with third-order convergence. Appl. Math. Lett., 20:1026–1030, 2007.

B. I. Yun. Solving nonlinear equations by a new derivative free iterative method. Appl. Math. Comp., 217:5768–5773, 2011.


L. Zheng, K. Zhang, and L. Chen. On the convergence of a modified Chebyshev-like's method for solving nonlinear equations. Taiwanese Journal of Mathematics, 19(1):193–209, 2015.

Q. Zheng, J. Li, and F. Huang. Optimal Steffensen-type families for solving nonlinear equations. Appl. Math. Comput., 217:9592–9597, 2011.

X. Zhou, X. Chen, and Y. Song. Families of third and fourth order methods for multiple roots of nonlinear equations. Appl. Math. Comp., 219:6030–6038, 2013.
