University of Sri Jayewardenepura
Faculty of Graduate Studies
Department of Mathematics
IMPROVED NEWTON'S METHOD FOR SOLVING NONLINEAR EQUATIONS

A Thesis By
T.G.I. FERNANDO
Submitted in Partial Fulfilment of the Requirements for the Degree of
Master of Science
in
Industrial Mathematics
I grant the University of Sri Jayewardenepura the nonexclusive right to use this work for the University's own purposes and to make single copies of the work available to the public on a not-for-profit basis if copies are not otherwise available.
(T.G.I. Fernando)
We approve the Master's degree in Industrial Mathematics thesis of T.G.I. Fernando.
Date:

Dr. Sunethra Weerakoon
Principal Supervisor
Co-ordinator, M.Sc. in Industrial Mathematics
Department of Mathematics
University of Sri Jayewardenepura

Dr. G.K. Watugala
Department of Mechanical Engineering
University of Moratuwa

Ms. G.S. Makalanda
Co-ordinator of Computer Centre
Department of Mathematics
University of Sri Jayewardenepura
ABSTRACT
An iterative scheme is introduced that improves Newton's method, which is widely used for solving nonlinear equations. The method is developed for both functions of one variable and functions of two variables.

The proposed scheme replaces the rectangular approximation of the indefinite integral involved in Newton's method by a trapezium. It is shown that the order of convergence of the new method is at least three for functions of one variable. Computational results overwhelmingly support this theory, and the computational order of convergence is even more than three for certain functions.

The algorithms constructed were implemented in the high-level computer language Turbo Pascal (Ver. 7).
Key words: Convergence, Newton's method, Improved Newton's method, Nonlinear equations, Root finding, Order of convergence, Iterative methods
CONTENTS

Abstract

Chapter 1   Introduction                                             01
Chapter 2   Preliminaries
    2.1     Functions of One Variable                                04
    2.2     Functions of Two Variables                               07
Chapter 3   Numerical Schemes for Functions of One Variable
    3.1     Newton's Method                                          14
    3.2     Improved Newton's Method                                 16
    3.3     Third Order Convergent Methods
        3.3.1   Chebyshev Method                                     17
        3.3.2   Multipoint Iteration Methods                         18
        3.3.3   Parabolic Extension of Newton's Method               19
Chapter 4   Numerical Schemes for Functions of Two Variables
    4.1     Newton's Method                                          20
    4.2     Improved Newton's Method                                 23
Chapter 5   Analysis of Convergence of Improved Newton's Method
            for Functions of One Variable
    5.1     Second Order Convergence                                 28
    5.2     Third Order Convergence                                  31
Chapter 6   Computational Results and Discussion                     34
Appendix A  (Real World Problems)                                    41
Appendix B                                                           47
References                                                           73
ACKNOWLEDGEMENT
I am deeply indebted to my supervisors Dr. Sunethra Weerakoon and Ms. G.S. Makalanda of the Department of Mathematics, University of Sri Jayewardenepura, and Dr. G.K. Watugala of the Department of Mechanical Engineering, University of Moratuwa, for providing me invaluable guidance, helpful suggestions and encouragement at every stage of the research process. I also thank Prof. M.K. Jain of the Department of Mathematics, Faculty of Science, University of Mauritius, Reduit, Mauritius, and Prof. S.R.K. Iyengar, Head of the Mathematics Department, Indian Institute of Technology, New Delhi, for providing me proofs of third order convergence of the Multipoint Iterative Methods. I wish to pay my gratitude to the staff of the Department of Mathematics, University of Sri Jayewardenepura, for encouraging me throughout the period of research work. Finally I wish to express my sincere appreciation to my loving parents and my friend Mr. H.K.G. De Z. Amarasekara, who gave me valuable assistance to make this endeavour a success.
Chapter 1
INTRODUCTION

One topic which has always been of paramount importance in numerical analysis is that of approximating roots of nonlinear equations in one variable, be they algebraic or transcendental. According to the great French mathematician Galois, for some polynomials of degree n ≥ 5, no closed-form solutions can be found using integers and the operations +, −, ×, ÷, exponentiation, and taking second through nth roots. Even though closed-form solutions for polynomials of degree n = 3, 4 are available, they are very complex and not as easy to memorise as the quadratic formula. Thus, one can only give numerical approximations for this type of problem. In practical situations we do not require the exact root; we need only a solution which is close to the true solution to some extent. Newton's method, which approximates the root of a nonlinear equation in one variable using the value of the function and its derivative in an iterative fashion, is probably the best known and most widely used algorithm, and it converges to the root quadratically. In other words, after some iterations, this process approximately doubles the number of correct decimal places or significant digits at each iteration.

In our high school education and at undergraduate level we learnt how to solve systems of linear equations in two or more variables. But in most practical problems we must deal with systems of nonlinear equations. Solving these systems analytically is impossible in most cases. Newton's method can be readily extended to solve systems of nonlinear equations numerically, and it uses the function
(of several variables) and its Jacobian at each iteration. It is known that Newton's method converges to the root quadratically [3] even for functions of several variables.
In this study, we suggest an improvement to the iterations of Newton's method at the expense of one additional first derivative evaluation of the function for the one dimensional case. Furthermore, we have extended the suggested method to systems of nonlinear equations in two variables. The derivation of Newton's method involves an indefinite integral of the derivative of the function to be solved, and the relevant area is approximated by a rectangle. In the proposed method, we approximate this indefinite integral by a trapezium instead of a rectangle.

It is shown that the suggested method converges to the root and that the order of convergence is at least 3 in a neighbourhood of the root whenever the first, second and third derivatives of the function exist in a neighbourhood of the root; i.e., our method approximately trebles the number of significant digits after some iterations. Computational results overwhelmingly support this theory, and the computational order of convergence is even more than three for certain functions. Further, even in the absence of the second and higher derivatives, the second order convergence of the new method can be guaranteed.

Furthermore, we discuss some of the existing third order convergent methods and compare these methods with the new method.
Even though we did not prove that our method is third order for systems of nonlinear equations in two variables, the computational results suggest that our method is of order three.
Chapter 2
PRELIMINARIES

(2.1) Functions of One Variable

Definition (2.1.1) :
A function f is Lipschitz continuous with constant γ in a set A, written f ∈ Lip_γ(A), if for all x, y ∈ A,

|f(x) − f(y)| ≤ γ|x − y|   (2.1.1)
Lemma (2.1.1) :
For an open interval D, let f : D → ℝ and let f' ∈ Lip_γ(D). Then for any x, y ∈ D,

|f(y) − f(x) − f'(x)(y − x)| ≤ γ|y − x|²   (2.1.2)

Proof :
|f(y) − f(x) − f'(x)(y − x)| = |f'(ξ)(y − x) − f'(x)(y − x)|   (by the Mean Value Theorem)
  = |y − x| |f'(ξ) − f'(x)|
  ≤ |y − x| γ|ξ − x|   (by Lipschitz continuity)
  ≤ γ|y − x|²   (since x ≤ ξ ≤ y)   □
Lemma (2.1.2)[3] :
For an open interval D, let f : D → ℝ and let f' ∈ Lip_γ(D). Then for any x, y ∈ D,

|f(y) − f(x) − f'(x)(y − x)| ≤ (γ/2)|y − x|²   (2.1.3)
Proof :
f(y) − f(x) − f'(x)(y − x) = ∫_{x}^{y} [f'(z) − f'(x)] dz

Let z = x + (y − x)t; then dz = (y − x) dt. Substituting these in the integral gives

f(y) − f(x) − f'(x)(y − x) = ∫₀¹ [f'(x + (y − x)t) − f'(x)](y − x) dt

Lipschitz continuity of f' together with the triangle inequality gives

|f(y) − f(x) − f'(x)(y − x)| ≤ |y − x| ∫₀¹ γ t|y − x| dt = (γ/2)|y − x|²   □

Definition (2.1.2)[3] :
Let α ∈ ℝ and x_n ∈ ℝ, n = 0, 1, 2, .... Then the sequence {x_n} = {x_0, x_1, x_2, ...} is said to converge to α if

lim_{n→∞} |x_n − α| = 0

If, in addition, there exist a constant c ≥ 0, an integer n_0 ≥ 0 and p ≥ 0 such that for all n ≥ n_0,

|x_{n+1} − α| ≤ c|x_n − α|^p   (2.1.4)

then {x_n} is said to converge to α with q-order at least p. If p = 2 or 3, the convergence is said to be q-quadratic or q-cubic, respectively.

If for some {c_n} which converges to 0,

|x_{n+1} − α| ≤ c_n|x_n − α|²   (2.1.5)

then {x_n} is said to converge q-super-quadratically to α.
Definition (2.1.3) :
Let α be a root of the function f(x) and suppose that x_{n+1}, x_n and x_{n−1} are consecutive iterates close to the root α. Then the computational order of convergence p can be approximated by

p ≈ ln( |x_{n+1} − α| / |x_n − α| ) / ln( |x_n − α| / |x_{n−1} − α| )
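The approximation above can be evaluated directly from three consecutive iterates. A minimal Python sketch follows (the thesis's own programs were written in Turbo Pascal; the test function f(x) = x² − 2 is a hypothetical example, chosen so that Newton's method should exhibit a computational order close to 2):

```python
import math

def computational_order(x_prev, x_curr, x_next, alpha):
    """Approximate the computational order of convergence p from three
    consecutive iterates x_{n-1}, x_n, x_{n+1} close to the root alpha."""
    num = math.log(abs(x_next - alpha) / abs(x_curr - alpha))
    den = math.log(abs(x_curr - alpha) / abs(x_prev - alpha))
    return num / den

# Hypothetical example: Newton iterates for f(x) = x^2 - 2, root alpha = sqrt(2).
alpha = math.sqrt(2.0)
x = 1.0
iterates = [x]
for _ in range(4):
    x = x - (x * x - 2.0) / (2.0 * x)   # Newton step for f(x) = x^2 - 2
    iterates.append(x)

p = computational_order(iterates[1], iterates[2], iterates[3], alpha)
print(p)
```

The value printed should be close to 2, reflecting the quadratic convergence of Newton's method.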
Theorem (2.1.1) - Intermediate Value Theorem :
Suppose that f : [a,b] → ℝ is continuous. Then f has the intermediate value property on [a,b]. That is, if k is any value between f(a) and f(b) [i.e. f(a) < k < f(b) or f(b) < k < f(a)], then there exists c ∈ (a,b) s.t. f(c) = k.
Definition (2.1.4) - Machine Precision[3] :
Usually numerical algorithms will depend on the machine precision. It is important, therefore, to characterise machine precision in such a way that discussions and computer programs can be reasonably independent of any particular machine. The concept commonly used is machine epsilon, abbreviated macheps; it is defined as the smallest positive number ε such that 1 + ε > 1 on the computer in question.
Definition (2.1.5) - Stopping Criteria[3] :
Since there may be no computer-representable α such that f(α) = 0, we must expect our algorithms to provide only approximate solutions, stopping either when the function value is sufficiently small or when the last two iterates stay in virtually the same place. So we adopt the following stopping criteria for the computer programs:

(i)  |f(x_{n+1})| < √(macheps)
(ii) |x_{n+1} − x_n| < √(macheps)
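Both macheps and the stopping criteria can be sketched in a few lines of Python. This illustration assumes IEEE double arithmetic, where halving until 1 + ε/2 rounds to 1 leaves ε = 2⁻⁵²:

```python
# Compute machine epsilon: smallest power of two eps with 1 + eps > 1.
eps = 1.0
while 1.0 + eps / 2.0 > 1.0:
    eps /= 2.0

tol = eps ** 0.5  # sqrt(macheps), used in stopping criteria (i) and (ii)

def converged(f, x_curr, x_next, tol=tol):
    """Stopping criteria: |f(x_{n+1})| < sqrt(macheps) or
    |x_{n+1} - x_n| < sqrt(macheps)."""
    return abs(f(x_next)) < tol or abs(x_next - x_curr) < tol

print(eps)  # 2.220446049250313e-16 under IEEE double arithmetic
```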
(2.2) Functions of Two Variables

Definition (2.2.1) :
Let D ⊆ ℝ². A function f of two variables is a rule that assigns to each ordered pair (x,y) in D a unique real number denoted by f(x,y). The set D is the domain of f and its range is the set of values that f takes on, that is, {f(x,y) | (x,y) ∈ D}. We often write z = f(x,y) to make explicit the value taken on by f at the general point (x,y). The variables x and y are independent variables and z is the dependent variable.
Definition (2.2.2) :
If f is a function of two variables with domain D, the graph of f is the set

S = {(x,y,z) ∈ ℝ³ | z = f(x,y), (x,y) ∈ D}
Definition (2.2.3) :
The level curves or contour curves of a function f of two variables are the curves with equations f(x,y) = k, where k is a constant (in the range of f).
Definition (2.2.4) :
If f is a function of two variables, its partial derivatives with respect to x and y are the functions f_x and f_y defined by

f_x(x,y) = lim_{h→0} [f(x + h, y) − f(x,y)] / h
f_y(x,y) = lim_{h→0} [f(x, y + h) − f(x,y)] / h

To give a geometric interpretation of partial derivatives, we should understand that the equation z = f(x,y) represents a surface S (the graph of f). If f(a,b) = c, then the point P(a,b,c) lies on S. The vertical plane y = b intersects S in a curve C_1 (in other words, C_1 is the trace of S in the plane y = b). Likewise, the vertical plane x = a intersects S in a curve C_2. Both of the curves C_1 and C_2 pass through the point P (see Figure 2.2.1). Notice that the curve C_1 is the graph of the function g(x) = f(x,b), so the slope of its tangent T_1 at P is g'(a) = f_x(a,b). The curve C_2 is the graph of the function G(y) = f(a,y), so the slope of its tangent T_2 at P is G'(b) = f_y(a,b).

Thus the partial derivatives f_x(a,b) and f_y(a,b) can be interpreted geometrically as the slopes of the tangent lines at P(a,b,c) to the traces C_1 and C_2 of S in the planes y = b and x = a.
Figure (2.2.1)
Definition (2.2.5) :
Suppose a surface S has equation z = f(x,y), where f has continuous first partial derivatives, and let P(a,b,c) be a point on S. As in the previous definition, let C_1 and C_2 be the curves obtained by intersecting the vertical planes y = b and x = a with the surface S. Then the point P lies on both C_1 and C_2. Let T_1 and T_2 be the tangent lines to the curves C_1 and C_2 at the point P. Then the tangent plane to the surface S at the point P is defined to be the plane that contains both of the tangent lines T_1 and T_2 (see Figure 2.2.1). It can be shown that if C is any other curve that lies on the surface S and passes through P, then its tangent line at P also lies in the tangent plane. Therefore, we
can think of the tangent plane to S at P as consisting of all possible tangent lines at P to curves that lie on S and pass through P. The tangent plane at P is the plane that most closely approximates the surface S near the point P. It can be shown that the equation of the tangent plane to the surface z = f(x,y) at the point P(a,b,c) is

z − c = f_x(a,b)(x − a) + f_y(a,b)(y − b)   (2.2.1)
Definition (2.2.6)[3] :
A continuous function f : ℝ² → ℝ is said to be continuously differentiable at x = (x,y)ᵀ ∈ ℝ² if f_x(x,y) and f_y(x,y) exist and are continuous. The gradient of f at (x,y) is then defined as

∇f(x) = ∇f(x,y) = [f_x(x,y), f_y(x,y)]ᵀ
Note (2.2.1) :
We denote the open and closed line segments connecting x, x' ∈ ℝ² by (x, x') and [x, x'] respectively, and D ⊆ ℝ² is called a convex set if for every x, x' ∈ D, [x, x'] ⊆ D.
Lemma (2.2.1)[3] :
Let f : ℝ² → ℝ be continuously differentiable in an open convex set D ⊆ ℝ². Then, for x = (x,y)ᵀ ∈ D and any nonzero perturbation p = (p_1, p_2)ᵀ ∈ ℝ², the directional derivative of f at x in the direction of p, defined by

D_p f(x) = D_p f(x,y) = lim_{h→0} [f(x + h p_1, y + h p_2) − f(x,y)] / h

exists and is equal to ∇f(x)ᵀ·p. For any x, x + p ∈ D,

f(x + p) = f(x) + ∫₀¹ ∇f(x + tp)ᵀ·p dt   (2.2.2)

and there exists z ∈ (x, x + p) such that

f(x + p) = f(x) + ∇f(z)ᵀ·p   (2.2.3)

[This is the Mean Value Theorem for functions of several variables.]
Proof :
We simply parameterise f along the line through x and x + p as a function of one variable g : ℝ → ℝ,

g(t) = f(x + tp) = f(x + tp_1, y + tp_2)

and apply the calculus of one variable to g. Differentiating g with respect to t gives

g'(t) = ∇f(x + tp)ᵀ·p   (2.2.4)

Then by the fundamental theorem of calculus, or Newton's theorem,

g(1) = g(0) + ∫₀¹ g'(t) dt

which, by the definition of g and (2.2.4), is equivalent to

f(x + p) = f(x) + ∫₀¹ ∇f(x + tp)ᵀ·p dt

and proves (2.2.2). Finally, by the mean value theorem for functions of one variable,

g(1) = g(0) + g'(ξ),   ξ ∈ (0,1)

which, by the definition of g and (2.2.4), is equivalent to

f(x + p) = f(x) + ∇f(x + ξp)ᵀ·p,   ξ ∈ (0,1)

and proves (2.2.3).   □
Definition (2.2.7)[3] :
A continuously differentiable function f : ℝ² → ℝ is said to be twice continuously differentiable at x ∈ ℝ² if (∂²f/∂x_i∂x_j)(x) exists and is continuous, 1 ≤ i, j ≤ 2. The Hessian of f at x is then defined as the 2 × 2 matrix whose (i,j)th element is (∂²f/∂x_i∂x_j)(x).
Clairaut's Theorem (2.2.1) :
Suppose f is defined on a set D ⊆ ℝ² that contains the point (a,b). If the functions f_xy and f_yx are both continuous on D, then

f_xy(a,b) = f_yx(a,b)
Definition (2.2.8)[3] :
A continuous function F : ℝ² → ℝ² is continuously differentiable at x ∈ ℝ² if each component function f(x,y) and g(x,y) is continuously differentiable at x. The derivative of F at x is sometimes called the Jacobian (matrix) of F at x, and its transpose is sometimes called the gradient of F at x.
The common notations are J(x) or F'(x) for the Jacobian and ∇F(x) for the gradient.
Definition (2.2.9) :
Let α be a solution of the system of nonlinear equations

f(x) = 0
g(x) = 0

and suppose that x_{n+1}, x_n and x_{n−1} are consecutive iterates close to the root α. Then the computational order of convergence can be approximated by

p ≈ ln( ‖x_{n+1} − α‖ / ‖x_n − α‖ ) / ln( ‖x_n − α‖ / ‖x_{n−1} − α‖ )

where ‖·‖ is the infinity norm.
Chapter 3
NUMERICAL SCHEMES FOR FUNCTIONS OF ONE VARIABLE

(3.1) Newton's Method [NM] :
Newton's algorithm to approximate the root α of the nonlinear equation f(x) = 0 is to start with an initial approximation x*_0 sufficiently close to α and to use the one-point iteration scheme

x*_{n+1} = x*_n − f(x*_n) / f'(x*_n)   (3.1.1)

where x*_n is the nth iterate.

It is important to understand how Newton's method is constructed. At each iteration step we construct a local linear model of our function f(x) at the point x*_n and solve for the root (x*_{n+1}) of the local model. In Newton's method [Figure (3.1.1)], this local linear model is the tangent drawn to the function f(x) at the current point x*_n.

The local linear model at x*_n is:

M_n(x) = f(x*_n) + f'(x*_n)(x − x*_n)   (3.1.2)

This local linear model can be interpreted [3] in another way. From Newton's Theorem:

f(x) = f(x*_n) + ∫_{x*_n}^{x} f'(λ) dλ   (3.1.3)

In Newton's method, the indefinite integral involved in (3.1.3) is approximated by the rectangle ABCD [see Figure (3.1.2)], i.e.

∫_{x*_n}^{x} f'(λ) dλ ≈ f'(x*_n)(x − x*_n)   (3.1.4)

which will result in the model given in (3.1.2).
Figure (3.1.1) : Newton's iterative step

Figure (3.1.2) : Approximating the area by the rectangle ABCD
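Scheme (3.1.1) translates directly into code. The following Python sketch is only an illustrative translation (the thesis implemented its algorithms in Turbo Pascal), and the test function f(x) = x³ − 5 is a hypothetical example:

```python
def newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Newton's Method (3.1.1): x_{n+1} = x_n - f(x_n)/f'(x_n)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:        # stopping criterion on the function value
            break
        x = x - fx / fprime(x)   # one Newton step
    return x

# Hypothetical example: root of f(x) = x^3 - 5, starting from x0 = 2.
root = newton(lambda x: x**3 - 5.0, lambda x: 3.0 * x**2, 2.0)
print(root)
```

The value printed approximates the cube root of 5.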
(3.2) Improved Newton's Method [INM] :
From Newton's Theorem:

f(x) = f(x_n) + ∫_{x_n}^{x} f'(λ) dλ   (3.2.1)

In the proposed scheme, we approximate the indefinite integral involved in (3.2.1) by the trapezium ABED, i.e.

∫_{x_n}^{x} f'(λ) dλ ≈ (1/2)(x − x_n)[f'(x_n) + f'(x)]   (3.2.2)

Figure (3.2.1) : Approximating the area by the trapezium ABED

Thus the local model equivalent to (3.1.2) is:

M_n(x) = f(x_n) + (1/2)(x − x_n)[f'(x_n) + f'(x)]   (3.2.3)
Note that, just like the linear model for Newton's Method, this nonlinear model and the derivative of the model agree with the function f(x) and the derivative function f'(x) respectively when x = x_n. In addition to these properties, the second derivatives of the model and the function agree at the current iterate x = x_n. Note that this property does not hold for the local model for Newton's method; i.e. the model matches the value of the function f(x) at x = x_n as well as its slope, f'(x_n), and its curvature in terms of f''(x_n). The resultant model is a tangential nonlinear curve to the function f(x) at x = x_n.
We take the next iterative point x_{n+1} as the root of the local model (3.2.3), i.e. M_n(x_{n+1}) = 0:

f(x_n) + (1/2)(x_{n+1} − x_n)[f'(x_n) + f'(x_{n+1})] = 0

x_{n+1} = x_n − 2f(x_n) / [f'(x_n) + f'(x_{n+1})]

Obviously, this is an implicit scheme, which requires the derivative of the function at the (n+1)th iterative step to calculate the (n+1)th iterate itself. We could overcome this difficulty by making use of Newton's iterative step to compute the (n+1)th iterate on the right hand side.

Thus the resulting new scheme, which is very appropriately named the Improved Newton's Method (INM), is:

x_{n+1} = x_n − 2f(x_n) / [f'(x_n) + f'(x*_{n+1})],   n = 0, 1, 2, ...   (3.2.4)

where x*_{n+1} = x_n − f(x_n) / f'(x_n).
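Scheme (3.2.4) costs one extra derivative evaluation per iteration compared with Newton's method. A Python sketch (again an illustrative translation rather than the thesis's Pascal code, with the hypothetical test function f(x) = x³ − 5):

```python
def improved_newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Improved Newton's Method (3.2.4):
    x*_{n+1} = x_n - f(x_n)/f'(x_n)                     (Newton predictor)
    x_{n+1}  = x_n - 2 f(x_n) / [f'(x_n) + f'(x*_{n+1})]
    """
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        dfx = fprime(x)
        x_star = x - fx / dfx                 # Newton predictor step
        x = x - 2.0 * fx / (dfx + fprime(x_star))
    return x

root = improved_newton(lambda x: x**3 - 5.0, lambda x: 3.0 * x**2, 2.0)
print(root)
```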
(3.3) Third Order Convergent Methods: (3.3.1) Chebyshev Method [5] [CM] :
Chebyshev method uses a second degree Taylor polynomial as the local model of the function near the point x = x n .
Thus, the local model for the Chebyshev method is:

M_n(x) = f(x_n) + f'(x_n)(x − x_n) + (1/2)f''(x_n)(x − x_n)²

The next iterative value (x_{n+1}) is taken as the root of this local model. Replacing (x_{n+1} − x_n) on the right hand side by −f(x_n)/f'(x_n) [by use of Newton's method], we get the Chebyshev method:

x_{n+1} = x_n − f(x_n)/f'(x_n) − (1/2)[f''(x_n)/f'(x_n)][f(x_n)/f'(x_n)]²,   n = 0, 1, 2, ...   (3.3.1)

(3.3.2) Multipoint Iteration Methods [5] :
It is possible to modify the Chebyshev method and obtain third order iterative methods which do not require the evaluation of the second derivative. We give below two multipoint iterative methods.

(i) [MPM1]

x_{n+1} = x_n − f(x_n) / f'(x*_{n+1}),   n = 0, 1, 2, ...   (3.3.2)

where x*_{n+1} = x_n − (1/2) f(x_n) / f'(x_n)

This method requires one function and two first derivative evaluations per iteration.

(ii) [MPM2]

x_{n+1} = x_n − f(x_n)/f'(x_n) − f(x*_{n+1})/f'(x_n),   n = 0, 1, 2, ...   (3.3.3)

where x*_{n+1} = x_n − f(x_n) / f'(x_n)

This method requires two function and one first derivative evaluations per iteration.
(3.3.3) Parabolic Extension of Newton's Method [4] [PENM] :
This method involves the construction of a parabola through the point (x n,i{x n», which also matches the value of the slope, f I (x n), of the function, as well as its curvature in terms of
fll
(x n). The desired quadratic is precisely the second degree
Taylor polynomial
The desired root a of i{x) should lie near a root x n+l, of P(x). Solving for (x n+l - X n) by the quadratic formula, gives
x
=X + -j'(x n )±[f'(xn)2 -2j(x n )j"(x n )] X n+ n j "( Xn) 1
(3.3.4)
First there is the ambiguity of the '±' sign. Here, the sign opposite to that of the term in front of it is taken (i.e. taking the sign of f
I
(x
n», so that this adjustment term is
forced to become zero. It can be handled automatically by dividing numerator and denominator by f I (x n) (assuming a non-zero derivative) and selecting the positive branch. Then (3.13) becomes
x
=
n+l
x + -1 + {1- 2[j(x n) I J'(xn)][j"(xJ I J'(xJ]}Yz n J"(x n) I J'(xJ
~(3.3.5)
Rationalising the numerator, gives
2[f(xJ I J'(xJ] [PENlVl] xn+1 =xn -1+{1-2[f(x,JI f'(x,J][f"(x,JI f'(x,J]} X ~(3.3.6)
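The third order schemes of this section can be sketched as one-step update rules. The following Python functions are illustrative translations (not the thesis's Pascal code), with f(x) = x³ − 5 as a hypothetical test function:

```python
def chebyshev_step(f, df, d2f, x):
    """Chebyshev Method (3.3.1)."""
    u = f(x) / df(x)
    return x - u - 0.5 * (d2f(x) / df(x)) * u * u

def mpm1_step(f, df, x):
    """Multipoint method MPM1 (3.3.2): derivative at the half Newton step."""
    x_star = x - 0.5 * f(x) / df(x)
    return x - f(x) / df(x_star)

def mpm2_step(f, df, x):
    """Multipoint method MPM2 (3.3.3): correct the full Newton step."""
    x_star = x - f(x) / df(x)
    return x_star - f(x_star) / df(x)

def penm_step(f, df, d2f, x):
    """Parabolic Extension of Newton's Method (3.3.6); the quantity under
    the square root may go negative far from the root, in which case
    PENM fails (see the discussion in Chapter 6)."""
    u = f(x) / df(x)
    disc = 1.0 - 2.0 * u * (d2f(x) / df(x))
    return x - 2.0 * u / (1.0 + disc ** 0.5)

# Hypothetical test function f(x) = x^3 - 5 and its derivatives.
f = lambda x: x**3 - 5.0
df = lambda x: 3.0 * x**2
d2f = lambda x: 6.0 * x

x = 2.0
for _ in range(4):
    x = chebyshev_step(f, df, d2f, x)
print(x)
```

Each stepper converges to the cube root of 5 in a handful of iterations from x = 2.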
Chapter 4
NUMERICAL SCHEMES FOR FUNCTIONS OF TWO VARIABLES

(4.1) Newton's Method for systems of nonlinear equations with two variables :
Suppose that f : D ⊆ ℝ² → ℝ and g : D ⊆ ℝ² → ℝ. Then the problem of finding the solution of a system of two nonlinear equations can be stated as: find (α,β) ∈ D s.t.

f(α,β) = 0
g(α,β) = 0   (4.1.1)

Now Newton's scheme for this problem is to solve the system of linear equations

f_x(x*_n, y*_n)(x*_{n+1} − x*_n) + f_y(x*_n, y*_n)(y*_{n+1} − y*_n) = −f(x*_n, y*_n)
g_x(x*_n, y*_n)(x*_{n+1} − x*_n) + g_y(x*_n, y*_n)(y*_{n+1} − y*_n) = −g(x*_n, y*_n)   (4.1.2)

where (x*_n, y*_n) is the nth iterate. Letting p_1 = x*_{n+1} − x*_n and p_2 = y*_{n+1} − y*_n, we can find the (n+1)th iterate:

x*_{n+1} = p_1 + x*_n
y*_{n+1} = p_2 + y*_n   (4.1.3)

As in the one dimensional case, it is possible to interpret Newton's scheme geometrically for the two dimensional case. Here we also approximate the two functions f(x,y) and g(x,y) by local linear models in the neighbourhood of (x*_n, y*_n). In Newton's method, these local linear models are the tangent planes drawn to the functions f(x,y) and g(x,y) at the current iterate (x*_n, y*_n) [say M_1(x,y) and M_2(x,y) respectively].
Then the corresponding equations of these planes are:

M_1(x,y) = f(x*_n, y*_n) + f_x(x*_n, y*_n)(x − x*_n) + f_y(x*_n, y*_n)(y − y*_n)
M_2(x,y) = g(x*_n, y*_n) + g_x(x*_n, y*_n)(x − x*_n) + g_y(x*_n, y*_n)(y − y*_n)   (4.1.4)

At the next iterate (x*_{n+1}, y*_{n+1}), it is assumed that both M_1(x,y) and M_2(x,y) vanish, i.e.

M_1(x*_{n+1}, y*_{n+1}) = 0,   M_2(x*_{n+1}, y*_{n+1}) = 0

Now we solve these two equations simultaneously to obtain (x*_{n+1}, y*_{n+1}).

We can visualise (x*_{n+1}, y*_{n+1}) geometrically as follows. Since we are solving the two nonlinear equations f(x,y) = 0 and g(x,y) = 0 in the xy-plane, we are interested only in the traces of the planes M_1(x,y) and M_2(x,y) in the xy-plane. These two traces are lines (say L_1 and L_2) in the xy-plane. Then (x*_{n+1}, y*_{n+1}) is the intersection point of these lines L_1 and L_2.

As in the one dimensional case, the above linear model can be interpreted in another way. Let x*_n = (x*_n, y*_n)ᵀ and p = (p_1, p_2)ᵀ; then by (2.2.2) we obtain

f(x*_n + p) = f(x*_n) + ∫₀¹ ∇f(x*_n + tp)ᵀ·p dt
 = f(x*_n) + p_1 ∫₀¹ f_x(x*_n + tp_1, y*_n + tp_2) dt + p_2 ∫₀¹ f_y(x*_n + tp_1, y*_n + tp_2) dt

Let x = x*_n + p; then we have

f(x,y) = f(x*_n, y*_n) + (x − x*_n) ∫₀¹ f_x(x*_n + tp_1, y*_n + tp_2) dt
       + (y − y*_n) ∫₀¹ f_y(x*_n + tp_1, y*_n + tp_2) dt   (4.1.7)

The local model M_1(x,y) [see equation (4.1.4)] is obtained by approximating the two indefinite integrals by the rectangles

∫₀¹ f_x(x*_n + tp_1, y*_n + tp_2) dt ≈ f_x(x*_n, y*_n)
∫₀¹ f_y(x*_n + tp_1, y*_n + tp_2) dt ≈ f_y(x*_n, y*_n)

Similarly, we can obtain the local model M_2(x,y).
Algorithm (4.1.1) : Newton's Method for Systems of Nonlinear Equations with Two Variables
Given f : ℝ² → ℝ and g : ℝ² → ℝ continuously differentiable and x_0 = (x_0, y_0)ᵀ ∈ ℝ²: at each iteration n, solve the linear system (4.1.2) for p_n = (p_1, p_2)ᵀ and set

x*_{n+1} = x*_n + p_n   (4.1.8)
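Algorithm (4.1.1) can be sketched by solving the 2 × 2 linear system (4.1.2) with Cramer's rule. The test system x² + y² = 4, xy = 1 below is a hypothetical example, not one of the thesis's test problems:

```python
def newton2(f, g, fx, fy, gx, gy, x0, y0, tol=1e-12, max_iter=50):
    """Newton's Method for two equations (4.1.2)-(4.1.3):
    solve J p = -(f, g) at each step and update the iterate."""
    x, y = x0, y0
    for _ in range(max_iter):
        fv, gv = f(x, y), g(x, y)
        if abs(fv) < tol and abs(gv) < tol:
            break
        a, b = fx(x, y), fy(x, y)      # Jacobian row for f
        c, d = gx(x, y), gy(x, y)      # Jacobian row for g
        det = a * d - b * c            # Jacobian determinant
        p1 = (-fv * d + gv * b) / det  # Cramer's rule for the 2x2 system
        p2 = (-gv * a + fv * c) / det
        x, y = x + p1, y + p2
    return x, y

# Hypothetical example: x^2 + y^2 = 4, x*y = 1.
x, y = newton2(lambda x, y: x*x + y*y - 4.0,
               lambda x, y: x*y - 1.0,
               lambda x, y: 2.0*x, lambda x, y: 2.0*y,
               lambda x, y: y,     lambda x, y: x,
               2.0, 0.5)
print(x, y)
```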
(4.2) Improved Newton's Method for systems of nonlinear equations with two variables :

(4.2.1) Formulation of local models :
From equation (4.1.7):

f(x,y) = f(x_n, y_n) + (x − x_n) ∫₀¹ f_x(x_n + tp_1, y_n + tp_2) dt
       + (y − y_n) ∫₀¹ f_y(x_n + tp_1, y_n + tp_2) dt

In this case, we approximate the indefinite integrals by trapeziums:

∫₀¹ f_x(x_n + tp_1, y_n + tp_2) dt ≈ (1/2)[f_x(x_n, y_n) + f_x(x_n + p_1, y_n + p_2)]
∫₀¹ f_y(x_n + tp_1, y_n + tp_2) dt ≈ (1/2)[f_y(x_n, y_n) + f_y(x_n + p_1, y_n + p_2)]

Let x = x_n + p; then we have

∫₀¹ f_x(x_n + tp_1, y_n + tp_2) dt ≈ (1/2)[f_x(x_n, y_n) + f_x(x,y)]
∫₀¹ f_y(x_n + tp_1, y_n + tp_2) dt ≈ (1/2)[f_y(x_n, y_n) + f_y(x,y)]

Thus the local model for the function f(x,y) in a neighbourhood of x_n is:

m_1(x,y) = f(x_n, y_n) + (1/2)(x − x_n)[f_x(x_n, y_n) + f_x(x,y)]
         + (1/2)(y − y_n)[f_y(x_n, y_n) + f_y(x,y)]   (4.2.1)

Similarly, the local model for the function g(x,y) in a neighbourhood of x_n is:

m_2(x,y) = g(x_n, y_n) + (1/2)(x − x_n)[g_x(x_n, y_n) + g_x(x,y)]
         + (1/2)(y − y_n)[g_y(x_n, y_n) + g_y(x,y)]   (4.2.2)
(4.2.2) Properties of the local models :

(i) When x = x_n and y = y_n, m_1(x_n, y_n) = f(x_n, y_n) and m_2(x_n, y_n) = g(x_n, y_n).

(ii) The gradient of m_1 at x is

∇m_1(x) = [ (1/2)(f_x(x_n) + f_x(x)) + (1/2)(x − x_n)f_xx(x) + (1/2)(y − y_n)f_yx(x),
            (1/2)(f_y(x_n) + f_y(x)) + (1/2)(y − y_n)f_yy(x) + (1/2)(x − x_n)f_xy(x) ]ᵀ

When x = x_n and y = y_n, ∇m_1(x_n) = [f_x(x_n), f_y(x_n)]ᵀ = ∇f(x_n). Similarly, it can be shown that ∇m_2(x_n) = ∇g(x_n).

Since the normal line to a surface at a point is parallel to the gradient vector at that point, the equality of the function values and the gradient vectors implies that the resultant models are tangent surfaces to the corresponding functions at the point x_n.
(iii) Let H_1(x) = (h_ij) be the Hessian matrix of the model m_1(x); then we have

h_11 = (1/2)[2f_xx(x) + (x − x_n)f_xxx(x) + (y − y_n)f_yxx(x)]
h_12 = (1/2)[f_xy(x) + f_yx(x) + (x − x_n)f_xxy(x) + (y − y_n)f_yxy(x)]
h_21 = (1/2)[f_xy(x) + f_yx(x) + (y − y_n)f_yyx(x) + (x − x_n)f_xyx(x)]
h_22 = (1/2)[2f_yy(x) + (y − y_n)f_yyy(x) + (x − x_n)f_xyy(x)]

When x = x_n, the elements of the Hessian matrix become

h_11 = f_xx(x_n)
h_12 = (1/2)[f_xy(x_n) + f_yx(x_n)] = f_xy(x_n)
h_21 = (1/2)[f_xy(x_n) + f_yx(x_n)] = f_yx(x_n)
h_22 = f_yy(x_n)

[assuming both f_xy and f_yx continuous on D and by the use of Clairaut's Theorem].

Thus, the Hessian matrices H_1(x) and H_2(x) for the local models m_1(x) and m_2(x) agree with the Hessian matrices of the two functions f(x) and g(x) at x = x_n respectively, where H_2(x) is obtained from m_2(x) in the same way. Note that this property does not hold for the local models for Newton's method.
At the next iterative point x_{n+1}, we assume that both the local models m_1(x) and m_2(x) vanish. Then we obtain the following system of equations:

(x_{n+1} − x_n)[f_x(x_n) + f_x(x_{n+1})] + (y_{n+1} − y_n)[f_y(x_n) + f_y(x_{n+1})] = −2f(x_n)
(x_{n+1} − x_n)[g_x(x_n) + g_x(x_{n+1})] + (y_{n+1} − y_n)[g_y(x_n) + g_y(x_{n+1})] = −2g(x_n)

Obviously, as in the one dimensional case, this is an implicit scheme, which requires the first partial derivatives of the functions f and g at the (n+1)th iterative step to calculate the (n+1)th iterate itself. We could overcome this problem by making use of Newton's iterative step to compute the first partial derivatives of f and g at the (n+1)th iterate.

Then the resulting scheme is:

(x_{n+1} − x_n)[f_x(x_n) + f_x(x*_{n+1})] + (y_{n+1} − y_n)[f_y(x_n) + f_y(x*_{n+1})] = −2f(x_n)
(x_{n+1} − x_n)[g_x(x_n) + g_x(x*_{n+1})] + (y_{n+1} − y_n)[g_y(x_n) + g_y(x*_{n+1})] = −2g(x_n)

where x*_{n+1} is the (n+1)th iterate obtained by applying Newton's Method.
Algorithm (4.2.1) : Improved Newton's Method for Systems of Nonlinear Equations with Two Variables
Given f : ℝ² → ℝ and g : ℝ² → ℝ continuously differentiable and x_0 = (x_0, y_0)ᵀ ∈ ℝ²: at each iteration n, solve the linear system above for p_n = (p_1, p_2)ᵀ and set

x_{n+1} = x_n + p_n   (4.2.3)
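Algorithm (4.2.1) adds a Newton predictor step and averages the partial derivatives between the current iterate and the predictor. A Python sketch on the same hypothetical test system used for Newton's method (again, an illustration rather than the thesis's Pascal implementation):

```python
def solve2(a, b, c, d, r1, r2):
    """Solve the 2x2 system [a b; c d](p1, p2)^T = (r1, r2)^T by Cramer's rule."""
    det = a * d - b * c
    return (r1 * d - b * r2) / det, (a * r2 - r1 * c) / det

def improved_newton2(f, g, fx, fy, gx, gy, x0, y0, tol=1e-12, max_iter=50):
    """Improved Newton's Method for two equations: Jacobian entries are
    summed over the current iterate and the Newton predictor x*_{n+1},
    with right-hand side -2(f, g), as in the scheme above."""
    x, y = x0, y0
    for _ in range(max_iter):
        fv, gv = f(x, y), g(x, y)
        if abs(fv) < tol and abs(gv) < tol:
            break
        # Newton predictor step
        p1, p2 = solve2(fx(x, y), fy(x, y), gx(x, y), gy(x, y), -fv, -gv)
        xs, ys = x + p1, y + p2
        # Corrector: averaged partial derivatives
        a = fx(x, y) + fx(xs, ys)
        b = fy(x, y) + fy(xs, ys)
        c = gx(x, y) + gx(xs, ys)
        d = gy(x, y) + gy(xs, ys)
        q1, q2 = solve2(a, b, c, d, -2.0 * fv, -2.0 * gv)
        x, y = x + q1, y + q2
    return x, y

# Hypothetical example: x^2 + y^2 = 4, x*y = 1.
x, y = improved_newton2(lambda x, y: x*x + y*y - 4.0,
                        lambda x, y: x*y - 1.0,
                        lambda x, y: 2.0*x, lambda x, y: 2.0*y,
                        lambda x, y: y,     lambda x, y: x,
                        2.0, 0.5)
print(x, y)
```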
Chapter 5
ANALYSIS OF CONVERGENCE OF INM FOR FUNCTIONS OF ONE VARIABLE

(5.1) Second Order Convergence

Lemma (5.1.1) :
Let f : D → ℝ for an open interval D and let f' ∈ Lip_γ(D). Assume that for some ρ > 0, |f'(x)| ≥ ρ for every x ∈ D. If f(x) = 0 has a solution α ∈ D, then

|x*_{n+1} − α| ≤ (γ/2ρ)|x_n − α|²

Proof :
x*_{n+1} = x_n − f(x_n)/f'(x_n)
x*_{n+1} − α = (x_n − α) − f(x_n)/f'(x_n)
             = [f'(x_n)]⁻¹ { f(α) − f(x_n) − f'(x_n)(α − x_n) }   [since f(α) = 0]

By Lemma (2.1.2) and |f'(x_n)| ≥ ρ,

|x*_{n+1} − α| ≤ (1/ρ)(γ/2)|x_n − α|² = (γ/2ρ)|x_n − α|²   □

Theorem (5.1.1) :
Let f : D → ℝ for an open interval D and let f' ∈ Lip_γ(D). Assume that for some ρ > 0, |f'(x)| ≥ ρ for every x ∈ D. If f(x) has a simple root at α ∈ D and x_0 is sufficiently close to α, then ∃ δ > 0 such that the Improved Newton's method defined by (3.2.4) satisfies the following inequality:

e_{n+1} ≤ (2γ/δ) { 1 + (γ/4ρ) e_n } e_n²   (5.1.1)

Here e_n = |x_n − α|.
Proof :
By the Improved Newton's Method:

x_{n+1} = x_n − 2f(x_n) / [f'(x_n) + f'(x*_{n+1})],   n = 0, 1, 2, ...

where x*_{n+1} = x_n − f(x_n)/f'(x_n), and f(α) = 0. Thus

x_{n+1} − α = (x_n − α) − 2f(x_n) / [f'(x_n) + f'(x*_{n+1})]

Then by the triangle inequality,

e_{n+1} ≤ |[f'(x_n) + f'(x*_{n+1})]⁻¹| { |f(α) − f(x_n) − f'(x_n)(α − x_n)|
        + |f(α) − f(x_n) − f'(x*_{n+1})(α − x_n)| }

By Lemma (2.1.2),

e_{n+1} ≤ |[f'(x_n) + f'(x*_{n+1})]⁻¹| { (γ/2)e_n²
        + |f(α) − f(x_n) − f'(x_n)(α − x_n) + f'(x_n)(α − x_n) − f'(x*_{n+1})(α − x_n)| }

Then again by the triangle inequality, Lemma (2.1.2) and the Lipschitz continuity of f',

e_{n+1} ≤ |[f'(x_n) + f'(x*_{n+1})]⁻¹| { (γ/2)e_n² + (γ/2)e_n² + γ|x*_{n+1} − x_n| e_n }
        ≤ |[f'(x_n) + f'(x*_{n+1})]⁻¹| { γ e_n² + γ|(x*_{n+1} − α) + (α − x_n)| e_n }
        ≤ |[f'(x_n) + f'(x*_{n+1})]⁻¹| { γ e_n² + γ|x*_{n+1} − α| e_n + γ|α − x_n| e_n }

By Lemma (5.1.1),

e_{n+1} ≤ |[f'(x_n) + f'(x*_{n+1})]⁻¹| { γ e_n² + (γ²/2ρ) e_n³ + γ e_n² }
        = |[f'(x_n) + f'(x*_{n+1})]⁻¹| { 2γ e_n² + (γ²/2ρ) e_n³ }   (5.1.2)

Obviously f'(x) + f'(y) ≠ 0 for all x, y ∈ D. For if f'(x) + f'(y) = 0 for some x, y ∈ D, then either f'(x) = f'(y) = 0, which contradicts the assumption |f'(x)| ≥ ρ > 0 for all x ∈ D, or f'(x) = −f'(y) ≠ 0, so that f'(x) < 0 < f'(y) or f'(y) < 0 < f'(x); then ∃ z ∈ D s.t. f'(z) = 0 (by the Intermediate Value Theorem, since f' ∈ Lip_γ(D) and hence f' is continuous on D), which again contradicts the assumption |f'(x)| ≥ ρ > 0 for all x ∈ D.

Thus |f'(x) + f'(y)| > 0 for all x, y ∈ D. Hence ∃ δ > 0 s.t. |f'(x) + f'(y)| > δ > 0 for all x, y ∈ D. In particular, we have

|f'(x_n) + f'(x*_{n+1})| ≥ δ,   i.e.   |[f'(x_n) + f'(x*_{n+1})]⁻¹| ≤ (1/δ)   (5.1.3)

Substituting (5.1.3) in (5.1.2),

e_{n+1} ≤ (2γ/δ) { 1 + (γ/4ρ) e_n } e_n²   □

(5.2) Third Order Convergence
Theorem (5.2.1) :
Let f : D → ℝ for an open interval D. Assume that f has first, second and third derivatives in the interval D. If f(x) has a simple root at α ∈ D and x_0 is sufficiently close to α, then the Improved Newton's method defined by (3.2.4) satisfies the following error equation:

e_{n+1} = (C_2² + (1/2)C_3) e_n³ + O(e_n⁴)   (5.2.1)

where e_n = x_n − α and C_j = (1/j!) f^(j)(α)/f^(1)(α), j = 2, 3, ...

Proof :
The Improved Newton Method [INM] is

x_{n+1} = x_n − 2f(x_n) / [f'(x_n) + f'(x*_{n+1})],   n = 0, 1, 2, ...

where x*_{n+1} = x_n − f(x_n)/f'(x_n). Let α be a simple root of f(x) [i.e. f(α) = 0 and f'(α) ≠ 0] and x_n = α + e_n. We use the following Taylor expansions:

f(x_n) = f(α + e_n) = f^(1)(α)[e_n + C_2 e_n² + C_3 e_n³ + O(e_n⁴)]   (5.2.2)

f^(1)(x_n) = f^(1)(α + e_n) = f^(1)(α) + f^(2)(α)e_n + (1/2!)f^(3)(α)e_n² + O(e_n³)
           = f^(1)(α)[1 + 2C_2 e_n + 3C_3 e_n² + O(e_n³)]   (5.2.3)

Dividing (5.2.2) by (5.2.3),

f(x_n)/f^(1)(x_n) = [e_n + C_2 e_n² + C_3 e_n³ + O(e_n⁴)][1 + 2C_2 e_n + 3C_3 e_n² + O(e_n³)]⁻¹
 = [e_n + C_2 e_n² + C_3 e_n³ + O(e_n⁴)]{1 − [2C_2 e_n + 3C_3 e_n² + O(e_n³)]
   + [2C_2 e_n + 3C_3 e_n² + O(e_n³)]² − ...}
 = [e_n + C_2 e_n² + C_3 e_n³ + O(e_n⁴)][1 − 2C_2 e_n + (4C_2² − 3C_3)e_n² + O(e_n³)]
 = e_n − C_2 e_n² + (2C_2² − 2C_3)e_n³ + O(e_n⁴)   (5.2.4)

Then

x*_{n+1} = x_n − f(x_n)/f^(1)(x_n)
         = α + e_n − [e_n − C_2 e_n² + (2C_2² − 2C_3)e_n³ + O(e_n⁴)]   (by 5.2.4)
         = α + C_2 e_n² + 2(C_3 − C_2²)e_n³ + O(e_n⁴)   (5.2.5)

Again by (5.2.5) and the Taylor expansion,

f^(1)(x*_{n+1}) = f^(1)(α)[1 + 2C_2² e_n² + O(e_n³)]   (5.2.6)

Adding (5.2.3) and (5.2.6),

f^(1)(x_n) + f^(1)(x*_{n+1}) = f^(1)(α)[2 + 2C_2 e_n + (2C_2² + 3C_3)e_n² + O(e_n³)]   (5.2.7)

From equations (5.2.2) and (5.2.7),

2f(x_n) / [f^(1)(x_n) + f^(1)(x*_{n+1})] = e_n − (C_2² + (1/2)C_3)e_n³ + O(e_n⁴)   (5.2.8)

Thus

x_{n+1} = x_n − 2f(x_n)/[f^(1)(x_n) + f^(1)(x*_{n+1})]
        = α + e_n − [e_n − (C_2² + (1/2)C_3)e_n³ + O(e_n⁴)]   (by 5.2.8)

i.e.

e_{n+1} = (C_2² + (1/2)C_3)e_n³ + O(e_n⁴)   (5.2.9)   □

(5.2.9) establishes the third order convergence of the INM, beyond any doubt.
34
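The scheme analysed above is short enough to state as code. The following Python routine is a sketch for illustration only (the thesis's own implementation, in Pascal, appears in Appendix B); the test function and its root are taken from Table 6.3.

```python
def improved_newton(f, fprime, x0, tol=1e-12, max_iter=50):
    """Improved Newton's Method (trapezoidal variant of Newton's Method):
    predictor:  x*_{n+1} = x_n - f(x_n)/f'(x_n)
    corrector:  x_{n+1}  = x_n - 2 f(x_n) / [f'(x_n) + f'(x*_{n+1})]
    """
    x = x0
    for n in range(max_iter):
        fx = f(x)
        x_star = x - fx / fprime(x)                     # ordinary Newton predictor
        x_new = x - 2.0 * fx / (fprime(x) + fprime(x_star))
        if abs(x_new - x) < tol:
            return x_new, n + 1
        x = x_new
    return x, max_iter

# Function 1 of Table 6.3: f(x) = x^3 + 4x^2 - 10, root = 1.3652300134...
f = lambda x: x**3 + 4*x**2 - 10
fp = lambda x: 3*x**2 + 8*x
root, iters = improved_newton(f, fp, 1.0)
```

Each iteration costs one evaluation of f and two of f', in line with the function-evaluation counts discussed in Chapter 6.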
Chapter 6
COMPUTATIONAL RESULTS AND DISCUSSION

The objective of this study was to improve Newton's Method, which is used to solve nonlinear equations in one variable and systems of nonlinear equations. We designed a new method called the Improved Newton's Method [INM] for functions of one variable and functions of two variables. Even though we have developed INM up to two variables only, it can readily be extended to functions of several variables. The improvement was made by replacing the indefinite integral involved in the derivation of Newton's Method by the area of a trapezium instead of a rectangle.
In chapter 2 we discussed the preliminaries required to design and analyse Newton's Method [NM] and the Improved Newton's Method. Furthermore, we defined the order of convergence of a numerical scheme and showed how to measure the order of convergence approximately when three consecutive iterates close to the required root are available.
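That three-iterate estimator follows directly from the definition of the order of convergence; a minimal Python sketch (the absolute errors of three consecutive iterates are assumed to be available from a run with a known root):

```python
import math

def coc(e_prev, e_curr, e_next):
    """Computational order of convergence estimated from three consecutive
    absolute errors |x_{n-1} - a|, |x_n - a|, |x_{n+1} - a|:
        rho ~ ln(e_{n+1}/e_n) / ln(e_n/e_{n-1})
    """
    return math.log(e_next / e_curr) / math.log(e_curr / e_prev)

# A sequence obeying e_{n+1} = e_n^2 (pure second order) gives COC = 2:
p = coc(1e-1, 1e-2, 1e-4)
```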
In chapter 3, we discussed Newton's Method [NM], the Improved Newton's Method [INM] and existing third order convergent methods such as the Chebyshev Method [CM], Multipoint Iterative Methods [MPM1 and MPM2] and the Parabolic Extension of Newton's Method [PENM] for nonlinear equations in one variable.
We applied these iterative methods to polynomial, exponential and trigonometric functions [Table 6.3]. The efficiency of these methods was compared by the number of iterations required to reach the actual root to 15 decimal places, starting from the same initial guess. Note that in all cases INM converges to the root faster than NM. Compared to the third order convergent methods available, in most cases INM takes the same number of iterations, and sometimes even fewer, than the existing third order convergent methods. The computational order of convergence [COC] suggests that INM is of third order. For certain functions the COC is even more than three, and for Newton's Method it is even less than two for most functions. In Table (6.3), PENM does not converge to the root in some cases due to the occurrence of a negative value in the square root of the scheme [Eqn 3.3.4].
Compared with the third order convergent methods, INM is simpler than the Chebyshev Method and PENM. The other important characteristic of INM is that, unlike the other third or higher order methods, it does not require the computation of second or higher derivatives of the function to carry out iterations.
The local model for INM has an additional property which does not hold for the local linear model of Newton's Method: the second derivative of the local model agrees with the second derivative of the function at the current iterate. In other words, the model matches the function's slope as well as its curvature, in terms of the second derivative, at the current iterate.
In chapter 5, we have shown that INM is at least third order convergent provided the first, second and third derivatives of the function exist [Theorem 5.2.1]. Moreover, the suggested method guarantees second order convergence whenever the first derivative of the function exists and is Lipschitz continuous in a neighbourhood of the root [Theorem 5.1.1].
Apparently, INM needs one more function evaluation at each iteration when compared to Newton's Method. However, it is evident from the computed results [Table 6.3] that the total number of function evaluations required is less than that of Newton's Method.
In chapter 4, we discussed Newton's Method for systems of nonlinear equations with two variables and gave a geometric interpretation. In section 4.2.1 we extended INM to systems of nonlinear equations with two variables. Moreover, we showed that the local models for INM are tangent surfaces to the corresponding functions at the current iterate. Just as in the case of functions of one variable, the local models for INM have an additional property that Newton's Method lacks [Section 4.2.2]: the Hessian matrices of the local models agree with the Hessian matrices of the corresponding functions at the present iterate.
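The extension described here can be sketched as follows; the Python below is for illustration only (the thesis's Pascal version is in Appendix B), the 2x2 linear solves are done by Cramer's rule, and the starting point (0, 0) is our own choice. The test system is system 3 of Table 6.4.

```python
def solve2(J, r):
    """Solve the 2x2 linear system J s = r by Cramer's rule."""
    det = J[0][0]*J[1][1] - J[0][1]*J[1][0]
    return ((r[0]*J[1][1] - r[1]*J[0][1]) / det,
            (J[0][0]*r[1] - J[1][0]*r[0]) / det)

def improved_newton_2d(F, J, x0, tol=1e-12, max_iter=100):
    """INM for two equations: a Newton predictor step with J(x_n),
    then a corrector using the averaged Jacobian [J(x_n) + J(x*_{n+1})]/2."""
    x, y = x0
    for n in range(max_iter):
        Fx, Jx = F(x, y), J(x, y)
        sx, sy = solve2(Jx, (-Fx[0], -Fx[1]))
        xs, ys = x + sx, y + sy                          # Newton predictor
        Js = J(xs, ys)
        avg = [[(Jx[i][j] + Js[i][j]) / 2 for j in range(2)] for i in range(2)]
        dx, dy = solve2(avg, (-Fx[0], -Fx[1]))
        x_new, y_new = x + dx, y + dy
        if max(abs(x_new - x), abs(y_new - y)) < tol:
            return (x_new, y_new), n + 1
        x, y = x_new, y_new
    return (x, y), max_iter

# System 3 of Table 6.4: f = x^2 - 10x + y^2 + 8, g = x y^2 + x - 10y + 8; root (1, 1)
F = lambda x, y: (x*x - 10*x + y*y + 8, x*y*y + x - 10*y + 8)
J = lambda x, y: [[2*x - 10, 2*y], [y*y + 1, 2*x*y - 10]]
(rx, ry), iters2 = improved_newton_2d(F, J, (0.0, 0.0))
```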
We applied these two iterative methods [NM and INM] to systems of nonlinear equations in two variables with the same initial guess [Table 6.4]. As in the one dimensional case, the efficiency of these methods was compared by the number of iterations required to reach the actual root to 15 decimal places. As expected, by applying INM to systems of nonlinear equations, we arrive at the root faster.
The computational order of convergence [COC] for INM was almost three for most of the systems we checked. For certain systems the COC is even more than three, and for Newton's Method it is less than two for most systems. These results suggest that the third order convergence of INM is valid for systems of nonlinear equations as well.
By applying both Newton's Method and the Improved Newton's Method to the car loan problem in section A.3 of Appendix - A, the following results were obtained.
    n   Newton's Method     Error
    1   2.3886456015E-02    7.6113543985E-02
    2   1.2742981700E-02    1.1143474316E-02
    3   8.9187859799E-03    3.8241957199E-03
    4   7.8291580298E-03    1.0896279701E-03
    5   7.7032682357E-03    1.2588977406E-04
    6   7.7014728591E-03    1.7953765763E-06
    7   7.7014724900E-03    3.6913405665E-10

Table (6.1) Newton's Method for the car loan problem
Monthly rate = 0.77%   Annual rate = 9.24%

    n   Improved Newton's Method   Error
    1   6.5887577790E-03           9.3411242221E-02
    2   7.6750417329E-03           1.0862839539E-03
    3   7.7014722681E-03           2.6430535158E-05
    4   7.7014724894E-03           2.2135537847E-10

Table (6.2) Improved Newton's Method for the car loan problem
Monthly rate = 0.77%   Annual rate = 9.24%
In both cases we used the same initial guess (= 0.10) and ultimately obtained the annual interest rate as 9.24%. According to the above tables, INM gives the monthly interest rate faster than Newton's Method. Notice that the number of function evaluations required to obtain this result is 14 for NM and only 12 for INM.
    Function f(x)                                    x0             Root
    1) x^3 + 4x^2 - 10                               -0.5, -0.3     1.36523001341448
    2) sin^2(x) - x^2 + 1                            1, 3           1.40449164821621
    3) x^2 - e^x - 3x + 2                            2              0.257530285439771
    4) cos(x) - x                                    1, 1.7, -0.3   0.739085133214758
    5) (x - 1)^3 - 1                                 3.5, 2.5       2
    6) x^3 - 10                                      1.5            2.15443469003367
    7) x e^(x^2) - sin^2(x) + 3cos(x) + 5            -2             -1.20764782713013
    8) x^2 sin^2(x) + exp[x^2 cos(x) + sin(x)] - 28  5              4.82458931731526
    9) exp(x^2 + 7x - 30) - 1                        8, 3.5, 3.25   3

For each function and initial guess x0, the table records the number of iterations
required by each method to approximate the root to 15 decimal places, the COC of
each method, and the total number of function evaluations [NOFE] for NM and INM.

NM - Newton's Method                    INM - Improved Newton Method
CM - Chebyshev Method                   PENM - Parabolic Extension of Newton's Method
MPM1 - Multi-Point Method 1             MPM2 - Multi-Point Method 2
NC - Does not converge to the root      COC - Computational Order of Convergence
ND - Not defined                        NOFE - Total no of function evaluations
x0 - Initial guess

Table (6.3) Computed results for functions of one variable
    Functions f(x,y) & g(x,y)      Initial Guess   Iterations      COC           Root
                                   (x0,y0)         NM    INM       NM    INM
    1) x^2 + y^2 - 2               (1,2)           6     4         1.97  2.93    (1.22474487139152, 0.70710678118655)
       x^2 - y^2 - 1
    2) x^4 + y^4 - 67              (10,20)         16    11        1.99  2.40    (1.88364520891082, 2.71594753880345)
       x^3 - 3xy^2 + 35            (1.8,2.7)       6     4         1.95  2.91    -do-
    3) x^2 - 10x + y^2 + 8         (-1,-2)         6     4         1.83  3.04    (1, 1)
       xy^2 + x - 10y + 8          (5,-2)          106   17        2.03  2.96    -do-
    4) x^2 + y^2 - 2               (2,3)           8     5         1.70  3.51    (1, 1)
       e^(x-1) + y^3 - 2
    5) -x^2 - x + 2y - 18          (-5,5)          9     6         1.99  2.46    (-2, 10)
       (x-1)^2 + (y-6)^2 - 25
    6) 2cos(y) + 7sin(x) - 10x     (200,0)         13    6         2.01  2.99    (0.526522621918048, 0.507919719037091)
       7cos(x) - 2sin(y) - 10y     (2,2)           6     4         1.99  2.99    -do-
    7) 16x^2 - 80x + y^2 + 32      (-1,-2)         6     4         1.23  2.97    (0.5, 2)
       xy^2 + 4x - 10y + 16

NM - Newton Method    INM - Improved Newton Method    COC - Computational Order of Convergence

Table (6.4) Computed results for functions of two variables
APPENDIX - A

We give below some practical situations where the solution of nonlinear equations becomes the major problem. We have tried the suggested INM along with NM to show the advantages of adopting the former.
A.1 The Ladder in the Mine [2]

[Figure A.1.1: two mine shafts, 7 ft and 9 ft wide, meeting at an angle of 123°]

There are two intersecting mine shafts that meet at an angle of 123°, as shown in Fig. (A.1.1). The straight shaft has a width of 7 ft, while the entrance shaft is 9 ft wide. Here we want to find the longest ladder that can negotiate the turn. We can neglect the thickness of the ladder members and assume it is not tipped as it is manoeuvred around the corner. Our solution should provide for the general case in which the angle A is a variable, as well as the widths of the shafts.
Here is one way to analyse our ladder problem. Visualise the ladder in successive locations as we carry it around the corner; there will be a critical position in which the two ends of the ladder touch the walls while a point along the ladder touches the corner where the two shafts intersect [see Fig. (A.1.2)]. Let C be the angle between the ladder and the wall when it is in this critical position. It is usually preferable to solve this problem in general terms, so we work with the variables C, A, B, w1 and w2.

Consider a series of lines drawn in this critical position - their lengths vary with the angle C, and the following relations hold (angles are expressed in radian measure):

    l1 = w2/sin B;    l2 = w1/sin C;    B = pi - A - C;

    l = l1 + l2 = w2/sin(pi - A - C) + w1/sin C

[Figure A.1.2: the ladder in its critical position, touching both walls and the corner]

The maximum length of the ladder that can negotiate the turn is the minimum of l as a function of the angle C. We hence set dl/dC = 0:

    dl/dC = w2 cos(pi - A - C)/sin^2(pi - A - C) - w1 cos C/sin^2 C = 0

We can solve the general problem if we can find the value of C that satisfies this equation. With the critical angle determined, the ladder length is given by

    l = w2/sin(pi - A - C) + w1/sin C

As this analysis shows, to solve the specific problem we must solve a transcendental equation for the value of C:

    9 cos(pi - 2.147 - C)/sin^2(pi - 2.147 - C) - 7 cos C/sin^2 C = 0

and then substitute C into

    l = 9/sin(pi - 2.147 - C) + 7/sin C,

where we have converted 123° into 2.147 radians.
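A short Python sketch of this computation (ordinary Newton's Method on dl/dC = 0; the derivative of the left-hand side is our own hand differentiation, and the starting value C = (pi - A)/2 is our own choice):

```python
import math

def ladder_length(w1=7.0, w2=9.0, A=2.147):
    """Find the critical angle C with
        g(C) = w2 cos(B)/sin(B)^2 - w1 cos(C)/sin(C)^2 = 0,  B = pi - A - C,
    then return (C, l) with l = w2/sin(B) + w1/sin(C)."""
    def g(C):
        B = math.pi - A - C
        return w2*math.cos(B)/math.sin(B)**2 - w1*math.cos(C)/math.sin(C)**2
    def gprime(C):                        # dg/dC, differentiated by hand
        B = math.pi - A - C
        return (w2*(1/math.sin(B) + 2*math.cos(B)**2/math.sin(B)**3)
              + w1*(1/math.sin(C) + 2*math.cos(C)**2/math.sin(C)**3))
    C = (math.pi - A) / 2                 # start mid-way between the walls
    for _ in range(50):
        step = g(C) / gprime(C)
        C -= step
        if abs(step) < 1e-14:
            break
    B = math.pi - A - C
    return C, w2/math.sin(B) + w1/math.sin(C)

C_crit, L_max = ladder_length()
```

For the 7 ft / 9 ft shafts meeting at 123°, this gives a ladder of roughly 33.4 ft.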
A.2 Molecular Configuration of a Compound [3]

A scientist may wish to determine the molecular configuration of a certain compound. The researcher derives an equation f(x) giving the potential energy of a possible configuration as a function of the tangent x of the angle between its two components. Then, since nature will force the molecule to assume the configuration with the minimum potential energy, it is desirable to find the x for which f(x) is minimised. This is a minimisation problem in the single variable x, and we should find critical x values s.t. df/dx = 0. The equation is likely to be highly nonlinear, owing to the physics of the function f.
A.3 Interest Rate of a Loan

In real life we borrow loans at the expense of certain interest rates. Sometimes we need to find the interest rate when the principal, the periodic payment of an annuity, and the total number of periodic payments of the loan are available. For example, suppose you get a four-year monthly car loan of Rs. 200,000/- and the lender asks you to pay Rs. 5,000/- at the end of each month. Now the problem is to find the annual interest rate of the loan.

To solve this problem we should find the root [between 0 and 1] of the following equation:

    P(1 + r)^n + A[((1 + r)^n - 1)/r](1 + rI) = 0

where

    P - Principal
    A - Periodic instalment (payment)
    n - Total number of instalments
    I - 0 or 1 (0 if payments are made at the end of the period and
        1 if payments are made at the beginning of the period)

For solving the car loan problem, substituting

    P = 200,000
    A = -5,000 (cash you pay out, represented by a negative number)
    n = 4 x 12 = 48
    I = 0 (since payments are made at the end of the month)
we shall obtain

    200,000(1 + r)^48 - 5,000[((1 + r)^48 - 1)/r] = 0

    =>  (40r - 1)(1 + r)^48 + 1 = 0

This is a polynomial equation of high degree and, by Galois theory, no closed-form solution in radicals can be found for general polynomial equations of degree five or more. Thus we should apply a numerical method for finding the root between 0 and 1 (since the interest rate is a ratio).
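The following Python sketch applies the INM iteration of Chapter 3 to the annuity equation directly (the derivative f'(r) is our own hand differentiation, and the starting guess 0.10 is the one used in Chapter 6):

```python
def annuity_rate(P=200_000.0, A=-5_000.0, n=48, r0=0.10, tol=1e-12):
    """Monthly rate of the car loan: root of
        f(r) = P(1+r)^n + A[((1+r)^n - 1)/r]   (payments at period end, I = 0),
    found with the Improved Newton's Method."""
    f = lambda r: P*(1 + r)**n + A*((1 + r)**n - 1)/r
    # f'(r), differentiated by hand
    fp = lambda r: (P*n*(1 + r)**(n - 1)
                    + A*(n*(1 + r)**(n - 1)*r - ((1 + r)**n - 1))/r**2)
    r = r0
    for _ in range(100):
        r_star = r - f(r)/fp(r)                          # Newton predictor
        r_new = r - 2*f(r)/(fp(r) + fp(r_star))          # trapezoidal corrector
        if abs(r_new - r) < tol:
            return r_new
        r = r_new
    return r

r_month = annuity_rate()
```

The computed monthly rate is about 0.77%, i.e. an annual rate of about 9.24%, matching Tables (6.1) and (6.2).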
A.4 Nonlinear Least Squares [3]

Many real world problems involve selecting the best member of a family of curves to fit data provided by some experiment or some sample population. Usually, in regression analysis, we deal with curves which are linear in the parameters, so the equations obtained by minimising the sum of squared residuals are linear. But one may want to fit a curve that is nonlinear in the parameters. For example, a researcher may want to fit a bell-shaped curve to data collected from an experiment. Suppose that (t1,y1), (t2,y2), ..., (tn,yn) are n such pieces of data. In practice, however, there is experimental error in the points, and in order to draw conclusions from the data the researcher wants to find the bell-shaped curve that comes "closest" to the n points.
Since the general equation for a bell-shaped curve is

    y = x1 + x2 e^(-(t + x3)^2 / x4),

it requires choosing x1, x2, x3 and x4 to minimise some aggregate measure of the discrepancies (residuals) between the data points and the curve; they are given by

    r_i(x) = x1 + x2 e^(-(t_i + x3)^2 / x4) - y_i,    i = 1, ..., n.

The most commonly used aggregate measure is the sum of squares of the r_i's, leading to determination of the bell-shaped curve by the solution of the nonlinear least-squares problem

    min over x in R^4 of  f(x) = sum_{i=1}^{n} (x1 + x2 e^(-(t_i + x3)^2 / x4) - y_i)^2.

The problem is called a nonlinear least-squares problem because the residual functions r_i(x) are nonlinear functions of some of the parameters x1, x2, x3, x4.
When f is minimised, setting its four partial derivatives to zero gives four nonlinear equations; it is impossible to find a closed form solution, and one can only give numerical approximations. In this case, numerical methods play an important role in finding suitable approximations for x1, x2, x3 and x4.
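In practice such problems are attacked by iterative linearisation. The following Python sketch uses the Gauss-Newton idea (not part of this thesis) on noise-free synthetic data; numpy is assumed available, and we adopt the sign convention (t - x3) in the exponent for the synthetic model.

```python
import numpy as np

def model(x, t):
    """Bell-shaped curve y = x1 + x2 * exp(-(t - x3)^2 / x4)."""
    return x[0] + x[1]*np.exp(-(t - x[2])**2 / x[3])

def gauss_newton(t, y, x0, iters=25):
    """Minimise sum_i r_i(x)^2 with r_i = model(x, t_i) - y_i by repeatedly
    solving the linearised least-squares problem J s = -r."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        e = np.exp(-(t - x[2])**2 / x[3])
        r = x[0] + x[1]*e - y
        # Jacobian of the residuals w.r.t. (x1, x2, x3, x4), by hand:
        J = np.column_stack([
            np.ones_like(t),                  # d r / d x1
            e,                                # d r / d x2
            x[1]*e*2*(t - x[2])/x[3],         # d r / d x3
            x[1]*e*(t - x[2])**2 / x[3]**2,   # d r / d x4
        ])
        s, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + s
    return x

t = np.linspace(0.0, 6.0, 40)
x_true = np.array([1.0, 2.0, 3.0, 4.0])
y = model(x_true, t)                          # noise-free synthetic data
x_fit = gauss_newton(t, y, [0.8, 1.7, 2.6, 3.3])
```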
APPENDIX - B

B.1
This program is used to compare numerical algorithms for finding roots of nonlinear equations in one variable.
{$N+}
uses crt, pro1_com, pro3_com; {Units pro1_com and pro3_com defined in Appendix - D}
type menu type
record op: string[ 10]; hp:integer; end;
fc
object(fu) (fu is defined in prol_com) end;
element
object(item) (Item is defined in pro3_com) end;
var op,ch,ch 1 char; menuar array[l.. 9] of menutype; m boolean; ij,k Integer; xiO,xcO, (Current iterative values) xi,xc, (Next iterative values) ierror,cerror, {En"or values} pi,pc, (Computational Order Convergence) aierror,acerror, (Actual En·or Values) epsilon (lvfachine Precision) real; array [1..3] ofreal; (Keeps three consecutive errors) el,ec fcn fc; el element; procedure menusc; (Draws the Main Menu) begin clrscr; eldrawbox(8,2,72,24) ; el.drawbox(6,1,74,25); eJ.drawbox(27,3,53,5); textcolor(O); textbackground(7); eJ.centertxt(4,'Dv1PROVED NEWfON METHOD'); textcolor(7); textbackground(O); eLdrawbox(28,21 ,51 ,23); e1.centertxt(22,'ENTER YOUR SELECTION'); gotoxy(l6,8); write('ewton Method');
48 gotoxy(l6,12); writeChebyshev Method'); gotoxy( 46,8); write(,arabolie Extension of NM'); gotoxy( 46,12); Write('ultipoi nt Methods'); gotoxy(37,16)~
writeCUIT'); gotoxy(38, 19); write("); texteolor(O); textbackground(7); gotoxy(l5, 8); write('N'); gotoxy(l5,12); wri tee C'); gotoxy(45,8); writeCP); gotoxy( 45,12); writeCM); gotoxy(3 6,16); writeCQ'); gotoxy(3 7,19); write(")~
texteolor(7); textbaekground(O); texteolor(O); textbackground(O); gotoxy(52,22); write("); textcolor(7); textbaekground(O); endj{ofmenusc} function eps : real; (Finds the machine epsilon)
va," ep : real; begin ep:= 1; while I +ep > I do ep := ep/2; eps := ep; end;{of eps} procedure initialize; begin clrser; fen.fen menu; fen.fnehoiee := readkey; write(fen.fnehoiee); writeln~
while not(fcn.fnchoice in ['a' .. 'p']) do begin sound(250); delay(500); nowund; clrser;
fen.fen _menu; fen.fnehoice := readkey; write(fcn.fnchoice); writeln; end; {of wh ile) writeln ; writelnC Acnlal root writeln; write (' Enter the initial point readln(xiO); xeO:= xiO; end;(of initialize)
',fcn.root); ');
procedure cal_nm (y0 : real; var y : real); {Newton's Method}
begin
  y := y0 - fcn.fx(y0)/fcn.fd(y0);
end;

procedure cal_inm (yc : real; var y : real); {Improved Newton's Method}
var z, t : real;
begin
  z := yc - fcn.fx(yc)/fcn.fd(yc);
  t := fcn.fd(yc) + fcn.fd(z);
  y := yc - 2*fcn.fx(yc)/t;
end;

procedure cal_chebyshev (yc : real; var y : real); {Chebyshev Method}
begin
  y := yc - fcn.fx(yc)/fcn.fd(yc)
         - (1/2)*sqr(fcn.fx(yc)/fcn.fd(yc))*(fcn.fsd(yc)/fcn.fd(yc));
end;

procedure cal_multi1 (yc : real; var y : real); {Multipoint Iterative Method 1}
var z : real;
begin
  z := yc - (1/2)*(fcn.fx(yc)/fcn.fd(yc));
  y := yc - fcn.fx(yc)/fcn.fd(z);
end;

procedure cal_multi2 (yc : real; var y : real); {Multipoint Iterative Method 2}
var z : real;
begin
  z := yc - fcn.fx(yc)/fcn.fd(yc);
  y := z - fcn.fx(z)/fcn.fd(yc);
end;

procedure cal_para (yc : real; var y : real); {Parabolic Extension of Newton's Method}
begin
  y := yc - 2*(fcn.fx(yc)/fcn.fd(yc)) /
       (1 + sqrt(1 - 2*(fcn.fx(yc)/fcn.fd(yc))*(fcn.fsd(yc)/fcn.fd(yc))));
end;
procedure error (x, y : real; var e : real); {Returns e as the absolute error between two numbers x and y}
begin
  e := abs(x - y);
end;

procedure title (w1, w2 : string; i, j : integer);
begin
  writeln('   ', w1, ' ':i, w2, ' ':j, 'i');
  writeln('--------------------------------------------------');
end;

procedure line;
begin
  writeln('--------------------------------------------------');
end;

procedure print_result (n : integer; x, y : real; w1, w2 : string; i, j : integer);
var k : integer;
begin
  k := n mod 15;
  if k = 0 then
  begin
    write('   ', x, '   ', y, '   ', n);
    writeln;
    line;
    writeln('   Enter the RETURN key');
    readln;
    clrscr;
    title(w1, w2, i, j);
  end {of if k = 0}
  else
  begin
    write('   ', x, '   ', y, '   ', n);
    writeln;
  end; {of else k = 0}
end; {of print_result}

procedure cal_compare_mtds; {Selects the appropriate numerical method and calculates the new value and the error accordingly}
begin
  i := i + 1;
  case upcase(ch) of
    'C' : cal_chebyshev(xc0, xc);
    'P' : cal_para(xc0, xc);
    'O' : cal_multi1(xc0, xc);
    'T' : cal_multi2(xc0, xc);
    'N' : cal_nm(xc0, xc);
  end; {of case ch}
  cal_inm(xi0, xi);
  error(xc, xc0, cerror);
  error(xi, xi0, ierror);
  error(xc, fcn.root, acerror);
  error(xi, fcn.root, aierror);
  xc0 := xc;
  xi0 := xi;
end; {of cal_compare_mtds}
procedure compare_mtds; (Compare Numerical iVlethods with Improved Newton's Alethod, Errors and Computational Order 0/ Convergence) begin drscr; gotoxy(20,4); writelnC **************************************'); gotoxy(20,5); case upcase(ch) of 'C' : writeln(,B : Chebyshev and Improved methods'); 'P' : writelnCB : PENM and Improved methods'); '0' : writeJnCB : .MPMI and Improved methods'); 'T' : writelnCB : .MPM2 and Improved methods'); 'N' : writelnCB : Newton and Improved methods'); end; (a/case ch) gotoxy(20,6); writelnCE : Errors'); gotoxy(20,7); writeln('C : Computational Order of Convergence'); gotoxy(20,8); wri tel nC *********************** ***************'); gotoxy(20, 10); wTite(,Enter your selection : '); chI := readkey; case upcase(chl) of 'E':begin initialize; i=O; drscr; titleCError(Comp) ','Error(INM) ',12,13); t"epeat cal_compare_ mtds; print_ result(i,acerror,aierror,'Error(Comp) ','Error(INM) ',12,13); until «(cerror < sqrt(epsilon» and (abs(fcn.fx(xc» < sqrt(epsilon») and ((ierror < sqrt(epsilon» and (abs(fcn.fx(xi» < sqrt(epsilon»»or (i>299); end; (a/case E) 'B':begin initialize; i:=O; clrscr; title('Comp. Mtd ','Improved Mtd',13,13); repeat cal_compare_ mtds; print_ result(i,xc,xi,'Comp. Mtd ','Improved Mtd', 13, 13); until «(cerror < sqrt(epsilon» and (abs(fcn.fx(xc» < sqrt(epsilon») and ((ierror < sqrt(epsilon» and (abs(fcn.fx(xi» < sqrt(epsilon»»or (i>299); end; (a/case B) 'C':begin initialize; i:=O; j:=O; drscr; writeln(, Computational order of convergence'); writelnC ***** ****** **** **************** ***'); writeln; titie(,Comp. Mtd ','Improved Mtd',16,13); for k := I to 3 do {Keeps three consecutive errors}
begin ei[k] := abs(xiO-fcn .root); ec[k] := abs(xcO-fcn.root) ; end; {offor} repeat cal_compare_ mtds; ei[l] := ei[2J; ei[2] := ei[3]; ei[3] := aierror; ec[l] := ec[2]; ec[2] = ec[3]; ec[3] := acerror; if i > 2 then begin if (ei[1] ei[2]) and (ei[3] > sqrt(epsiIon)) then pi := abs((ln( ei[3]/ei[2]))/(ln(ei[2]/ei[ 1]))); if (ec[I]ec[2]) and (ec[3] > sqrt(epsiIon)) then pc := abs((ln( ec[3]/ee[2]))/(In( ee[2]/ee[ 1]))); j := i mod 10; if (j = 0) then begin if (ei[I] ei[2]) and (ei[3] > sqrt(epsilon)) then if (ee[ 1] ec[2]) and (ee[3] > sqrt(epsilon)) then writeC ',pc,' ',pi,' ',i) else
writeC ','
*
',pi,' ',i)
else if (ec[I] ee[2]) and (ee[3] > sqrt(epsilon)) then ,,i) writeC ',pc,' * else write(' * * writeln; line; writelnC Enter the RETURN key'); readIn; elrscr; writeinC Computational order of convergence'); writeIn(, *** ** *************** ****** *** ** ** *'); writeln; titJe('Comp. Mtd ','Improved mtd' ,16, 13); end (ofifj =O) else begin if (ei[J] ei[2]) and (ei[3] > sqrt(epsilon)) then if (ee[l] ee[2]) and (ee[3] > sqrt(epsi\on)) then writeC ',pc,' ',pi,' ',i) else ',pi,' ',i) writeC ',' * else if (ee[I] ee[2]) and (ee[3] > sqrt(epsiIon)) then writeC ',pc,' * ',i) else , i )~ write(' * * writeln; end;{of else j=O} end;{ofifi >2} until «(cerror < sqrt(epsilon)) and (abs(fcn.fx(xc)) < sqrt(epsilon))) and «ierror < sqrt(epsilon)) and (abs(fcD.fx(xi)) < sqrt(epsilon)))) or (i>299); I
end;{of case C) end;(ofcase chi) line; writeln~
writeln(, Enter the RETURN key); readln; end; (of Compare _ mtds) procedure menusc_multi_mtds; (Draws the sub-menu jar Nfuihpoint Methoru) begin cJrscc gotoxy(25,9)~
wri tel n(' >I< >I< ** >I< *>I< >I< >I< >I< >I< ** ** * ** *****'); gotoxy(25,lO); writeln(,O : Multipoint Method I'); gotoxy(25,l1); writelnCT : Multipoint Method 2'); gotoxy(25, 12); wri tel nC ** ******* * ** *** ** ******'); gotoxy(25 , 14); writeCEnter your selection : '); ch := readkey; end; {ofmenusc_multi _ mtd,>} begin (of main program) m:=true; epsilon:=eps; WHILEmDO begin menusc; ch : =readkey~
case upcase(cb) of 'e': compare_mtds; 'P: compare_mtds; 'N' : compare_mtds; 'M': begin menusc_multi_ mtds; compare_ mtds; end;{of case AI} 'Q','g' :m:=false else begin sound(250); delay(500); nosound; end;{of cas'e else) end;{ofcase ch) end;(ofwhile m} el.endsc; end.{ofmain program}
B.2
This program is used to compare Newton's Method and Improved Newton's Method for solving systems of nonlinear equations in two variables.
{$N+ }
uses crt, matrix, pro2_com, pr03 _ com;{Uni ts matrix. pro2 _com and pr03 _com defined in Appendix-D} type menu type
record op: string[ 10]; hp:integer; end;
fc
object(fu) (fit is defined in pr02J om) end; object(m) (m is del/ned in matrix) end;
matrices
element
object(item) (item is defined in pro3_com) end;
var char; ch menuar array[l.. 9] of menutype; m boolean; i,j,k integer; xnO,xiO, (Cun'ent iterative vectors) xn,xi, (Next iterative vectors) root (.4ctual root) vector; array [1..3] ofreal; (Keeps three consecutive errors) el,en nerror,ierror, (Error vectors) pi ,pn, , (Computational Order Convergence) anerror,aierror, (.4ctual Error vector!>') epsilon (.\lachine Precision) real; fcn fc; mx matrices; el element; pt"ocedure menusc; (Draws the Alain Afenu) begin c1rscr; el.drawbox(8,2, 72,24); el.drawbox(6, 1,74,25); el.drawbox(27,3,53,5); textcolor(O); textbackground(7); el.centertxt(4,'I.MPROVED NEWTON METHOD'); textcolor(7); textbackground(O); eldrawbox(28,21,51,23); elcentertxt(22,'ENTER YOUR SELECTION');
gotoxy(l6,8); writeCMPROVED NEWTON) ; gotoxy(l6,14) ; writeCUIT'); gotoxy( 46,8); writeCEWTON'): gotoxy(46,14); writeCOMP ORDER CVG '); texteo1or(O); textbackground(7); gotoxy(l5,8); wri te(T); gotoxy(l5,14); writeCQ'): gotoxy( 45,8); writeCN'); gotoxy(45, 14); write('C); texteolor(7); textbaekground(O); texteolor(O); textbaekground(O); gotoxy(52,22); write("); texteolor(7); textbaekground(O); end; {of menusc) function eps : real; (Finds the machine epsilon) var ep real ; begin ep:= 1; while 1+ep > 1 do ep := ep/2; eps := ep; end;((if eps) procedUl·e ioitialize; begin clrser; fen .fen_menu; fen .fnehoiee := readkey; write(fcnfnehoiee ); writeln; while not(fcn.fnchoice in ['a' ..'0']) do begin sOl'nd(250); delay(500); nosound; clrser; fen.fen_menu; fen.fnehoiee := readkey; write(fcn.fnchoice); writeln; end; {while) writeln;
S6 writelnC Actual root x writelnC Actual root y writeln; root(1] := fen .rootx; root(2] = fen.roory, write (' Enter the initial point readln(xnO[ 1],xnO[2]); xiO[I] := xnO[l]; xiO[2] := xnO[2]; end; {of initialize)
,,fen .rootx); ',fen. rooty);
');
procedure Jacobian (x : vector; var J : mtrix); {Finds the Jacobian matrix J of F(x)}
begin
  J[1,1] := fcn.fdx(x);  J[1,2] := fcn.fdy(x);
  J[2,1] := fcn.gdx(x);  J[2,2] := fcn.gdy(x);
end; {of Jacobian}

procedure CombF (x : vector; var F : vector); {Returns the vector-valued function F(x)}
begin
  F[1] := fcn.f(x);
  F[2] := fcn.g(x);
end; {of CombF}

procedure cal_nm (y0 : vector; var y : vector); {Newton's Method}
var s, F, Fneg : vector;
    J, Jinv : mtrix;
begin
  Jacobian(y0, J);
  CombF(y0, F);
  mx.invert(J, Jinv);
  mx.vnegate(F, Fneg);
  mx.vmultiply(Jinv, Fneg, s);
  mx.vadd(y0, s, y);
end; {of cal_nm}

procedure cal_inm (y0 : vector; var y : vector); {Improved Newton's Method}
var s, F, Fneg, yt, p, twoFneg : vector;
    J0, J0inv, Jt, J, Jinv : mtrix;
begin
  Jacobian(y0, J0);
  CombF(y0, F);
  mx.invert(J0, J0inv);
  mx.vnegate(F, Fneg);
  mx.vmultiply(J0inv, Fneg, s);
  mx.vadd(y0, s, yt);            {Newton predictor}
  Jacobian(yt, Jt);
  mx.add(J0, Jt, J);
  mx.invert(J, Jinv);
  twoFneg[1] := 2*Fneg[1];
  twoFneg[2] := 2*Fneg[2];
  mx.vmultiply(Jinv, twoFneg, p);
  mx.vadd(y0, p, y);
end; {of cal_inm}

procedure error (x, y : vector; var e : real); {Returns the infinity norm of the error vector of two vectors x and y}
var n : vector;
begin
  mx.vsubstract(x, y, n);
  if abs(n[1]) >= abs(n[2]) then e := abs(n[1]) else e := abs(n[2]);
end; {of error}

procedure title (w1, w2 : string; i, j : integer);
begin
  writeln('   ', w1, ' ':i, w2, ' ':j, 'i');
  writeln('--------------------------------------------------');
end; {of title}

procedure line;
begin
  writeln('--------------------------------------------------');
end; {of line}

procedure print_result (n : integer; x, y : real; w1, w2 : string; i, j : integer);
var k : integer;
begin
  k := n mod 15;
  if k = 0 then
  begin
    write('   ', x, '   ', y, '   ', n);
    writeln;
    line;
    writeln('   Enter the RETURN key');
    readln;
    clrscr;
    title(w1, w2, i, j);
  end {of if k = 0}
  else
  begin
    write('   ', x, '   ', y, '   ', n);
    writeln;
  end; {of else k = 0}
end; {of print_result}

procedure cal_mtds; {Calculates the selected numerical method and finds errors}
begin
  initialize;
  i := 0;
  clrscr;
  title('Xi', 'Yi', 14, 13);
  repeat
    i := i + 1;
    case upcase(ch) of
      'N' : cal_nm(xn0, xn);
      'I' : cal_inm(xn0, xn);
    end; {of case ch}
    error(xn, xn0, nerror);
    error(xn, root, anerror);
    xn0 := xn;
    print_result(i, xn[1], xn[2], 'Xi', 'Yi', 14, 13);
  until (nerror < sqrt(epsilon)) or (i > 299);
  line;
  writeln;
  writeln('   Enter the RETURN key');
  readln;
end; {of cal_mtds}
procedure coc; (Calculate Computational Order of Convergence ) begin initialize; i:=O; j:=O; cJrscr; writelnC Computational order of convergence'); writeln('
*** *** ** *** *** *** *** **************I)~
writeln; titleCNewton mtd','Improved mtd',16,13); for k := 1 to 3 do (Keeps three consecutive errors) begin error(xiO,root,ei [k]); error(xnO,root,en[k]); end; (offor) repeat i:=i+l; cal_ nm(xnO,xn); caUnm(xiO,xi); error(xn,xnO,nerror); error(xi,xiO,ierror); error(xn,root,anerror); errore xi ,root,aierror); xnO := xn; xiO :=xi; ei[l] = ei[2]; ei[2] = ei[3]; ei[3] := alerror; en[l] = en[2J; en[2] := en[3]; en[3] := anerror; ifi > 2 then begin if (ei[I]ei[2]) and (ei[3] > sqrt(epsilon)) then pi = abs((ln( ei[3]/ei[2]))/(ln( ei[2]/ei[l]))); if (en[1] en[2]) and (en[3] > sqrt(epsilon)) then pn := abs((ln( en[3]/en[2]))/(ln(en[2]/en[ 1]))); j := i mod 10; if (j = 0) then begin if (ei[I]ei[2]) and (ei[3] > sqrt(epsilon)) then if (en[1]en[2]) and (en[3] > sqrt(epsilon)) then
59 writeC ',pn,' ',pi,' ',i) else writeC ' ,1 * ',pi,' ',i) else if (en[1] en[2]) and (en[3] > sqrt(epsilon)) then ',i) writeC ',pn,' * else ',i); write(, * * writeln; line; writeln(' Enter the RETURN key'); readln; elrscr; writelnC Computational order of convergence'); writelnC * * ********* ********************* **'); writeln; titleCNewton mtd','Improved mtd',16,13); end{ofifj =O} else begin if (ei[I] ei[2]) and (ei[3] > sqrt(epsilon)) then if (en[1] en[2]) and (en[3] > sqrt(epsilon)) then writeC ',pn,' ',pi,' ',i) else ',pi,' ',i) writeC ',' * else if (en[I] en[2]) and (en[3] > sqrt(epsilon)) then ,,i) writeC ',pn,' * else writeC * * writeln; end; {of elsej=O} end;{ofifi >2} until «nerror < sql't(epsilon)) and (ierror < sqrt(epsilon») or (i>299); line; writeln; writelnC Enter the RETURN key'); readln; end;{ofcoc)
begin(ofmain program} m:=true; epsilon = eps; while m do begin menusc~
ch:=readkey; case upcase(ch) of 'N':cal_mtds; 'I':cal mtds; 'C':coc; 'Q':m:=false else begin sound(2S0); delay(SOO); nosound; end;{of case else}
60 end;{«f case ch} end ; {ofwhilej el.endsc; end.{of main program)
B.3
This unit is used in the program inm_fcn_of_one_variable to input functions of one variable and their properties.
unit prol_com; {$N+}
interface uses crt; type fll = object fnchoice procedure fcn_menu ; function root function fx (x:real) function fd (x: real) function fsd (x:real) end;
: char; : real ; : real ; : real ; : real;
implementation function power (x : real; n : integer) : real; {Evaluate x to the power n} var a : real; 1 : integer; begin a:=I; i:=l; while i 0 then power:=a else ifn < 0 then power:=lIa else power:=I; end; {o/pmller} procedure fu.feD_menu; {Drm!:" the Function Menu} begin clrscr; FUNCTION MENU); writelnC writelnC * ** *** **** * *** *********** **************** *** ** *** ** ** *'); x/\3+4x/\2-IO'); writelnC a (sinx)/\2-x/\2+ I'); writelnC b x/\2-expx-3x+2'); c writelnC cosx-x'); d writelnC (x-l )/\3-1'); writelnC e x/\3-10'); writeln(' f
62 g xexp(x"2)-(sinxY\2+ 3eosx+5'); writelnC h (xsinx)"2+exp(x"2eosx+sinx)-28'); writelnC exp(x"2+ 7x-30)-I'); writeln(, J exp(eosx)-I'); writelnC write]n(' k 16x+l/(l+x)"20-1'); x"2-] '); writeln(, m x"2-2x+l'); writelnC n sin(x"2+ 10)'); writelnC o sinx-x'); writelnC p x"lO-lO'); writelne writelnC ****************************************************** I)~ writeln; write (' SELECT YOUR CHOICE '); end j (offil·fen _menu) function fu.root : real; {4ctual roots of the functions} begin case fnchoice of 'a' root 'b' root 'e' root 'd' root 'e' root 'f root 'g' root 'h' root 'i' root 'j' root 'k' root 'I' root 'm' root 'n' root '0' root 'p' root end j {of case fnchoice) endj {ofJiI. root)
1.36523001341448; 1. 40449164821621; 0.257530285439771 ; 0.739085133214758; 2; 2.15443469003367; -1.20764782713013; 4.82458931731526; 3; pil2; 0.0222623113 071165; 1;
1; sqrt( 4*pi-1O); I; 1.2589254] ]79383;
function fu.fx (x : real) : real;
{Evaluate function fx at x}
begin
  case fnchoice of
    'a' : fx := x*sqr(x)+4*sqr(x)-10;
    'b' : fx := sqr(sin(x))-sqr(x)+1;
    'c' : fx := sqr(x)-exp(x)-3*x+2;
    'd' : fx := cos(x)-x;
    'e' : fx := (x-1)*sqr(x-1)-1;
    'f' : fx := x*sqr(x)-10;
    'g' : fx := x*exp(sqr(x))-sqr(sin(x))+3*cos(x)+5;
    'h' : fx := sqr(x*sin(x))+exp(sqr(x)*cos(x)+sin(x))-28;
    'i' : fx := exp(sqr(x)+7*x-30)-1;
    'j' : fx := exp(cos(x))-1;
    'k' : fx := 16*x+power(1+x,-20)-1;
    'l' : fx := sqr(x)-1;
    'm' : fx := sqr(x)-2*x+1;
    'n' : fx := sin(sqr(x)+10);
    'o' : fx := sin(x)-x;
    'p' : fx := power(x,10)-10;
  end; {of case fnchoice}
end; {of fu.fx}

function fu.fd (x : real) : real;
{Evaluate the first derivative of fx at x}
begin
  case fnchoice of
    'a' : fd := 3*sqr(x)+8*x;
    'b' : fd := sin(2*x)-2*x;
    'c' : fd := 2*x-exp(x)-3;
    'd' : fd := -sin(x)-1;
    'e' : fd := 3*sqr(x-1);
    'f' : fd := 3*sqr(x);
    'g' : fd := (1+2*sqr(x))*exp(sqr(x))-sin(x)*(3+2*cos(x));
    'h' : fd := 2*x*sin(x)*(sin(x)+x*cos(x))
                +(cos(x)*(1+2*x)-sqr(x)*sin(x))*exp(sqr(x)*cos(x)+sin(x));
    'i' : fd := (2*x+7)*exp(sqr(x)+7*x-30);
    'j' : fd := -sin(x)*exp(cos(x));
    'k' : fd := 16-20*power(1+x,-21);
    'l' : fd := 2*x;
    'm' : fd := 2*x-2;
    'n' : fd := 2*x*cos(sqr(x)+10);
    'o' : fd := cos(x)-1;
    'p' : fd := 10*power(x,9);
  end; {of case fnchoice}
end; {of fu.fd}
function fu.fsd (x : real) : real;
{Evaluate the second derivative of fx at x}
begin
  case fnchoice of
    'a' : fsd := 6*x+8;
    'b' : fsd := 2*cos(2*x)-2;
    'c' : fsd := 2-exp(x);
    'd' : fsd := -cos(x);
    'e' : fsd := 6*(x-1);
    'f' : fsd := 6*x;
    'g' : fsd := 2*x*(3+2*sqr(x))*exp(sqr(x))-3*cos(x)-2*cos(2*x);
    'h' : fsd := 4*x*sin(2*x)+(2*sqr(x)-1)*cos(2*x)+1
                 +(2*cos(x)-sin(x)-4*x*sin(x)-sqr(x)*cos(x)
                 +sqr(cos(x)*(1+2*x)-sqr(x)*sin(x)))*exp(sqr(x)*cos(x)+sin(x));
    'i' : fsd := (2+sqr(2*x+7))*exp(sqr(x)+7*x-30);
    'j' : fsd := (sqr(sin(x))-cos(x))*exp(cos(x));
    'k' : fsd := 420*power(1+x,-22);
    'l' : fsd := 2;
    'm' : fsd := 2;
    'n' : fsd := 2*cos(sqr(x)+10)-4*sqr(x)*sin(sqr(x)+10);
    'o' : fsd := -sin(x);
    'p' : fsd := 90*power(x,8);
  end; {of case fnchoice}
end; {of fu.fsd}

end. {of unit}
B.4

This unit is used in the two-variable program to input the functions and their properties.
unit pro2_com;
{$N+}
interface
uses crt, matrix;
type
  matrices = object(m)
  end;

  fu = object
    fnchoice : char;
    procedure fcn_menu;
    function rootx : real;
    function rooty : real;
    function f (x : vector) : real;
    function fdx (x : vector) : real;
    function fdy (x : vector) : real;
    function g (x : vector) : real;
    function gdx (x : vector) : real;
    function gdy (x : vector) : real;
  end;

var
  mx : matrices;

implementation

function power (x : real; n : integer) : real;
{Evaluate x to the power n}
var
  a : real;
  i : integer;
begin
  a := 1;
  i := 1;
  while i <= abs(n) do
  begin
    a := a*x;
    i := i+1;
  end;
  if n > 0 then
    power := a
  else if n < 0 then
    power := 1/a
  else
    power := 1;
end; {of power}
procedure fu.fcn_menu;
{Prints the function menu}
begin
  clrscr;
  writeln('                           FUNCTION MENU');
  writeln('*********************************************************************');
  writeln('  a : f(x,y)=x^2+y^2-2          , g(x,y)=x^2-y^2-1');
  writeln('  b : f(x,y)=x^4+y^4-67         , g(x,y)=x^3-3xy^2+35');
  writeln('  c : f(x,y)=x^2-10x+y^2+8      , g(x,y)=xy^2+x-10y+8');
  writeln('  d : f(x,y)=x^2+y^2-x          , g(x,y)=x^2-y^2-y');
  writeln('  e : f(x,y)=x+y^3-5y^2-2y-10   , g(x,y)=x+y^3+y^2-14y-29');
  writeln('  f : f(x,y)=-x^2-x+2y-18       , g(x,y)=(x-1)^2+(y-6)^2-25');
  writeln('  g : f(x,y)=5x^2-y^2           , g(x,y)=y-0.25*(sin(x)+cos(y))');
  writeln('  h : f(x,y)=3x^2-y^2           , g(x,y)=3xy^2-x^3-1');
  writeln('  i : f(x,y)=16x^2-80x+y^2+32   , g(x,y)=xy^2+4x-10y+16');
  writeln('  j : f(x,y)=2cos(y)+7sin(x)-10x, g(x,y)=7cos(x)-2sin(y)-10y');
  writeln('  k : f(x,y)=x^2+y^2-2          , g(x,y)=exp(x-1)+y^3-2');
  writeln('*********************************************************************');
  writeln;
  write ('          SELECT YOUR CHOICE  ');
end; {of fu.fcn_menu}

function fu.rootx : real;
{x component of the actual root}
begin
  case fnchoice of
    'a' : rootx := sqrt(1.5);
    'b' : rootx := 1.88364520891082;
    'c' : rootx := 1;
    'd' : rootx := 0;
    'e' : rootx := 34.1993336862070;
    'f' : rootx := -2;
    'g' : rootx := 0;
    'h' : rootx := 0;
    'i' : rootx := 0.5;
    'j' : rootx := 0.526522621918048;
    'k' : rootx := 1;
  end; {of case fnchoice}
end; {of fu.rootx}

function fu.rooty : real;
{y component of the actual root}
begin
  case fnchoice of
    'a' : rooty := sqrt(0.5);
    'b' : rooty := 2.71594753880345;
    'c' : rooty := 1;
    'd' : rooty := 0;
    'e' : rooty := 3.04124145231981;
    'f' : rooty := 10;
    'g' : rooty := 0;
    'h' : rooty := 0;
    'i' : rooty := 2;
    'j' : rooty := 0.507919719037091;
    'k' : rooty := 1;
  end; {of case fnchoice}
end; {of fu.rooty}
function fu.f (x : vector) : real;
{Returns the value of f}
begin
  case fnchoice of
    'a' : f := sqr(x[1])+sqr(x[2])-2;
    'b' : f := sqr(sqr(x[1]))+sqr(sqr(x[2]))-67;
    'c' : f := sqr(x[1])-10*x[1]+sqr(x[2])+8;
    'd' : f := sqr(x[1])+sqr(x[2])-x[1];
    'e' : f := x[1]+x[2]*sqr(x[2])-5*sqr(x[2])-2*x[2]-10;
    'f' : f := -sqr(x[1])-x[1]+2*x[2]-18;
    'g' : f := 5*sqr(x[1])-sqr(x[2]);
    'h' : f := 3*sqr(x[1])-sqr(x[2]);
    'i' : f := 16*sqr(x[1])-80*x[1]+sqr(x[2])+32;
    'j' : f := 2*cos(x[2])+7*sin(x[1])-10*x[1];
    'k' : f := sqr(x[1])+sqr(x[2])-2;
  end; {of case fnchoice}
end; {of fu.f}

function fu.fdx (x : vector) : real;
{Returns the first partial derivative of f w.r.t. x}
begin
  case fnchoice of
    'a' : fdx := 2*x[1];
    'b' : fdx := 4*x[1]*sqr(x[1]);
    'c' : fdx := 2*x[1]-10;
    'd' : fdx := 2*x[1]-1;
    'e' : fdx := 1;
    'f' : fdx := -(2*x[1]+1);
    'g' : fdx := 10*x[1];
    'h' : fdx := 6*x[1];
    'i' : fdx := 32*x[1]-80;
    'j' : fdx := 7*cos(x[1])-10;
    'k' : fdx := 2*x[1];
  end; {of case fnchoice}
end; {of fu.fdx}

function fu.fdy (x : vector) : real;
{Returns the first partial derivative of f w.r.t. y}
begin
  case fnchoice of
    'a' : fdy := 2*x[2];
    'b' : fdy := 4*x[2]*sqr(x[2]);
    'c' : fdy := 2*x[2];
    'd' : fdy := 2*x[2];
    'e' : fdy := 3*sqr(x[2])-10*x[2]-2;
    'f' : fdy := 2;
    'g' : fdy := -2*x[2];
    'h' : fdy := -2*x[2];
    'i' : fdy := 2*x[2];
    'j' : fdy := -2*sin(x[2]);
    'k' : fdy := 2*x[2];
  end; {of case fnchoice}
end; {of fu.fdy}
function fu.g (x : vector) : real;
{Returns the value of g}
begin
  case fnchoice of
    'a' : g := sqr(x[1])-sqr(x[2])-1;
    'b' : g := x[1]*sqr(x[1])-3*x[1]*sqr(x[2])+35;
    'c' : g := x[1]*sqr(x[2])+x[1]-10*x[2]+8;
    'd' : g := sqr(x[1])-sqr(x[2])-x[2];
    'e' : g := x[1]+x[2]*sqr(x[2])+sqr(x[2])-14*x[2]-29;
    'f' : g := sqr(x[1]-1)+sqr(x[2]-6)-25;
    'g' : g := x[2]-0.25*(sin(x[1])+cos(x[2]));
    'h' : g := 3*x[1]*sqr(x[2])-x[1]*sqr(x[1])-1;
    'i' : g := x[1]*sqr(x[2])+4*x[1]-10*x[2]+16;
    'j' : g := 7*cos(x[1])-2*sin(x[2])-10*x[2];
    'k' : g := exp(x[1]-1)+sqr(x[2])*x[2]-2;
  end; {of case fnchoice}
end; {of fu.g}

function fu.gdx (x : vector) : real;
{Returns the first partial derivative of g w.r.t. x}
begin
  case fnchoice of
    'a' : gdx := 2*x[1];
    'b' : gdx := 3*sqr(x[1])-3*sqr(x[2]);
    'c' : gdx := sqr(x[2])+1;
    'd' : gdx := 2*x[1];
    'e' : gdx := 1;
    'f' : gdx := 2*(x[1]-1);
    'g' : gdx := -0.25*cos(x[1]);
    'h' : gdx := 3*sqr(x[2])-3*sqr(x[1]);
    'i' : gdx := sqr(x[2])+4;
    'j' : gdx := -7*sin(x[1]);
    'k' : gdx := exp(x[1]-1);
  end; {of case fnchoice}
end; {of fu.gdx}

function fu.gdy (x : vector) : real;
{Returns the first partial derivative of g w.r.t. y}
begin
  case fnchoice of
    'a' : gdy := -2*x[2];
    'b' : gdy := -6*x[1]*x[2];
    'c' : gdy := 2*x[1]*x[2]-10;
    'd' : gdy := -2*x[2]-1;
    'e' : gdy := 3*sqr(x[2])+2*x[2]-14;
    'f' : gdy := 2*(x[2]-6);
    'g' : gdy := 1+0.25*sin(x[2]);
    'h' : gdy := 6*x[1]*x[2];
    'i' : gdy := 2*x[1]*x[2]-10;
    'j' : gdy := -2*cos(x[2])-10;
    'k' : gdy := 3*sqr(x[2]);
  end; {of case fnchoice}
end; {of fu.gdy}

end. {of unit pro2_com}
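The unit above supplies f, g, their four partial derivatives, and the exact roots for each test system. A classical Newton step for such a system inverts the 2x2 Jacobian built from fdx, fdy, gdx, gdy. The following sketch (in Python; the thesis programs are in Pascal) applies that step to system (a), f = x^2+y^2-2, g = x^2-y^2-1, solving the linear system by Cramer's rule, which is equivalent to the closed-form inverse used in unit matrix below.

```python
import math

# Sketch of a classical Newton iteration for system (a) of the menu,
# using the partial derivatives listed in fu.fdx/fdy/gdx/gdy.
def newton2(x, y, tol=1e-13, maxit=50):
    for _ in range(maxit):
        fv, gv = x*x + y*y - 2, x*x - y*y - 1
        fdx, fdy = 2*x, 2*y             # partials of f
        gdx, gdy = 2*x, -2*y            # partials of g
        det = fdx*gdy - fdy*gdx         # 2x2 Jacobian determinant
        dx = (fv*gdy - fdy*gv)/det      # Cramer's rule for J * step = F
        dy = (fdx*gv - fv*gdx)/det
        x, y = x - dx, y - dy
        if abs(dx) + abs(dy) < tol:
            break
    return x, y

x, y = newton2(1.5, 0.5)   # converges to (sqrt(1.5), sqrt(0.5))
```

These limits are exactly the rootx/rooty values the unit tabulates for choice (a).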
B.5

Creates an object that is used to draw a box, centre a text, etc.
unit pro3_com;
interface
uses crt;
type
  item = object
    procedure drawbox (x1,y1,x2,y2 : integer);
    procedure centertxt (r : integer; lts : string);
    procedure endsc;
  end;

implementation

procedure item.drawbox (x1,y1,x2,y2 : integer);
{Draws a double-line box with opposite corners (x1,y1) and (x2,y2)}
var
  i : integer;
begin
  for i := x1+1 to x2-1 do
  begin
    gotoxy(i,y1); write(chr(205));
    gotoxy(i,y2); write(chr(205));
  end;
  for i := y1+1 to y2-1 do
  begin
    gotoxy(x1,i); write(chr(186));
    gotoxy(x2,i); write(chr(186));
  end;
  gotoxy(x1,y1); write(chr(201));
  gotoxy(x2,y2); write(chr(188));
  gotoxy(x1,y2); write(chr(200));
  gotoxy(x2,y1); write(chr(187));
end; {of item.drawbox}

procedure item.centertxt (r : integer; lts : string);
{Writes the text lts centred on row r}
var
  x, l : integer;
begin
  l := length(lts);
  x := round((80-l)/2);
  gotoxy(x,r);
  write(lts);
end; {of item.centertxt}

procedure item.endsc;
begin
  clrscr;
  textcolor(0);
  textbackground(7);
  centertxt(14,'end of program');
  textcolor(7);
  textbackground(0);
end; {of item.endsc}

end. {of unit pro3_com}
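The codes written by item.drawbox are CP437 box-drawing characters: 201 '╔', 187 '╗', 200 '╚', 188 '╝', 205 '═', 186 '║'. The same frame can be sketched in Python as a list of strings rather than cursor moves (draw_box is a hypothetical helper, not part of the thesis code):

```python
# Sketch: build the double-line frame of item.drawbox as strings,
# using the Unicode equivalents of CP437 codes 201/187/200/188/205/186.
def draw_box(width, height):
    top    = '\u2554' + '\u2550'*(width - 2) + '\u2557'   # like chr(201)/chr(205)/chr(187)
    middle = '\u2551' + ' '*(width - 2) + '\u2551'        # like chr(186)
    bottom = '\u255a' + '\u2550'*(width - 2) + '\u255d'   # like chr(200)/chr(205)/chr(188)
    return [top] + [middle]*(height - 2) + [bottom]

for line in draw_box(10, 4):
    print(line)
```

Building rows as strings avoids the per-character gotoxy calls the crt unit requires.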
B.6

Creates an object to perform matrix operations.
unit matrix;
{$N+}
interface
uses crt;

const
  n = 2;

type
  mtrix  = array [1..n, 1..n] of real;
  vector = array [1..n] of real;

  m = object
    procedure multiply   (a,b : mtrix; var c : mtrix);
    procedure add        (a,b : mtrix; var c : mtrix);
    procedure substract  (a,b : mtrix; var c : mtrix);
    procedure negate     (a : mtrix; var c : mtrix);
    procedure invert     (a : mtrix; var c : mtrix);
    procedure transpose  (a : mtrix; var c : mtrix);
    procedure vmultiply  (a : mtrix; b : vector; var c : vector);
    procedure vadd       (a,b : vector; var c : vector);
    procedure vsubstract (a,b : vector; var c : vector);
    procedure vnegate    (a : vector; var c : vector);
  end;

implementation

procedure m.multiply (a,b : mtrix; var c : mtrix);
{Multiply matrices a and b}
var
  i,j,k : integer;
begin
  for i := 1 to n do
    for j := 1 to n do
    begin
      c[i,j] := 0;
      for k := 1 to n do
        c[i,j] := c[i,j] + a[i,k]*b[k,j];
    end; {of for j}
end; {of m.multiply}

procedure m.add (a,b : mtrix; var c : mtrix);
{Add matrices a and b}
var
  i,j : integer;
begin
  for i := 1 to n do
    for j := 1 to n do
      c[i,j] := a[i,j] + b[i,j];
end; {of m.add}
procedure m.substract (a,b : mtrix; var c : mtrix);
{Subtract matrix b from matrix a}
var
  i,j : integer;
begin
  for i := 1 to n do
    for j := 1 to n do
      c[i,j] := a[i,j] - b[i,j];
end; {of m.substract}

procedure m.negate (a : mtrix; var c : mtrix);
{Negate matrix a}
var
  i,j : integer;
begin
  for i := 1 to n do
    for j := 1 to n do
      c[i,j] := -a[i,j];
end; {of m.negate}

procedure m.invert (a : mtrix; var c : mtrix);
{Invert matrix a}
var
  det : real;
begin
  det := a[1,1]*a[2,2] - a[1,2]*a[2,1]; {Determinant of matrix a}
  if (det = 0) then
    writeln('Matrix is singular')
  else
  begin
    c[1,1] := a[2,2]/det;
    c[1,2] := -a[1,2]/det;
    c[2,1] := -a[2,1]/det;
    c[2,2] := a[1,1]/det;
  end;
end; {of m.invert}

procedure m.transpose (a : mtrix; var c : mtrix);
{Returns the transpose of matrix a}
var
  i,j : integer;
begin
  for i := 1 to n do
    for j := 1 to n do
      c[i,j] := a[j,i];
end; {of m.transpose}

procedure m.vmultiply (a : mtrix; b : vector; var c : vector);
{Multiply matrix a and vector b}
var
  i,j : integer;
begin
  for i := 1 to n do
  begin
    c[i] := 0;
    for j := 1 to n do
      c[i] := c[i] + a[i,j]*b[j];
  end;
end; {of m.vmultiply}

procedure m.vadd (a,b : vector; var c : vector);
{Add vectors a and b}
var
  i : integer;
begin
  for i := 1 to n do
    c[i] := a[i] + b[i];
end; {of m.vadd}

procedure m.vsubstract (a,b : vector; var c : vector);
{Subtract vector b from vector a}
var
  i : integer;
begin
  for i := 1 to n do
    c[i] := a[i] - b[i];
end; {of m.vsubstract}

procedure m.vnegate (a : vector; var c : vector);
{Negate vector a}
var
  i : integer;
begin
  for i := 1 to n do
    c[i] := -a[i];
end; {of m.vnegate}

end. {of matrix}
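Since n = 2, m.invert uses the closed-form 2x2 inverse: c = (1/det) [[a22, -a12], [-a21, a11]] with det = a11*a22 - a12*a21. The same formula, sketched in Python for illustration (invert2 is a hypothetical helper, not part of the thesis code):

```python
# Sketch of the closed-form 2x2 inverse used by m.invert.
def invert2(a):
    det = a[0][0]*a[1][1] - a[0][1]*a[1][0]   # determinant of a
    if det == 0:
        raise ValueError('Matrix is singular')
    return [[ a[1][1]/det, -a[0][1]/det],
            [-a[1][0]/det,  a[0][0]/det]]

inv = invert2([[2.0, 1.0], [1.0, 1.0]])
```

For [[2,1],[1,1]] the determinant is 1, so the inverse is [[1,-1],[-1,2]], which is easy to verify by multiplying back.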
REFERENCES

1. Burden, Richard L. & Faires, J. Douglas (1993). Numerical Analysis. PWS-Kent Publishing Co., Boston.

2. Gerald, Curtis F. & Wheatley, Patrick O. (1989). Applied Numerical Analysis. Addison-Wesley Publishing Company.

3. Dennis, J.E. & Schnabel, Robert B. (1983). Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice Hall, Inc.

4. Gordon, Sheldon P. & Von Eschen, Ellis R. (1988). A parabolic extension of Newton's method. International Journal of Mathematical Education in Science and Technology.

5. Jain, M.K., Iyengar, S.R.K. & Jain, R.K. (1993). Numerical Methods (Problems and Solutions). New Age International Limited / Wiley Eastern Limited.

6. Weerakoon, Sunethra (1996). Numerical solution of nonlinear equations in the absence of the derivative. Journal of the National Science Council of Sri Lanka, 24(4): 309-318.