Advanced Numerical Methods in Engineering
Anand J Kulkarni, PhD, MS, BEng, DME
Email: [email protected]
URL: sites.google.com/site/oatresearch/class-room-schedule
Ph: 91 20 3911 6432
Department of Mechanical Engineering, Symbiosis Institute of Technology, Symbiosis International University, Pune, MH, India
References
• Chapra S. C., Canale R. P.: Numerical Methods for Engineers, 6th Edition, McGraw-Hill
• Some online resources
Mathematical Scope • Roots of Equations: These problems are concerned with the value of a variable or a parameter that satisfies a single nonlinear equation. These problems are especially valuable in engineering design contexts where it is often impossible to explicitly solve design equations for parameters.
What are Numerical Methods
• Numerical methods are methods designed for the constructive solution of mathematical problems requiring particular arithmetic results, usually on a computer. A numerical method is a complete and unambiguous set of procedures for the solution of a problem, together with computable error estimates.
• Numerical methods are techniques by which mathematical problems are formulated so that they can be solved with arithmetic operations. Although there are many kinds of numerical methods, they share one common characteristic: they invariably involve large numbers of tedious arithmetic calculations.
• It is little wonder that, with the development of fast, efficient digital computers, the role of numerical methods in engineering problem solving has increased dramatically in recent years.
Mathematical Scope • System of Linear Algebraic Equations A set of values is sought that simultaneously satisfies a set of linear algebraic equations. Such equations arise in a variety of problem contexts and in all disciplines of engineering. In particular, they originate in the mathematical modeling of large systems of interconnected elements such as structures, electric circuits, and fluid networks. However, they are also encountered in other areas of numerical methods such as curve fitting and differential equations.
Mathematical Scope • Optimization These problems are concerned with determining a value or values of an independent variable that correspond to a "best" or optimal value of a function.
Mathematical Scope • Curve Fitting The techniques developed for this purpose can be divided into two general categories: regression and interpolation. Regression is employed where there is a significant degree of error associated with the data. Experimental results are often of this kind. For these situations, the strategy is to derive a single curve that represents the general trend of the data without necessarily matching any individual points.
Mathematical Scope • Curve Fitting In contrast, interpolation is used where the objective is to determine intermediate values between relatively error-free data points. Such is usually the case for tabulated information. For these situations, the strategy is to fit a curve directly through the data points and use the curve to predict the intermediate values.
Mathematical Scope • Integration A physical interpretation of numerical integration is the determination of the area under a curve. It has many applications in engineering practice, ranging from the determination of the centroids of oddly shaped objects to the calculation of total quantities based on sets of discrete measurements. In addition, numerical integration formulae play an important role in the solution of differential equations.
Mathematical Scope • Ordinary Differential Equations These have great significance in engineering practice. This is because many physical laws are understood in terms of the rate of change of a quantity rather than the magnitude of the quantity itself.
Mathematical Scope • Partial Differential Equations These are used to characterize engineering systems where the behavior of a physical quantity is understood in terms of its rate of change with respect to two or more independent variables.
Roots of Equations
• A closed-form formula (e.g., the quadratic formula for ax² + bx + c = 0) may work for simpler problems; however, as the problem becomes complex and/or nonlinear, such an explicit approach may not work.
• Ex: a transcendental equation such as f(x) = e⁻ˣ − x = 0 cannot be rearranged to solve for x explicitly and must be solved numerically.
Roots of Equations
• Bracketing Methods
  – Bisection Method
  – False Position Method
• Open Methods
  – One Point Iteration
  – Newton-Raphson Method
  – Secant Method
Roots of Equations: Bracketing Methods • Bracketing methods exploit the fact that a function changes sign in the vicinity of a root. • Two initial guesses are required to bracket the root (one on either side of the root).
Roots of Equations: Bracketing Methods • Graphical Methods
A simple way to obtain an estimate of the root of f(x) = 0 is to plot the function: various values of the unknown (here denoted c) can be substituted into the right-hand side to compute f(c), and the root estimate is where the plotted curve crosses the c axis.
Roots of Equations: Bracketing Methods • Graphical Methods: Generalization
If f(xl) and f(xu) have opposite signs, there is, in general, an odd number of roots within the interval; if they have the same sign, there is either an even number of roots or no root at all.
Roots of Equations: Bracketing Methods • Graphical Methods: Exceptions to Generalization
Exceptions occur for multiple roots, where the function is tangential to the x axis, and for functions that are discontinuous within the interval.
Roots of Equations: Bracketing Methods • Bisection Method
The bisection method repeatedly halves the bracketing interval: given [xl, xu] with f(xl) f(xu) < 0, compute the midpoint xr = (xl + xu)/2 and keep whichever half-interval still contains the sign change. Each iteration halves the interval, so the error bound shrinks by a factor of two per step.
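As a minimal illustration, here is a Python sketch of the bisection loop; the sample function (e⁻ˣ − x) and the tolerance are assumptions for demonstration, not from the slides.

```python
import math

def bisection(f, xl, xu, es=1e-6, max_iter=100):
    """Bisection: repeatedly halve [xl, xu], keeping the sign change."""
    if f(xl) * f(xu) > 0:
        raise ValueError("f(xl) and f(xu) must have opposite signs")
    xr = xl
    for _ in range(max_iter):
        xr_old = xr
        xr = (xl + xu) / 2                # midpoint estimate
        if xr != 0 and abs((xr - xr_old) / xr) < es:
            break                          # approximate relative error small enough
        if f(xl) * f(xr) < 0:              # root lies in the lower half
            xu = xr
        else:                              # root lies in the upper half
            xl = xr
    return xr

# Assumed example: root of e^-x - x = 0 (about 0.567)
print(bisection(lambda x: math.exp(-x) - x, 0.0, 1.0))
```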
Roots of Equations: Bracketing Methods • Termination Criteria and Error Estimates (Approximation)
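From Chapra & Canale, the standard termination test compares the approximate relative error with a prespecified tolerance $\varepsilon_s$:

$\varepsilon_a = \left| \frac{x_r^{\text{new}} - x_r^{\text{old}}}{x_r^{\text{new}}} \right| \times 100\%$

and the computation stops when $\varepsilon_a < \varepsilon_s$.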
Roots of Equations: Bracketing Methods • False Position Method
Rather than bisecting the interval, the false-position (regula falsi) method joins f(xl) and f(xu) with a straight line; the intersection of this line with the x axis gives the improved root estimate

$x_r = x_u - \frac{f(x_u)(x_l - x_u)}{f(x_l) - f(x_u)}$

As with bisection, the sub-interval that retains the sign change is kept for the next iteration.
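A minimal Python sketch of this update rule; the sample function and tolerance are assumptions:

```python
import math

def false_position(f, xl, xu, es=1e-6, max_iter=100):
    """False position: root of the chord through (xl, f(xl)) and (xu, f(xu))."""
    xr = xl
    for _ in range(max_iter):
        xr_old = xr
        # intersection of the chord with the x axis
        xr = xu - f(xu) * (xl - xu) / (f(xl) - f(xu))
        if xr != 0 and abs((xr - xr_old) / xr) < es:
            break
        if f(xl) * f(xr) < 0:
            xu = xr
        else:
            xl = xr
    return xr

print(false_position(lambda x: math.exp(-x) - x, 0.0, 1.0))
```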
Roots of Equations: Bracketing Methods
Comparison of the relative errors of the bisection and the false-position methods: for this example, the false-position method is more efficient.
Pitfalls of the False-Position Method
Although false position often converges faster than bisection, it can perform poorly. For functions with significant curvature (e.g., f(x) = x¹⁰ − 1), one end of the bracket stagnates while the other creeps slowly toward the root, so convergence is slow and the one-sided error estimate can be misleading.
Roots of Equations: Open Methods • One Point Iteration
Open methods require only a single starting value (or two values that need not bracket the root). In simple one-point iteration (fixed-point iteration), f(x) = 0 is rearranged into the form x = g(x), and the iteration xi+1 = g(xi) is repeated until the estimates converge. The scheme converges when |g′(x)| < 1 in the region of interest; otherwise the iterates diverge.
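A Python sketch, using the classic rearrangement of e⁻ˣ − x = 0 as x = e⁻ˣ (the function choice is an assumption for illustration):

```python
import math

def fixed_point(g, x0, es=1e-6, max_iter=100):
    """One-point (fixed-point) iteration: x_{i+1} = g(x_i)."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if x_new != 0 and abs((x_new - x) / x_new) < es:
            return x_new
        x = x_new
    return x

# x = e^-x converges here since |g'(x)| = e^-x < 1 for x > 0
print(fixed_point(lambda x: math.exp(-x), 0.5))
```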
Roots of Equations: Newton-Raphson Method
The Newton-Raphson method extends the tangent to the curve at the current estimate (xi, f(xi)) down to the x axis; the intersection gives the improved estimate

$x_{i+1} = x_i - \frac{f(x_i)}{f'(x_i)}$

When it converges, the method converges quadratically: the error at each iteration is roughly proportional to the square of the previous error.
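A Python sketch of the update; the example function and its derivative are assumptions:

```python
import math

def newton_raphson(f, df, x0, es=1e-6, max_iter=50):
    """Newton-Raphson: follow the tangent line down to the x axis."""
    x = x0
    for _ in range(max_iter):
        x_new = x - f(x) / df(x)          # tangent-line update
        if x_new != 0 and abs((x_new - x) / x_new) < es:
            return x_new
        x = x_new
    return x

# Root of e^-x - x = 0 with the analytic derivative
print(newton_raphson(lambda x: math.exp(-x) - x,
                     lambda x: -math.exp(-x) - 1, 0.0))
```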
Pitfalls of the Newton-Raphson Method
The method can fail or converge slowly in several situations: an inflection point near the root can send the iterates progressively away from it; a near-zero slope f′(xi) makes the update shoot off to a distant value (division by a small number); the iterates can oscillate around a local minimum or maximum; and there is no general convergence guarantee, since the outcome depends on the starting guess and the nature of the function.
Root Finding: Secant Method
• The secant method is not a bracketing method. It starts from two initial points and draws a straight line through these points.
[Figure: secant method, f(x) plotted for 0 ≤ x ≤ 4; the straight line passes through the two current points, labeled "previous" and "oldest"]
Secant Method
• A new x value is chosen by finding where this straight line crosses the x-axis:

$x_{k+1} = x_k - \frac{f(x_k)\,(x_k - x_{k-1})}{f(x_k) - f(x_{k-1})}$

[Figure: the secant line through $(x_{k-1}, f(x_{k-1}))$ and $(x_k, f(x_k))$ intersects the x axis at $x_{k+1}$]
Secant Method
• This new value replaces the oldest x value, i.e. $x_{k+1}$ replaces $x_{k-1}$, which was being used in the calculation.
[Figure: same plot; after the update, $x_{k-1}$ is discarded and the next secant line is drawn through $x_k$ and $x_{k+1}$]
Secant Method: Example
[Figure: secant iterations on f(x) for 0 ≤ x ≤ 4]

x0         x1         f(x0)       f(x1)       x2         f(x2)
0          4          -7          1           3.5        -0.875
4          3.5        1           -0.875      3.733333   -0.263407
3.5        3.733333   -0.875      -0.263407   3.833828   0.136445
3.733333   3.833828   -0.263407   0.136445    3.799535   -0.009913
3.833828   3.799535   0.136445    -0.009913   3.801858   -0.000329
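The tabulated values are consistent with f(x) = x³ − 7x² + 14x − 7 (this function is inferred from the table, not stated in the extracted slides). A Python sketch that reproduces the rows:

```python
def f(x):
    return x**3 - 7*x**2 + 14*x - 7   # inferred from the tabulated values

x_old, x = 0.0, 4.0                    # the two starting points
for _ in range(5):
    # secant update: intersection of the chord with the x axis
    x_new = x - f(x) * (x - x_old) / (f(x) - f(x_old))
    print(f"{x_old:10.6f} {x:10.6f} {f(x_old):10.6f} {f(x):10.6f} "
          f"{x_new:10.6f} {f(x_new):10.6f}")
    x_old, x = x, x_new                # the new value replaces the oldest
```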
Secant Method: Failure
• If the function is very "flat", the secant method can fail. For example:

$f(x) = \frac{1}{2(x-1)^2}$

[Figure: f(x) plotted for 0 ≤ x ≤ 4 with the first and second secant iterations marked at $x_{k-1}$, $x_k$, $x_{k+1}$; the nearly flat chord sends the next estimate far from the region of interest]
Secant Method: Failure
• The numerical values associated with the "failure" example are:

x0         x1         f(x0)       f(x1)       x2          f(x2)
0          4          0.5         -0.46       2.083333    -0.394814
4          2.083333   -0.46       -0.394814   -9.525344   -1639.735
2.083333   -9.525344  -0.394814   -0.486241   52.21333    123986

• The successive estimates jump far outside the starting interval and the iteration diverges.
Numerical Differentiation and Integration
Differentiation and integration are closely related: differentiation describes the rate of change of a dependent variable with respect to an independent variable, while integration is its inverse, the summation of f(x) dx over an interval, interpreted physically as the area under the curve y = f(x) between the limits of integration.
Non-computer Methods for Differentiation and Integration
Before computers, derivatives and integrals were evaluated by graphical techniques, such as estimating slopes from plotted curves and measuring areas with grids or a planimeter, and by tabulated analytical results. Such approaches are tedious and of limited accuracy, which motivates the numerical methods that follow.
Newton-Cotes Integration Formulas
The Newton-Cotes formulas are the most common numerical integration schemes. They replace a complicated function, or tabulated data, with an approximating polynomial that is easy to integrate:

$I = \int_a^b f(x)\,dx \cong \int_a^b f_n(x)\,dx$

where $f_n(x)$ is an nth-order interpolating polynomial.
Trapezoidal Rule
The trapezoidal rule is the first of the Newton-Cotes formulas: it corresponds to approximating f(x) by a first-order (straight-line) polynomial, so the integral is estimated as the area of the trapezoid under that line:

$I \cong (b-a)\,\frac{f(a) + f(b)}{2}$
Error of the Trapezoidal Rule
A single application of the trapezoidal rule has the truncation error

$E_t = -\frac{1}{12} f''(\xi)\,(b-a)^3$

where ξ lies somewhere in [a, b]. The rule is therefore exact for linear functions, and its error grows with the curvature of f.
Single Application of the Trapezoidal Rule
The Multiple-Application Trapezoidal Rule
Accuracy improves by dividing [a, b] into n segments of width h = (b − a)/n and applying the rule to each segment:

$I \cong \frac{h}{2}\left[f(x_0) + 2\sum_{i=1}^{n-1} f(x_i) + f(x_n)\right]$
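A minimal Python sketch of the composite rule; the test integrand is an assumption:

```python
def trapezoid(f, a, b, n):
    """Multiple-application trapezoidal rule with n equal segments."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += 2 * f(a + i * h)     # interior points are weighted by 2
    return h * total / 2

# Example: integral of x^2 over [0, 1] is exactly 1/3
print(trapezoid(lambda x: x**2, 0.0, 1.0, 100))   # ~0.333350
```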
Simpson’s Rules
A more accurate estimate results from using higher-order polynomials to connect the points: Simpson’s 1/3 rule fits a second-order polynomial through three points, and Simpson’s 3/8 rule fits a third-order polynomial through four points.
Single Application of Simpson’s 1/3 Rule

$I \cong (b-a)\,\frac{f(x_0) + 4f(x_1) + f(x_2)}{6}$

where $x_1 = (a+b)/2$ is the midpoint. The truncation error is $E_t = -\frac{(b-a)^5}{2880} f^{(4)}(\xi)$, so the rule is exact for polynomials up to third order.
The Multiple-Application Simpson’s 1/3 Rule
With an even number n of equal segments and h = (b − a)/n:

$I \cong \frac{h}{3}\left[f(x_0) + 4\sum_{i=1,3,5}^{n-1} f(x_i) + 2\sum_{j=2,4,6}^{n-2} f(x_j) + f(x_n)\right]$
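A Python sketch of the composite 1/3 rule (n must be even; the test integrand is an assumption):

```python
def simpson13(f, a, b, n):
    """Multiple-application Simpson's 1/3 rule; n must be even."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        weight = 4 if i % 2 == 1 else 2    # odd points x4, even points x2
        total += weight * f(a + i * h)
    return h * total / 3

# Example: integral of x^4 over [0, 1] is exactly 0.2
print(simpson13(lambda x: x**4, 0.0, 1.0, 10))
```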
• The previous example illustrates that the multiple-application version of Simpson’s 1/3 rule yields very accurate results. For this reason, it is considered superior to the trapezoidal rule for most applications. However, as mentioned previously, it is limited to cases where the values are equally spaced. Further, it is limited to situations where there are an even number of segments and an odd number of points.
• Consequently, an odd-segment, even-point formula known as Simpson’s 3/8 rule is used in conjunction with the 1/3 rule to permit evaluation of both even and odd numbers of segments.
Simpson’s 3/8 Rule
Fitting a third-order polynomial to four equally spaced points yields

$I \cong (b-a)\,\frac{f(x_0) + 3f(x_1) + 3f(x_2) + f(x_3)}{8}$

Because the 3/8 rule spans three segments, an odd total number of segments can be handled by applying the 3/8 rule to three of them and the 1/3 rule to the remaining even number of segments.
Integration with Unequal Segments
Experimental data are often unequally spaced. In that case the trapezoidal rule is applied to each segment individually and the results are summed:

$I \cong \sum_{i=1}^{n} h_i\,\frac{f(x_{i-1}) + f(x_i)}{2}, \qquad h_i = x_i - x_{i-1}$
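A sketch for tabulated, unequally spaced data; the sample arrays are made up for illustration:

```python
def trapezoid_unequal(x, y):
    """Trapezoidal rule for unequally spaced data points (x[i], y[i])."""
    total = 0.0
    for i in range(1, len(x)):
        h = x[i] - x[i - 1]                 # width of this segment
        total += h * (y[i - 1] + y[i]) / 2  # trapezoid area for the segment
    return total

# Example with assumed data
x = [0.0, 0.12, 0.22, 0.40, 0.60, 1.0]
y = [0.2, 1.31, 1.94, 2.84, 3.75, 0.23]
print(trapezoid_unequal(x, y))
```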
Integration of Equations
• We know that the functions to be integrated numerically will typically be of two forms: a table of values or a function.
• The form of the data has an important influence on the approaches that can be used to evaluate the integral.
– For tabulated information, you are limited by the number of points that are given.
– In contrast, if the function is available, you can generate as many values of f(x) as are required to attain acceptable accuracy.
Integration of Equations
• We will discuss techniques that are expressly designed to analyze cases where the function is given. These techniques capitalize on the ability to generate function values to develop efficient schemes for numerical integration.
• The first is based on Richardson’s extrapolation, which is a method for combining two numerical integral estimates to obtain a third, more accurate value.
• The computational algorithm for implementing Richardson’s extrapolation in a highly efficient manner is called Romberg integration.
Integration of Equations
• The third method is called Gauss quadrature. Recall that, in the last chapter, values of f(x) were determined at specified values of x.
• Gauss-quadrature formulas employ x values that are positioned between a and b in such a manner that a much more accurate integral estimate results.
Romberg Integration
• Romberg integration is one technique that is designed to attain efficient numerical integrals of functions.
• It is based on successive application of the trapezoidal rule. However, through mathematical manipulations, superior results are attained for less effort.
Richardson’s Extrapolation
• Richardson’s extrapolation uses two estimates of an integral to compute a third, more accurate approximation.
• The estimate and error associated with a multiple-application trapezoidal rule can be represented generally as

I = I(h) + E(h)

where I = the exact value of the integral, I(h) = the approximation from an n-segment application of the trapezoidal rule with step size h = (b − a)/n, and E(h) = the truncation error. If we make two separate estimates using step sizes h1 and h2 and have exact values for the error:

I(h1) + E(h1) = I(h2) + E(h2)
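Since the multiple-application trapezoidal rule has an error of order h², the two errors are related by E(h1)/E(h2) ≅ (h1/h2)². Substituting this into the equality above eliminates the unknown errors and gives (Chapra & Canale, Eq. 22.4):

$I \cong I(h_2) + \frac{I(h_2) - I(h_1)}{(h_1/h_2)^2 - 1}$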
Richardson’s Extrapolation
• Finally, for the common case h2 = h1/2, the simplified integration formula is:

$I \cong \frac{4}{3} I(h_2) - \frac{1}{3} I(h_1)$
Romberg Integration Algorithm
Richardson’s extrapolation can be applied recursively. With $I_{j,1}$ denoting trapezoidal estimates (each halving the step size), successively more accurate estimates follow from

$I_{j,k} \cong \frac{4^{k-1} I_{j+1,\,k-1} - I_{j,\,k-1}}{4^{k-1} - 1}$

where k is the level of integration (k = 1 gives the trapezoidal results, k = 2 the O(h⁴) results, and so on).
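A compact Python sketch of the algorithm; the tolerance and test integrand are assumptions:

```python
import math

def romberg(f, a, b, es=1e-8, max_levels=10):
    """Romberg integration: trapezoidal estimates plus Richardson extrapolation."""
    I = [[0.0] * (max_levels + 1) for _ in range(max_levels + 1)]
    n = 1
    I[0][0] = (b - a) * (f(a) + f(b)) / 2       # one-segment trapezoid
    for j in range(1, max_levels + 1):
        n *= 2
        h = (b - a) / n
        # refine the trapezoidal estimate by adding the new midpoints
        new_sum = sum(f(a + i * h) for i in range(1, n, 2))
        I[j][0] = I[j - 1][0] / 2 + h * new_sum
        # Richardson extrapolation across the row
        for k in range(1, j + 1):
            I[j][k] = (4**k * I[j][k - 1] - I[j - 1][k - 1]) / (4**k - 1)
        if abs(I[j][j] - I[j - 1][j - 1]) < es:
            return I[j][j]
    return I[max_levels][max_levels]

print(romberg(math.sin, 0.0, math.pi))   # exact value is 2
```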
Linear Algebraic Equations
Linear Algebraic Equations: Graphical Method
For two equations in two unknowns, each equation can be plotted as a straight line; the intersection of the lines is the solution. The graphical view also illustrates three problem cases: (a) parallel lines, no solution; (b) coincident lines, infinitely many solutions; (c) nearly parallel lines, an ill-conditioned system whose intersection point is difficult to locate precisely.
Determinant of Matrix
For a 2 × 2 system the determinant is D = a11 a22 − a12 a21; for a 3 × 3 system it can be evaluated by expansion of minors along the first row.
• If the determinant is zero, the system has no unique solution: the lines are parallel (no intersection) or coincident (infinitely many solutions).
• If the determinant is close to zero, the system is ill conditioned: the lines intersect at a shallow angle, as in case (c) above, and the intersection point is difficult to pin down.
Cramer’s Rule
Each unknown is the ratio of two determinants: the denominator is D, and the numerator is D with the column of coefficients of that unknown replaced by the right-hand-side constants, for example

$x_1 = \frac{1}{D}\begin{vmatrix} b_1 & a_{12} & a_{13} \\ b_2 & a_{22} & a_{23} \\ b_3 & a_{32} & a_{33} \end{vmatrix}$

The rule is impractical for large systems but convenient for small ones and useful in theory.
Basic Elimination Method • The elimination of unknowns was used to solve a pair of simultaneous equations. The procedure consisted of two steps: 1. The equations were manipulated to eliminate one of the unknowns from the equations. The result of this elimination step was that we had one equation with one unknown. 2. Consequently, this equation could be solved directly and the result back-substituted into one of the original equations to solve for the remaining unknown.
Gauss Elimination Method • This basic approach can be extended to large sets of equations by developing a systematic scheme or algorithm to eliminate unknowns and to back-substitute. Gauss elimination is the most basic of these schemes.
Gauss Elimination Method
The method has two phases:
1. Forward elimination: the equations are manipulated to reduce the coefficient matrix to upper-triangular form, eliminating x1 from equations 2 through n, then x2 from equations 3 through n, and so on.
2. Back substitution: the last equation gives xn directly; that result is substituted into the next-to-last equation, and the process is repeated upward to x1.
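A sketch of naive Gauss elimination in Python (no pivoting, so it assumes nonzero pivots); the sample system is the three-equation example from Chapra & Canale:

```python
def gauss_eliminate(A, b):
    """Naive Gauss elimination: forward elimination + back substitution."""
    n = len(b)
    # forward elimination: zero out the entries below each pivot
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = A[i][k] / A[k][k]        # assumes A[k][k] != 0
            for j in range(k, n):
                A[i][j] -= factor * A[k][j]
            b[i] -= factor * b[k]
    # back substitution
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

A = [[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]]
b = [7.85, -19.3, 71.4]
print(gauss_eliminate(A, b))   # ~[3.0, -2.5, 7.0]
```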
Pitfalls of Gauss Elimination Method
• Division by zero: if a pivot coefficient is zero (or very small), the elimination factor blows up; partial pivoting mitigates this.
• Round-off errors: with many equations, accumulated round-off can corrupt the solution; results should be checked by substituting them back into the original equations.
• Ill-conditioned systems: small changes in the coefficients produce large changes in the solution.
Pitfalls of Gauss Elimination Method
• Singularity: an ill-conditioned system is one in which two or more equations are nearly identical; it is worse when two are identical. In such cases we would lose one degree of freedom and would be dealing with the impossible case of n − 1 equations with n unknowns.
• Such cases might not be obvious, particularly when dealing with large equation sets. Consequently, it would be nice to have some way of automatically detecting singularity, i.e. checking the determinant value (elimination drives it to zero for a singular system).
Gauss Jordan Method
The Gauss-Jordan method is a variation of Gauss elimination: each unknown is eliminated from all equations, not just those below it, and each row is normalized by its pivot. The coefficient matrix is thereby reduced to the identity matrix, so the solution appears directly in the right-hand-side column and no back substitution is required. The price is roughly 50% more operations than Gauss elimination.
LU Decomposition Method
LU decomposition separates the time-consuming elimination of [A] from the manipulation of the right-hand side {b}: the coefficient matrix is factored as [A] = [L][U], where [L] is lower triangular and [U] is upper triangular. Then [A]{x} = {b} is solved in two triangular sweeps:
1. Forward substitution: solve [L]{d} = {b} for {d}.
2. Back substitution: solve [U]{x} = {d} for {x}.
Once [A] has been decomposed, systems with new right-hand-side vectors can be solved without repeating the elimination; this also provides an efficient way to compute the matrix inverse, one column at a time.
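A Python sketch of Doolittle-style LU decomposition with the two triangular solves (no pivoting); the example matrix reuses the system above:

```python
def lu_decompose(A):
    """Doolittle LU: store the elimination factors in L, reduce U in place."""
    n = len(A)
    L = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    U = [row[:] for row in A]
    for k in range(n - 1):
        for i in range(k + 1, n):
            factor = U[i][k] / U[k][k]
            L[i][k] = factor               # elimination factor goes into L
            for j in range(k, n):
                U[i][j] -= factor * U[k][j]
    return L, U

def lu_solve(L, U, b):
    n = len(b)
    d = [0.0] * n
    for i in range(n):                     # forward substitution: L d = b
        d[i] = b[i] - sum(L[i][j] * d[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):         # back substitution: U x = d
        x[i] = (d[i] - sum(U[i][j] * x[j] for j in range(i + 1, n))) / U[i][i]
    return x

A = [[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]]
L, U = lu_decompose(A)
print(lu_solve(L, U, [7.85, -19.3, 71.4]))   # ~[3.0, -2.5, 7.0]
```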
Gauss Seidel Method
Gauss Seidel Method
• The Gauss-Seidel method is the most commonly used iterative method.
• Assume that we are given a set of n equations [A]{x} = {b}. Each unknown is solved from its own equation, and the most recently computed values are used immediately:

$x_i^{\text{new}} = \frac{b_i - \sum_{j<i} a_{ij}\,x_j^{\text{new}} - \sum_{j>i} a_{ij}\,x_j^{\text{old}}}{a_{ii}}$

• All the diagonal elements must be nonzero. Convergence is guaranteed if the system is diagonally dominant, i.e. $|a_{ii}| > \sum_{j \ne i} |a_{ij}|$ for every row.
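A Python sketch; the tolerance and the example system (diagonally dominant) are assumptions:

```python
def gauss_seidel(A, b, es=1e-8, max_iter=200):
    """Gauss-Seidel: sweep the equations, using updated values immediately."""
    n = len(b)
    x = [0.0] * n
    for _ in range(max_iter):
        max_change = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new = (b[i] - s) / A[i][i]
            max_change = max(max_change, abs(x_new - x[i]))
            x[i] = x_new                  # use the new value right away
        if max_change < es:
            break
    return x

A = [[3.0, -0.1, -0.2], [0.1, 7.0, -0.3], [0.3, -0.2, 10.0]]
print(gauss_seidel(A, [7.85, -19.3, 71.4]))   # ~[3.0, -2.5, 7.0]
```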
Taylor Series
Taylor Series • Taylor’s theorem and its associated formula, the Taylor series, is of great value in the study of numerical methods. In essence, the Taylor series provides a means to predict a function value at one point in terms of the function value and its derivatives at another point. In particular, the theorem states that any smooth function can be approximated as a polynomial.
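The series itself, in the form used by Chapra & Canale with step size h = xi+1 − xi:

$f(x_{i+1}) = f(x_i) + f'(x_i)\,h + \frac{f''(x_i)}{2!}h^2 + \cdots + \frac{f^{(n)}(x_i)}{n!}h^n + R_n, \qquad R_n = \frac{f^{(n+1)}(\xi)}{(n+1)!}h^{n+1}$

for some ξ between $x_i$ and $x_{i+1}$. Truncating the series after a few terms gives the polynomial approximations on which many numerical methods are built.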
Interpolation
Newton’s divided-difference interpolating polynomial
The general form for n + 1 data points is

$f_n(x) = b_0 + b_1(x - x_0) + \cdots + b_n(x - x_0)(x - x_1)\cdots(x - x_{n-1})$

where the coefficients $b_i$ are finite divided differences, e.g. $b_1 = f[x_1, x_0] = \frac{f(x_1) - f(x_0)}{x_1 - x_0}$.
Linear Interpolation
Connecting two data points with a straight line gives the first-order formula

$f_1(x) = f(x_0) + \frac{f(x_1) - f(x_0)}{x_1 - x_0}\,(x - x_0)$

In general, the smaller the interval between the data points, the better the approximation.
Quadratic Interpolation • The error in the previous example resulted from our approximating a curve with a straight line. Consequently, a strategy for improving the estimate is to introduce some curvature into the line connecting the points. If three data points are available, this can be accomplished with a second-order polynomial (also called a quadratic polynomial or a parabola):

$f_2(x) = b_0 + b_1(x - x_0) + b_2(x - x_0)(x - x_1)$
Lagrange interpolating polynomial
A reformulation of the Newton polynomial that avoids the divided-difference table:

$f_n(x) = \sum_{i=0}^{n} L_i(x)\,f(x_i), \qquad L_i(x) = \prod_{\substack{j=0 \\ j \ne i}}^{n} \frac{x - x_j}{x_i - x_j}$
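A direct Python transcription of the formula; the sample points (estimating ln 2 from three known values) are an assumed illustration:

```python
import math

def lagrange(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial at x."""
    total = 0.0
    n = len(xs)
    for i in range(n):
        term = ys[i]
        for j in range(n):
            if j != i:
                term *= (x - xs[j]) / (xs[i] - xs[j])  # basis polynomial L_i
        total += term
    return total

xs = [1.0, 4.0, 6.0]
ys = [math.log(v) for v in xs]
print(lagrange(xs, ys, 2.0), math.log(2.0))   # estimate vs. exact
```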
Inverse interpolation
Here the roles are reversed: given a value of f(x), find the corresponding x. A useful approach is to fit an interpolating polynomial to the original (x, f(x)) data and then locate the root of $f_n(x) - \bar{f} = 0$ with a root-finding method; simply swapping the columns and interpolating usually works poorly because the inverted data are far from evenly spaced.
Spline Interpolation
Rather than fitting one high-order polynomial through all the points, spline interpolation fits lower-order polynomials to subsets of the points and joins them smoothly; this avoids the oscillations that a single high-order polynomial can exhibit, especially for data with abrupt local changes.
Linear Splines
The simplest spline joins successive points with straight lines; it is equivalent to repeated linear interpolation, but the result is not smooth: the first derivative jumps at the interior knots.
Cubic Splines
Cubic splines fit a third-order polynomial to each interval between knots and require that the function values and the first and second derivatives be continuous at each interior knot; for natural cubic splines, the second derivative is set to zero at the two end points.
ODE
• An ordinary differential equation specifies the rate of change of a quantity, dy/dx = f(x, y). Solving it numerically means marching forward from an initial condition one step at a time: new value = old value + slope × step size.
Runge-Kutta Methods
One-step methods of the general form yi+1 = yi + φh, which differ only in how the slope estimate φ is computed, are collectively called Runge-Kutta methods.
Euler Method
The simplest choice uses the slope at the beginning of the interval across the whole step:

$y_{i+1} = y_i + f(x_i, y_i)\,h$
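A Python sketch; the test ODE dy/dx = −2y with y(0) = 1 is an assumption (exact solution e⁻²ˣ):

```python
import math

def euler(f, x0, y0, h, n_steps):
    """Euler's method: y_{i+1} = y_i + f(x_i, y_i) * h."""
    x, y = x0, y0
    for _ in range(n_steps):
        y += f(x, y) * h      # step along the slope at the interval start
        x += h
    return y

# dy/dx = -2y, y(0) = 1; exact y(1) = e^-2 ~ 0.1353
print(euler(lambda x, y: -2 * y, 0.0, 1.0, 0.01, 100))
```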
ODE (continued)
Euler Method: Error Reduction
Euler's method is first-order accurate: the global error is O(h), so halving the step size roughly halves the error, at the cost of doubling the number of function evaluations. Higher-order Runge-Kutta methods reduce the error far more efficiently.
Second-Order Runge-Kutta Methods

$y_{i+1} = y_i + (a_1 k_1 + a_2 k_2)\,h$
$k_1 = f(x_i, y_i), \qquad k_2 = f(x_i + p_1 h,\; y_i + q_{11} k_1 h)$

Different choices of the constants give Heun's method (a2 = 1/2), the midpoint method (a2 = 1), and Ralston's method (a2 = 2/3).
(Classical) Fourth-Order Runge-Kutta Methods

$y_{i+1} = y_i + \frac{1}{6}(k_1 + 2k_2 + 2k_3 + k_4)\,h$

where
$k_1 = f(x_i, y_i)$
$k_2 = f(x_i + \tfrac{h}{2},\; y_i + \tfrac{k_1 h}{2})$
$k_3 = f(x_i + \tfrac{h}{2},\; y_i + \tfrac{k_2 h}{2})$
$k_4 = f(x_i + h,\; y_i + k_3 h)$
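A Python sketch of the classical RK4 stepper, using the same assumed test ODE as before:

```python
import math

def rk4(f, x0, y0, h, n_steps):
    """Classical fourth-order Runge-Kutta method."""
    x, y = x0, y0
    for _ in range(n_steps):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + k1 * h / 2)
        k3 = f(x + h / 2, y + k2 * h / 2)
        k4 = f(x + h, y + k3 * h)
        y += (k1 + 2 * k2 + 2 * k3 + k4) * h / 6   # weighted-average slope
        x += h
    return y

# dy/dx = -2y, y(0) = 1; compare against the exact value e^-2
print(rk4(lambda x, y: -2 * y, 0.0, 1.0, 0.1, 10), math.exp(-2))
```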