IMPLICIT RUNGE-KUTTA ALGORITHM USING NEWTON-RAPHSON METHOD

Andrés L. Granados
Universidad Simón Bolívar, Dpto. Mecánica. Apdo. 89000, Caracas 1080A, Venezuela.
e-mail: [email protected]

Key words: Runge-Kutta, Newton-Raphson, Implicit, Algorithm.

Abstract. An algorithm for solving ordinary differential equations has been developed using implicit Runge-Kutta methods, which may be partially or fully implicit. The system of algebraic equations generated by the Runge-Kutta method in each integration step is solved with the help of the Newton-Raphson method. The unknowns in this case are the derivatives kr (r = 1, 2, . . . , N) at the intermediate points of a Runge-Kutta method of N stages. An expression for the gradient in the Newton-Raphson method is deduced with the application of the chain rule and is then approximated by a linear estimation. The criterion to evaluate the iterative process in the Newton-Raphson method is stated in order to determine the step size that assures convergence. The starting values of the variables kr for the iterative process are computed using an explicit Runge-Kutta method with the same intermediate points as the implicit method. This substantially reduces the cost of computation for each step, particularly in those steps where the implicit method is somewhat redundant and there are few iterations. Finally, the comparison of the implicit and the explicit Runge-Kutta methods in the previous step permits estimating the size of the next step before the corresponding integration is made.


1 INTRODUCTION

The principal aim of this work is the development of an algorithm for solving ordinary differential equations based on known implicit Runge-Kutta methods, but where the selection of the methods and the application of iterative procedures have been carefully studied in order to produce the least number of numerical calculations and the highest order of accuracy with a reasonable stability. As is well known, implicit Runge-Kutta methods are more stable than explicit ones. However, the solution of the system of non-linear equations in the auxiliary variables "k", produced by the implicit dependence, generates an iterative process that can be extremely slow and can impose a restriction on the stepsize. This restriction can be overcome by the selection of the Newton-Raphson method for solving the algebraic equations generated by the implicit dependence.

2 IMPLICIT RUNGE-KUTTA METHODS

2.1 Differential Equations

As is well known, every system of ordinary differential equations of any order may be transformed, with a convenient change of variables, into a system of first order ordinary differential equations1,2. This is the reason why only this last type of differential equations will be studied. Let the following system of M first order ordinary differential equations be expressed as dy^i/dx = f^i(x, y) with i = 1, 2, 3, . . . , M, y being an M-dimensional function with each component depending on x. This may be symbolically expressed as

dy/dx = f(x, y)    where    y = y(x) = (y^1(x), y^2(x), y^3(x), . . . , y^M(x))    (1)
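As a brief illustration of this change of variables (an example added here, not taken from the source), the second-order equation y'' = -y can be rewritten as a first-order system of the form (1):

```python
import numpy as np

# The change of variables y1 = y, y2 = y' turns y'' = -y into the
# first-order system dy1/dx = y2, dy2/dx = -y1.
def f(x, y):
    """Right-hand side f(x, y) of the equivalent first-order system."""
    return np.array([y[1], -y[0]])

# One evaluation at x = 0 with y(0) = (1, 0); the exact solution is cos(x).
print(f(0.0, np.array([1.0, 0.0])))   # -> [ 0. -1.]
```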

When each function f^i(x, y) depends only on the corresponding y^i the system is said to be uncoupled, otherwise it is said to be coupled. If the system of ordinary differential equations is uncoupled then every differential equation can be solved separately. When the system does not explicitly depend on x the system is said to be autonomous. The conditions of the solution y(x) may be known at a unique specific point, for example, y^i(xo) = yo^i at x = xo, or symbolically

x = xo    y(xo) = yo    (2)

Both expressions (1) and (2) are said to state an Initial Value Problem; otherwise they state a Boundary Value Problem. Indeed, the system of differential equations (1) is a particular case of a general autonomous system stated in the next form2,3

dy/dx = f(y)  ≡  { dy^i/dx = 1        if i = 1
                 { dy^i/dx = f^i(y)   if i = 2, 3, . . . , M+1    (3)

but with an additional condition yo^1 = xo in (2).
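A minimal sketch of the augmentation (3), assuming a small helper `autonomize` (a hypothetical name introduced here) that prepends the trivial equation dy^1/dx = 1:

```python
import numpy as np

# Sketch of the augmentation (3): prepend y^1 = x so the system no
# longer depends explicitly on x.
def autonomize(f):
    """Wrap f(x, y) into an autonomous g(z) with z = (x, y)."""
    def g(z):
        return np.concatenate(([1.0], f(z[0], z[1:])))
    return g

f = lambda x, y: np.array([x + y[0]])      # dy/dx = x + y
g = autonomize(f)
print(g(np.array([2.0, 3.0])))             # -> [1. 5.]
```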

2.2 Runge-Kutta Methods

Trying to make a general formulation, a Runge-Kutta method of order P and equipped with N stages is defined3 with the expression

y^i_{n+1} = y^i_n + h (c_r k^i_r)    (4.a)

where the auxiliary M-dimensional variables k_r are calculated by

k^i_r = f^i[ x_n + b_r h , y_n + h (a_rs k_s) ]    (4.b)

for i = 1, 2, 3, . . . , M and r, s = 1, 2, 3, . . . , N. Notice that the summation convention for indices has been used: every time an index appears twice or more in a term, it should be summed over its complete range. A Runge-Kutta method (4) has order P if, for a sufficiently smooth problem (1)-(2), it satisfies

‖y(x_n + h) − y_{n+1}‖ ≤ Φ(ζ) h^{P+1} = O(h^{P+1})    ζ ∈ [x_n, x_n + h]    (5)

i.e., the Taylor series for the exact solution y(x_n + h) and for the numerical solution y_{n+1} coincide up to (and including) the term with h^P 7. The Runge-Kutta method thus defined can be applied for solving initial value problems, and it is used recurrently. That is, given a point (x_n, y_n), the next point (x_{n+1}, y_{n+1}) can be obtained using the expressions (4), with x_{n+1} = x_n + h, where h is named the stepsize of the method. Every time this is done, the method goes forward (or backward if h is negative) an integration step h in x, offering the solution in consecutive points, one for each jump. In this way, if the method begins with the initial conditions (x_0, y_0) stated by (2), it can calculate (x_1, y_1), (x_2, y_2), (x_3, y_3), . . . , (x_n, y_n), and continue this way, up to the desired boundary in x. In each integration or jump the method restarts with the information from the immediately preceding point. This characteristic classifies the Runge-Kutta methods within the group of one-step methods. It should be noticed, however, that the auxiliary variables k^i_r are calculated for each r up to N stages in each step. These calculations are no more than evaluations of the functions f^i(x, y) at intermediate points x_n + b_r h in the interval [x_n, x_{n+1}] (0 ≤ b_r ≤ 1). The evaluation of each M-dimensional auxiliary variable k_r represents one stage of the method. Let a condensed representation of the generalized Runge-Kutta method now be introduced, formerly developed by Butcher4,5 and presented systematically in the book of Lapidus and Seinfeld6 and in the books of Hairer, Nørsett and Wanner7,8. These last books have a numerous collection of Runge-Kutta methods using Butcher's notation and an extensive bibliography. After the paper of Butcher4 it became customary to symbolize the general Runge-Kutta method (4) by a tableau.
In order to illustrate this representation, consider the expressions (4) applied to a method of four stages (N = 4). Accommodating the coefficients a_rs, b_r and c_r in an adequate form, they may be schematically represented as

b_1 | a_11  a_12  a_13  a_14
b_2 | a_21  a_22  a_23  a_24
b_3 | a_31  a_32  a_33  a_34
b_4 | a_41  a_42  a_43  a_44
----+-----------------------
    | c_1   c_2   c_3   c_4          (6)
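For an explicit tableau, the recurrence (4) can be sketched directly from the coefficient arrays (a minimal illustration added here, not the paper's algorithm; the classical fourth-order coefficients serve as a check):

```python
import numpy as np

def explicit_rk_step(f, x, y, h, a, b, c):
    """One step of an explicit Runge-Kutta method given its tableau (6).

    a, b, c hold the coefficients a_rs, b_r, c_r; a must be strictly
    lower triangular for the stage loop below to be valid.
    """
    N = len(b)
    k = np.zeros((N, len(y)))
    for r in range(N):
        k[r] = f(x + b[r] * h, y + h * a[r, :r] @ k[:r])   # stages (4.b)
    return y + h * c @ k                                   # expression (4.a)

# Classical fourth-order method as a check on dy/dx = y, y(0) = 1.
a = np.array([[0, 0, 0, 0], [0.5, 0, 0, 0],
              [0, 0.5, 0, 0], [0, 0, 1, 0]], dtype=float)
b = np.array([0.0, 0.5, 0.5, 1.0])
c = np.array([1, 2, 2, 1]) / 6.0
y1 = explicit_rk_step(lambda x, y: y, 0.0, np.array([1.0]), 0.1, a, b, c)
print(y1)    # close to e**0.1 ≈ 1.10517
```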

The aforementioned representation allows for the basic distinction of the following types of Runge-Kutta methods, according to the characteristics of the matrix a_rs: If a_rs = 0 for s ≥ r, then the matrix a_rs is strictly lower triangular (excluding the principal diagonal) and the method is said to be completely explicit. If a_rs = 0 for s > r, then the matrix a_rs is lower triangular, including the principal diagonal, and the method is said to be semi-implicit or singly-diagonally implicit. If the matrix a_rs is block diagonal, the method is said to be diagonally implicit. If the first row of the matrix a_rs is filled with zeros, a_1s = 0, and the method is diagonally implicit, then the method is called a Lagrange method9 (the coefficients b_r may be arbitrary). If a Lagrange method has b_N = 1 and the last row of the matrix is the array c_s = a_Ns, then the method is said to be stiffly accurate. If, conversely, none of the previous conditions are satisfied, the method is said to be simply implicit. In the cases of implicit methods, it can be noticed that an auxiliary variable k_r may depend on itself and on other variables not calculated before in the same stage. That is why the methods are named implicit in these cases. Additionally, the condensed representation described above permits verifying very easily certain properties that the coefficients a_rs, b_r, and c_r should fulfill. For example, 0 ≤ b_r ≤ 1, Σ_{s=1}^{N} a_rs = b_r and Σ_{r=1}^{N} c_r = 1. These properties may be interpreted in the following manner: The first property expresses that the Runge-Kutta is a one-step method, and the functions f^i(x, y(x)) in (4.b) should be evaluated for x ∈ [x_n, x_{n+1}]. The second property results from applying the Runge-Kutta method (4) to the system of differential equations (3), where k^1_s = 1 ∀s = 1, 2, 3, . . . , N, and thus the sum of a_rs in each row r gives the value of b_r.
The third property means that in expression (4.a) the value of y^i_{n+1} is obtained from the value of y^i_n and the projection (with h) of an average of the derivatives dy^i/dx = f^i(x, y). The values of c_r are the weighting coefficients of this average. The coefficients a_rs, b_r and c_r are determined by applying the aforementioned properties and using some additional relations that are summarized in the following theorem:

Theorem (Butcher4). Let the next conditions be defined as

B(P):  Σ_{i=1}^{N} c_i b_i^{q−1} = 1/q          q = 1, 2, . . . , P

C(η):  Σ_{j=1}^{N} a_ij b_j^{q−1} = b_i^q / q    i = 1, 2, . . . , N    q = 1, 2, . . . , η    (7)

D(ξ):  Σ_{i=1}^{N} c_i b_i^{q−1} a_ij = (c_j / q)(1 − b_j^q)    j = 1, 2, . . . , N    q = 1, 2, . . . , ξ

If the coefficients b_i, c_i, a_ij of a Runge-Kutta method satisfy the conditions B(P), C(η), D(ξ) with P ≤ η + ξ + 1 and P ≤ 2η + 2, then the method is of order P. This theorem is demonstrated basically with the expansion in Taylor series of k^i_r of (4.b). The same variables in implicit dependence are also expanded in series up to the corresponding order term. The recurrent substitution of these in (4.a) and the consequent comparison with the Taylor series of y(x_{n+1}) ≅ y_{n+1} around y(x_n) = y_n results in the following relations equivalent to (7)

h:    c_r δ_r = 1

h^2:  c_r b_r = 1/2

h^3:  c_r b_r^2 = 1/3              c_r a_rs b_s = 1/6

h^4:  c_r b_r^3 = 1/4              c_r b_r a_rs b_s = 1/8
      c_r a_rs b_s^2 = 1/12        c_r a_rs a_st b_t = 1/24

h^5:  c_r b_r^4 = 1/5              c_r b_r^2 a_rs b_s = 1/10
      c_r a_rs b_s a_rt b_t = 1/20 c_r b_r a_rs b_s^2 = 1/15
      c_r a_rs b_s^3 = 1/20        c_r b_r a_rs a_st b_t = 1/30
      c_r a_rs b_s a_st b_t = 1/40 c_r a_rs a_st b_t^2 = 1/60
      c_r a_rs a_st a_tu b_u = 1/120                              (8)
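The relations (8) through the h^4 terms can be verified numerically for the classical fourth-order method (a sketch added for illustration; the index contractions are spelled out with numpy):

```python
import numpy as np

# Check of relations (8) up through the h^4 terms for the classical
# fourth-order Runge-Kutta method; repeated indices are contracted
# explicitly (matrix products and einsum).
a = np.array([[0, 0, 0, 0], [0.5, 0, 0, 0],
              [0, 0.5, 0, 0], [0, 0, 1, 0]], dtype=float)
b = np.array([0.0, 0.5, 0.5, 1.0])
c = np.array([1, 2, 2, 1]) / 6.0

checks = {
    "c_r = 1":                 (c.sum(),                            1.0),
    "c_r b_r = 1/2":           (c @ b,                              1 / 2),
    "c_r b_r^2 = 1/3":         (c @ b**2,                           1 / 3),
    "c_r a_rs b_s = 1/6":      (c @ a @ b,                          1 / 6),
    "c_r b_r^3 = 1/4":         (c @ b**3,                           1 / 4),
    "c_r b_r a_rs b_s = 1/8":  (np.einsum('r,r,rs,s->', c, b, a, b), 1 / 8),
    "c_r a_rs b_s^2 = 1/12":   (c @ a @ b**2,                       1 / 12),
    "c_r a_rs a_st b_t = 1/24": (c @ a @ a @ b,                     1 / 24),
}
for name, (lhs, rhs) in checks.items():
    assert abs(lhs - rhs) < 1e-12, name
print("all order-4 conditions of (8) satisfied")
```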

These relations are valid for Runge-Kutta methods, whether implicit or explicit, from first order methods (e.g. the Euler method) to fifth order methods (e.g. the Fehlberg method10). In all the cases the indexes r, s, t and u vary from 1 to the number of stages N, and the summation convention is applied (δ_r = 1 ∀r). Gear3 has presented a similar deduction for (8), but only for explicit methods. In Hairer et al.7 relations similar to (8) appear, but only for explicit methods and only up to the terms of order h^4. Hairer et al. also deduce a theorem that expresses the equivalence of the implicit Runge-Kutta methods and the orthogonal collocation methods7,8. Ralston in 1965 (see for example Ralston and Rabinowitz11) made a similar analysis to obtain the relations of the coefficients, but for an explicit Runge-Kutta method of fourth order and four stages, and found the following family of solutions of relations (8)

b_1 = 0    b_4 = 1    a_rs = 0  (s ≥ r)    (9.a-c)

a_21 = b_2    a_31 = b_3 − a_32    a_32 = b_3 (b_3 − b_2) / [2 b_2 (1 − 2 b_2)]    a_41 = 1 − a_42 − a_43    (9.d-g)

a_42 = (1 − b_2)[b_2 + b_3 − 1 − (2 b_3 − 1)^2] / {2 b_2 (b_3 − b_2)[6 b_2 b_3 − 4 (b_2 + b_3) + 3]}

a_43 = (1 − 2 b_2)(1 − b_2)(1 − b_3) / {b_3 (b_3 − b_2)[6 b_2 b_3 − 4 (b_2 + b_3) + 3]}    (9.h,i)

c_1 = 1/2 + [1 − 2 (b_2 + b_3)] / (12 b_2 b_3)        c_2 = (2 b_3 − 1) / [12 b_2 (b_3 − b_2)(1 − b_2)]    (9.j,k)

c_3 = (1 − 2 b_2) / [12 b_3 (b_3 − b_2)(1 − b_3)]    c_4 = 1/2 + [2 (b_2 + b_3) − 3] / [12 (1 − b_2)(1 − b_3)]    (9.l,m)

Notice that the choice b_2 = 1/2 and b_3 = 1/2 (a degenerate case of (9), since the factors b_3 − b_2 and 1 − 2 b_2 in the denominators vanish) recovers the well known classical Runge-Kutta method of fourth order and four stages.
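The family (9) can be evaluated exactly with rational arithmetic; the sketch below (an added illustration) uses the distinct nodes b_2 = 1/3, b_3 = 2/3, for which the denominators of (9.h-m) do not vanish, and reproduces Kutta's well-known 3/8-rule:

```python
from fractions import Fraction as F

# Family (9) evaluated at b2 = 1/3, b3 = 2/3 (nodes chosen so that
# b3 - b2 and 1 - 2 b2 are nonzero); this yields the 3/8-rule.
b2, b3 = F(1, 3), F(2, 3)

a21 = b2
a32 = b3 * (b3 - b2) / (2 * b2 * (1 - 2 * b2))
a31 = b3 - a32
D = 6 * b2 * b3 - 4 * (b2 + b3) + 3
a42 = (1 - b2) * (b2 + b3 - 1 - (2 * b3 - 1) ** 2) / (2 * b2 * (b3 - b2) * D)
a43 = (1 - 2 * b2) * (1 - b2) * (1 - b3) / (b3 * (b3 - b2) * D)
a41 = 1 - a42 - a43
c1 = F(1, 2) + (1 - 2 * (b2 + b3)) / (12 * b2 * b3)
c2 = (2 * b3 - 1) / (12 * b2 * (b3 - b2) * (1 - b2))
c3 = (1 - 2 * b2) / (12 * b3 * (b3 - b2) * (1 - b3))
c4 = F(1, 2) + (2 * (b2 + b3) - 3) / (12 * (1 - b2) * (1 - b3))

print([str(v) for v in [a21, a31, a32, a41, a42, a43]])
# -> ['1/3', '-1/3', '1', '1', '-1', '1']
print([str(v) for v in [c1, c2, c3, c4]])
# -> ['1/8', '3/8', '3/8', '1/8']
```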


3 THE ALGORITHM

3.1 Newton-Raphson Method

For solving the initial value problem (3), the implicit Runge-Kutta method

y^i_{n+1} = y^i_n + h c_r k^i_r        k^i_r = f^i(y_n + h a_rs k_s)    (10)

can be used. The expression (10.b) should be interpreted as a system of non-linear equations with the auxiliary variables k^i_r as unknowns. For this reason, it is convenient to define a function

g^i_r(k) = f^i(y_n + h a_rs k_s) − k^i_r = 0    (11)

that should be zero in each component when the solution for each k^i_r has been found. In order to solve the system of non-linear equations (11), it is more efficient to use the Newton-Raphson method rather than the fixed point method suggested by (10.b),

k^i_{r,(m+1)} = f^i(y_n + h a_rs k_{s,(m)})    (12)

which is easier to use but has a worse convergence. However, for using the Newton-Raphson method, the jacobian tensor of the function g^i_r(k), with respect to the variables k^j_t, has to be defined as

∂g^i_r/∂k^j_t = (∂f^i/∂y^k)|_{y_n + h a_rs k_s} h a_rs δ^k_j δ_st − δ^i_j δ_rt = h a_rt f^i_j(y_n + h a_rs k_s) − δ^i_j δ_rt    (13)

where the superscript indicates the number of the equation within the system of differential equations and the subscript indicates the number of the corresponding stage. The notation f^i_j is used instead of ∂f^i/∂y^j, and r does not sum. Thus the Newton-Raphson method can be applied with the following algorithm

k^i_{r,(m+1)} = k^i_{r,(m)} + ω ∆k^i_{r,(m)}    (14.a)

where the variables ∆k^j_{t,(m)} are found from the next system of linear equations

[∂g^i_r/∂k^j_t]_{(m)} ∆k^j_{t,(m)} = −g^i_r(k_{(m)})    (14.b)

and ω is a relaxation factor. The iterative process (14) is applied in a successive form until

‖c_r ε_{r,(m)}‖ = ω ‖c_r ∆k_{r,(m)}‖ < ε_max    where    ε^i_{r,(m)} = k^i_{r,(m+1)} − k^i_{r,(m)}    (15)

The value ε_{r,(m)} is the local error in the auxiliary variables k^i_r and ε_max is the tolerance for the local truncation error in the numerical solution y^i_{n+1}.
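A compact sketch of the iteration (14) with the stopping criterion (15), written here for an autonomous right-hand side as in (10) (the function and argument names are illustrative additions, not from the paper):

```python
import numpy as np

def implicit_rk_step(f, fy, y, h, a, b, c, omega=1.0, tol=1e-12, itmax=50):
    """One implicit Runge-Kutta step: solve (11) for the stages k with
    the Newton-Raphson iteration (14), using the jacobian (13).

    f(y)  -- autonomous right-hand side, as in (10)
    fy(y) -- jacobian matrix f^i_j = df^i/dy^j
    b     -- nodes, kept for tableau completeness (unused here)
    """
    M, N = len(y), len(b)
    k = np.tile(f(y), (N, 1))                 # crude starting values k_(0)
    for _ in range(itmax):
        Y = y + h * a @ k                     # stage arguments y_n + h a_rs k_s
        g = np.array([f(Y[r]) for r in range(N)]) - k          # (11)
        J = np.zeros((N * M, N * M))          # jacobian tensor (13), flattened
        for r in range(N):
            for t in range(N):
                J[r*M:(r+1)*M, t*M:(t+1)*M] = h * a[r, t] * fy(Y[r])
        J -= np.eye(N * M)
        dk = np.linalg.solve(J, -g.ravel()).reshape(N, M)      # (14.b)
        k += omega * dk                                        # (14.a)
        if omega * np.linalg.norm(c @ dk, np.inf) < tol:       # criterion (15)
            break
    return y + h * c @ k                                       # (10.a)

# Trial on dy/dx = -y with the one-stage implicit midpoint rule (a
# hypothetical test tableau, not one of the methods of the paper):
a1, b1, c1 = np.array([[0.5]]), np.array([0.5]), np.array([1.0])
y1 = implicit_rk_step(lambda y: -y, lambda y: np.array([[-1.0]]),
                      np.array([1.0]), 0.1, a1, b1, c1)
print(y1)    # (1 - h/2)/(1 + h/2) = 0.9047619..., close to exp(-0.1)
```

For a linear problem the iteration converges in a single Newton correction, so the loop exits almost immediately; the relaxation factor ω is kept at 1 here.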

For very complicated functions, it is convenient to express the partial derivatives of the jacobian tensor (13) in a linear numerical backward approximated form, as indicated in the following expression

∂g^i_r/∂k^j_t ≈ h a_rt [ f^i(y_n + ∆y_n) − f^i(y_n^(j)) ] / ∆y^j_n − δ^i_j δ_rt    (16)

where

∆y^j_n = h a_rs k^j_s    and    f^i(y_n^(j)) = f^i(y^1_n + ∆y^1_n, y^2_n + ∆y^2_n, . . . , y^j_n, . . . , y^M_n + ∆y^M_n)    (17)

j The initial values kr,(0) may be estimated by using a explicit Runge-Kutta method whose intermediate points br are the same of the implicit Runge-Kutta method.

3.2 An Example

Let the Runge-Kutta method based on Lobatto quadrature12 of sixth order (P = 6) with four stages (N = 4) be considered (for more details see Lapidus and Seinfeld6 or Hairer et al.7). The coefficients of this method expressed in the Butcher notation are

0           | 0             0              0              0
(5−√5)/10   | (5+√5)/60     1/6            (15−7√5)/60    0
(5+√5)/10   | (5−√5)/60     (15+7√5)/60    1/6            0
1           | 1/6           (5−√5)/12      (5+√5)/12      0
------------+-----------------------------------------------
            | 1/12          5/12           5/12           1/12      (18)

The explicit fourth order (P = 4) Runge-Kutta method with the same intermediate points, used to estimate the initial values, may be easily obtained from (9). The coefficients of this new explicit Runge-Kutta method in Butcher's notation can be expressed as13

0           | 0              0             0             0
(5−√5)/10   | (5−√5)/10      0             0             0
(5+√5)/10   | −(5+3√5)/20    (3+√5)/4      0             0
1           | 1/6            (5−√5)/12     (5+√5)/12     0
------------+----------------------------------------------
            | 1/12           5/12          5/12          1/12      (19)

The mentioned example (18) is implicit only in the second and the third stages; therefore, for this special case the iterative process is simpler than for a fully implicit Runge-Kutta method.
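As a consistency check on the coefficients of (18) and (19) (a sketch added here), the row sums of both matrices must equal the nodes b_r, and the common weights must satisfy the quadrature conditions B(6) of (7):

```python
import numpy as np

s5 = np.sqrt(5.0)
b = np.array([0.0, (5 - s5) / 10, (5 + s5) / 10, 1.0])   # nodes of (18), (19)
c = np.array([1, 5, 5, 1]) / 12.0                        # common weights

# Implicit sixth-order Lobatto tableau (18)
A_imp = np.array([
    [0.0,            0.0,             0.0,             0.0],
    [(5 + s5) / 60,  1 / 6,           (15 - 7*s5)/60,  0.0],
    [(5 - s5) / 60,  (15 + 7*s5)/60,  1 / 6,           0.0],
    [1 / 6,          (5 - s5) / 12,   (5 + s5) / 12,   0.0]])

# Explicit fourth-order predictor tableau (19)
A_exp = np.array([
    [0.0,              0.0,            0.0,            0.0],
    [(5 - s5) / 10,    0.0,            0.0,            0.0],
    [-(5 + 3*s5)/20,   (3 + s5) / 4,   0.0,            0.0],
    [1 / 6,            (5 - s5) / 12,  (5 + s5) / 12,  0.0]])

# Row sums equal b_r, and the quadrature conditions B(6) of (7) hold
# because the Lobatto nodes integrate polynomials up to degree 5 exactly.
assert np.allclose(A_imp.sum(axis=1), b)
assert np.allclose(A_exp.sum(axis=1), b)
for q in range(1, 7):
    assert abs(c @ b**(q - 1) - 1.0 / q) < 1e-13
print("tableaux (18) and (19) are consistent")
```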


3.3 Iterative Process

The problem stated by (11) may be written as

g(k) = F(k) − k = 0        F(k) = ( f(y_n + h a_1s k_s) , . . . , f(y_n + h a_rs k_s) , . . . , f(y_n + h a_Ns k_s) )    (20)

The function F and the variable k have dimension (M + 1) × N. The algorithm of the Newton-Raphson method (14) is then expressed as

k_{m+1} = k_m − ω [J_g(k_m)]^{−1} . g(k_m) = k_m − ω [J_F(k_m) − I]^{−1} . (F(k_m) − k_m)    (21)

or, what is the same,

k_{m+1} = k_m + ω ∆k_m        [J_g(k_m)] . ∆k_m = −g(k_m)        [J_F(k_m) − I] . ∆k_m = −(F(k_m) − k_m)    (22)

where [J_F(k)] is the jacobian of the function F(k), and the jacobian [J_g(k)] = [J_F(k)] − [I] may be calculated by (13). It is well known that, in the proximity of the solution, the Newton-Raphson method has a quadratic convergence when

‖J_h(k)‖_{k ∈ IB_ρ(k*)} < 1    with    h(k) = k − ω [J_g(k)]^{−1} . g(k)

[J_h(k)] = ω [J_g(k)]^{−1} . [IH_g(k)] . [J_g(k)]^{−1} . g(k) + (1 − ω) [ I ]    (23)

where the jacobians are [J_g(k)] = [J_F(k)] − [ I ], the hessians are equal, [IH_g(k)] = [IH_F(k)], and IB_ρ(k*) is a ball of radius ρ < ρ* centered at k*, the solution of (20). The norm used is the infinity norm ‖·‖∞ [2]. When the condition (23.a) is appropriately applied to the Runge-Kutta method, it imposes a restriction on the value of the stepsize h. This restriction should neither be confused with the restriction imposed by the stability criteria, nor with the control of the step size. For the analysis of these last aspects in the case of the proposed example, refer to [13].

REFERENCES

[1] Gerald, C. F. Applied Numerical Analysis, 2nd Edition. Addison-Wesley, New York, (1970).
[2] Burden, R. L.; Faires, J. D. Numerical Analysis, 3rd Edition. PWS, Boston, (1985).
[3] Gear, C. W. Numerical Initial Value Problems in Ordinary Differential Equations. Prentice-Hall, Englewood Cliffs, New Jersey, (1971).
[4] Butcher, J. C. "On Runge-Kutta Processes of High Order", J. Austral. Math. Soc., Vol. IV, Part 2, pp. 179-194, (1964).
[5] Butcher, J. C. The Numerical Analysis of Ordinary Differential Equations, Runge-Kutta and General Linear Methods. John Wiley, New York, (1987).
[6] Lapidus, L.; Seinfeld, J. H. Numerical Solution of Ordinary Differential Equations. Academic Press, New York, (1971).
[7] Hairer, E.; Nørsett, S. P.; Wanner, G. Solving Ordinary Differential Equations I: Nonstiff Problems. Springer-Verlag, Berlin, (1987).
[8] Hairer, E.; Wanner, G. Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems. Springer-Verlag, Berlin, (1991).
[9] van der Houwen, P. J.; Sommeijer, B. P. "Iterated Runge-Kutta Methods on Parallel Computers", SIAM J. Sci. Stat. Comput., Vol. 12, No. 5, pp. 1000-1028, (1991).
[10] Fehlberg, E. "Low-Order Classical Runge-Kutta Formulas with Stepsize Control", NASA Report No. TR R-315, (1971).
[11] Ralston, A.; Rabinowitz, P. A First Course in Numerical Analysis, 2nd Edition. McGraw-Hill, New York, (1978).
[12] Lobatto, R. Lessen over Differentiaal- en Integraal-Rekening. 2 Vol., La Haye, (1851-52).
[13] Granados M., A. L. "Lobatto Implicit Sixth Order Runge-Kutta Method for Solving Ordinary Differential Equations With Stepsize Control", Mecánica Computacional, Vol. XVI, compilado por G. Etse y B. Luccioni (AMCA, Asociación Argentina de Mecánica Computacional), pp. 349-359, (1996).

