NUCLEAR SCIENCE AND ENGINEERING: 167, 141–153 (2011)

Newton's Method for Solving k-Eigenvalue Problems in Neutron Diffusion Theory

Daniel F. Gill*
The Pennsylvania State University, University Park, Pennsylvania 16802

and

Yousry Y. Azmy
North Carolina State University, Department of Nuclear Engineering, Raleigh, North Carolina 27695

Received December 7, 2009
Accepted August 10, 2010

Abstract – We present an approach to the k-eigenvalue problem in multigroup diffusion theory based on a nonlinear treatment of the generalized eigenvalue problem. A nonlinear function is posed whose roots are equal to solutions of the k-eigenvalue problem; a Newton-Krylov method is used to find these roots. The Jacobian-vector product is found exactly or by using the Jacobian-free Newton-Krylov (JFNK) approximation. Several preconditioners for the Krylov iteration are developed. These preconditioners are based on simple approximations to the Jacobian, with one special instance being the use of power iteration as a preconditioner. Using power iteration as a preconditioner allows for the Newton-Krylov approach to heavily leverage existing power method implementations in production codes. When applied as a left preconditioner, any existing power iteration can be used to form the kernel of a JFNK solution to the k-eigenvalue problem. Numerical results generated for a suite of two-dimensional reactor benchmarks show the feasibility and computational benefits of the Newton formulation as well as examine some of the numerical difficulties potentially encountered with Newton-Krylov methods. The performance of the method is also seen to be relatively insensitive to the dominance ratio for a one-dimensional slab problem.

I. INTRODUCTION

Calculation of the fundamental eigenmode in criticality problems has traditionally utilized the classical power iteration method, which, although robust, converges slowly for dominance ratios 1 near 1. In practical situations the dominance ratio is often near unity, resulting in slow convergence and a need for acceleration techniques to improve the convergence of the power iterations. Two common approaches in diffusion theory are Chebyshev iteration 2 and Wielandt shift.3 Alternative approaches to power iteration have been researched in an attempt to improve upon the performance of accelerated power iteration methods. Subspace iterations 4 and Krylov subspace methods, such as the implicitly restarted Arnoldi method (IRAM), have been successfully applied to transport 5 and diffusion criticality problems.6 These methods are also capable of finding multiple eigenmodes, not only the fundamental mode. Recently, the use of Jacobian-free Newton-Krylov (JFNK) methods has been investigated in conjunction with IRAM for use in boiling water reactor (BWR) modal analysis.7 Newton-Krylov methods have also been examined as a replacement for power iteration in diffusion theory,8,9 and JFNK methods in particular have been considered as alternatives to traditional approaches.8–10 Newton-based methods have also been considered for the k-eigenvalue problem in transport theory.9,11,12 While Newton's method has long been utilized in solving eigenvalue problems,13,14 the above-referenced works are the first time it has been applied to neutronics criticality problems (without thermal feedback).

In the current work, JFNK methods similar to those in Ref. 11 are explored as an alternative to power iteration for seeking the fundamental mode. The k-eigenvalue problem is posed as a generalized eigenvalue problem, and a Newton-Krylov method is used to solve the nonlinear system with several different preconditioners. The special instance of preconditioning with power iteration allows the method to heavily utilize existing codes. In fact, it will be seen that if the problem is left-preconditioned with power iteration, then the JFNK method can be easily wrapped around an existing implementation, treating it as a black box and effectively acting as a nonlinear acceleration technique.

First, in Sec. II we briefly present the details of the k-eigenvalue problem in multigroup diffusion theory and pose the problem in a simple operator form. Next, Newton's method is discussed, particularly Newton-Krylov methods and the JFNK approximation. Newton's method is then applied to the diffusion theory k-eigenvalue problem in the form of a generalized eigenvalue problem. Preconditioners are developed to reduce the computational cost of the Krylov method applied at each linearized Newton step, with power iteration given special attention. Numerical results are presented that examine the impact of a number of algorithmic parameters on the efficiency of the Newton methods, and finally, the Newton methods are compared to traditional power iteration (and Chebyshev-accelerated power iteration) for a number of two-dimensional (2-D) reactor benchmark models.

*E-mail: [email protected]

II. DIFFUSION THEORY k-EIGENVALUE PROBLEMS

In our discussion of diffusion theory, we adopt the notation of Ref. 15 such that the multigroup neutron diffusion equations are given by

    (−∇·D_g∇ + Σ_Rg)φ_g − Σ_{g'≠g} Σ_{s,g'g} φ_g' = (χ_g / k) Σ_{g'=1}^{G} νΣ_{f,g'} φ_g' ,   g = 1, ..., G ,   (1)

where for energy group g,

    D_g = diffusion coefficient
    Σ_Rg = Σ_t − Σ_{s,gg} = macroscopic removal cross section
    φ_g = scalar flux
    Σ_{s,g'g} = macroscopic scattering cross section from group g' to group g
    k = multiplication factor
    χ_g = fission spectrum
    ν = average number of neutrons emitted per fission event
    Σ_fg = macroscopic fission cross section induced by neutrons in group g.

The total number of energy groups is denoted G. Spatial discretization of the problem will result in a system of linear equations that can be solved numerically. Without any loss of generality, in our numerical results we specifically consider a finite-difference discretization in Cartesian geometry onto a mesh of n cells. We define the matrix A as

    A = [  A_1      −Σ_s21    ...   −Σ_sG1 ]
        [ −Σ_s12     A_2      ...   −Σ_sG2 ]
        [   ⋮          ⋮       ⋱       ⋮   ]
        [ −Σ_s1G    −Σ_s2G    ...     A_G  ] ,   (2)

where A_g is a square matrix of dimension n representing the discretized form of (−∇·D_g∇ + Σ_Rg). For a finite-difference discretization, A_g is a banded matrix: 3, 5, or 7 bands for one-, two-, and three-dimensional geometries. The scattering matrices Σ_sg'g are diagonal matrices that map scattering cross sections to spatial cells. We also define

    B = [ χ_1 νΣ_f1    χ_1 νΣ_f2    ...   χ_1 νΣ_fG ]
        [ χ_2 νΣ_f1    χ_2 νΣ_f2    ...   χ_2 νΣ_fG ]
        [    ⋮             ⋮         ⋱        ⋮     ]
        [ χ_G νΣ_f1    χ_G νΣ_f2    ...   χ_G νΣ_fG ] ,   (3)

where χ_g is a scalar and νΣ_fg' is a diagonal matrix that maps cross sections to spatial cells. It is never necessary to construct these matrices directly in practice. For example, the matrix-vector product Bφ is equal to

    Bφ = [χ_1 f^T   χ_2 f^T   ...   χ_G f^T]^T ,   (4)

where f = Σ_{g'=1}^{G} νΣ_{f,g'} φ_g' and the vector φ is given by

    φ = [φ_1^T   φ_2^T   ...   φ_G^T]^T .   (5)

Using Eqs. (2), (3), and (5), the discretized form of Eq. (1) is equivalently written in operator form as

    Aφ = (1/k) Bφ .   (6)
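As a concrete illustration of the block structure in Eqs. (2), (3), and (4), the following sketch assembles A and B for a toy two-group, one-dimensional problem. All data here (block sizes, cross-section values, the `diffusion_block` helper) are synthetic stand-ins, not values from the benchmark problems:

```python
import numpy as np

n, G = 5, 2                      # spatial cells, energy groups
np.random.seed(0)

# Within-group diffusion blocks A_g: tridiagonal (1-D finite difference)
def diffusion_block(diag, off):
    return (np.diag(diag * np.ones(n))
            + np.diag(off * np.ones(n - 1), 1)
            + np.diag(off * np.ones(n - 1), -1))

A_blocks = [diffusion_block(2.0, -0.5), diffusion_block(1.5, -0.4)]

# Diagonal scattering (group 1 -> group 2 only, no upscatter) and fission data
S_12 = np.diag(0.3 * np.ones(n))                       # Sigma_{s,1->2}
nSf  = [np.diag(0.4 * np.ones(n)), np.diag(0.1 * np.ones(n))]
chi  = [1.0, 0.0]                                      # all fission births in group 1

# Eq. (2): A_g on the diagonal, -Sigma_{s,g'->g} off the diagonal
A = np.block([[A_blocks[0],     np.zeros((n, n))],
              [-S_12,           A_blocks[1]]])

# Eq. (3): block (g, g') of B is chi_g * nu*Sigma_{f,g'}
B = np.block([[chi[g] * nSf[gp] for gp in range(G)] for g in range(G)])

# Eq. (4): B*phi = [chi_1 f^T, ..., chi_G f^T]^T with f = sum_g' nSf_g' phi_g'
phi = np.random.rand(G * n)
f = sum(nSf[gp] @ phi[gp * n:(gp + 1) * n] for gp in range(G))
Bphi_blockwise = np.concatenate([chi[g] * f for g in range(G)])
assert np.allclose(B @ phi, Bphi_blockwise)
```

The final assertion checks the point made above: Bφ can be formed group by group from the single fission-source vector f, so B never needs to be built explicitly.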

The traditional approach to the solution of this problem is transforming the generalized eigenvalue problem to a standard eigenvalue problem and using power iteration:

    φ^(ℓ+1) = (1/k^ℓ) A^{−1} B φ^ℓ   (7a)

and

    k^(ℓ+1) = k^ℓ (φ^(ℓ+1), φ^(ℓ+1)) / (φ^ℓ, φ^(ℓ+1)) ,   (7b)

where (·,·) is an inner product. In the traditional power iteration, A^{−1} is effected via a block Gauss-Seidel iteration where the block dimension corresponds to the number of energy groups. Each within-group problem requires the solution of a linear system, namely the action of A_g^{−1}. In the absence of upscattering, the block Gauss-Seidel iteration simplifies to block-forward substitution. Thus, three possible levels of iteration exist: the power iteration shown in Eqs. (7), the iterations over energy associated with A^{−1} if upscattering is present, and the iterations necessary to effect A_g^{−1}. In our numerical results no problems with upscattering are considered, so the block Gauss-Seidel iteration simplifies to block-forward substitution and the within-group diffusion operator, A_g, is inverted using incomplete Cholesky preconditioned conjugate gradient (CG).
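A minimal sketch of the power iteration of Eqs. (7a) and (7b), using a small synthetic A and B in place of the discretized diffusion and fission operators; `np.linalg.solve` stands in for the inner Gauss-Seidel/CG machinery that effects A^{−1}:

```python
import numpy as np

# Toy generalized eigenproblem A*phi = (1/k) B*phi (synthetic stand-in data)
N = 10
A = (np.diag(2.0 * np.ones(N))
     + np.diag(-0.5 * np.ones(N - 1), 1)
     + np.diag(-0.5 * np.ones(N - 1), -1))   # SPD "diffusion" operator
B = 0.4 * np.eye(N)                          # "fission" operator

phi = np.ones(N) / np.sqrt(N)
k = 1.0
for _ in range(200):
    # Eq. (7a): phi_{l+1} = (1/k_l) A^{-1} B phi_l
    phi_new = np.linalg.solve(A, B @ phi) / k
    # Eq. (7b): k_{l+1} = k_l (phi_{l+1}, phi_{l+1}) / (phi_l, phi_{l+1})
    k = k * (phi_new @ phi_new) / (phi @ phi_new)
    phi = phi_new / np.linalg.norm(phi_new)  # renormalize for stability

# k should converge to the largest eigenvalue of A^{-1} B
k_ref = np.max(np.linalg.eigvals(np.linalg.solve(A, B)).real)
assert abs(k - k_ref) < 1e-8
```

The renormalization of φ each sweep is a standard implementation detail and does not affect the fixed point of Eqs. (7).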

III. NEWTON’S METHOD APPLIED TO DIFFUSION THEORY k-EIGENVALUE CALCULATIONS

It is possible to construct a nonlinear function whose roots are solutions to the k-eigenvalue problem, where the nonlinearity arises from the treatment of both φ and k as unknowns. We can write this nonlinear function in terms of the generalized eigenvalue problem as

    F(u) = [ Aφ − λBφ ]            [ φ ]
           [ κ(φ, λ)  ] ,    u =   [ λ ] .   (8)

Here we have replaced the multiplication factor k with its reciprocal λ since this leads to future simplifications. Any u* that is a root of Eq. (8), i.e., F(u*) = 0, is necessarily an eigenpair since this implies that Eq. (6) is satisfied. This also implies that κ(φ*, λ*) = 0. The choice of κ(φ, λ) is not unique. For instance, in the majority of cases in this work we use

    κ(φ, λ) = −(1/2) φ^T φ + 1/2 ,   (9)

which leads to ‖φ‖₂ = 1 upon convergence. It is also possible to replace Eq. (9) with any other normalization condition desired; note that κ in Eq. (9) introduces additional nonlinearity into the system. The choice of κ can be thought of as a constraint placed on the solution of the nonlinear system. A constraint relation will be shown in the discussion on preconditioners that is not related to a normalization at all. In reality any κ can be used so long as κ(φ, λ) = 0 when φ and λ are an eigenpair.

The evaluation of Eq. (8) requires some operations that are not necessarily performed when solving the k-eigenvalue problem using the power method. For instance, the Aφ product is not commonly calculated since in the power method we work with A^{−1}. Consider evaluating the first block-row of F(u),

    F_1(u) = A_1 φ_1 − Σ_{g'=2}^{G} Σ_{s,g'1} φ_g' − λχ_1 Σ_{g'=1}^{G} νΣ_{f,g'} φ_g'
           = A_1 φ_1 − s_{g'>g} − λχ_1 f ,   (10)

where the nonlinear function is partitioned such that F(u) = [F_1(u)^T ... F_G(u)^T F_λ(u)]^T. The term s_{g'>g} represents the upscattering source in group 1. For a general group g,

    F_g(u) = A_g φ_g − Σ_{g'=g+1}^{G} Σ_{s,g'g} φ_g' − Σ_{g'=1}^{g−1} Σ_{s,g'g} φ_g' − λχ_g f
           = A_g φ_g − (s_{g'>g} + s_{g'<g} + λχ_g f) .   (11)
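The block evaluation in Eqs. (8) through (11) can be sketched as follows; the operators are synthetic stand-ins, and the check merely confirms that an exact eigenpair with ‖φ‖₂ = 1 is a root of F:

```python
import numpy as np

# Synthetic stand-ins for the discretized operators (not production data)
np.random.seed(2)
N = 12
A = (np.diag(2.0 + np.random.rand(N))
     + np.diag(-0.3 * np.ones(N - 1), 1)
     + np.diag(-0.3 * np.ones(N - 1), -1))
B = np.diag(0.5 + 0.1 * np.random.rand(N))

def F(u):
    """Eq. (8) with the normalization constraint of Eq. (9)."""
    phi, lam = u[:-1], u[-1]
    r_phi = A @ phi - lam * (B @ phi)   # block rows F_1..F_G, cf. Eq. (11)
    r_lam = -0.5 * phi @ phi + 0.5      # kappa(phi, lambda), Eq. (9)
    return np.append(r_phi, r_lam)

# An exact eigenpair of A phi = (1/k) B phi, normalized to ||phi||_2 = 1,
# must be a root of F
lams, vecs = np.linalg.eig(np.linalg.solve(A, B))
i = np.argmax(lams.real)
k_eff = lams[i].real                  # multiplication factor k
phi = vecs[:, i].real
phi /= np.linalg.norm(phi)            # enforce the Eq. (9) constraint
u_star = np.append(phi, 1.0 / k_eff)  # lambda = 1/k
assert np.linalg.norm(F(u_star)) < 1e-10
```

Note that evaluating `F` involves only matrix-vector and inner products, in line with the cost argument made above; no system is solved.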

We can see in Eq. (11) that we are calculating quantities very similar to those found during a power iteration, namely the upscattering, downscattering, and fission sources for a given group. The only difference now is that rather than inverting A_g on this source, we are finding the product A_g φ_g. We note that when evaluating F(u), since there is no A^{−1} there is no coupling of the energy groups, so each F_g(u) could be found simultaneously. Also, the evaluation of F_g(u) requires only matrix-vector products, and evaluating F_λ(u) simply requires an inner product. This makes each evaluation of F(u) relatively inexpensive, particularly if a sparse storage structure is used for A_g with an appropriately coded matrix-vector multiply.

We can then use the classical Newton's method to find a root of Eq. (8). Newton's method is defined by the iterative sequence

    J(u_m) δu_m = −F(u_m)   and   u_{m+1} = u_m + δu_m ,   (12)

where J(u_m) is the Jacobian, F'(u_m). We are particularly interested in the set of Newton's methods referred to as Newton-Krylov methods, in which the linear system in Eq. (12) is solved iteratively using a Krylov subspace method. By using a Krylov method it is only necessary that we know J(u_m)·v, the Jacobian-vector product; the full Jacobian does not need to be explicitly known. In a Newton-Krylov method we generally seek a δu_m that satisfies

    ‖F(u_m) + J(u_m) δu_m‖ ≤ η_m ‖F(u_m)‖ ,   (13)

where the choice of η_m determines how tightly we converge the solution, δu_m, with the Krylov solver. The choice of η_m also impacts the convergence rate 16 and computational expense of a given Newton-Krylov implementation. In the presentation of numerical results, several means of choosing η_m will be employed and compared.

The Jacobian is relatively simple to write out for the nonlinear function in Eq. (8) with κ given by Eq. (9):

    J(u_m) = [ A − λ_m B    −Bφ_m ]
             [ −φ_m^T          0  ] .   (14)

If we consider some vector v, which is written as v = [v_1^T v_2^T ... v_G^T v_λ]^T or even more compactly as v = [v_φ^T v_λ]^T, then the Jacobian-vector product is written

    J(u_m)·v = [ (A − λ_m B) v_φ − v_λ Bφ_m ]
               [ −φ_m^T v_φ                 ] .   (15)

Using a Newton-Krylov method to find a root of Eq. (8), the Krylov solver, likely GMRES since the Jacobian is generally not symmetric, only requires the ability to compute the quantity in Eq. (15). Thus, a function could be written to calculate this Jacobian-vector product rather inexpensively since J(u_m)·v only contains matrix-vector and inner products; there are no matrix inverses. Furthermore, it is still not necessary to construct A and B, and a substantial amount of the operations involved in Eq. (15) are common to the traditional power iteration. This can also be written by block as in Eq. (11), revealing a similar combination of source terms and matrix-vector multiplies; this is further elaborated in Ref. 9.

Another possible manner in which to find the Jacobian-vector product is through the use of the JFNK approximation.17 The JFNK approximation is a finite-difference approximation to the Jacobian-vector product given by

    J(u)·v = [F(u + εv) − F(u)] / ε + O(ε) ,   (16)

where ε is the finite-difference perturbation parameter, generally chosen to be O(√ε_mach), where ε_mach is machine precision. Various choices of ε will be considered in Sec. IV. The JFNK approximation has the advantage of requiring no knowledge of the Jacobian form and can be easily implemented for any nonlinear function.

Newton's method can now be used to solve the k-eigenvalue problem using two distinct approaches to the Jacobian-vector product: (a) forming the true Jacobian-vector product for use with GMRES, Eq. (15), abbreviated NK, or (b) using the JFNK approximation, Eq. (16), referred to as the JFNK approach. If we consider preconditioning the Krylov iterations, then a Newton step is given by

    M_L^{−1} J(u_m) M_R^{−1} M_R δu_m = −M_L^{−1} F(u_m) ,   (17)

where M_L and M_R denote a left and a right preconditioner, respectively. Preconditioning is often necessary to improve the efficiency of Krylov iterative methods, and the problem can be nonpreconditioned, left preconditioned, right preconditioned, or split preconditioned (both left and right). We will consider only right preconditioning, with one exception. Though we have seen that F(u) is very inexpensive to evaluate, we cannot be sure that it results in a well-conditioned J(u), necessitating the investigation of an effective preconditioner. In the following section we will examine several simple preconditioning options.
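The two Jacobian-vector treatments, the exact product of Eq. (15) and the JFNK difference quotient of Eq. (16), can be compared directly on a toy problem (synthetic operators; ε = √ε_mach as suggested above):

```python
import numpy as np

# Synthetic operators; F(u) as in Eq. (8) with the Eq. (9) constraint
np.random.seed(3)
N = 8
A = (np.diag(2.0 * np.ones(N))
     + np.diag(-0.4 * np.ones(N - 1), 1)
     + np.diag(-0.4 * np.ones(N - 1), -1))
B = np.diag(0.6 * np.ones(N))

def F(u):
    phi, lam = u[:-1], u[-1]
    return np.append(A @ phi - lam * (B @ phi), -0.5 * phi @ phi + 0.5)

def Jv_exact(u, v):
    """Eq. (15): [(A - lam B) v_phi - v_lam B phi ; -phi^T v_phi]."""
    phi, lam = u[:-1], u[-1]
    v_phi, v_lam = v[:-1], v[-1]
    top = (A - lam * B) @ v_phi - v_lam * (B @ phi)
    return np.append(top, -phi @ v_phi)

def Jv_jfnk(u, v, eps=None):
    """Eq. (16): finite-difference (Jacobian-free) approximation."""
    if eps is None:
        eps = np.sqrt(np.finfo(float).eps)   # O(sqrt(eps_mach))
    return (F(u + eps * v) - F(u)) / eps

u = np.append(np.random.rand(N), 0.8)
v = np.random.rand(N + 1)
err = np.linalg.norm(Jv_jfnk(u, v) - Jv_exact(u, v))
assert err < 1e-6   # first-order accurate: error is O(eps)
```

Because F here is only quadratic in u, the O(ε) truncation error of the one-sided difference is tiny, which is why the two products agree to several digits.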

III.A. Preconditioners

One preconditioner that will certainly be available for use in an existing code is the diffusion operator A_g, since this is the matrix inverted during the solution of each within-group problem during a traditional power iteration. This can be used as a preconditioner by setting all of the off-diagonal blocks in Eq. (2) to zero blocks. However, since both the eigenvector and eigenvalue are treated as unknowns in the Newton solution, the dimensions of Eq. (2) are not compatible with Eq. (17). To amend this we simply introduce an additional row and column into Eq. (2) with the scalar one on the diagonal, so that the dimensions of M_R are correct:

    M_R = [ A_1   ...    0    0 ]
          [  ⋮     ⋱     ⋮    ⋮ ]
          [  0    ...   A_G   0 ]
          [  0    ...    0    1 ] .   (18)

The solution method formed with this preconditioner is termed either JFNK-Ag or NK-Ag, depending on the treatment of the Jacobian-vector product. It is important to note that with this choice of preconditioner the cost of each preconditioning step is nontrivial, since it will require about as much work as a traditional power iteration because of the necessary inversion of A_g for all groups.

A related preconditioning technique would be to use the incomplete Cholesky factorization,18 IC(0), of A_g rather than A_g itself as described previously. In this case each A_g in Eq. (18) is replaced by its incomplete Cholesky factorization, denoted HH^T. In our numerical implementation of traditional power iteration the within-group problem is solved using the CG algorithm preconditioned with IC(0). So by using the IC(0) factors to precondition the Newton problem we are again recycling calculations that may potentially appear in existing implementations. It is likely that IC(0) is a better preconditioner than A_g because substitution can be used to compute the action of M_R^{−1} in this case, while if we use A_g each application of the preconditioner requires the action of A_g^{−1}, usually computed iteratively. Implementations of Newton's method using this preconditioner are termed JFNK-IC and NK-IC. This idea can also be generalized to other preconditioners of A_g, so that any preconditioning of the within-group problem can be used to construct M_R.

Another option is to include fission terms in the preconditioner in an effort to better approximate Eq. (14). This is done by introducing fission terms into the preconditioner such that

    M_R = [ A − B   0 ]
          [   0     1 ] .   (19)

This increases the cost of the preconditioning step since the preconditioner is now so similar to the actual Jacobian. Additionally, the B operator is seldom constructed explicitly, meaning it is necessary to use another Krylov method (likely GMRES) to find the action of M_R^{−1}. While it is possible to precondition a Krylov method with another Krylov method, it does require that extra care be taken. In this case the GMRES iteration associated with each Newton step is replaced by its flexible variant, FGMRES (Ref. 19). The combination of Newton's method with this preconditioning is referred to as either JFNK-AB or NK-AB.

III.B. Power Iteration as a Preconditioner

If we consider a variation of Eq. (19) where the fission terms are dropped, then

    M_R = [ A   0 ]          M_R^{−1} = [ A^{−1}   0 ]
          [ 0   1 ]   and               [   0      1 ] .   (20)
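Applying the Eq. (20) preconditioner to a Krylov vector then looks like the following sketch; the dense solve is a stand-in for whatever within-group machinery an existing code uses to apply A^{−1} to an arbitrary right side:

```python
import numpy as np

np.random.seed(4)
N = 8
A = (np.diag(2.0 * np.ones(N))
     + np.diag(-0.4 * np.ones(N - 1), 1)
     + np.diag(-0.4 * np.ones(N - 1), -1))

def apply_MR_inv(v):
    """Eq. (20): M_R^{-1} = diag(A^{-1}, 1). The flux part of the Krylov
    vector goes through the existing within-group solve; the eigenvalue
    entry passes through unchanged."""
    v_phi, v_lam = v[:-1], v[-1]
    return np.append(np.linalg.solve(A, v_phi), v_lam)

v = np.random.rand(N + 1)
w = apply_MR_inv(v)
assert np.allclose(A @ w[:-1], v[:-1]) and w[-1] == v[-1]
```

Each call is one power-iteration-like sweep applied to a right side supplied by GMRES rather than to λBφ.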

We can see now that each preconditioning operation is basically the same as multiplying by A^{−1}, which is a standard single power iteration. However, rather than operating on the vector λBφ, the power iteration will operate on a right side determined by the Krylov vector. Still, using this preconditioner allows the machinery of an existing power iteration to be used in conjunction with Newton's method. An existing outer iteration could be used intact so long as it could be supplied with an arbitrary right side. Newton's methods built using this framework are referred to as JFNK-rPI and NK-rPI, where rPI indicates the use of power iteration as a right preconditioner.

We now apply a similar preconditioner on the left and show how this results in a new nonlinear function that can be viewed as an acceleration of an existing power iteration when used as the basis for Newton's method. Choosing

    M_L^{−1} = [ A^{−1}   0 ]
               [   0      1 ]   (21)

and evaluating the right side of Eq. (17), M_L^{−1} F(u), where κ is currently unspecified, results in

    F_acc(u) = [ (I − λA^{−1}B)φ ]
               [ κ(φ, λ)         ] .   (22)

Comparing this to the traditional power iteration in Eq. (7), we see that F_acc(u) can be rewritten as

    F_acc(u) = [ φ^ℓ − φ^(ℓ+1) ]
               [ k^ℓ − k^(ℓ+1) ] ,   (23)

where κ(φ, λ) has been defined in terms of k as the difference between two successive eigenvalue estimates in traditional power iteration. The new nonlinear function F_acc, which is a left-preconditioned form of the original problem, can now be seen to be the residual between two power iterations. The evaluation of F_acc(u) can therefore be done using any existing implementation of a power iteration. Using the JFNK approximation and a GMRES implementation, this acceleration technique can be easily wrapped around an existing diffusion theory code. In fact, the specific manner in which the power iteration is performed, e.g., how A^{−1} is found and how the eigenvalue is updated, is immaterial. This choice of preconditioner is fundamentally different from the others considered, since left preconditioning changes the operators F(u) and J(u). However, we can see that using power iteration as a left preconditioner results in a nonlinear function that is simple to evaluate. Furthermore, since this nonlinear function is already preconditioned, no explicit preconditioning consideration needs to be taken when using GMRES to solve each linearized Newton step. This formulation of the problem is only considered for use with the JFNK approximation, resulting in the so-called JFNK-PI method.
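A sketch of the JFNK-PI idea, with a hypothetical `power_iteration_step` playing the role of the existing black-box power iteration (here u carries k rather than λ, which changes nothing essential):

```python
import numpy as np

# Synthetic stand-ins for the diffusion and fission operators
N = 10
A = (np.diag(2.0 * np.ones(N))
     + np.diag(-0.5 * np.ones(N - 1), 1)
     + np.diag(-0.5 * np.ones(N - 1), -1))
B = 0.4 * np.eye(N)

def power_iteration_step(phi, k):
    """One sweep of an existing 'black box' power iteration, Eqs. (7a)-(7b)."""
    phi_new = np.linalg.solve(A, B @ phi) / k
    k_new = k * (phi_new @ phi_new) / (phi @ phi_new)
    return phi_new, k_new

def F_acc(u):
    """Eq. (23): residual between two successive power iterates."""
    phi, k = u[:-1], u[-1]
    phi_new, k_new = power_iteration_step(phi, k)
    return np.append(phi - phi_new, k - k_new)

# At the converged eigenpair the residual vanishes
lams, vecs = np.linalg.eig(np.linalg.solve(A, B))
i = np.argmax(lams.real)
u_star = np.append(vecs[:, i].real, lams[i].real)
assert np.linalg.norm(F_acc(u_star)) < 1e-10
```

Only `power_iteration_step` touches the operators; a JFNK solver needs nothing but calls to `F_acc`, which is exactly why the existing outer iteration can be treated as a black box.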

IV. NUMERICAL RESULTS

Traditional and Chebyshev-accelerated 20 power iterations were implemented in a 2-D Cartesian geometry, finite-differenced multigroup diffusion code written in Fortran 90/95. Vacuum (Marshak) and reflective boundary conditions are supported. The 11 Newton approaches discussed in the previous section were also implemented: JFNK and NK with no preconditioner (GEP) and with the A, IC, A-B, and rPI preconditioners, as well as the JFNK-PI approach. The matrices A_g are stored in compressed sparse column 19 format and inverted using CG with the IC(0) preconditioner.18 The DLAP (Ref. 21) set of Fortran subroutines was used to precondition and solve the within-group problem when performing traditional power iterations and to apply the IC preconditioner when used with Newton's method. The GMRES implementation in the SPARSKIT library 22 was used to solve the linearized Newton step. The CG method in the SPARSKIT library was used to implement the Ag preconditioning option and the GMRES method to implement the A-B preconditioning option; in the case of the Ag or A-B preconditioners, the linearized Newton step is solved using the FGMRES implementation in SPARSKIT.

Four reactor models were used to form the benchmark suite used to test the newly developed methods. The well-known 2-D form of the International Atomic Energy Agency (IAEA) benchmark 23 is used, as well as the Biblis benchmark,24 both pressurized water reactors (PWRs). A BWR benchmark is used that models a single plane at the initial conditions of a transient three-dimensional (3-D) problem.23 Similarly, a transient 3-D CANDU model 23 is simplified to a plane at the initial condition to form the final benchmark problem. In all models the reflector has been extended into the void to create a rectangular problem domain utilizing quarter-core symmetry.

The error measures used to determine convergence to an eigenpair are the same for the traditional power iterations and the Newton methods. An error tolerance is placed on the eigenvalue, and both global and local measures of the fission source error are used. The error measures are defined by

    ε_k = |k^(ℓ+1) − k^ℓ| ,   (24a)

    ε_G = ‖φ^(ℓ+1) − φ^ℓ‖₂ / (φ^(ℓ+1), φ^ℓ)^{1/2} ,   (24b)

and

    ε_L = max_i |(φ_i^(ℓ+1) − φ_i^ℓ) / φ_i^(ℓ+1)| ,   (24c)

such that convergence is claimed when ε_k < τ_k, ε_G < τ_G, and ε_L < τ_L. The tolerances are specified by τ = {τ_k, τ_G, τ_L}, where τ = {5 × 10⁻⁶, 5 × 10⁻⁵, 5 × 10⁻⁴} unless otherwise stated. The preconditioned CG iterations were terminated when the residual was <10⁻² or when five iterations had been performed, indicating a loosely converged solution to the within-group problem that is intended to reduce the overall computational cost. For the Newton approaches a subspace size of 30 is used with a maximum of two restarts permitted (90 total iterations), unless otherwise specified. A flat-flux initial guess is used such that ‖φ‖₂ = 1, and the initial eigenvalue guess is unity.

The first set of numerical results will examine the sensitivity of the Newton methods to various algorithmic parameters, such as the JFNK perturbation ε, the inexact Newton forcing factor η, or the size of the GMRES subspace used. We will also examine the effect

of the various preconditioners developed on the overall cost of Newton's method. Finally, we will compare the Newton methods developed to the traditional power iteration (with and without Chebyshev acceleration).

IV.A. Algorithmic Parameters

The first parameter studied is the JFNK finite-difference perturbation parameter ε, seen in Eq. (16). We consider five choices for ε:

    ε₁ = √ε_mach ,

    ε₂ = √ε_mach max(‖u‖, 1) ,

    ε₃ = √((1 + ‖u‖) ε_mach) ,

and

    ε₄ = [(1/N) Σ_i^N |u_i| + 1] 100√ε_mach ,   (25)

with ε₅ defined by the algorithm of Xu and Downar 25 and ε_mach representing machine precision. Using each of these choices of ε to solve the benchmark suite and keeping all other parameters constant between runs produces the results shown in Fig. 1. The meshes for the IAEA, BWR, CANDU, and Biblis problems were 170 × 170, 165 × 165, 195 × 195, and 187 × 187, respectively. When comparing runs done using the same Newton method, the total number of GMRES iterations is used to measure the computational cost since each GMRES iteration requires the same amount of work for a given approach. In general, only a small sensitivity to the choice of ε is observed; the exception is the CANDU problem, which we will see is often an outlying case. If the same set of experiments is run using JFNK-GEP, we do see an increase in the dependence on ε, meaning that the effect of ε is itself dependent on the conditioning of the Jacobian. Still, for a well-conditioned system, any choice of perturbation similar to those shown will perform similarly.
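The first four perturbation choices of Eq. (25) can be written out directly (ε₅, the Xu-Downar algorithm, is omitted here); `u` below is a hypothetical Newton iterate, not data from the benchmarks:

```python
import numpy as np

def eps_choices(u):
    """Perturbation candidates eps1..eps4 of Eq. (25); eps5 omitted."""
    e_mach = np.finfo(float).eps
    N = u.size
    norm_u = np.linalg.norm(u)
    eps1 = np.sqrt(e_mach)
    eps2 = np.sqrt(e_mach) * max(norm_u, 1.0)
    eps3 = np.sqrt((1.0 + norm_u) * e_mach)
    eps4 = (np.sum(np.abs(u)) / N + 1.0) * 100.0 * np.sqrt(e_mach)
    return eps1, eps2, eps3, eps4

u = np.linspace(0.0, 1.0, 101)   # hypothetical Newton iterate
e1, e2, e3, e4 = eps_choices(u)
assert e1 <= e2 and all(e > 0 for e in (e1, e2, e3, e4))
```

All four scale with √ε_mach; they differ only in how they account for the magnitude of the current iterate.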

Fig. 1. Impact of ε on number of GMRES iterations using JFNK-IC.

The forcing factor η, shown in Eq. (13), is a parameter that determines how tightly each Newton step is converged and also the convergence rate of the inexact Newton method.16 A simple choice of η is a value that is constant throughout all Newton iterations. However, more complex algorithms have been developed to reduce the cost of inexact Newton calculations. The algorithms considered are those by Dembo and Steihaug,26 Eisenstat and Walker,27 and An, Mo, and Liu.28 The specifics of these approaches can be found in the referenced documents. Each approach, however, seeks to loosely converge each Newton iteration early on and to reduce η_m as m increases. In these numerical experiments constant values of 10⁻¹, 10⁻², and 10⁻³ are used along with the two methods proposed by Eisenstat and Walker (termed Eis-A and Eis-B), the method proposed by Dembo and Steihaug (termed Dembo), the method proposed by An, Mo, and Liu (termed An), and a method that sets η_m = ε_G^(m−1). The latter choice uses a global measure of the relative fission source error from the previous iteration to select η_m, so that as the fission source converges, each Newton step is more tightly converged.

To explore the impact of the forcing factor on the computational cost, we again use the benchmark suite with the same meshes mentioned previously, though now with JFNK-PI. Results are shown in Fig. 2. A few things are very clear: There is no consistently best choice of η_m, and the CANDU problem once again shows behavior that does not agree with the other problems. Ignoring the CANDU results, we see that as the constant value used decreases, the number of GMRES iterations generally increases, and that Eis-A and Eis-B always perform well. The An algorithm also performs quite well, while Dembo and the fission source error are both inefficient in certain cases. When comparing the results for many methods,9 not just JFNK-PI, it was seen that constant values of 10⁻¹ and 10⁻², along with Eis-A and Eis-B, are the best choices. Choosing an η_m is important for this type of calculation, as Fig. 2 shows how a poor choice can more than double the cost of the computation.
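For reference, one commonly quoted Eisenstat-Walker forcing factor ("Choice 2" in the literature) takes roughly the following form; the constants are the usual textbook defaults and are a hedged stand-in, not necessarily the exact Eis-A/Eis-B settings of Ref. 27:

```python
def eta_ew_choice2(Fnorm, Fnorm_prev, eta_prev,
                   gamma=0.9, alpha=2.0, eta_max=0.9):
    """Eisenstat-Walker 'Choice 2' (common form): eta_m is driven by the
    observed residual reduction, with the usual safeguard that keeps eta
    from dropping too abruptly after a large previous value."""
    eta = gamma * (Fnorm / Fnorm_prev) ** alpha
    safeguard = gamma * eta_prev ** alpha
    if safeguard > 0.1:              # only active while previous eta was large
        eta = max(eta, safeguard)
    return min(eta, eta_max)

# Example: if the Newton residual halves each step, eta tightens gradually
etas, eta, Fprev = [], 0.9, 1.0
for m in range(5):
    Fnow = Fprev / 2.0
    eta = eta_ew_choice2(Fnow, Fprev, eta)
    etas.append(eta)
    Fprev = Fnow
assert all(0.0 < e <= 0.9 for e in etas)
assert etas[-1] <= etas[0]
```

The qualitative behavior matches the description above: the Krylov solve is converged loosely at first and more tightly as the Newton iteration proceeds.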

Fig. 2. Impact of η on number of GMRES iterations using JFNK-PI.

Fig. 3. Effect of number of initial power iterations to initialize NK-IC.

Determining a good number of power iterations to initialize the Newton method was also considered. While it is possible to initialize Newton's method directly with a flat flux and k = 1, it has been observed that it is more efficient to first perform some number of power iterations. The sequence of initial power iterations considered was {0, 1, ..., 10, 15, 20, 25}. The suite of benchmark problems was tested with this sequence of initial iterations and the JFNK-PI, JFNK-IC, and NK-IC methods. It is clear that performing a few initial power iterations is better than performing none in most cases. The results for NK-IC are shown in Fig. 3. Generally, it was observed that somewhere in the five-to-ten range was ideal, though this number is only true for our specific implementation and within-group convergence settings. It is important to note that although there is a relationship between the computational cost and the initial guess, there were no circumstances where the initial guess chosen caused Newton's method to diverge or converge to a higher-harmonic eigenmode. Only by choosing an initial guess that was artificially close to a higher-harmonic eigenmode did the method fail to converge to the fundamental mode, indicating that, at least for this diffusion discretization and these problems, the Newton approach is robust.

The within-group convergence tolerance is another parameter that was chosen experimentally, although it is not specific to the Newton approach. The traditional power, the Chebyshev-accelerated power, and the JFNK-PI approaches were considered, and a fixed number of CG iterations was permitted per group for all of the problems in the benchmark suite with the same meshes previously described. The sequence of maximum iterations permitted was {1, 2, ..., 10, 15, 20, 25}. These experiments show that the maximum numbers of iterations resulting in the fewest total CG iterations necessary to solve the eigenvalue problem were in the range of four to ten. This trend is true for all three methods tested, although the cost of power iteration is much more sensitive to the maximum number of CG iterations per group than the Chebyshev-accelerated power method or JFNK-PI. Using a specified tolerance rather than an iteration maximum was also considered. Tolerances of {10⁻¹, 10⁻², 10⁻³, 10⁻⁴} were considered. Again it was seen that choosing a smaller tolerance causes a large increase in the total number of CG iterations for traditional power iteration, while using Chebyshev acceleration or JFNK-PI results in an approach rather insensitive to this tolerance. Ultimately, for the majority of runs considered, the CG iterations were terminated after five iterations unless the residual decreased below 10⁻² first. Detailed results for the two parameters discussed in this paragraph are elaborated in Ref. 9.

Another important parameter that must be chosen for the Newton methods is how often to restart GMRES, in other words, the subspace size. Using too large a subspace results in increasingly expensive iterations because of the orthogonalization step and, more importantly, consumes a prohibitive amount of memory. The effect of the subspace size can be clearly seen in Fig. 4, where we consider the IAEA problem on the 170 × 170 mesh for the JFNK-PI, JFNK-GEP, and NK-GEP methods. When there is no limit on the subspace size and the number of iterations, all three methods converge within six Newton iterations. We then consider three different scenarios where 100 total iterations are permitted: a subspace size of 100 with 0 restarts, a subspace size of 50 with 1 restart, or a subspace size of 10 with 9 restarts. We can then see that although the total iteration count is fixed, as the subspace size decreases, the Newton iteration counts of JFNK-GEP and NK-GEP increase dramatically while that of JFNK-PI is constant. Thus, the preconditioned method JFNK-PI is insensitive to the subspace size in this case, while the nonpreconditioned methods are very sensitive to it.
For problems with finer meshes, a subspace size of 10 is too small even for the preconditioned methods, and in general GMRES is restarted every 30 iterations with at most 2 restarts. As with many of the other parameters discussed, these observations hold for the specific diffusion implementation we used to solve these problems. It is possible that a different diffusion discretization utilizing different iterative solvers will need a different-sized subspace to achieve similar results. The intention is to highlight the importance of these parameters when using Newton-Krylov methods to solve the k-eigenvalue problem.

IV.B. Preconditioners

We now turn our attention from user-chosen parameters to the choice of a preconditioner for the problem, as it is apparent from Fig. 4 that a preconditioner is necessary. We again consider the IAEA problem on a 170 × 170 mesh, using several of the forcing factors previously tested. Figure 5 plots the Newton residual of all of the JFNK methods against execution time using the Eis-A forcing factor. Execution time is the fairest way to compare the Newton methods for this type of calculation because the cost of a GMRES iteration depends on the type of preconditioning used. The IC preconditioner, for instance, is computationally cheaper than the Ag preconditioner, since the latter requires another level of iteration. The results shown in Fig. 5 agree well with our expectations. JFNK-GEP converges extremely slowly; in fact, its Newton residual hardly decreases over time. The next worst is the JFNK-Ag method, followed by the JFNK-AB approach. Since both require nested iterations and are based on simple approximations to the Jacobian, their performance is not surprising. The JFNK-PI, JFNK-rPI, and JFNK-IC methods clearly distinguish themselves as the best preconditioning options, with JFNK-PI leading the pack. This shows that power iteration, applied as a left or right preconditioner, is an extremely effective preconditioner for the generalized eigenvalue problem solved using a Newton-Krylov method. The JFNK-IC preconditioner performs well mainly because it is very inexpensive to apply and it sufficiently decreases the condition number of the Jacobian.

If we consider the same problem and forcing factor for the NK family of methods, in Fig. 6 we see the same trends, albeit with more distinction between the three groups. Again, the nonpreconditioned option, NK-GEP, performs very poorly, although the exact Jacobian treatment yields improved convergence compared to JFNK-GEP in Fig. 5. NK-Ag and NK-AB are again in the middle, between the extremes of no preconditioning and very effective preconditioning. Comparing to Fig. 5, the convergence is much smoother for the NK methods; thus, the JFNK approximation can be blamed for at least some of the spikes seen in Fig. 5. In terms of execution time, NK-AB and JFNK-AB are quite similar, although NK-Ag is much better than JFNK-Ag; its execution time is nearly identical to that of NK-AB. The IC and rPI preconditioners are again the best and for all practical purposes are indistinguishable from one another.

The success of power iteration as a preconditioner is fortunate, as leveraging existing code is very straightforward with this approach. In fact, using JFNK-PI the nonlinear function Facc can easily be evaluated using an existing outer iteration. Thus, the JFNK wrapper can be added almost entirely outside the code, making JFNK-PI an acceleration technique for an existing implementation. The presence of an iteration inside the preconditioner adds a layer of complication because some optimal number of iterations or tolerance must be sought. If one does too few iterations, the preconditioner is not very effective. If too many iterations are done, the preconditioner is very effective but very


Fig. 4. Effect of GMRES restarts on IAEA benchmark, 100 iterations.

Fig. 5. JFNK preconditioners for IAEA benchmark.

expensive computationally. This can be seen in Fig. 7, where the number of CG iterations per group is varied between 1 and 15 for the NK-rPI method on the same IAEA problem and mesh. With only one CG iteration per group in the power iteration, the method performs as poorly as with no preconditioning at all. The number of Newton iterations decreases as the number of CG iterations per group increases. However, because of the tradeoff


Fig. 6. NK preconditioners for IAEA benchmark.

between expense and effectiveness, five to ten CG iterations per group are used in nearly all runs. In some instances, however, this is too few, and we note a failure of the preconditioners.

We now show results for a set of experiments in which Newton's method did not converge as nicely as in Figs. 5 and 6. The set of experiments is identical to that shown in Fig. 5, but the Eis-B forcing factor algorithm is used

in place of the Eis-A algorithm. These results are shown in Fig. 8, where we see that all of the methods except JFNK-PI fail to converge to the desired precision. When the residual norm is in the neighborhood of 10⁻⁸, severe fluctuations appear and full convergence is never reached. These numerical instabilities are not present with the NK approach and can be at least partially attributed to the JFNK approximation. In this situation a technique termed globalization can be used to amend the Newton directions so that "bad" steps are avoided. The specific type of globalization implemented here is backtracking, using the formula proposed by Eisenstat and Walker (Ref. 27). If we repeat the experiments that led to Fig. 8 but turn on the backtracking algorithm when the L2 norm of the Newton residual falls below 10⁻⁴, we obtain Fig. 9. In most cases the fluctuations now disappear and full convergence is achieved. Ironically, the only method that does not converge using backtracking is JFNK-PI, which actually converged without it. It was observed in many cases that backtracking hindered computations more than it helped; in this specific case, however, we see a great benefit from its use. Again, these results are very specific to the problems and implementation used, but globalization is certainly something that must be considered when using Newton's method.

Fig. 7. Effectiveness of PI preconditioner with NK-rPI (IAEA benchmark).

Fig. 8. IAEA benchmark with Eis-B and no backtracking.

Fig. 9. IAEA benchmark with Eis-B and backtracking.

IV.C. Performance Versus Increasing Dominance Ratio

The impact of the dominance ratio on the performance of the JFNK-PI method was compared to that of the power method, which is well known to suffer degraded performance as the dominance ratio approaches unity. The numerical experiment used for this purpose is a one-dimensional (1-D) homogeneous slab of increasing width, with two-group cross sections characteristic of Fuel 2 from the 2-D IAEA benchmark (Ref. 20). The problem was solved in the Matlab clone Octave by numerically inverting the multigroup diffusion matrix, so that the only iterations in the process are outer iterations. This allows us to compare the computational expense of the power method and JFNK-PI directly, using the common basis of a power iteration. The length of the slab was varied from 5 to 140 cm, resulting in dominance ratios d between 0.0183 and 0.9996. In Fig. 10 we plot the number of outer iterations versus the measure −log10(1 − d); this measure is used rather than d itself because it avoids the data points clustering tightly near unity. Figure 10 shows the number of iterations required to converge to the tolerances specified in Sec. IV. As the dominance ratio increases, the number of iterations required by power iteration increases dramatically, as expected, while JFNK-PI is much less sensitive to the dominance ratio. This indicates that even though we are using power iteration as a preconditioner, the convergence rate of Newton's method does


Fig. 10. Impact of dominance ratio on performance.


Fig. 11. Comparing Newton and power methods, base convergence.

not heavily depend on the dominance ratio. It should be noted, though, that to ensure convergence to the fundamental mode, the number of initial power iterations used to generate the initial guess for Newton's method had to be increased as the dominance ratio increased; these additional outer iterations are included in the counts in Fig. 10.

IV.D. Comparison to Power Iteration

The effectiveness of the best-performing preconditioners will now be compared with traditional power iteration and Chebyshev-accelerated power iteration for the 2-D benchmark problems in terms of total execution time. The Newton methods used are JFNK-IC, NK-IC, JFNK-rPI, NK-rPI, and JFNK-PI. The meshes for the benchmark suite have been refined to increase execution time; the meshes for the Biblis, BWR, CANDU, and IAEA problems are now 374 × 374, 330 × 330, 390 × 390, and 340 × 340, respectively. Two sets of convergence criteria are defined, a base set and a tight set, given by τ_b = {5 × 10⁻⁶, 5 × 10⁻⁵, 5 × 10⁻⁴} and τ_t = {5 × 10⁻¹⁵, 5 × 10⁻¹⁴, 5 × 10⁻¹³}. Although in a realistic calculation one would not want a solution converged to τ_t, it is useful for differentiating the asymptotic convergence behavior of Newton's method from that of the power method. The iterative tolerances for nested iterations have been adjusted for the new meshes: for power iteration, six CG iterations are permitted per group with a stopping tolerance of 5 × 10⁻²; for the rPI- and PI-preconditioned methods, ten CG iterations are permitted with a stopping tolerance of 10⁻³ to increase the stability of the solution. A constant forcing factor of 5 × 10⁻² is used, again to improve stability. All of the Newton methods are initialized with five traditional power iterations, themselves initialized with a flat-flux initial guess and an eigenvalue of 1.
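The dominance-ratio trend of Sec. IV.C can be reproduced in miniature with a one-group 1-D sketch in Python. The cross sections below are illustrative placeholders, not the two-group IAEA Fuel 2 data used above, and the routine is a generic unaccelerated power iteration, not the paper's implementation.

```python
import numpy as np

def power_iteration_slab(width_cm, n=200, tol=1e-8, max_it=50000):
    """Unaccelerated power iteration for a one-group, 1-D slab diffusion
    k-eigenvalue problem with zero-flux boundaries.

    Illustrative cross sections (NOT the IAEA benchmark data):
    D = 1.0 cm, Sigma_a = 0.10 /cm, nu*Sigma_f = 0.12 /cm.
    """
    D, sig_a, nu_sig_f = 1.0, 0.10, 0.12
    h = width_cm / (n + 1)
    # Tridiagonal finite-difference loss operator A = -D d2/dx2 + Sigma_a.
    A = np.diag(np.full(n, 2.0 * D / h**2 + sig_a))
    A += np.diag(np.full(n - 1, -D / h**2), 1)
    A += np.diag(np.full(n - 1, -D / h**2), -1)
    phi = np.ones(n) / np.sqrt(n)
    k = 1.0
    for it in range(1, max_it + 1):
        y = np.linalg.solve(A, nu_sig_f * phi)   # one "outer" iteration
        k_new = np.linalg.norm(y)                # eigenvalue estimate (phi is unit norm)
        phi_new = y / k_new
        if abs(k_new - k) < tol and np.linalg.norm(phi_new - phi) < tol:
            return k_new, it
        k, phi = k_new, phi_new
    return k, max_it
```

Widening the slab drives the dominance ratio toward unity, and the outer-iteration count grows accordingly, mirroring the power iteration curve in Fig. 10.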
Figure 11 shows the results of this set of experiments for the base set of convergence criteria, τ_b, while


Fig. 12. Comparing Newton and power methods, tight convergence.

Fig. 12 shows the results for the tight convergence set. For the base convergence tolerances, all of the approaches are generally comparable, although JFNK-IC and NK-IC are the worst for every problem, even compared to unaccelerated power iteration. The JFNK-rPI and NK-rPI execution times are nearly identical and compare well with unaccelerated power iteration; in some cases they beat Chebyshev acceleration, in others they do not. In all cases except the CANDU problem, the JFNK-PI approach yields the shortest execution time.

In Fig. 12 there is more separation between the traditional and Newton approaches, as expected from the improved convergence rate of Newton's method. In all cases, with the exception of JFNK-PI for the CANDU problem, the Newton methods now outperform Chebyshev acceleration and the traditional power method. Of the Newton methods, NK-IC is usually the slowest; JFNK-IC was not included because of instabilities. The JFNK-PI, JFNK-rPI, and NK-rPI approaches are all very comparable for the Biblis, BWR, and IAEA problems and exhibit a significant decrease in computational time compared to Chebyshev acceleration.

The instabilities witnessed in many of the experiments discussed are certainly due to numerical shortcomings and not to the method itself. For instance, all of the preconditioning options converge smoothly in a small number of Newton iterations if the GMRES iterations are fully converged at each Newton step and any nested iterations are fully converged as well. The interplay between the various levels of iteration and the potentially poor conditioning of the Jacobian makes sorting out the source of these instabilities difficult. However, if the number of restarts is increased sufficiently, the size of the subspace is increased sufficiently, or the condition number of the system is decreased sufficiently, then the instabilities are not observed. If the CG method were replaced by a multigrid approach to the within-group problem, the behavior witnessed might be totally different; the conclusions regarding the behavior of the methods and the specific choices of parameters are specific to this implementation. The methods themselves, however, apply to a broad range of solution techniques for the multigroup k-eigenvalue problem in diffusion theory. The primary motivation behind JFNK-PI was to find a Newton formulation that could be used to accelerate any existing power iteration, regardless of implementation specifics.
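To make the wrapping idea concrete, here is a plausible miniature of it in Python: the nonlinear residual is one power iterate minus the current iterate, so a root is a fixed point of power iteration. The function names and the explicitly assembled finite-difference Jacobian are illustrative assumptions; the paper's actual Facc and its GMRES-based matrix-free solve are not reproduced here.

```python
import numpy as np

def pi_residual(phi, A, F):
    """Residual in the spirit of Facc: one power iteration applied to phi,
    minus phi. A root is a fixed point of power iteration, i.e. an
    eigenvector of A^{-1} F; the eigenvalue is recovered as a norm ratio.
    """
    y = np.linalg.solve(A, F @ phi)   # one "outer" (power) iteration
    return y / np.linalg.norm(y) - phi

def jfnk(residual, phi0, tol=1e-12, max_newton=50, eps=1e-7):
    """Newton's method with Jacobian actions J v approximated by the finite
    difference (residual(phi + eps*v) - residual(phi)) / eps."""
    phi = phi0 / np.linalg.norm(phi0)
    n = phi.size
    for _ in range(max_newton):
        r = residual(phi)
        if np.linalg.norm(r) < tol:
            break
        # For this tiny illustration we assemble J column by column from the
        # matrix-free product; a real implementation hands Jv to GMRES instead.
        J = np.column_stack([(residual(phi + eps * e) - r) / eps
                             for e in np.eye(n)])
        dphi = np.linalg.lstsq(J, -r, rcond=None)[0]
        phi = phi + dphi
        phi /= np.linalg.norm(phi)
    return phi
```

In a production setting only the matrix-free product would be handed to GMRES, and the existing outer-iteration routine supplies the residual evaluation, which is all the wrapper needs.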

V. CONCLUSIONS

The k-eigenvalue problem in multigroup diffusion theory was solved using a Newton-Krylov method (and the JFNK approximation) by writing a nonlinear function whose root is a solution to the generalized form of the k-eigenvalue problem. Preconditioning is essential to an efficient implementation of the Newton-Krylov method, as the Jacobian is nearly always poorly conditioned. A few simple approximations to the Jacobian were successfully used as preconditioners, though the most effective preconditioner proved to be traditional power iteration. This choice of preconditioner also allows the Newton-Krylov formulation of the generalized eigenvalue problem to leverage existing implementations of power iteration. If power iteration is used as a left preconditioner, the problem can be written such that only the ability to compute the difference between two consecutive power iterates is necessary to evaluate the nonlinear function. In conjunction with the JFNK approximation, this creates a formulation in which the Newton solution can be wrapped around any existing power iteration.

Numerical instabilities were witnessed on several occasions, but these appear to be due to improperly converging iterative quantities (within-group iterations or Newton steps) and not because the Newton methods themselves are unstable. Backtracking can be used to enhance iterative stability, and it is not certain that different diffusion discretizations and numerical implementations will exhibit the same behavior. Comparisons with standard power iteration and Chebyshev-accelerated power iteration show that the power iteration preconditioned Newton approaches, JFNK-PI, JFNK-rPI, and NK-rPI, are all competitive in terms of execution time; for tightly converged problems there is a clear benefit to using Newton's method. The 1-D results showed the JFNK-PI method to be quite insensitive to the magnitude of the dominance ratio. In those numerical experiments the multigroup diffusion matrix was inverted directly, eliminating the need to iteratively solve the within-group problem, and the JFNK-PI approach required far fewer power iterations than the traditional power method. This is additional evidence that the JFNK methods developed have the potential to offer improved performance. Clearly, the specific implementation and iterative strategies employed in the power method greatly impact the overall cost of the JFNK methods; thus, to fully appreciate the potential of the Newton-Krylov problem formulation, it would be best to implement the method in an existing production-level or mature academic code and measure the speedup achieved.

ACKNOWLEDGMENTS

This research was performed under appointment of D. F. Gill to the Rickover Graduate Fellowship Program sponsored by the Naval Reactors Division of the U.S. Department of Energy.

REFERENCES

1. E. L. WACHSPRESS, Iterative Solution of Elliptic Systems, Prentice-Hall, Englewood Cliffs, New Jersey (1966).

2. L. A. HAGEMAN and D. M. YOUNG, Applied Iterative Methods, Dover Publications, Mineola, New York (2004).

3. T. DOWNAR, D. LEE, Y. XU, and T. KOZLOWSKI, PARCS v2.6 U.S. NRC Core Neutronics Simulator, Theory Manual, Purdue University, West Lafayette, Indiana (2004).

4. V. VIDAL, G. VERDÚ, D. GINESTAR, and J. MUÑOZ-COBO, "Variational Acceleration for Subspace Iteration Method: Application to Nuclear Power Reactors," Int. J. Numer. Meth. Eng., 41, 391 (1998).

5. J. S. WARSA, T. A. WAREING, J. E. MOREL, and J. M. McGHEE, "Krylov Subspace Iterations for Deterministic k-Eigenvalue Calculations," Nucl. Sci. Eng., 147, 26 (2004).

6. G. VERDÚ, R. MIRO, D. GINESTAR, and V. VIDAL, "The Implicit Restarted Arnoldi Method, An Efficient Alternative to Solve the Neutron Diffusion Equation," Ann. Nucl. Energy, 26, 579 (1999).

7. V. MAHADEVAN and J. RAGUSA, "Novel Hybrid Scheme to Compute Several Dominant Eigenmodes for Reactor Analysis Problems," Proc. Int. Conf. Physics of Reactors (PHYSOR), Interlaken, Switzerland, September 14–19, 2008 (2008).

8. D. F. GILL and Y. Y. AZMY, "A Jacobian-Free Newton-Krylov Iterative Scheme for Criticality Calculations Based on the Neutron Diffusion Equation," Proc. Int. Conf. Mathematics, Computational Methods and Reactor Physics, Saratoga Springs, New York, May 3–7, 2009, American Nuclear Society (2009) (CD-ROM).

9. D. F. GILL, "Newton-Krylov Methods for the Solution of the k-Eigenvalue Problem in Multigroup Neutronics Calculations," PhD Thesis, The Pennsylvania State University (2009).

10. D. A. KNOLL, H. PARK, and K. SMITH, "Application of the Jacobian-Free Newton-Krylov Method in Computational Reactor Physics," Proc. Int. Conf. Mathematics, Computational Methods and Reactor Physics, Saratoga Springs, New York, May 3–7, 2009, American Nuclear Society (2009) (CD-ROM).

11. D. F. GILL and Y. Y. AZMY, "Jacobian-Free Newton-Krylov as an Alternative to Power Iterations for the k-Eigenvalue Transport Problem," Trans. Am. Nucl. Soc., 100, 291 (2009).

12. D. F. GILL, Y. Y. AZMY, J. D. DENSMORE, and J. S. WARSA, "Newton's Method for the Computation of k-Eigenvalues in SN Transport Applications," Nucl. Sci. Eng. (to be published).

13. P. M. ANSELONE and L. B. RALL, "The Solution of Characteristic Value-Vector Problems by Newton's Method," Numerische Mathematik, 11, 38 (1967).

14. G. PETERS and J. WILKINSON, "Inverse Iteration, Ill-Conditioned Equations and Newton's Method," SIAM Rev., 21, 3, 339 (1979).

15. J. J. DUDERSTADT and L. J. HAMILTON, Nuclear Reactor Analysis, John Wiley and Sons (1976).

16. R. S. DEMBO, S. C. EISENSTAT, and T. STEIHAUG, "Inexact Newton Methods," SIAM J. Numer. Anal., 19, 400 (1982).

17. D. KNOLL and D. KEYES, "Jacobian-Free Newton-Krylov Methods: A Survey of Approaches and Applications," J. Comput. Phys., 193, 2, 357 (2004).

18. G. H. GOLUB and C. F. VAN LOAN, Matrix Computations, 3rd ed., Johns Hopkins University Press, Baltimore (1996).

19. Y. SAAD, Iterative Methods for Sparse Linear Systems, 1st ed., PWS Pub. Co., Boston (1996).

20. D. R. FERGUSON and K. L. DERSTINE, "Optimized Iteration Strategies and Data Management Considerations for Fast Reactor Finite Difference Diffusion Theory Codes," Nucl. Sci. Eng., 64, 2, 593 (1977).

21. M. K. SEAGER, "A SLAP for the Masses," UCRL-100267, Lawrence Livermore National Laboratory (Dec. 1998).

22. Y. SAAD, "Sparskit: A Basic Tool Kit for Sparse Matrix Computations," NASA-CR-185876, National Aeronautics and Space Administration (May 1990).

23. "Argonne Code Center: Benchmark Problem Book," ANL-7416, Supp. 2, ID11-A2, Argonne National Laboratory (1977).

24. A. HEBERT, "Application of the Hermite Method for Finite Element Reactor Calculations," Nucl. Sci. Eng., 91, 34 (1985).

25. Y. XU and T. DOWNAR, "Optimal Perturbation Size for Matrix-Free Newton/Krylov Methods," Proc. Joint Int. Topl. Mtg. Mathematics and Computation and Supercomputing in Nuclear Applications, Monterey, California, April 15–19, 2007, American Nuclear Society (2007).

26. R. S. DEMBO and T. STEIHAUG, "Truncated-Newton Algorithms for Large-Scale Unconstrained Optimization," Math. Program., 26, 190 (1983).

27. S. C. EISENSTAT and H. F. WALKER, "Choosing the Forcing Terms in an Inexact Newton Method," SIAM J. Sci. Comput., 17, 16 (Jan. 1996).

28. H.-B. AN, Z.-Y. MO, and X.-P. LIU, "A Choice of Forcing Terms in Inexact Newton Method," J. Comput. Appl. Math., 200, 47 (2007).