Application of Non-stationary Iterative Methods to an Exact Newton-Raphson Solution Process for Power Flow Equations

Rainer Bacher, Eric Bullinger
Swiss Federal Institute of Technology (ETH), CH-8092 Zürich, Switzerland

E-mail: [email protected]

ABSTRACT

The AC power flow is usually solved by the Newton-Raphson solution method. The main step is the linearization of the non-linear power flow equations and the subsequent solution of this linear system. The characteristics of this linear system of equations vary for different power flow implementations: symmetric or unsymmetric, positive definite or indefinite system matrices can result. Based on these characteristics, different direct and iterative linear system solvers can be used to maximize performance and solution robustness. In this paper, results are given for non-stationary iterative methods applied to the unsymmetric, indefinite linear systems derived from the power flow equations.

Keywords: Load flow analysis, Modelling and simulation, Non-stationary iterative methods

1 INTRODUCTION

The power flow is a very well known algorithmic problem which is usually solved by the Newton-Raphson solution method. The linearization of the non-linear power flow equations yields a linear system which must be solved by an appropriate linear system solver. All Newton-Raphson (NR) based power flow algorithms have in common that one large or two smaller linear systems of equations must be solved during each Newton-Raphson iterative step.

Besides the well known direct solution of a linear system of equations [1], solutions based on so-called non-stationary iterative methods have recently appeared in power applications. [2] describes the first application of the Conjugate Gradients (CG) method to the decoupled power flow. In [3] another application of CG methods to a static security power flow problem is described. For the fast decoupled power flow, both papers report a significant performance improvement of CG based methods compared to a direct solution.

Of all power flow approaches known today, only the decoupled power flow satisfies the CG conditions of positive definite and symmetric linear system matrices without applying equation transformations. Practical usage of the conjugate gradient methods is only reached when preconditioning is applied to the linear system of equations. Good preconditioning groups the eigenvalues of the transformed linear system matrix together and thus results in faster convergence. Again, there is a natural fit between the preconditioned CG method and the decoupled power flow, since the decoupled power flow has constant linear system matrices. Thus, for every power flow this preconditioning matrix must be computed only once and remains constant for all Newton-Raphson steps. [4] and [5] emphasize the fact that only good preconditioners allow an efficient implementation of CG methods for power flow equations. Several preconditioners such as ILU(0) (factorization without fill-in) or ILU(1) (approximate factorization with one neighbor fill-in term), together with a block-bordered matrix permutation, have been applied successfully to fast decoupled power flows with large networks.

Recently, in [6], the first application of non-stationary iterative methods to the non-linear power flow problem has been described. A derivation of the "Krylov subspace power flow methodology" applied to the power flow problem is given to introduce power system application developers to the mathematical problem. The main distinction to the CG methods lies in the fact that the "Krylov subspace power flow methodology" is also applicable to unsymmetric, indefinite linear system matrices. The so-called KSPF (Krylov Subspace Power Flow) derived in [6] does not need any explicit computation of the Jacobian terms during the iterations and power flow steps. Good convergence is shown for networks of up to 57 busses.

In this paper, implementation aspects of non-stationary methods for larger networks are discussed; these methods are in principle applicable to all power flow formulations with unsymmetric and indefinite linear system matrices. Mathematicians have derived several methods to solve this type of linear system of equations: QMR (Quasi Minimal Residual), GMRES (Generalized Minimal Residual), BICG (Bi-Conjugate Gradients), CGS (Conjugate Gradients Squared) and BICGSTAB (Bi-Conjugate Gradients Stabilized) are distinctly different methods for the solution of this class of problems. A good summary of these methods can be found in [7]. The GMRES algorithm is the generalization of the CG algorithm to unsymmetric and indefinite linear system matrices. Both algorithms have in common that the solution error (residual, see section 3) decreases from one iteration to the next and that the exact solution is obtained within a given maximum number of iterations, assuming exact numeric precision. All other non-stationary iterative methods use combinations of CG-like concepts and heuristics to obtain a solution for the linear system. As a consequence, the residual of these methods is not guaranteed to decrease during the iterations.

This paper gives details of a successful computer implementation of non-stationary iterative methods applied to the non-linear power flow equations (1), together with simulation results which give insight into the strengths and weaknesses of non-stationary iterative methods. After a problem definition in section 2, a special preconditioning of the linear system related to the Newton-Raphson solution process is given in section 3. In section 4, simulation results are discussed for various non-stationary methods applied to networks with 233 and 685 busses. The paper ends with conclusions in section 5 and the references. The direct solution based on factorization and a subsequent forward/backward substitution is not discussed in this paper, see [1]. Also, the mathematical theory of the non-stationary iterative methods is not given here for space reasons, see [7]. An excellent reference to CG is [8]. The original description of CG is [9].

2 PROBLEM STATEMENT

This paper deals with the iterative behavior of the solution of the linear system within the Newton-Raphson solution process for the non-linear power flow equations. The power flow equations for each node i are as follows:

g_{i1} = \frac{e_i P_i + f_i Q_i}{e_i^2 + f_i^2} - \sum_{j=1}^{N} \left( g_{ij} e_j - b_{ij} f_j \right) = 0
g_{i2} = \frac{-e_i Q_i + f_i P_i}{e_i^2 + f_i^2} - \sum_{j=1}^{N} \left( g_{ij} f_j + b_{ij} e_j \right) = 0        (1)
g_{i3} = -e_i^2 - f_i^2 + V_i^2 = 0

e_i, f_i represent the real and imaginary part of the nodal voltages, P_i, Q_i the active and reactive nodal power, g_{ij}, b_{ij} the real and imaginary part of the nodal admittance matrix entry Y_{ij}, and N is the total number of nodes. For PQ nodes the third equation g_{i3} is not needed. This set of power flow equations (1) is called the "Current mismatch power flow" [10] since mainly nodal current equations are formulated. Note that the reactive powers Q_i are unknown variables at PV generator nodes. These variables must be updated during all Newton-Raphson steps and linear system iterations. (In order not to create confusion, a distinction is made throughout this paper between Newton-Raphson steps, each of which solves one approximation to the non-linear equations, and iterations, which produce the solution of one linear system with an iterative linear system solver.)

The Jacobian matrix terms are as follows. Jacobian diagonal block A_{ii}, with rows \partial g_{i1}/\partial x, \partial g_{i2}/\partial x, \partial g_{i3}/\partial x and columns x = e_i, x = f_i and x = Q_i:

\frac{\partial g_{i1}}{\partial e_i} = -\frac{P_i(e_i^2 - f_i^2) + 2 e_i f_i Q_i}{(e_i^2 + f_i^2)^2} - g_{ii}, \quad
\frac{\partial g_{i2}}{\partial e_i} = -\frac{Q_i(f_i^2 - e_i^2) + 2 e_i f_i P_i}{(e_i^2 + f_i^2)^2} - b_{ii}, \quad
\frac{\partial g_{i3}}{\partial e_i} = -2 e_i

\frac{\partial g_{i1}}{\partial f_i} = -\frac{Q_i(f_i^2 - e_i^2) + 2 e_i f_i P_i}{(e_i^2 + f_i^2)^2} + b_{ii}, \quad
\frac{\partial g_{i2}}{\partial f_i} = \frac{P_i(e_i^2 - f_i^2) + 2 e_i f_i Q_i}{(e_i^2 + f_i^2)^2} - g_{ii}, \quad
\frac{\partial g_{i3}}{\partial f_i} = -2 f_i        (2)

\frac{\partial g_{i1}}{\partial Q_i} = \frac{f_i}{e_i^2 + f_i^2}, \quad
\frac{\partial g_{i2}}{\partial Q_i} = -\frac{e_i}{e_i^2 + f_i^2}, \quad
\frac{\partial g_{i3}}{\partial Q_i} = 0        (3)

Off-diagonal block A_{ij}, i \neq j, with columns x = e_j, x = f_j and x = Q_j:

A_{ij} = \begin{bmatrix} -g_{ij} & b_{ij} & 0 \\ -b_{ij} & -g_{ij} & 0 \\ 0 & 0 & 0 \end{bmatrix}        (4)

For PQ nodes those columns related to Q_i and those rows related to the voltage magnitude equation g_{i3} must be omitted. Blocking of the node oriented variables e_i, f_i, Q_i is important because it allows a fast and efficient block left or right preconditioning (see section 3).

The linear system of equations of each Newton-Raphson step can be described in general form as follows:

A x = b, \quad A = \begin{bmatrix} A_{11} & A_{12} & \dots & A_{1N} \\ A_{21} & A_{22} & \dots & A_{2N} \\ \vdots & & \ddots & \vdots \\ A_{N1} & A_{N2} & \dots & A_{NN} \end{bmatrix}        (5)

A is the above described Jacobian matrix, b is the mismatch vector, and the A_{ij} are the block submatrices of A determined with (2) and (4). The matrix A has the following properties: the block structure of A is identical with the structure of the complex nodal admittance matrix Y; it is very sparse, unsymmetric and indefinite. Thus the CG method cannot be applied directly.
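To make the node-wise structure of (1)-(3) concrete, the following MATLAB sketch evaluates the current mismatches and the 3x3 diagonal Jacobian block of a PV node. It is only an illustration of the formulas above, not the authors' implementation; the function name and all variable names (node_current_mismatch, G, B, etc.) are assumptions.

    function [g, Aii] = node_current_mismatch(i, e, f, P, Q, V, G, B)
    % Sketch (hypothetical names): evaluate the current mismatch equations (1)
    % and the 3x3 diagonal Jacobian block (2)-(3) for one PV node i.
    % e, f : column vectors of real/imaginary nodal voltages
    % P, Q : column vectors of active/reactive nodal powers
    % V    : column vector of voltage magnitude setpoints
    % G, B : real and imaginary parts of the nodal admittance matrix Y
      d   = e(i)^2 + f(i)^2;            % e_i^2 + f_i^2
      Ire = G(i,:)*e - B(i,:)*f;        % sum_j (g_ij e_j - b_ij f_j)
      Iim = G(i,:)*f + B(i,:)*e;        % sum_j (g_ij f_j + b_ij e_j)
      g   = [ ( e(i)*P(i) + f(i)*Q(i))/d - Ire;     % g_i1
              (-e(i)*Q(i) + f(i)*P(i))/d - Iim;     % g_i2
              -e(i)^2 - f(i)^2 + V(i)^2 ];          % g_i3
      % numerators shared by the derivative terms of (2)
      nP = P(i)*(e(i)^2 - f(i)^2) + 2*e(i)*f(i)*Q(i);
      nQ = Q(i)*(f(i)^2 - e(i)^2) + 2*e(i)*f(i)*P(i);
      % columns: d/de_i, d/df_i, d/dQ_i; rows: g_i1, g_i2, g_i3
      Aii = [ -nP/d^2 - G(i,i),  -nQ/d^2 + B(i,i),   f(i)/d;
              -nQ/d^2 - B(i,i),   nP/d^2 - G(i,i),  -e(i)/d;
              -2*e(i),           -2*f(i),            0      ];
    end

For a PQ node, the third row and third column of Aii and the third entry of g would simply be dropped, as stated above.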

3 BLOCK-DIAGONAL PRECONDITIONING OF LINEARIZED POWER FLOW EQUATIONS

Using the Jacobian matrices as given in (2) and (4) in the non-stationary methods without any preconditioning leads to non-converging iterative behavior. In general, preconditioning of a linear system of equations can be done as follows: (5) can be transformed with two matrices P_L and P_R:

P_L A P_R y = P_L b, \quad x = P_R y        (6)

P_L is called the left and P_R the right preconditioning matrix. Preconditioning has the goal of making the new conditioned matrix P_L A P_R \approx I, where I is the unity matrix. In general, the better the unity matrix is approximated, the faster the solution of the conditioned system. In the application discussed in this paper, preconditioning has to be very fast because it has to be applied to each new Jacobian matrix. Thus one obvious choice is the block-diagonal preconditioning matrix

P = \begin{bmatrix} A_{11}^{-1} & & & \\ & A_{22}^{-1} & & \\ & & \ddots & \\ & & & A_{NN}^{-1} \end{bmatrix}        (7)

whose inverted diagonal blocks are identical with those of the original matrix A. The effort to compute P is small compared to the computation of other, more sophisticated preconditioners such as ILU(0) (factorization without consideration of fill-in terms): only matrices of size 3 x 3 or 2 x 2 must be inverted. Also, the effort per iteration of the non-stationary methods is smaller than with other preconditioners, which usually do not allow an explicit inversion of a submatrix of the original matrix A. In the simulation runs this block-diagonal preconditioning has been applied explicitly before the actual iterations are started. P can be applied as a left or a right preconditioning matrix; both cases have been simulated, see section 4.2.2. From a theoretical point of view the following can be observed:

- Using P as left preconditioning matrix leads to the following preconditioned linear system of equations:

  P A x = P b        (8)

- Using P as right preconditioning matrix leads to the following preconditioned linear system of equations:

  A P y = b, \quad x = P y        (9)

(8) and (9) can be written in a more general form:

  A' x' = b'        (10)

where A' = P A, b' = P b, x' = x for left preconditioning and A' = A P, b' = b, x' = y for right preconditioning. In both (8) and (9), A' has unity block-diagonal matrices and numerically modified (as compared to A) off-diagonal blocks. This preconditioned matrix A' approximates the desired unity matrix, which is the goal of preconditioning.

For direct methods with exact numeric precision, both (8) and (9) yield the same result for x. Applying iterative methods, however, leads to very different iterative convergence processes for the two preconditioning variants. Today no practical theorem exists for large linear systems which predicts the best preconditioning method; only simulation can show which combination of preconditioning and iterative method is best. However, since a norm of the right hand side vector of the conditioned system, i.e. P b for left preconditioning (8) and b for right preconditioning (9), is used to stop the iterative convergence process, differently scaled mismatch vectors are used. This scaling affects the convergence criterion of the iterative method, which is the normalized residual norm:

  \|\tilde r\|_2 = \frac{\|b' - A' x'\|_2}{\|b'\|_2}        (11)

In this paper only block-diagonal preconditioning has been used.
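A minimal MATLAB sketch of how the block-diagonal preconditioner (7) could be assembled and applied is given below. It is not the paper's code; the block pointer array blkptr (marking where the 2x2 or 3x3 node blocks start in the variable ordering) and the function name are assumptions.

    function P = block_diag_preconditioner(A, blkptr)
    % Build the block-diagonal preconditioner P of (7): the diagonal node
    % blocks of A are inverted explicitly (they are only 2x2 or 3x3).
      n = size(A, 1);
      P = sparse(n, n);
      for k = 1:length(blkptr)-1
        idx = blkptr(k):blkptr(k+1)-1;        % rows/columns of block A_kk
        P(idx, idx) = inv(full(A(idx, idx))); % explicit inverse of a small block
      end
    end

    % Usage with right preconditioning (9); the iterative solver then works
    % on A' = A*P with b' = b, and the solution is recovered as x = P*y:
    %   P  = block_diag_preconditioner(A, blkptr);
    %   Ar = A*P;
    %   ... solve Ar*y = b iteratively, then x = P*y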

4 APPLICATION OF NON-STATIONARY ITERATIVE METHODS TO POWER FLOW PROBLEMS

4.1 Implementation aspects

The non-stationary iterative methods have been implemented in Matlab 4.2c. Direct methods have also been simulated to allow comparisons. Matlab has the following implementation for the direct solution of a linear system with an unsymmetric matrix A':

  A' x' = b'    with    \tilde L \cdot U = \tilde A

where \tilde L is a permuted lower triangular matrix and U an upper triangular matrix as produced by the Matlab "lu" algorithm; \tilde A is the matrix A', permuted column-wise similarly to the Tinney-2 scheme.

4.2 Simulation results

Tests were run on a Sparc 10. The power flow (PF) equations have been programmed according to (1)-(4). The networks used in the simulation runs have the following characteristics:

  Network   No. busses   No. branches   Jacobian dimension   Jacobian non-zeros
  233 bus      233           354              471                  3740
  685 bus      685          1240             1531                 13284

Both the 233 and the 685 bus network data are based on real power system network parts in Europe and in the U.S.A. Other, smaller networks (7 and 57 bus networks) have also been used; however, simulation results have shown that no generalized conclusions can be drawn from these small networks.

The following parameters have been used for all power flow simulations:

- Flat start for the voltages (1 p.u.) and the reactive generator power (0).
- Max. number of NR steps = 25.
- PF convergence tolerance = 0.01 p.u. (1 kA, 0.01 kV^2); the maximum mismatch norm is taken.
- Max. number of iterations for each iterative solver per NR step = dimension of A.
- Convergence tolerance of the normalized residual: \|\tilde r\|_2 \le mismatch_max / 100, i.e. the convergence tolerance of the iterative solver depends on the maximum mismatch of the previously computed NR step.
- In addition, for BICGSTAB only: if |\|\tilde r\|_2^{(k)} - \|\tilde r\|_2^{(k-1)}| \le mismatch_max^{NR} \cdot 10^{-5}, stop after k iterations (doing this can reduce the number of iterations significantly).
- Start the iterative solution with x_0 = b' (since A' \approx I).
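The overall procedure can be summarized by the following hedged MATLAB sketch of the Newton-Raphson driver with an iterative inner solver. The interfaces (build_system, solver) and the sign convention of the mismatch vector are assumptions, not the authors' published code; block_diag_preconditioner refers to the sketch in section 3.

    function x = nr_powerflow(build_system, x0, blkptr, solver)
    % build_system : handle returning [A, b] of (5) at the current iterate
    % x0           : flat-start state vector
    % blkptr       : node block pointers for the preconditioner of (7)
    % solver       : handle y = solver(Ar, br, y0, tol, maxit), e.g. BiCGSTAB
      x = x0;
      for nr = 1:25                               % max. number of NR steps
        [A, b] = build_system(x);
        mismatch_max = norm(b, inf);
        if mismatch_max < 0.01, return; end       % PF convergence tolerance (p.u.)
        P   = block_diag_preconditioner(A, blkptr);
        Ar  = A*P;                                % right preconditioning (9), b' = b
        tol = mismatch_max/100;                   % tolerance of the normalized residual (11)
        y   = solver(Ar, b, b, tol, size(A, 1));  % start from x0 = b', max iter = dim(A)
        x   = x + P*y;                            % NR update (mismatch sign convention assumed)
      end
    end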

The following table summarizes all used abbreviations:

  Abbreviation   Description
  LU             Direct solution
  BICG           Bi-Conjugate Gradients
  BICGSTAB       Bi-Conjugate Gradients Stabilized
  CGS            Conjugate Gradients Squared
  QMR            Quasi Minimum Residual
  GMRES(m)       Generalized Minimum Residual (restarted after at most m iterations)
  CG             Conjugate Gradients
  NPC            No preconditioning
  LPC            Left preconditioning
  RPC            Right preconditioning
  max G          max. mismatch |b_i|_max
  max PC-G       max. conditioned mismatch |b'_i|_max
  residual       \|\tilde r\|_2
  NR             Newton-Raphson
  cpu            CPU seconds
  iter           Number of iterations per NR step
  NC             Not Converged case

4.2.1 Direct methods

The following table indicates the computational effort for the direct solution of the "Current mismatch power flow". Each iteration includes the setup of the mismatches and the Jacobian matrix, the factorization and the forward/backward substitution.

              233 bus network (total 6.1 s)    685 bus network (total 16.5 s)
  NR step      max G        cpu [s]             max G        cpu [s]
     1         64           1.0                 30           4.9
     2         56           0.7                 6            3.7
     3         62           0.7                 1.2          3.7
     4         14           0.7                 0.026        3.7
     5         11           0.7                 2.8e-05      0.5
     6         5.8          0.7
     7         0.97         0.7
     8         0.095        0.7
     9         0.00036      0.2

4.2.2 Non-stationary iterative methods

Fig. 1 shows the total CPU time of the various non-stationary iterative methods for the 233 bus network.

[Figure 1: CPU times for 233 bus network]

Figs. 2 and 3 show the total CPU time and the total number of iterations (i.e. the sum of all iterations over all NR steps) of the various non-stationary iterative methods for the 685 bus network.

[Figure 2: CPU times for 685 bus network]

[Figure 3: Total number of iterations for 685 bus network]

Some conclusions can be drawn from these test runs:

- The direct method is faster than any of the iterative methods. Note, however, that only the direct method is performance-optimized within Matlab. In Fig. 3, one "iteration" of LU_NPC corresponds to one NR step.

- Right (block-diagonal) preconditioning diverges with QMR and CGS.

- The performance and robustness of the GMRES(m) methods varies extremely with m and the type of network: for the 233 bus network, GMRES(m) converges only for high values of m (right preconditioning: m >= 50, left preconditioning: m >= 150). The 685 bus network, however, converges with m >= 10. The best run is achieved with m = 12 and right preconditioning.

- BICGSTAB RPC is the fastest iterative method for all networks, see also Figs. 5 and 6. It shows consistently robust convergence, i.e. all cases converged and the number of NR steps was never larger than with the direct method.

- Left (block-diagonal) preconditioning takes more iterations than BICGSTAB RPC and converges in all cases.

- Hybrid methods, i.e. combinations of the above mentioned methods, are possible. However, the scope of this paper is a comparison of accepted and well defined non-stationary iterative methods.
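Since BICGSTAB with right block-diagonal preconditioning turns out to be the most robust combination, a compact MATLAB sketch of the textbook BiCGSTAB iteration (as described in [7]) applied to the right-preconditioned system A' = A P, b' = b is given below. This is not the authors' code; the stopping test is simplified to the normalized residual (11), the extra BICGSTAB stagnation test of section 4.2 is omitted, and no breakdown checks are included.

    function [y, relres] = bicgstab_sketch(Ar, b, y0, tol, maxit)
    % BiCGSTAB for the (already right-preconditioned) system Ar*y = b;
    % after convergence the caller recovers x = P*y.
      y = y0;  r = b - Ar*y;  rhat = r;
      rho = 1; alpha = 1; omega = 1;
      v = zeros(size(b));  p = zeros(size(b));
      nb = norm(b);  relres = norm(r)/nb;
      for k = 1:maxit
        rho_new = rhat' * r;
        beta    = (rho_new/rho) * (alpha/omega);
        rho     = rho_new;
        p     = r + beta*(p - omega*v);
        v     = Ar*p;
        alpha = rho / (rhat'*v);
        s     = r - alpha*v;
        t     = Ar*s;
        omega = (t'*s) / (t'*t);
        y     = y + alpha*p + omega*s;
        r     = s - omega*t;
        relres = norm(r)/nb;              % normalized residual (11)
        if relres <= tol, return; end
      end
    end

With the notation of sections 3 and 4.2, a call would look like y = bicgstab_sketch(A*P, b, b, mismatch_max/100, size(A,1)), followed by the back-transformation x = P*y.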

The most successful method, BICGSTAB RPC, is analyzed in more detail in the following tables.

  233 bus network, BICGSTAB RPC (total 27.4 s)
  NR step    max G      iter    residual    cpu [s]
     1       64           0     0.57         0.8
     2       34           5     0.32         0.7
     3       5.6         63     1.3          2.8
     4       13          68     0.16         2.9
     5       2.6        159     0.06         6.3
     6       0.12       360     0.0035      13.6
     7       0.0028       -     -            0.3

The 0 in the column "iter" indicates that the chosen initial solution point, i.e. x_0 = b', already satisfies the chosen iterative convergence criterion, so that no iterations are necessary. 27.4 s is the total CPU time.

  685 bus network, BICGSTAB RPC (total 38.6 s)
  NR step    max G      iter    residual    cpu [s]
     1       30           2     0.18         2.6
     2       6.7         35     0.058        5.4
     3       4           43     0.07         6.4
     4       0.41        92     0.0026      12.5
     5       0.15        78     0.0061      10.8
     6       0.0015       -     -            0.9

To get more insight into the overall convergence of selected methods, Figs. 4 to 9 show the residual norms, the max. mismatch and the conditioned max. mismatch during all iterations of all NR steps. The straight vertical lines indicate the end/beginning of one NR step. Note that the max. mismatch (indicated with the dashed and dash-dotted lines in the figures) is usually not computed during the iterations; it is only displayed here to show the convergence behavior of the various iterative methods and the solution accuracy of the original non-linear equations during all iterations.

Fig. 4 shows the residuals over all iterations for the 233 bus network with CGS LPC. It clearly shows very noisy behavior as compared to BICGSTAB RPC (see Fig. 5); however, it also converges to the desired tolerance. Comparing Figs. 5 and 6, a dramatic visual convergence difference can be observed between the 233 and the 685 bus network for the same method (BICGSTAB RPC). Fig. 7 (BICG LPC) shows a noisy downward trend for the residual norm and the maximum mismatch. Fig. 8 shows BICGSTAB LPC; the problem converges, but using a large number of NR steps. Fig. 9 shows the convergence behavior of GMRES(12) RPC for the 685 bus network. This method is the only one besides CG that minimizes \|\tilde r\|_2; therefore the residuals decrease monotonically during the iterations of each NR step. One can also observe the typical restart behavior of the GMRES method: every 12 iterations, a mismatch step can be seen. However, due to the large computational effort per iteration, this method is slower than BICGSTAB RPC.


[Figure 4: CGS LPC - 233 bus network: residual (70.8 s CPU time)]

[Figure 5: BICGSTAB RPC - 233 bus network: max G and residual (27.4 s CPU time)]

[Figure 6: BICGSTAB RPC - 685 bus network: max G and residual (38.6 s CPU time)]

[Figure 7: BICG LPC - 685 bus network: max G and residual (62.6 s CPU time)]

[Figure 8: BICGSTAB LPC - 685 bus network: max G, max PC-G and residual (52.3 s CPU time)]

[Figure 9: GMRES(12) RPC - 685 bus network: max G and residual (64.7 s CPU time)]

5 CONCLUSIONS

In this paper implementation aspects of non-stationary iterative methods have been presented. Emphasis has been given to a comparison of the direct method and various iterative methods based on the "Current mismatch power flow". Since the Jacobian derived from the non-linear power flow equations is unsymmetric and indefinite, CG cannot be applied.

Preconditioning of the matrices is key to successful convergence of all iterative methods applied to practical power system problems. In this paper a block-diagonal preconditioning scheme has been applied. This scheme is fast and leads, for many methods, to successful convergence.

Simulation results for networks of up to 685 busses allow the following conclusions: The BICGSTAB RPC ("Bi-Conjugate Gradients Stabilized, right preconditioned") algorithm is the most robust algorithm for the solution of the "Current mismatch power flow" equations with a Newton-Raphson approach. The CPU times of a one-CPU implementation of this algorithm are 2.5 - 4.5 times slower than those of the direct solution methods, all implemented in Matlab 4.2c. The "Conjugate Gradients Squared" algorithm together with left block-diagonal preconditioning (CGS LPC) shows good robustness, but is slower than BICGSTAB RPC. The "Generalized Minimum Residual" algorithm together with right block-diagonal preconditioning (GMRES(m) RPC) shows smooth convergence properties even for quite low values of m. Although slower than BICGSTAB RPC and CGS LPC, this method is very appealing because it minimizes the residual from one iteration to the next in the same way as the "Conjugate Gradients" (CG) method does for symmetric and positive definite matrices.

The algorithms presented in this paper, together with the block-diagonal preconditioning, show almost perfect parallelism and can be implemented easily in a parallel CPU environment. A parallel implementation will reduce the total computation time significantly.

This paper presents for the first time practical applications of these non-stationary methods to large size non-linear power flow problems with unsymmetric and indefinite Jacobian matrices. The implementation results show the relative performance and robustness of the various non-stationary methods applied to the power flow problem. It can be foreseen that the use of more sophisticated preconditioners, a deeper understanding of the characteristics of these methods applied to the power flow, and the use of parallel CPU environments will further improve performance and robustness.

References

[1] W.F. Tinney and J.W. Walker, "Direct solutions of sparse network equations by optimally ordered triangular factorization", Proceedings of the IEEE, Vol. 55, Nov. 1967, pp. 1801-1809.
[2] F.D. Galiana, H. Javidi, S. McFee, "On the application of a preconditioned conjugate gradient algorithm to power network analysis", IEEE Transactions on Power Systems, Vol. 9, No. 2, May 1994, pp. 629-636.
[3] H. Mori, H. Tanaka, "A preconditioned fast decoupled power flow method for contingency screening", IEEE Power Industry Computer Applications Conference, Salt Lake City, May 7-12, 1995, pp. 262-270.
[4] F. Alvarado, D. Hasan, S. Harmohan, "Application of the conjugate gradient method to power system least squares problems", SIAM Conference on Linear Algebra, Snowbird, Colorado, June 1994.
[5] F. Alvarado, H. Dağ and M. ten Bruggencate, "Block-Bordered Diagonalization and Parallel Iterative Solvers", Colorado Conference on Iterative Methods, Breckenridge, Colorado, April 5-9, 1994.
[6] A. Semlyen, "Fundamental concepts of a Krylov subspace power flow methodology", IEEE SM 6007 PWRS, IEEE/PES Summer Meeting, July 23-27, 1995, Portland, OR.
[7] R. Barrett, M. Berry, T. Chan, J. Demmel, J. Donato, J. Dongarra, V. Eijkhout, R. Pozo, Ch. Romine, H. van der Vorst, "Templates for the Solution of Linear Systems: Building Blocks for Iterative Methods", SIAM, Philadelphia, Pennsylvania, 1993 (ftp netlib2.cs.utk.edu; cd templates; get templates.ps).
[8] J.R. Shewchuk, "An Introduction to the Conjugate Gradient Method Without the Agonizing Pain", School of Computer Science, Carnegie Mellon University, Pittsburgh, Edition 1 1/4, August 1994 (ftp warp.cs.cmu.edu; cd quake-papers; get painless-conjugate-gradient.ps).
[9] M.R. Hestenes, E. Stiefel, "Methods of conjugate gradients for solving linear systems", J. Res. National Bureau of Standards, Vol. 49, pp. 409-436, 1952.
[10] R. Bacher, "Computer aided power flow software engineering and code generation", IEEE Power Industry Computer Applications Conference, Salt Lake City, May 7-12, 1995, pp. 474-480.
