SOLUTION CHARACTERISTICS OF QUADRATIC POWER FLOW PROBLEMS

Y. V. Makarov, University of Sydney, Australia
A. M. Kontorovich, Tel Aviv, Israel
D. J. Hill, University of Sydney, Australia
I. A. Hiskens, University of Newcastle, Australia
KEYWORDS

Load flow analysis, numerical techniques.

ABSTRACT

A number of facts about quadratic power flow problems $f(x) = y + g(x) = 0$, $x \in R^{n_x}$, $y \in R^{n_y}$, and applied Newton-Raphson methods with optimal multipliers are presented. The main results concern solution structure, singular points and Newton-Raphson solutions. Comments and discussions are given. Although we address power flow problems here, there are many areas where the presented results may be used effectively; indeed, they are valid for any problem described by an algebraic system of quadratic equations.

INTRODUCTION

The Newton-Raphson (NR) method and its various modifications are the most popular numerical techniques used in load flow problems - see for instance [1]-[5]. There are two main forms used for the load
flow equations: the polar and rectangular forms. Both of them have advantages. The polar form provides a significant reduction of computations; for instance, the method which uses PQ decomposition of the load flow problem [3] is widely used in practice. The rectangular form of the load flow equations can be used effectively as well - see [6]-[12]. The most important feature of that form is that the power mismatch function can be expressed exactly using only the linear and second order terms of its Taylor series. It is well known that the NR method has good quadratic convergence if the initial estimates are close to a solution point. However, if they are far from a solution, or if the load flow problem is ill-conditioned, convergence of the NR method may be slow or may fail altogether. To overcome this problem, a number of numerical techniques have been proposed - see [4]-[8]. The general idea behind all of them is to apply corrections to each step of the method in such a way that the iterative process does not oscillate or diverge.
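To make the exactness of the second-order expansion concrete, the following minimal Python sketch checks it on a toy two-equation quadratic system; the system itself is our own illustrative assumption and is not a test system from the paper. Here $W$ denotes twice the quadratic part of $f$, so that $f(x + d) = f(x) + J(x)d + 0.5W(d)$, matching the notation used in Section 2. The functions f, J and W defined here are reused by the later sketches.

    import numpy as np

    # Toy quadratic mismatch function (an illustrative assumption, not a real
    # load flow): f(x) = [x1^2 + x2^2 - 4, x1*x2 - 1].
    def f(x):
        return np.array([x[0]**2 + x[1]**2 - 4.0, x[0]*x[1] - 1.0])

    def J(x):  # Jacobian matrix of f
        return np.array([[2.0*x[0], 2.0*x[1]], [x[1], x[0]]])

    def W(d):  # quadratic Taylor term: f(x+d) = f(x) + J(x)d + 0.5*W(d)
        return 2.0*(f(d) - f(np.zeros(2)) - J(np.zeros(2)) @ d)

    x, d = np.array([1.0, -2.0]), np.array([0.3, 0.7])
    assert np.allclose(f(x + d), f(x) + J(x) @ d + 0.5*W(d))  # exact, no remainder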
Due to its nonlinearity, the load flow problem may have a number of distinct solutions. Studies of the multiple solutions of the load flow problem play a role in determining proximity to voltage collapse [9, 13]. In order to obtain multiple load flow solutions, Tamura et al. used a set of quadratic load flow equations and the NR optimal multiplier method [11]. Iba et al. used Tamura's approach and some newly discovered convergence peculiarities of the NR method to find a pair of closest multiple solutions [12]. It is observed from experimental results [12] that if a point $x$ comes close to a line connecting a couple of distinct solutions, the subsequent NR iterative process in rectangular form goes along this line. The next observation in [12] is that, in the vicinity of a singular point, the NR method with the optimal multiplier gives a trajectory which tends to the straight line connecting a pair of closely located but distinct solutions. These features are used effectively in [12] to locate multiple load flow solutions. The authors asked for a theoretical background for these experimentally discovered properties. The present paper proves a number of such properties and explains some features of the load flow problem, the NR method in rectangular coordinates, and the optimal multiplier technique. The main results establish the following properties:

- A variation of $x$ along a straight line through a pair of distinct solutions of the problem $f(x) = 0$ results in variation of the mismatch vector $f(x)$ along a straight line in $R^{n_y}$.
- There is a singular point in the middle of a straight line connecting a pair of distinct solutions $x_1, x_2$ in $R^{n_x}$ [9, 10, 14, and others].
- A vector co-linear to a straight line connecting a pair of distinct solutions in $R^{n_x}$ nullifies the Jacobian matrix $J(x) = \partial f / \partial x$ at the centre point of the line [9, 10, and others].
- If $x$ belongs to a straight line connecting a pair of distinct solutions, the NR iterative process goes along that line.
- The maximal number of solutions on any straight line in $R^{n_x}$ is two.
- Along a straight line through two distinct solutions $x_1, x_2$, the problem can be reduced to a single scalar quadratic equation which locates these solutions.
- If a loading process $y(\tau)$ in $R^{n_y}$ reaches a singular point $\det J(x) = 0$, the corresponding trajectory of $x(\tau)$ in $R^{n_x}$ tends to the right eigenvector nullifying $J(x)$ at the singular point (except in some special cases).
- At any singular point, there are two merging solutions (except in some special cases).
- For any two points $x_1 \neq x_2$ with $\det J(x_1) \neq 0$, the number and location of singularities of the quadratic problem $f(x) = 0$ on the straight line through $x_1, x_2$ are defined by the real eigenvalues of the matrix $J^{-1}(x_1) J(x_2)$.
- The derivative of the function $\phi(\mu, x) = \|f(x + \mu \Delta x)\|^2$, where $\Delta x$ is the NR correction vector, with respect to the multiplier $\mu$ at $\mu = 0$ is equal to minus twice the function value [6].
- An optimal multiplier $\mu$, corresponding to a minimum of the function $\phi(\mu, x)$, always exists in the range [0,2] [6, 12].
- The scalar product of the NR correction vector $\Delta x$ and the gradient of the function $\phi(0, x)$ is equal to minus twice the value of this function [6].
- The function $\phi$ taken along a straight line through a pair of distinct solutions $x_1, x_2$ has a maximum in the middle of the line.
- Zero gradients of the function $\phi(0, x)$ correspond either to solutions of the problem or to points where $f(x)$ is an orthogonal vector to the singular surface $\det J(x) = 0$.
- The gradient of the function $\phi(0, x)$ at the centre of a straight line connecting a pair of distinct solutions is an orthogonal vector to this line.

Some of these results are already known - see the references above. They are included here to assemble them with the new results presented in the paper, and new, clearer explanations are given for them. Earlier versions of some of the new results presented here are given in [6, 15, 16].

1. OPTIMAL MULTIPLIER TECHNIQUE

A load flow problem consists of the solution of nonlinear equations of the general form

$f(x) = y + g(x) = 0$   (1)
where $y \in R^{n_y}$ is the vector of specified independent parameters, such as active and reactive powers of loads and generators or fixed voltages, and $x \in R^{n_x}$ is the state vector, consisting of nodal voltages. The vector function $g(x)$ defines the sum of power flows or currents into each bus from the rest of the network. If the nodal voltages $x$ are expressed in rectangular coordinates, then $f(x)$ is a quadratic function of $x$. The results given here are motivated by, but not limited to, the power flow problem. We refer to the general form (1) throughout. The optimal multiplier technique [6, 7, 8] uses the following iterative step:

$x_{i+1} = x_i - \mu_i J^{-1}(x_i) f(x_i)$   (2)

where $i$ is the iteration number and $J(x)$ is the Jacobian matrix of $f(x)$ calculated at the point $x$. The algorithm is the usual NR one with the addition of the optimal multiplier $\mu_i$, which is chosen for fixed $x_i$ and $\Delta x_i$ as a solution of the problem

$\min_{\mu_i} \phi(\mu_i) = \min_{\mu_i} \|f(x_i + \mu_i \Delta x_i)\|^2$   (3)

where $\Delta x_i = x_{i+1} - x_i$ is calculated from (2) with $\mu_i = 1$. The multiplier $\mu_i$ can be found as a solution of the equation

$\phi' = 0$   (4)

where $\phi'$ denotes $\partial \phi / \partial \mu_i$. If rectangular coordinates are used and $f(x)$ has quadratic nonlinearity, the function $\phi'$ is a scalar cubic function of $\mu_i$, and solutions of (4) are easily found. Practical implementations of the optimal multiplier technique show that it provides reliable convergence for a great number of cases [6]-[8]. An optimal multiplier technique for the solution of ill-conditioned load flow problems was proposed in [6]-[8].
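As a concrete sketch of the step (2)-(4), the following Python fragment reuses the toy system f, J, W defined in the Introduction (our own example, not the paper's); it forms the quartic $\phi(\mu_i)$, extracts the cubic $\phi'$, and takes the real root that minimizes $\phi$. The starting point is arbitrary.

    import numpy as np
    # One NR step with the optimal multiplier, per eqs. (2)-(4).
    def nr_step_optimal(x):
        fx = f(x)
        dx = np.linalg.solve(J(x), -fx)      # plain NR correction (mu = 1)
        a, b, c = fx, J(x) @ dx, 0.5*W(dx)   # phi(mu) = ||a + mu*b + mu^2*c||^2
        # phi'(mu) = 4(c.c)mu^3 + 6(b.c)mu^2 + 2(b.b + 2a.c)mu + 2(a.b);
        # assumes W(dx) != 0 so the cubic is nondegenerate.
        roots = np.roots([4*c@c, 6*b@c, 2*(b@b + 2*a@c), 2*a@b])
        real = [r.real for r in roots if abs(r.imag) < 1e-9]
        mu = min(real, key=lambda m: np.sum((a + m*b + m*m*c)**2))
        return x + mu*dx, mu

    x = np.array([3.0, 0.5])                 # arbitrary starting point
    for _ in range(15):
        x, mu = nr_step_optimal(x)
    print(x, f(x))                           # converges to a point with f(x) ~ 0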
2. QUADRATIC POWER FLOW STUDIES

This section deals with some basic properties of quadratic power flow problems, their solutions, their singularities, and the applied NR method.

Property 1. For quadratic mismatch functions $f(x)$, a variation of $x$ along a straight line through a pair of distinct solutions of the problem $f(x) = 0$ results in variation of the mismatch vector $f(x)$ along a straight line in $R^{n_y}$.

Proof. Let $x$ be a point on the straight line connecting two distinct solutions $x_1, x_2$:

$x = x_1 + \gamma (x_2 - x_1) = x_1 + \gamma \Delta x_{21}$   (5)
where $\gamma$ is a scalar parameter and $\Delta x_{21} = x_2 - x_1$. For quadratic mismatch functions,

$f(x) = f(x_1) + \gamma J(x_1) \Delta x_{21} + 0.5 \gamma^2 W(\Delta x_{21})$   (6)

$f(x_2) = f(x_1) + J(x_1) \Delta x_{21} + 0.5 W(\Delta x_{21})$   (7)

where $0.5 W(\Delta x_{21})$ is the quadratic term of the Taylor series expansion (7). At the points $x_1, x_2$, we have $f(x_1) = f(x_2) = 0$. So, from (7),

$0.5 W(\Delta x_{21}) = -J(x_1) \Delta x_{21}$   (8)

Using (8), equation (6) transforms to

$f(x_1 + \gamma \Delta x_{21}) = \gamma (1 - \gamma) J(x_1) \Delta x_{21} = \beta b$   (9)

where $\beta = \gamma (1 - \gamma)$ and $b = J(x_1) \Delta x_{21}$. Thus the mismatch function $f(x_1 + \gamma \Delta x_{21})$ varies along a straight line in $R^{n_y}$. □

Comments. This fact was mentioned in [9]. There is an interesting practical application: finding multiple solutions of a quadratic problem $f(x) = 0$. Note that (9) can be rewritten as

$f(x_1 + \Delta x) + (\gamma - 1) J(x_1) \Delta x = 0$   (10)

where $x_1$ is a known solution, $\Delta x$ is an unknown increment of the state variables, and $\gamma$ is an unknown scalar parameter. Except for the trivial case $\Delta x = 0$, the last equation corresponds to a different solution

$x_2 = x_1 + \gamma^{-1} \Delta x, \quad |\gamma| < \infty, \quad \gamma \neq 0$

The system (10) has $n$ equations and $n + 1$ unknown variables, so it is necessary to add one more equation to (10), for instance,

$r^t \Delta x - 1 = 0$   (11)

where $r$ is a nonzero vector. By varying $r$ and substituting newly discovered solutions in place of $x_1$ in (10), (11), it is possible to get all solutions of a quadratic problem.
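The following hedged Python sketch applies (10)-(11) to the toy system from the Introduction: starting from one known solution, it solves the augmented $(n+1)$-dimensional system for a second one. The starting guesses and the vector r are our own arbitrary choices, and a general root finder may need several starts to converge.

    import numpy as np
    from scipy.optimize import fsolve

    x1 = fsolve(f, np.array([2.0, 0.5]))       # one known solution of the toy system

    def augmented(z, r):                       # eqs. (10) and (11) stacked together
        dx, gamma = z[:2], z[2]
        return np.append(f(x1 + dx) + (gamma - 1.0)*(J(x1) @ dx), r @ dx - 1.0)

    r = np.array([1.0, -1.0])                  # any nonzero vector; vary to find others
    z = fsolve(augmented, np.array([1.0, -1.0, 0.5]), args=(r,))
    dx, gamma = z[:2], z[2]
    x2 = x1 + dx/gamma                         # the second solution, x2 = x1 + dx/gamma
    print(x2, f(x2))                           # f(x2) ~ 0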
Property 2. For a quadratic problem $f(x) = 0$, there is a point of singularity at the centre of a straight line connecting a pair of distinct solutions in $R^{n_x}$, and a vector co-linear to this line nullifies the Jacobian matrix evaluated at the centre point.

Proof. Let $x_1, x_2$ be two distinct solutions of a quadratic problem $f(x) = 0$. A line connecting these solutions can be defined as in (5), and due to the quadratic nonlinearity,

$f(x_1) = f(x_2) - J(x_2) \Delta x_{21} + 0.5 W(-\Delta x_{21})$   (12)

$f(x_2) = f(x_1) + J(x_1) \Delta x_{21} + 0.5 W(\Delta x_{21})$   (13)

It is clear that $W(-\Delta x) = W(\Delta x)$. As $f(x_1) = f(x_2) = 0$, from (12) and (13),

$[J(x_1) + J(x_2)] \Delta x_{21} = 0$   (14)

For a quadratic function $f(x)$, the Jacobian matrix contains elements which are linear functions of $x$. So, it can be represented as

$J(x) = \sum_{i=1}^{n} A_i x_i + J(0)$   (15)

where $A_i$ and $J(0)$ are constant $(n \times n)$ matrices of Jacobian coefficients, and $x_i$ are the components of $x$. Using (15), the equality (14) can be rewritten as

$2 J(x_0) \Delta x_{21} = 0$   (16)
where $x_0 = (x_1 + x_2)/2$. As $\Delta x_{21} \neq 0$, the vector $\Delta x_{21}$ is a right eigenvector corresponding to a zero eigenvalue of the Jacobian matrix $J(x_0)$. Moreover, for every $\Delta x \neq 0$ co-linear with $\Delta x_{21}$, we get $J(x_0) \Delta x = 0$. □

Comments. Both the first and second parts were proved in [9, 6, 14, and others]. The above proof seems simpler and more compact. An interesting conclusion follows from Properties 1 and 2: variations of $x$ along a straight line connecting a couple of distinct solutions are actually motions of $x$ along the right eigenvector nullifying $J(x)$ in the middle of the line.
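A quick numerical confirmation of both parts of Property 2 on the toy system from the Introduction, whose solutions are known in closed form:

    import numpy as np
    # Two exact solutions of the toy system x1^2 + x2^2 = 4, x1*x2 = 1:
    a, b = (np.sqrt(6) + np.sqrt(2))/2, (np.sqrt(6) - np.sqrt(2))/2
    x1, x2 = np.array([a, b]), np.array([b, a])
    x0 = 0.5*(x1 + x2)                 # centre of the connecting line
    print(np.linalg.det(J(x0)))        # ~0: the centre point is singular
    print(J(x0) @ (x2 - x1))           # ~[0, 0]: dx21 is nullified by J(x0)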
Property 3. If a point $x$ lies on a straight line connecting two distinct solutions of the quadratic problem $f(x) = 0$, the NR iterative process started from $x$ follows this line.

Proof. If a point $x_i$ is on the line connecting two distinct solutions, it can be described in the form (5). The middle point of the line is $x_0 = 0.5(x_1 + x_2)$. The quadratic mismatch function can be expressed as

$f(x_i) = f(x_0) + J(x_0)(x_i - x_0) + 0.5 W(x_i - x_0)$   (17)

To express the last term of (17) in terms of Jacobian matrices, we write

$f(x) = f(0) + J(0) x + 0.5 W(x)$   (18)

$f(0) = f(x) - J(x) x + 0.5 W(x)$   (19)

By summing the last two equalities, $W(x) = [J(x) - J(0)] x$, and so

$0.5 W(x_i - x_0) = 0.5 [J(x_i - x_0) - J(0)](x_i - x_0)$

On the other hand, taking into account the quadratic nonlinearity of $f(x)$ and (15),

$J(x_i - x_0) = J(x_i) - J(x_0) + J(0)$

so that

$0.5 W(x_i - x_0) = 0.5 [J(x_i) - J(x_0)](x_i - x_0)$

Since $x_i - x_0 = (\gamma - 0.5) \Delta x_{21}$, noting (16) we have $J(x_0)(x_i - x_0) = 0$. Therefore, in (17), we get

$f(x_i) = f(x_0) + 0.5 J(x_i)(x_i - x_0)$   (20)

For the NR method with initial point $x_i$, we have the following expression for the correction vector $\Delta x_i$:

$f(x_i) + J(x_i) \Delta x_i = 0$   (21)

From (9),

$f(x_i) = \gamma (1 - \gamma) J(x_1) \Delta x_{21}$   (22)

and so

$J(x_1) \Delta x_{21} = \gamma^{-1} (1 - \gamma)^{-1} f(x_i), \quad \gamma \neq 0, 1$   (23)

At the point $\gamma = 0.5$ we have $x_i = x_0$, and it follows from (22) and (23) that

$f(x_0) = 0.25 \gamma^{-1} (1 - \gamma)^{-1} f(x_i)$   (24)

By substitution of (24) into (20), it follows that

$f(x_i) = 0.25 \gamma^{-1} (1 - \gamma)^{-1} f(x_i) + 0.5 J(x_i)(x_i - x_0)$   (25)

Multiplying (25) by $J^{-1}(x_i)$ and taking into account (21),

$\Delta x_i = -2 \gamma (1 - \gamma) [4 \gamma (1 - \gamma) - 1]^{-1} (x_i - x_0)$   (26)

Equation (26) shows that the NR correction vector $\Delta x_i$ lies along the straight line directed by the vector $(x_i - x_0)$, i.e. the iterative process goes along the line connecting $x_1, x_2$. □

Comments. This fact was experimentally discovered in [12], where again the authors asked for a theoretical proof of the phenomenon.
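Property 3 is easy to observe numerically. In the sketch below (the toy system again, with an arbitrary starting position $\gamma = 0.3$ on the line), every plain NR iterate stays on the line through the two solutions, the offset from the line being zero to rounding error:

    import numpy as np
    a, b = (np.sqrt(6) + np.sqrt(2))/2, (np.sqrt(6) - np.sqrt(2))/2
    x1, x2 = np.array([a, b]), np.array([b, a])   # the two solutions used above
    x = x1 + 0.3*(x2 - x1)                        # start on the connecting line
    for _ in range(6):
        x = x - np.linalg.solve(J(x), f(x))       # plain NR step, cf. eq. (21)
        d, e = x - x1, x2 - x1
        print(x, d[0]*e[1] - d[1]*e[0])           # offset from the line: ~0 each step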
Property 4. The maximum number of solutions of a quadratic equation $f(x) = 0$ on any straight line in the state space $R^{n_x}$ is two.

Proof. Let us take the function

$\phi(\mu, \Delta x) = f^t(x + \mu \Delta x) f(x + \mu \Delta x)$   (27)

For a quadratic mismatch function $f(x)$,

$f(x + \mu \Delta x) = f(x) + \mu J(x) \Delta x + 0.5 \mu^2 W(\Delta x)$

and hence

$\phi(\mu, \Delta x) = \|f(x)\|^2 + \mu^2 \|J(x) \Delta x\|^2 + 0.25 \mu^4 \|W(\Delta x)\|^2 + 2 \mu f^t(x) J(x) \Delta x + \mu^3 W^t(\Delta x) J(x) \Delta x + \mu^2 f^t(x) W(\Delta x)$

The function $\phi(\mu, \Delta x)$ equals zero if and only if $f(x + \mu \Delta x) = 0$. At a solution point $x = x^*$, $f(x^*) = 0$, and the function (27) becomes

$\phi(\mu, \Delta x) = 0.25 \mu^4 \|W(\Delta x)\|^2 + \mu^3 W^t(\Delta x) J(x^*) \Delta x + \mu^2 \|J(x^*) \Delta x\|^2 = (a \mu^2 + b \mu + c) \mu^2$

where $a, b, c$ are the obvious functions of $\Delta x$. For any fixed direction $\Delta x \neq 0$, $\phi(\mu, \Delta x)$ equals zero
in the following two cases: (a) $\mu = 0$; (b) $a \mu^2 + b \mu + c = 0$. The first case gives us the original solution point $x = x^*$. The second case corresponds to solutions $x \neq x^*$ on the straight line directed by $\Delta x$. However, as is clear from (27), the function (27) cannot be negative. Thus $a \mu^2 + b \mu + c \geq 0$, and in case (b) it is possible to have only one additional solution besides $x^*$ (a double root of the nonnegative quadratic), but not two or more. So, on the line we get one root $x = x^*$, and we can have only one additional root corresponding to condition (b). □

Property 5. For a straight line connecting two solutions in $R^{n_x}$, the system of quadratic equations can be reduced to a single scalar quadratic equation which locates these solutions.

Proof. Let $x_1, x_2$ be unknown distinct solutions of a quadratic problem $f(x) = 0$. Suppose we have a point $x^*$ and a direction $\Delta x^*$ which define a line connecting the pair of solutions. The mismatch function calculated along this line is

$F(\tau) = f(x^* + \tau \Delta x^*)$   (28)

where $\tau$ is a scalar parameter. Let us take any fixed value $\tau = \tilde{\tau}$ and define a constant vector

$\alpha = F(\tilde{\tau}) \neq 0$   (29)

Having (28) and (29), consider the equation

$\alpha^t F(\tau) = 0$   (30)

It follows from Property 1 that $\alpha$ and $F(\tau)$ are co-linear vectors, so (30) is true only if $F(\tau) = 0$. Using (29) and the Taylor series expansion

$F(\tau) = f(x^*) + \tau J(x^*) \Delta x^* + 0.5 \tau^2 W(\Delta x^*)$

we get the scalar quadratic equation

$a \tau^2 + b \tau + c = 0$   (31)

where $a = 0.5 \alpha^t W(\Delta x^*)$, $b = \alpha^t J(x^*) \Delta x^*$, $c = \alpha^t f(x^*)$. Equation (31) has a pair of distinct real roots $\tau_1, \tau_2$ corresponding to $x_1, x_2$, and we can obtain these solutions as $x_1 = x^* + \tau_1 \Delta x^*$, $x_2 = x^* + \tau_2 \Delta x^*$. □
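The construction of Property 5 is directly computable. The sketch below (toy system again; the point $x^*$ on the line and the sample value $\tilde{\tau} = 1$ are arbitrary choices) recovers both solutions from the scalar quadratic (31):

    import numpy as np
    x0 = 0.5*(x1 + x2)                # midpoint (x1, x2 as in the previous sketches)
    xs, dxs = x0 + 0.3*(x2 - x1), x2 - x1   # a point of the line and its direction
    alpha = f(xs + 1.0*dxs)           # eq. (29) with tau~ = 1; must be nonzero
    aq = 0.5*alpha @ W(dxs)           # coefficients of eq. (31)
    bq = alpha @ (J(xs) @ dxs)
    cq = alpha @ f(xs)
    for t in np.roots([aq, bq, cq]):
        print(xs + t*dxs, f(xs + t*dxs))   # both roots give points with f ~ 0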
Property 6. For almost all cases, if a loading process $y(\tau)$ ends at a singular point $x_0$ of the problem $y(\tau) + g(x) = 0$, where $g(x)$ is a quadratic function of $x$ and $\tau$ is a scalar loading parameter, there are two distinct solutions $x_1, x_2$ merging at the singular point, and the trajectories $x_1(\tau), x_2(\tau)$ tend to the right eigenvector $r$ corresponding to a zero eigenvalue of the Jacobian matrix $J(x_0)$.

Proof. Let a loading process with variable $\tau$ and

$y(\tau) + g(x) = 0$   (32)

end at a singular point $x_0$ corresponding to $\tau = \tau_0$. At the singular point, $dy = y'(\tau_0) d\tau = Y_0 d\tau$. The implicit function theorem gives

$J(x_0) dx + dy = J(x_0) dx + Y_0 d\tau = 0$   (33)

By multiplying (33) by $s^t$, where $s$ is the left eigenvector of $J(x_0)$ corresponding to a zero eigenvalue, we get

$s^t J(x_0) dx + s^t Y_0 d\tau = s^t Y_0 d\tau = 0$

For the general case of loading, $s^t Y_0 \neq 0$, and therefore $d\tau = 0$. This means that the loading parameter $\tau$ reaches its extremal value $\tau_0$ at the singular point $x_0$. Alternatively, when $s^t Y_0 = 0$, the loading trajectory $y(\tau)$ tends to the hyperplane tangent to the singular margin ($\det J(x) = 0$) of the problem $f(x) = 0$ at the point $x_0$; this follows from the fact that $s$ is an orthogonal vector with respect to the singular margin in $R^{n_y}$. Using (33), we have

$J(x_0) dx = 0$   (34)

So, the increment $dx$ has the same direction as the right eigenvector $r$ of the Jacobian matrix corresponding to its zero eigenvalue, and the last part of Property 6 has been proved. On the other hand, having (34), for finite increments at the point $x_0$,

$-Y_0 \Delta\tau = J(x_0) \Delta x + 0.5 W(\Delta x) \rightarrow 0.5 W(\Delta x) \quad \text{as} \quad \Delta\tau \rightarrow 0$

and $W(\Delta x) = W(-\Delta x)$. So, for a small increment $\Delta\tau$ at the point $x_0$, we have two increments of $x$ of opposite signs directed along $r$. Therefore, the first part of Property 6 has been proved. □

Comments. The Property follows from [17, page 217]. A similar fact was mentioned in [18]. The proof presented here was first given in [19]. It explains the experimental fact obtained in [12]: it was observed that in the vicinity of a singular point, the NR method with the optimal multiplier gives a trajectory which tends to the straight line connecting a pair of closely located load flow solutions $x_1, x_2$. Actually, the loading trajectory $y(\tau)$ in (32) can be viewed as the convergence trajectory of the NR method. If it comes close to the singular margin, it tends to the right eigenvector $r$ which nullifies the Jacobian matrix at the middle point between the closely located solutions. Property 3 says that the further iterative process goes along the line through $x_1, x_2$, and that line is co-linear to $r$. So, Properties 3 and 6 explain these phenomena.
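The tendency described by Property 6 can be observed numerically. In the hedged sketch below, the toy system is loaded as $y(\tau) = y_0 + \tau d$ (the direction d and the step schedule are our own choices, and plain NR may need damping very close to the nose); as $\tau$ approaches its extremal value, the solution increments become approximately parallel (up to sign) to the right eigenvector of the smallest eigenvalue of $J$:

    import numpy as np
    y0, d = np.array([-4.0, -1.0]), np.array([1.0, 0.0])   # so f(x) = y0 + g(x)

    def solve(tau, x):                 # plain NR for y(tau) + g(x) = 0
        for _ in range(50):
            x = x - np.linalg.solve(J(x), f(x) + tau*d)
        return x

    x = np.array([1.93, 0.52])
    for tau in np.linspace(0.0, 1.9, 20):
        xn = solve(tau, x)
        step, x = xn - x, xn
    print(step/np.linalg.norm(step))   # direction of the last solution increment
    w, v = np.linalg.eig(J(x))
    print(v[:, np.argmin(abs(w))])     # right eigenvector r: nearly the same line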
Property 7. For any two points $x_1 \neq x_2$ with $\det J(x_1) \neq 0$, the number and location of singularities of the quadratic problem $f(x) = 0$ on the straight line through $x_1, x_2$ are defined by the real eigenvalues of the matrix $J^{-1}(x_1) J(x_2)$.

Proof. Let us define the line through $x_1, x_2$ as in (5). Using (15), it is easy to show that

$J[x_1 + \gamma (x_2 - x_1)] = (1 - \gamma) J(x_1) + \gamma J(x_2)$   (35)

As $x_1$ is a nonsingular point, for $\gamma \neq 0$ the expression (35) can be written as

$J(x) = \gamma J(x_1) \left[ J^{-1}(x_1) J(x_2) - (\gamma - 1) \gamma^{-1} I \right]$

where $I$ is the identity matrix. So, $J(x)$ is singular exactly when $(\gamma - 1) \gamma^{-1}$ coincides with a real eigenvalue $\lambda$ of the matrix $J^{-1}(x_1) J(x_2)$, i.e. all singular points on the line (5) correspond to $\gamma = (1 - \lambda)^{-1}$, $\lambda \neq 1$, and can be computed from the real eigenvalues of $J^{-1}(x_1) J(x_2)$. □
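A sketch of Property 7 on the toy system (the two probe points are arbitrary, chosen so that $J(x_a)$ is nonsingular): each real eigenvalue $\lambda \neq 1$ of $J^{-1}(x_a) J(x_b)$ yields a singular point at $\gamma = (1 - \lambda)^{-1}$.

    import numpy as np
    xa, xb = np.array([2.0, 1.0]), np.array([0.0, 3.0])   # arbitrary probe points
    M = np.linalg.solve(J(xa), J(xb))                     # J^-1(xa) J(xb)
    for lam in np.linalg.eigvals(M):
        if abs(lam.imag) < 1e-9 and abs(1.0 - lam.real) > 1e-9:
            g = 1.0/(1.0 - lam.real)        # from (gamma - 1)/gamma = lambda
            xg = xa + g*(xb - xa)
            print(g, np.linalg.det(J(xg)))  # det J ~ 0 at each such gamma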
3. OPTIMAL MULTIPLIERS

This section is devoted to some interesting properties of the so-called optimal multiplier function $\phi(\mu, x) = \|f(x + \mu \Delta x)\|^2$.

Property 8. The derivative of $\phi(\mu, x) = \|f(x + \mu \Delta x)\|^2$ along the NR correction vector $\Delta x$, with respect to the multiplier $\mu$ at $\mu = 0$, equals minus twice the value of the function evaluated at the same point.

Proof. Let us consider the function

$\phi(\mu, x) = f^t(x + \mu \Delta x) f(x + \mu \Delta x)$

where $x$ is a given point, $\Delta x$ is the NR correction vector evaluated at $x$, and $\mu$ is the multiplier. Differentiating $\phi(\mu, x)$ with respect to $\mu$,

$\phi'(\mu, x) = 2 \Delta x^t J^t(x + \mu \Delta x) f(x + \mu \Delta x)$

and, since $J(x) \Delta x = -f(x)$,

$\phi'(0, x) = 2 \Delta x^t J^t(x) f(x) = -2 \phi(0, x)$   (36)

□
Comments. This fact was proved in [6]. It was also mentioned in [6] that (36) can be used as a measure of the proximity of a correction vector $\Delta x$ to the NR correction vector. This estimate can be utilized in numerical algorithms which use simplified Jacobian matrices.

Property 9. The optimal multiplier $\mu$ always exists in the range [0,2].

Proof. It is clear from Property 8 that

$\phi'(0, x) = -2 f^t(x) f(x) = -2 \phi(0) \leq 0$   (37)

For the NR correction vector $\Delta x$,

$\phi'(2, x) = 2 \Delta x^t J^t(x + 2 \Delta x) f(x + 2 \Delta x) = 2 \Delta x^t [J(x) + 2 J(\Delta x) - 2 J(0)]^t [f(x) + 2 J(x) \Delta x + 2 W(\Delta x)] = 2 \| {-f(x)} + 2 J(\Delta x) \Delta x - 2 J(0) \Delta x \|^2 \geq 0$   (38)

In (38), we take into account that in the quadratic case $J(x + 2 \Delta x) = J(x) + J(2 \Delta x) - J(0) = J(x) + 2 J(\Delta x) - 2 J(0)$, and $W(\Delta x) = [J(\Delta x) - J(0)] \Delta x$. By comparing (37) and (38), we conclude that $\phi(\mu, x)$ has at least one extremum in the range $\mu \in [0,2]$. As $\phi'(0, x) \leq 0$, the function $\phi(\mu, x)$ has a minimum in the range $\mu \in [0,2]$. □

Comments. Property 9 was proved in [6]. It explains the observation about the behaviour of the optimal multiplier given in [12].

Property 10. The scalar product of the NR correction vector $\Delta x$ and the gradient of the function $\phi(0, x) = \|f(x)\|^2$ is equal to minus twice the value of the function.

Proof. The gradient of the function $\phi(0, x) = \|f(x)\|^2 = f^t(x) f(x)$ is

$\mathrm{grad}[\phi(0, x)] = 2 f^t(x) J(x)$   (39)

Multiplying (39) by the NR correction vector $\Delta x$ gives

$\mathrm{grad}[\phi(0, x)] \, \Delta x = 2 f^t(x) J(x) \Delta x = -2 \phi(0, x)$   (40)

□

Comments. Property 10 gives a relationship between the NR and gradient methods. To illustrate this, let us consider the following cases.

1. A point where $\det J(x) = 0$. In this case the NR correction vector $\|\Delta x\| \rightarrow \infty$ while $\phi(0, x)$ still has a finite value. This means that $\Delta x$ becomes an orthogonal vector with respect to $-\mathrm{grad}[\phi(0, x)]$, so $\cos\theta = 0$, where $\theta$ is the angle between the vectors. The vector $\Delta x$ becomes a tangent vector to the hypersurface $\phi(0, x) = \mathrm{const}$.

2. A point where $-\mathrm{grad}[\phi(0, x)]$ has the same direction as $\Delta x$. In this case $\cos\theta = 1$.

3. Intermediate points. As the scalar product of $\Delta x$ and $-\mathrm{grad}[\phi(0, x)]$ in (40) is always positive, we have $0 < \cos\theta < 1$. This means that $\Delta x$ at intermediate points is directed inside the region bounded by the hypersurface $\phi(0, x) = \mathrm{const}$, and it decreases $\phi(0, x)$. A question remains about the length of the vector $\Delta x$: in the vicinity of the singular margin it can be so large that the function actually increases after the corresponding NR iteration. Nevertheless, it is possible to restrict the step length and thereby ensure a decrease of the function at each NR iteration. This is the idea of the optimal multiplier technique.
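The identities (36), (37), (40) and the location of the minimizer are all easy to check numerically on the toy system (the test point is an arbitrary nonsingular point):

    import numpy as np
    x = np.array([1.7, 0.4])                         # arbitrary nonsingular point
    fx = f(x)
    dx = np.linalg.solve(J(x), -fx)                  # NR correction vector
    phi = lambda mu: np.sum(f(x + mu*dx)**2)         # phi(mu, x)
    h = 1e-6
    print((phi(h) - phi(-h))/(2*h), -2*phi(0.0))     # Property 8: phi'(0) = -2 phi(0)
    grad = 2.0*fx @ J(x)                             # eq. (39)
    print(grad @ dx, -2*phi(0.0))                    # Property 10: grad . dx = -2 phi(0)
    mus = np.linspace(0.0, 2.0, 2001)
    print(mus[np.argmin([phi(m) for m in mus])])     # Property 9: minimizer lies in [0,2]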
Property 11. The function $\phi = \|f(x)\|^2$ calculated along a straight line connecting a pair of distinct solutions of a quadratic problem $f(x) = 0$ has a maximum at the middle point of this line, and its gradient taken in the middle of the line is an orthogonal vector to this line.

Proof. For points $x$ on the line connecting a pair of distinct solutions $x_1, x_2$, the function is

$\phi(\gamma) = \|f[x_1 + \gamma (x_2 - x_1)]\|^2$

It has extremal values where

$\phi'_\gamma(\gamma) = \phi'_x \, x'_\gamma = 2 f^t(x) J(x)(x_2 - x_1) = 0$   (41)

Property 2 says that at the middle point $x_0 = (x_1 + x_2)/2$ of the line $(x_1, x_2)$, the equality $J(x_0)(x_2 - x_1) = 0$ holds. Thus,

$\phi'(0.5) = 0$   (42)

On the other hand, the function $\phi'(\gamma)$ can be expressed as a cubic function of $\gamma$:

$\phi'(\gamma) = 4 a_4 \gamma^3 + 3 a_3 \gamma^2 + 2 a_2 \gamma + a_1$   (43)

and two roots of (43), namely $\gamma = 0$ and $\gamma = 1$, correspond to global minima of $\phi$ at zero value:

$\phi'(0) = 0, \quad \phi(0) = 0; \qquad \phi'(1) = 0, \quad \phi(1) = 0$

It follows from Property 4 that $\phi$ has only two zero minima on the line. The third zero (42) of the cubic (43) therefore corresponds to a maximum of $\phi$ between them. The first part of Property 11 has been proved. The second part follows directly from (41); indeed, (41) can be rewritten as

$\mathrm{grad}[\phi(\gamma)] (x_2 - x_1) = 0$

□

Comments. It should be pointed out that Property 11 does not mean that $\phi$ has an absolute maximum at $x = x_0$. For an absolute maximum, the gradient $\phi'_x = 2 f^t(x) J(x)$ has to be zero and the quadratic form $\Delta x^t \phi''_{xx} \Delta x$ has to be negative definite. So, Property 11 deals with the maximum of $\phi$ defined along the line. In some cases, we can have an absolute maximum or minimum of the function $\phi$ at some points of the singular margin; such points can be described with the help of Property 12 below.
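A one-line scan confirms Property 11 on the toy system, with x1, x2 the exact solutions used in the earlier sketches:

    import numpy as np
    phi_line = lambda g: np.sum(f(x1 + g*(x2 - x1))**2)
    gs = np.linspace(0.0, 1.0, 1001)
    print(gs[np.argmax([phi_line(g) for g in gs])])   # ~0.5: maximum at the midpoint
    print(phi_line(0.0), phi_line(1.0))               # ~0 at both solutions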
Property 12. A zero gradient of the function $\phi(0, x) = \|f(x)\|^2$ corresponds either to a solution of the problem $f(x) = 0$ or to a point where the vector $f(x)$ is orthogonal to the singular margin of the load flow problem.

Proof. The function can be represented as

$\phi(0, x) = \|f(x)\|^2 = \|f(x) - f(x^*_i)\|^2$   (44)

where $x^*_i$, $i = 1, \ldots, m$, is any solution of the problem $f(x) = 0$ and $m$ is the number of distinct solutions. Clearly, (44) is the square of the distance from the solution point $f(x^*_i)$ to the point $f(x)$ in $R^{n_y}$. The distance function $\phi(0, x)$ has extrema where its gradient equals zero:

$\mathrm{grad}[\phi(0, x)] = \phi'_x = 2 f^t(x) J(x) = 0$   (45)
There are two cases in which (45) is satisfied: (a) $f(x) = 0$, i.e. $x = x^*_i$, where $x^*_i$ is a solution point; (b) the Jacobian matrix $J(x)$ is singular, and $f(x)$ is a left eigenvector corresponding to a zero eigenvalue of the Jacobian matrix. It is clear that in case (b), due to (44) and (45), the vector $f(x)$ corresponds to a local minimum or maximum of the distance from a solution point to the singular margin in $R^{n_y}$. □

Comments. Equation (45) can play an important role in determining the shortest distance from a current load flow point to the saddle node bifurcation boundary [19]-[24].
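Case (b) can be exhibited explicitly on the toy system: the point below (found analytically for this particular example) lies on the singular margin, is not a solution, and still has a zero gradient of $\phi(0, x)$.

    import numpy as np
    t = 3.0/np.sqrt(5.0)               # analytic case-(b) point of the toy system
    x = np.array([t, t])
    print(np.linalg.det(J(x)))         # ~0: x lies on the singular margin
    print(f(x))                        # != 0: x is not a solution
    print(2.0*f(x) @ J(x))             # ~[0, 0]: grad phi vanishes, eq. (45)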
ACKNOWLEDGMENTS

This work was sponsored in part by an Australian Electricity Supply Industry Research Board grant "Voltage Collapse Analysis and Control". The authors wish to thank Professors N. Barabanov (Russia), I. Dobson (USA), A. Volberg (USA, France) and V. Yakubovich (Russia) for their interest, examination and consultations regarding some aspects of the work. The authors express their gratitude to Mr. Bhudjanga Chakrabarti, who did a great deal of work on the bibliography and raised very valuable questions, which actually initiated the present work.
References
[1] L.A. Krumm, "Applications of the Newton-Raphson method to a stationary load flow computation in bulk power systems", Izvestia AN SSSR: Energetika i Transport, No. 1, 1966, pp. 3-12 (in Russian).
[2] W.F. Tinney and C.E. Hart, "Power flow solution by Newton's method", IEEE Winter Power Meeting, N.Y., No. 31, 1967, pp. 17-25.
[3] B. Stott and O. Alsac, "Fast decoupled load flow", IEEE Trans. on Power App. and Syst., Vol. PAS-93, No. 3, May-June 1974, pp. 859-869.
[4] V.A. Matveev, "A method of numerical solution of sets of nonlinear equations", Zhurnal Vychislitelnoi Matematiki i Matematicheskoi Fiziki, Vol. 4, No. 6, 1964, pp. 983-994 (in Russian).
[5] A.M. Kontorovich, Y.V. Makarov and A.A. Tarakanov, "Improved methods for load flow analysis", Acta Polytechnica, Práce ČVUT v Praze, Vol. 5/III, 1983, pp. 121-125.
[6] A.M. Kontorovich, "A method of load flow and steady-state stability analysis for complicated power systems with respect to frequency variations", PhD Thesis, Leningrad Polytechnic Institute, Leningrad, 1979 (in Russian).
[7] S. Iwamoto and Y. Tamura, "A load flow calculation method for ill-conditioned power systems", Proc. of the IEEE PES Summer Meeting, Vancouver, British Columbia, Canada, July 1979.
[8] S. Iwamoto and Y. Tamura, "A load flow calculation method for ill-conditioned power systems", IEEE Trans. on Power App. and Syst., Vol. PAS-100, No. 4, April 1981, pp. 1736-1743.
[9] Y. Tamura, K. Sakamoto and Y. Tayama, "Voltage instability proximity index (VIPI) based on multiple load flow solutions in ill-conditioned power systems", Proc. of the 27th Conference on Decision and Control, Austin, Texas, December 1988.
[10] Y. Tamura, Y. Nakanishi and S. Iwamoto, "On the multiple solution structure, singular point and existence condition of the multiple load flow solutions", Proc. of the IEEE PES Winter Meeting, New York, February 1980.
[11] Y. Tamura, K. Iba and S. Iwamoto, "A method for finding multiple load-flow solutions for general power systems", Proc. of the IEEE PES Winter Meeting, N.Y., February 1980.
[12] K. Iba, H. Suzuki, M. Egawa and T. Watanabe, "A method for finding a pair of multiple load flow solutions in bulk power systems", Proc. of the IEEE Power Industry Computer Application Conference, Seattle, Washington, May 1989.
[13] Y. Tamura, H. Mori and S. Iwamoto, "Relationship between voltage instability and multiple load flow solutions in electric power systems", IEEE Trans. on Power App. and Syst., Vol. PAS-102, No. 5, May 1983.
[14] V.I. Idelchik and A.I. Lazebnik, "An analytical research of solution existence and uniqueness of electrical power system load flow equations", Izvestia AN SSSR: Energetika i Transport, No. 2, 1972, pp. 18-24 (in Russian).
[15] Y.V. Makarov and I.A. Hiskens, "Solution characteristics of the quadratic power flow problem", Technical Report No. EE9377 (revised), Department of Electrical and Computer Engineering, University of Newcastle, Australia, May 1994.
[16] Y.V. Makarov, I.A. Hiskens and D.J. Hill, "Study of multisolution quadratic load flow problems and applied Newton-Raphson like methods", Proc. IEEE International Symposium on Circuits and Systems, Seattle, Washington, April-May 1995, paper no. 541.
[17] S.N. Chow and J. Hale, "Methods of bifurcation theory", N.Y.: Springer-Verlag, 1982 (Section 6.2).
[18] I. Dobson and L. Lu, "Voltage collapse precipitated by the immediate change in stability when generator reactive power limits are encountered", IEEE Trans. on Circuits and Systems - Fundamental Theory and Applications, Vol. 39, No. 9, September 1992, pp. 762-766.
[19] Y.V. Makarov and I.A. Hiskens, "A continuation method approach to finding the closest saddle node bifurcation point", Proc. NSF/ECC Workshop on Bulk Power System Voltage Phenomena III, Davos, Switzerland, published by ECC Inc., Fairfax, Virginia, August 1994.
[20] A.M. Kontorovich, A.V. Krukov, Y.V. Makarov et al., "Computer methods of steady-state stability indices computations for bulk power systems", Irkutsk: Publishing House of the Irkutsk University, 1988 (in Russian).
[21] C.A. Cañizares and F.L. Alvarado, "Computational experience with the point of collapse method on very large AC/DC systems", Proc. Bulk Power System Voltage Phenomena - Voltage Stability and Security, ECC/NSF Workshop, Deep Creek Lake, MD, published by ECC Inc., Fairfax, Virginia, August 1991.
[22] T. Van Cutsem, "A method to compute reactive power margins with respect to voltage collapse", IEEE Trans. on Power Systems, Vol. 6, No. 1, February 1991, pp. 145-156.
[23] J. Jarjis and F.D. Galiana, "Quantitative analysis of steady state stability in power networks", IEEE Trans. on Power App. and Syst., Vol. PAS-100, No. 1, January 1981, pp. 318-326.
[24] I. Dobson and L. Lu, "New methods for computing a closest saddle node bifurcation and worst case load power margin for voltage collapse", IEEE Trans. on Power Systems, Vol. 8, No. 3, August 1993, pp. 905-913.