Improving Power Flow Convergence by Newton Raphson with a Levenberg-Marquardt Method
P. J. Lagace, Member, IEEE, M. H. Vuong, and I. Kamwa, Fellow, IEEE
Abstract— This paper presents a comparative study of different techniques for improving the convergence of power flow by adjusting the Jacobian iteratively within the Newton Raphson method. These techniques can be used for improving the rate of convergence, increasing the region of convergence, or simply to find a power flow solution when traditional procedures may diverge or possibly oscillate until an iteration limit is reached even though a valid solution exists. Different techniques are first derived from an investigation of various schemes with the Newton process for the search of roots of a single variable function. The methods are then extended to the power flow problem. Comparisons of the region of convergence on a large power system are also presented to illustrate the effectiveness of the proposed methods.

Index Terms— Power Systems, Power flow, Load flow, Convergence, Gauss Method, Newton's method, Newton Raphson, Jacobian, Hessian, Levenberg-Marquardt Method.
I. INTRODUCTION
Power flow or load flow is the process of calculating the steady state voltage magnitude and angle at every bus and the power flow in each branch of a power system, given the status of generators and loads. The voltage is determined by solving a set of equations representing the net active power and reactive power flowing out at each bus. Because these equations are nonlinear, they must be solved iteratively. The first power flow algorithms were based on the Gauss-Seidel method, which can be used to determine the voltage for a relatively large power system [1]. Among all the power flow methods, the Gauss-Seidel method remains serviceable but suffers from slow convergence. The incentive to accelerate the convergence led to the Newton-Raphson method, which consists of solving a linearization of the active and reactive power equations around an operating point. The Newton-Raphson method is an iterative algorithm for simultaneously solving a set of nonlinear equations and exhibits quadratic convergence when the initial operating point is close to the solution [2].
________________________________________________
This work was supported by the research group GREPCI.
P. J. Lagace is with the Electrical Engineering Department, École de technologie supérieure, Montreal, QC, Canada H3C 1K3 (phone: 514-396-8634; fax: 514-396-8684; e-mail: [email protected]).
M. H. Vuong is a postdoctoral researcher at the Electrical Engineering Department, École de technologie supérieure, Montreal, QC, Canada H3C 1K3 (e-mail: [email protected]).
I. Kamwa is with Hydro-Québec/IREQ, Power System Analysis, Operation and Control, Varennes, QC, Canada J3X 1S1 (e-mail: [email protected]).
©2008 IEEE.
Despite this powerful convergence property, the method may diverge when the initial operating point is far from the solution. Power system configurations have been classified as belonging to the secure region when the load is met and no limits are violated [3]. This paper is mainly concerned with power system configurations belonging to the secure region whose power flow solution may still be difficult to obtain. Such difficulties may arise for ill-conditioned power systems [4][5], for systems with a high r/x ratio [6][7], or when the system loading approaches critical loading [8]. Investigations of large and stressed power systems have been presented in polar and rectangular coordinates [9][10]. The Newton Raphson method has also been used in complex form [11]. Improvements in the convergence of power flow with the Newton Raphson method have been obtained by using optimal multipliers [12]. Interior point methods were used to restore system solvability [13]. Methods proposed in the literature often involve the calculation of an optimal factor used to rescale the incremental voltage vector for convergence control during the power flow iterations [14]. The strategies typically consist in developing a scheme for estimating the incremental voltage based on a more accurate estimation of the sensitivity of the power mismatches to variations of the bus voltages. The approach amounts to enhancing awareness of the behavior of power mismatches locally at a particular operating point and has yielded schemes with better performance than the traditional Newton Raphson method.
This paper first presents a scheme based on enhanced local awareness of the behavior of power mismatches. Like other schemes based on enhanced local awareness, this scheme, which uses the Hessian matrix, offers improvements in convergence over the traditional Newton Raphson method. From this scheme, the paper presents a second method based on enhanced surrounding awareness, in which the correction on the voltage is evaluated using power mismatches calculated at the local operating point and the Jacobian evaluated at another operating point, yielding improvements in convergence. This method is finally combined with the Levenberg-Marquardt method, widely used in other areas of engineering [15][16], and applied to the power flow problem to yield larger regions of convergence. A modified version of the Levenberg-Marquardt method based on enhanced surrounding awareness is also presented and provides better rates of convergence when the operating point approaches the solution.
II. CONVERGENCE FOR VARIOUS SCHEMES
Different techniques can be used to minimize the magnitude of an objective function F(X). For illustrative purposes, let us evaluate the region of convergence of various schemes for the determination of the root of the function F(X), where:

X = [x1 x2 … xn]^T
F(X) = tan⁻¹(X)   (1)
Method#1 NR: Newton Raphson method
The first order Taylor series expansion of the function F(X+∆X) yields an estimation of the correction ∆X required to determine the root:

F(X + ∆X) = F(X) + ∇F(X)∆X = 0   (2)

The correction ∆X required for minimizing F is given by:

∆X = −∇F(X) \ F(X)   (3)

which is equivalent to:

∆X = −∇F(X)⁻¹ F(X)   (4)

Equation (3) utilizes the backslash operator [17] to describe the resolution of the general system of simultaneous equations (2) and involves matrix operations such as reordering, factorization, and forward and backward substitutions. This operator doesn't require the inversion of the Jacobian and offers a significant reduction of CPU and memory requirements for large sparse systems. Using this operator, the Newton Raphson algorithm can be written as follows:
1 Let X = Initial value
2 Loop
2.1 ∆X = −∇F(X)\F(X)
2.2 X = X+∆X
2.3 End_Loop
The conditions for interrupting the loop in Newton Raphson can be the following:
1) The magnitude of F(X) is smaller than a specified tolerance and the process has converged.
2) The magnitude of X is larger than an upper limit and the process diverges.
3) The number of iterations exceeds an upper limit and the process fails to converge within the specified tolerance.
Fig. 1 illustrates the tendency of the method to diverge when the initial value is far from the solution.
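As an illustration, the loop and its three stopping conditions can be sketched for the scalar test function F(X) = tan⁻¹(X); this is a minimal sketch (the function names and numeric limits are ours, not from the paper):

```python
import math

def newton_raphson(f, df, x0, tol=1e-10, max_iter=50, x_limit=1e6):
    """Plain Newton-Raphson (Method#1): dx = -f(x)/f'(x), with the three
    stopping conditions listed in the text."""
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            return x, True       # condition 1: converged
        if abs(x) > x_limit:
            return x, False      # condition 2: diverged
        x -= f(x) / df(x)
    return x, False              # condition 3: iteration limit reached

f  = math.atan                   # F(X) = tan^-1(X), root at X = 0
df = lambda x: 1.0 / (1.0 + x * x)

root, ok = newton_raphson(f, df, 1.0)   # close start: converges
far, bad = newton_raphson(f, df, 2.0)   # far start: oscillates and diverges
```

For tan⁻¹, a starting point beyond roughly |X| = 1.39 makes the iterates alternate in sign with growing magnitude, the pathological case of Fig. 1.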
Method#2 NRHi: Iterative adjustments with Hessian
The second order Taylor series expansion of the function F provides a better estimation of the correction ∆X required to determine the root:

F(X + ∆X) = F(X) + ∇F(X)∆X + (1/2)∆X^T ∇²F(X)∆X = 0   (5)

The correction ∆X required for minimizing F can be estimated from a fixed point algorithm with "i" iterations:

∆X = −( ∇F(X) + (1/2)∆X^T ∇²F(X) ) \ F(X)   (6)

Using this correction, the NRHi method can be written as follows:
1 Let X = Initial value
2 Loop
2.1 ∆X = −∇F(X)\F(X)
2.2 ∆X = −(∇F(X) + ∆X^T∇²F(X)/2)\F(X)
2.3 X = X+∆X
2.4 End_Loop
In this algorithm, statement #2.2 can be repeated "i" times in a second loop following a fixed point method. The conditions for interrupting this loop used for the calculation of ∆X can be as follows:
1) The magnitude of ∆X is larger than an upper limit and the process diverges.
2) The number of iterations exceeds an upper limit and the process fails to converge within the specified tolerance.
3) The magnitude of the variation of ∆X at iteration #i is smaller than a specified tolerance and the process has converged:

|∆Xi − ∆Xi−1| < tolerance   (7)

During the iterations of the second loop, only the value of ∆X changes; therefore the Jacobian and Hessian matrices need to be updated only once at the beginning of the loop. It should be noted that using zero iterations, NRH0 is equivalent to the traditional Newton Raphson method. In this paper, the algorithm NRH1 was investigated. One iteration was found to be sufficient to improve the convergence characteristics of the algorithm; further iterations do not significantly improve the convergence. Fig. 2 illustrates the tendency of the method to converge when the Hessian matrix is taken into consideration.
Fig. 1 Pathological case for the Newton method

Fig. 2 Convergence using the Hessian matrix
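A scalar sketch of NRH1 on the same test function shows the effect of the Hessian correction; the function names are ours and the power-system version replaces f, f', f'' with the mismatch vector, Jacobian and Hessian:

```python
import math

def nrh1(f, df, d2f, x0, tol=1e-10, max_iter=50):
    """NRH1 (Method#2 with i = 1): one fixed-point refinement of dx,
    scalar form of eq. (6): dx = -f(x) / (f'(x) + dx*f''(x)/2)."""
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            return x, True
        dx = -f(x) / df(x)                        # 2.1: plain Newton step
        dx = -f(x) / (df(x) + dx * d2f(x) / 2.0)  # 2.2: Hessian correction
        x += dx                                   # 2.3
    return x, False

f   = math.atan                                   # F(X) = tan^-1(X)
df  = lambda x: 1.0 / (1.0 + x * x)
d2f = lambda x: -2.0 * x / (1.0 + x * x) ** 2

# x0 = 2.0 makes plain Newton-Raphson diverge, but NRH1 converges:
root, ok = nrh1(f, df, d2f, 2.0)
```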
Method#3 NRJi: Jacobian adjustments
The previous method can be modified by noticing that the parenthesis of (6) is an estimation of the Jacobian at X+∆X/2:

∆X = −∇F(X + ∆X/2) \ F(X)   (8)

Using this correction, the NRJi algorithm can be written as follows:
1 Let X = Initial value
2 Loop
2.1 ∆X = −∇F(X)\F(X)
2.2 ∆X = −∇F(X+∆X/2)\F(X)
2.3 X = X+∆X
2.4 End_Loop
Similarly to the previous algorithm, statement #2.2 can be repeated "i" times in a second loop in a fixed point fashion. During the iterations of the second loop, the Jacobian must be updated each time before evaluating the value of ∆X. Using zero iterations, NRJ0 is equivalent to the traditional Newton Raphson method. In this paper, the algorithm NRJ1 was investigated and only one iteration was utilized to improve the convergence characteristics of the algorithm. Fig. 3 illustrates the tendency of the method to converge when the Jacobian is adjusted to the slope at operating point X+∆X/2.
Fig. 3 Convergence using Jacobian adjustments
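The midpoint Jacobian adjustment reduces to one extra derivative evaluation in the scalar case; a minimal sketch on the same test function (function names are ours):

```python
import math

def nrj1(f, df, x0, tol=1e-10, max_iter=50):
    """NRJ1 (Method#3), scalar form of eq. (8):
    dx = -f(x) / f'(x + dx/2), with dx first estimated by a plain
    Newton step (statement #2.1 of the algorithm)."""
    x = x0
    for _ in range(max_iter):
        if abs(f(x)) < tol:
            return x, True
        dx = -f(x) / df(x)             # 2.1: plain Newton step
        dx = -f(x) / df(x + dx / 2.0)  # 2.2: Jacobian at the midpoint
        x += dx                        # 2.3
    return x, False

f  = math.atan                         # F(X) = tan^-1(X), root at X = 0
df = lambda x: 1.0 / (1.0 + x * x)

# x0 = 2.0 makes plain Newton-Raphson diverge, but NRJ1 converges:
root, ok = nrj1(f, df, 2.0)
```

Compared with NRH1, only the Jacobian is re-evaluated; no second derivative is needed, which is what keeps the cost low in the power flow application.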
Method#4 NRM: Levenberg-Marquardt Method
The Levenberg-Marquardt method is a minimization technique commonly used in several areas of engineering [15][16] and is briefly presented here. The value of the function F(X) evaluated at an initial point Xo can be interpreted as an error:

εo = F(Xo)   (9)

The error at X = Xo + ∆X can be estimated from the first order Taylor series expansion of the function F:

ε = F(Xo + ∆X) = εo + ∇F(Xo)∆X   (10)

Let us determine the variation ∆X that minimizes the sum of the squares of the errors:

S = ε^T ε = ∆X^T ∇F(Xo)^T ∇F(Xo) ∆X + 2∆X^T ∇F(Xo)^T εo + εo^T εo   (11)

The gradient ∇S vanishes when the sum of the squares of the errors is at a minimum:

∇S = [∂S/∂x1 ∂S/∂x2 … ∂S/∂xn]^T = 2∇F(Xo)^T ∇F(Xo) ∆X + 2∇F(Xo)^T εo = 0   (12)

The correction ∆X required for minimizing S is then given by:

∆X = −( ∇F(Xo)^T ∇F(Xo) ) \ ( ∇F(Xo)^T εo )   (13)
One problem subsists with this solution due to the tendency of the method to diverge when the initial point Xo is far from the solution. The Levenberg-Marquardt method offers an alternative approach which consists in introducing an additional constraint to the sum of the errors in the form of a Lagrange multiplier. The additional constraint is the distance between the solution and an initial guess Xo:

SM = ε^T ε + λ(X − Xo)^T (X − Xo)   (14)

The correction ∆X required for minimizing SM is then given by:

∆X = −( ∇F(Xo)^T ∇F(Xo) + λI ) \ ( ∇F(Xo)^T εo )   (15)
The introduction of the damping factor λ in the Levenberg-Marquardt method has the advantage of reducing divergence problems but results in a solution constrained to be close to an initial guess Xo. The value of the damping factor λ can be adapted during the iterative process. Larger values of the damping factor are preferable whenever the initial estimate of the solution is poor. Generally its value should be chosen so that the error function S decreases continuously during the process. A damping factor of zero yields the traditional Newton Raphson method:

NRM(λ = 0) = NR   (16)

The NRM algorithm can be written as follows:
1 Let X = Initial value
2 Loop
2.1 Evaluate λ
2.2 ∆X = −(∇F(X)^T∇F(X)+λI)\∇F(X)^TF(X)
2.3 X = X+∆X
2.4 End_Loop
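A minimal NRM sketch on the scalar test function follows. The accept/reject damping update used here is the classic Levenberg-Marquardt rule (raise λ when S grows, relax it when S decreases), which is one of the many schedules in the literature and an assumption of this sketch, not the paper's own rule:

```python
import numpy as np

def nrm(F, J, x0, lam0=1e-2, tol=1e-12, max_iter=100):
    """Levenberg-Marquardt iteration (Method#4), eq. (15), with a
    classic accept/reject damping update."""
    x = np.asarray(x0, dtype=float)
    lam = lam0
    e = np.atleast_1d(F(x))
    S = float(e @ e)                       # sum of squared errors, eq. (11)
    for _ in range(max_iter):
        if S < tol:
            return x, True
        Jx = np.atleast_2d(J(x))
        A = Jx.T @ Jx + lam * np.eye(x.size)
        x_new = x + np.linalg.solve(A, -(Jx.T @ e))   # eq. (15)
        e_new = np.atleast_1d(F(x_new))
        S_new = float(e_new @ e_new)
        if S_new < S:          # error decreased: accept step, relax damping
            x, e, S = x_new, e_new, S_new
            lam /= 10.0
        else:                  # error increased: reject step, raise damping
            lam *= 10.0
    return x, S < tol

F = lambda x: np.arctan(x)                 # F(X) = tan^-1(X)
J = lambda x: np.diag(1.0 / (1.0 + x**2))  # its Jacobian

# Converges even from a start where plain Newton-Raphson diverges:
root, ok = nrm(F, J, np.array([10.0]))
```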
Method#5 NRMJ: Adjusted Levenberg-Marquardt Method
It is possible to improve the rate of convergence of the Levenberg-Marquardt method using enhanced surrounding awareness by recalculating the correction ∆X required for minimizing S with the Jacobian evaluated at X1 = X0 + ∆X/2:

∆X = −( ∇F(X1)^T ∇F(X1) + λI ) \ ( ∇F(X1)^T F(X0) )   (17)

The NRMJ algorithm can be written as follows:
1 Let X0 = Initial value
2 Loop
2.1 Evaluate λ
2.2 ∆X = −(∇F(X0)^T∇F(X0)+λI)\∇F(X0)^TF(X0)
2.3 X1 = X0+∆X/2
2.4 ∆X = −(∇F(X1)^T∇F(X1)+λI)\∇F(X1)^TF(X0)
2.5 X0 = X0+∆X
2.6 End_Loop
A damping factor of zero is equivalent to the Newton Raphson method with one Jacobian adjustment:

NRMJ(λ = 0) = NRJ1   (18)
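A scalar NRMJ sketch follows; the fixed damping factor is an assumption made for brevity (the paper evaluates λ at each iteration):

```python
import numpy as np

def nrmj(F, J, x0, lam=1e-3, tol=1e-12, max_iter=100):
    """NRMJ (Method#5): Levenberg-Marquardt step with the Jacobian
    re-evaluated at the midpoint X1 = X0 + dX/2, eq. (17)."""
    x = np.asarray(x0, dtype=float)
    I = np.eye(x.size)
    for _ in range(max_iter):
        e = np.atleast_1d(F(x))
        if float(e @ e) < tol:
            return x, True
        J0 = np.atleast_2d(J(x))
        dx = np.linalg.solve(J0.T @ J0 + lam * I, -(J0.T @ e))  # step 2.2
        x1 = x + dx / 2.0                                       # step 2.3
        J1 = np.atleast_2d(J(x1))
        dx = np.linalg.solve(J1.T @ J1 + lam * I, -(J1.T @ e))  # step 2.4
        x = x + dx                                              # step 2.5
    return x, False

F = lambda x: np.arctan(x)
J = lambda x: np.diag(1.0 / (1.0 + x**2))

root, ok = nrmj(F, J, np.array([2.0]))
```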
III. APPLICATION TO POWER SYSTEMS
The NRHi method can be translated to the power system domain by rewriting equation (5) using power mismatches and the Jacobian and Hessian matrices:

[P(θ+∆θ, V+∆V); Q(θ+∆θ, V+∆V)] = [P(θ,V); Q(θ,V)] + J̄(θ,V,∆θ,∆V)[∆θ; ∆V] = 0   (19)

where

J̄(θ,V,∆θ,∆V) = J(θ,V) + (1/2)[∆θ; ∆V]^T H(θ,V)   (20)

Equation (20) is composed of 1D, 2D and 3D matrices and is not suitable for sparse and vectorized operations. This limitation is circumvented by reshaping the Jacobian matrix into a 1D matrix and the Hessian into a 2D matrix for the purpose of (20) only. Equation (19) can then be solved using the original matrix dimensions and yields the desired corrections on the voltage angles and magnitudes:

[∆θ; ∆V] = −J̄(θ,V,∆θ,∆V) \ [P(θ,V); Q(θ,V)]   (21)
The approach is demonstrated on the 3-bus system of Fig. 4, composed of swing, PV and PQ buses.

Fig. 4 3-Bus System (bus 1: swing, V1 = 1.0∠0°; bus 2: PV, V2 = 1.0, Pgen2; bus 3: PQ, V3, Schg3)
In this case, the unknowns and the Jacobian are respectively given by:

[∆θ2 ∆θ3 ∆V3]^T   (22)

J = ⎡ ∂P2/∂θ2  ∂P2/∂θ3  ∂P2/∂V3 ⎤
    ⎢ ∂P3/∂θ2  ∂P3/∂θ3  ∂P3/∂V3 ⎥   (23)
    ⎣ ∂Q3/∂θ2  ∂Q3/∂θ3  ∂Q3/∂V3 ⎦

Reshaping the Jacobian into a 1D matrix yields:

J1D = [ ∂P2/∂θ2 ∂P3/∂θ2 ∂Q3/∂θ2  ∂P2/∂θ3 ∂P3/∂θ3 ∂Q3/∂θ3  ∂P2/∂V3 ∂P3/∂V3 ∂Q3/∂V3 ]   (24)

The Hessian is then obtained by differentiating each component with respect to the unknowns, giving the 3×9 matrix whose rows are the derivatives of J1D with respect to θ2, θ3 and V3:

H2D = ⎡ ∂J1D/∂θ2 ⎤
      ⎢ ∂J1D/∂θ3 ⎥   (25)
      ⎣ ∂J1D/∂V3 ⎦

The number of non-zero elements of the sparse Jacobian and Hessian matrices depends on the topology of the power system. In order to determine the storage requirements for these matrices, let us define the following characteristics of the system:
nPV : number of PV buses
nPQ : number of PQ buses
nPVPV : half the number of non-diagonal entries in YBUS associated with branches having both extremities connected to PV buses
nPVPQ : half the number of non-diagonal entries in YBUS associated with branches having one extremity connected to a PV bus and the other to a PQ bus
nPQPQ : half the number of non-diagonal entries in YBUS associated with branches having both extremities connected to PQ buses
Then the maximum numbers of entries in the Jacobian and Hessian matrices are given respectively by:

nJ_estimated = nPV + 4nPQ + 2nPVPV + 4nPVPQ + 8nPQPQ   (26)
nH_estimated = nPV + 8nPQ + 6nPVPV + 17nPVPQ + 44nPQPQ   (27)
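The entry-count estimates of (26) and (27) are straightforward to encode; the bus and branch counts in the example below are hypothetical, not taken from the paper:

```python
def jacobian_hessian_entries(n_pv, n_pq, n_pvpv, n_pvpq, n_pqpq):
    """Maximum numbers of entries in the sparse Jacobian and Hessian,
    eqs. (26) and (27)."""
    n_j = n_pv + 4*n_pq + 2*n_pvpv + 4*n_pvpq + 8*n_pqpq
    n_h = n_pv + 8*n_pq + 6*n_pvpv + 17*n_pvpq + 44*n_pqpq
    return n_j, n_h

# Hypothetical small network: 2 PV buses, 5 PQ buses, assumed branch counts.
n_j, n_h = jacobian_hessian_entries(2, 5, n_pvpv=1, n_pvpq=3, n_pqpq=4)
```

These counts are what allow the memory for the sparse matrices to be preallocated, the key to the CPU gain reported in Table I.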
Significant reductions in storage requirement and CPU time can be achieved by using optimization techniques for generating large matrices. Table I gives statistics on generating the same Hessian matrix for a 2688-bus system using two techniques.

TABLE I
STATISTICS ON HESSIAN MATRIX
                       H                   HT
Memory (bytes)    102 706 772         2 054 012
CPU               2 h 52 min 47 s     1.2 s
Entries           168 832             168 832
Dimensions        5017 x 25170289     25170289 x 5017

The first technique generates the sparse H matrix by successively storing individual non-zero entries. The second technique uses a preallocated memory space, based upon the maximum number of entries, for storing indices and values, then exchanges the row and column indices to represent the transpose of H before generating the sparse format. Both techniques use a compressed column scheme, or Harwell-Boeing format, for storing sparse matrices [17]. Using this scheme, the memory required to store a sparse matrix containing a large number of rows but few columns is much less than the memory required to store the transpose of the same matrix. Further CPU gain is achieved by reducing memory management operations: the vectors of indices and values are converted into sparse format at once rather than creating the sparse matrix in an elementwise manner. The Hessian matrix remains relatively sparse, as illustrated in the sparsity pattern [17] of Fig. 5.
Fig. 5 Hessian matrix for a 2688-bus system

Fig. 7 Simulation time vs K for "2688bus.cf" (NR, NRH1, NRJ1)
A second power flow method using Jacobian correction is investigated. The NRJ method consists in reevaluating the Jacobian at (θ+∆θ/2, V+∆V/2) and aims at finding a better estimate of the sensitivity of the power mismatch to fluctuations in bus voltages between the operating points (θ,V) and (θ+∆θ, V+∆V). The incremental voltage is then given by:

[∆θ; ∆V] = −J(θ+∆θ/2, V+∆V/2) \ [P(θ,V); Q(θ,V)]   (28)

The convergence properties of the methods are compared by estimating the initial values of the voltage of PQ buses and the angle of PV and PQ buses from the converged solution modified by a factor K as follows:

Vinitial = (Vconverged)^K   (29)
θinitial = K × θconverged   (30)

It should be noted that for K=1 the case is already converged and no additional iterations are required, whereas K=0 is equivalent to a flat start with all the unknown voltages initialized to 1.0 pu. Outside a certain range of K, the power flows fail to converge, as illustrated in Fig. 6.
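The initialization of (29)-(30) can be sketched as follows; the converged values in the example are hypothetical:

```python
import numpy as np

def k_start(v_converged, theta_converged, K):
    """Initial operating point scaled away from the converged solution,
    eqs. (29)-(30): V_initial = V_converged**K, theta_initial = K*theta.
    K = 1 reproduces the solution exactly; K = 0 gives a flat start
    (all voltages 1.0 pu, all angles zero)."""
    v = np.asarray(v_converged, dtype=float) ** K
    theta = K * np.asarray(theta_converged, dtype=float)
    return v, theta

# Hypothetical converged values for two PQ buses:
v0, th0 = k_start([1.02, 0.97], [0.10, -0.20], 0.0)   # flat start
```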
Fig. 6 Region of convergence versus K (iterations vs K for "2688bus.cf"; NR, NRH1, NRJ1)
Fig. 7 illustrates the significant increase in CPU time required by the NRH1 method, associated with calculations performed on large matrices, whereas the simulation time of the NRJ1 method remains lower since it only involves an extra Jacobian evaluation and resolution. The region of convergence of the NRH1 and NRJ1 methods more than tripled over that of the traditional Newton-Raphson method. This improvement in the algorithms' robustness doesn't guarantee that the methods lead to the solution, considering that convergence could not be obtained from a flat start (K=0) where all the initial voltages are set to 1.0 pu. All converged power flows required information on the final solution to various degrees. Further improvement in the region of convergence is needed and can be obtained when the Levenberg-Marquardt method is used to minimize the sum of the squares of the active and reactive power mismatches:
S = [P(θ,V); Q(θ,V)]^T [P(θ,V); Q(θ,V)]   (31)
The corrections on the voltage required for minimizing S for the NRM and NRMJ methods applied to power systems are respectively given by:

[∆θ; ∆V] = −( J(θ,V)^T J(θ,V) + λI ) \ J(θ,V)^T [P(θ,V); Q(θ,V)]   (32)

[∆θ; ∆V] = −( J(θ1,V1)^T J(θ1,V1) + λI ) \ J(θ1,V1)^T [P(θ0,V0); Q(θ0,V0)]   (33)

where

(θ1, V1) = (θ0 + ∆θ/2, V0 + ∆V/2)   (34)

Various schemes for determining the damping factor λ are available in the literature. The calculation of the damping factor in the Levenberg-Marquardt method varies widely according to the application and even according to the study case. It is generally recognized that larger values of λ should be used for large errors S, whenever the initial estimate of the solution is poor. The value of the damping factor λ can subsequently be reduced as the distance S decreases in order to achieve faster convergence. In this application, the damping factor was arbitrarily chosen to vary linearly with the distance S on a logarithmic scale, with a slope m = 1 and an offset b = −3:

λ = 10^(m log S + b)   (35)
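The schedule of (35) is a one-liner; the base-10 logarithm is assumed here, matching the 10^(·) form of the equation:

```python
import math

def damping(S, m=1.0, b=-3.0):
    """Damping factor schedule of eq. (35): lambda = 10**(m*log10(S) + b),
    equivalently lambda = (S**m) * 10**b, so a large mismatch S yields
    strong damping and a small S lets the step approach eq. (13)."""
    return 10.0 ** (m * math.log10(S) + b)

lam_far  = damping(1000.0)   # poor initial estimate, large S
lam_near = damping(1e-6)     # near the solution, small S
```

With m = 1 and b = −3 the damping is simply S/1000, decreasing continuously with the error as the text recommends.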
Fig. 8 illustrates the number of iterations required to converge versus the adjustment factor K, including the flat start (K=0). The simulation times of Fig. 9 for the NRM and NRMJ methods do not reach excessive values compared with the Newton Raphson method, even when the initial voltage is far from the solution.
Fig. 8 Number of iterations versus K (iterations vs K for "2688bus.cf"; NR, NRM, NRMJ)

Fig. 9 Simulation time vs K for "2688bus.cf" (NR, NRM, NRMJ)
IV. CONCLUSIONS
This paper showed how Jacobian adjustments within the Newton Raphson method can be used to enhance surrounding awareness. These adjustments, combined with the Levenberg-Marquardt method, can serve as an aid to determine the power flow solution, if it exists, when no other traditional method converges. The additional processing, consisting of Jacobian adjustments and matrix resolutions, is found to be reasonable and does not result in a significant computational burden over the Newton Raphson method.

V. APPENDIX
TABLE II
ENTRIES OF THE SPARSE JACOBIAN MATRIX
Equations     Number of entries
∂Pi/∂θi       nPV + nPQ
∂Pi/∂θj       2nPVPV + 2nPVPQ + 2nPQPQ
∂Pi/∂Vi       nPQ
∂Pi/∂Vj       nPVPQ + 2nPQPQ
∂Qi/∂θi       nPQ
∂Qi/∂θj       nPVPQ + 2nPQPQ
∂Qi/∂Vi       nPQ
∂Qi/∂Vj       2nPQPQ
Total         nPV + 4nPQ + 2nPVPV + 4nPVPQ + 8nPQPQ
REFERENCES
[1] J.D. Glover and M.S. Sarma, "Power Systems Analysis and Design, Third Edition", Brooks/Cole, Thomson Learning Inc., ISBN 0-534-95367-0, 2002.
[2] B. Stott, "Review of Load-Flow Calculation Methods", Proceedings of the IEEE, vol. 62, no. 7, pp. 916-929, July 1974.
[3] L.M.C. Braz, C.A. Castro, "A Critical Evaluation of Step Size Optimization Based Load Flow Methods", IEEE Trans. Power Syst., vol. 15, no. 1, pp. 202-207, Feb. 2000.
[4] S. Iwamoto, Y. Tamura, "A Load Flow Calculation Method for Ill-Conditioned Power Systems", IEEE Trans. Power App. Syst., vol. PAS-100, no. 4, pp. 1736-1743, Apr. 1981.
[5] S.C. Tripathy, D.G. Prasad, O.P. Malik, G.S. Hope, "Load Flow Solutions for Ill-Conditioned Power Systems by a Newton Like Method", IEEE Trans. Power App. Syst., vol. PAS-101, no. 10, pp. 3648-3657, Oct. 1982.
[6] L. Wang, X.R. Li, "Robust Fast Decoupled Power Flow", IEEE Trans. Power Syst., vol. 15, no. 1, pp. 208-215, Feb. 2000.
[7] D. Rajicic, A. Bose, "A Modification to the Fast Decoupled Power Flow for Networks with High R/X Ratios", IEEE Trans. Power Syst., vol. 3, no. 2, pp. 743-746, May 1988.
[8] P.R. Bijwe, S.M. Kelapure, "Nondivergent Fast Power Flow Methods", IEEE Trans. Power Syst., vol. 18, no. 2, pp. 633-638, May 2003.
[9] R.P. Klump, T.J. Overbye, "Techniques for Improving Power Flow Convergence", Proc. Power Eng. Soc. Summer Meeting, vol. 1, pp. 598-603, 2000.
[10] V.H. Quintana, N. Muller, "Studies of Load Flow Methods in Polar and Rectangular Coordinates", Elect. Power Syst. Research, vol. 20, pp. 225-235, 1991.
[11] H.L. Nguyen, "Newton-Raphson Method in Complex Form", IEEE Trans. Power Syst., vol. 12, no. 3, pp. 1355-1359, Aug. 1997.
[12] J.E. Tate, T.J. Overbye, "A Comparison of the Optimal Multiplier in Polar and Rectangular Coordinates", IEEE Trans. Power Syst., vol. 20, no. 4, pp. 1667-1674, Nov. 2005.
[13] S. Granville, J.C.O. Mello, A.C.G. Melo, "Application of Interior Point Methods to Power Flow Unsolvability", IEEE Trans. Power Syst., vol. 11, no. 2, pp. 1096-1103, May 1996.
[14] M.D. Schaffer, D.J. Tylavsky, "A Nondiverging Polar-Form Newton-Based Power Flow", IEEE Trans. Ind. App., vol. 24, no. 5, pp. 870-877, Sept. 1988.
[15] R.L. Lines and S. Treitel, "A Review of Least Squares Inversion and its Application to Geophysical Problems", Geophysical Prospecting, vol. 32, pp. 159-186, 1983.
[16] A. Franchois, C. Pichot, "Microwave Imaging - Complex Permittivity Reconstruction with a Levenberg-Marquardt Method", IEEE Trans. Antennas and Propagation, vol. 45, no. 2, pp. 203-215, Feb. 1997.
[17] MATLAB, The Language of Technical Computing, Mathematics, The MathWorks Inc., 2005.
Pierre Jean Lagace received B.A.Sc., M.A.Sc. and Ph.D. degrees in electrical engineering from École Polytechnique de Montréal, QC, Canada in 1982, 1985 and 1988 respectively. From 1988 to 1991, he was an Assistant Professor with Norwich University, Northfield, VT. From 1991 to 1996, he was a Researcher with Hydro-Québec, Varennes, QC, Canada, in power systems software development. Currently he is a professor with the École de technologie supérieure in Montréal.
Mai Hoa Vuong received the Ph.D. and B.Ing. in electrical engineering and the M.Sc.A. in applied mathematics in 2002, 1989, and 1990 respectively, from École Polytechnique de Montréal. She worked for Spar Aerospace, Montreal, Canada, the Montreal Stock Exchange, and Hydro-Québec, Varennes, Canada. Her research interests include energy management systems and electromagnetic compatibility.
Innocent Kamwa (S'83, SM'98, F'05) received a Ph.D. in electrical engineering from Université Laval, Québec, Canada, in 1988, after graduating from the same university in 1984. Since then, he has been with the Hydro-Québec Research Institute, where at present he is a Principal Researcher with interests broadly in bulk system dynamic performance. Since 1990, he has held an associate professor position in electrical engineering at Université Laval. A member of CIGRÉ, Dr. Kamwa is a recipient of the 1998 and 2003 IEEE PES Prize Paper Awards and is currently serving on the System Dynamic Performance Committee AdCom. He is also the acting Standards Coordinator of the PES Electric Machinery Committee and an associate editor of the IEEE Transactions on Power Systems.