
14th PSCC, Sevilla, 24-28 June 2002


PROBABILISTIC OPTIMIZATION OF MV DISTRIBUTION NETWORK IN PRESENCE OF DISTRIBUTED GENERATION

G. Celli, S. Mocci, F. Pilo
DIEE – University of Cagliari, Cagliari, Italy
[email protected], [email protected]

R. Cicoria
CESI, Milan, Italy
[email protected]

Abstract - Distributed Generation is predicted to play an increasing role in the electric power system of the near future. With so much new distributed generation being installed, the level of uncertainty that characterises the planning environment increases, particularly with regard to the energy production of the DG units. These uncertainties are often so relevant that traditional deterministic paradigms can easily lead to uneconomical or unreliable solutions. To overcome this problem, a probabilistic load flow has been developed, taking into account the probability density function of the loads and of the annual power production associated with each generating unit. The possible correlations between DG units, between generators and loads, and among the loads themselves have also been considered. This procedure has been implemented inside a heuristic optimisation algorithm to find the best MV distribution network architecture that minimises the overall cost (i.e. the costs of building, maintenance, losses and disruptions). Application examples are presented to illustrate the effectiveness of the algorithm, and comparative results between deterministic and probabilistic approaches are also discussed.

Keywords: Optimisation, probabilistic approaches, distribution network planning, Distributed Generation

1 INTRODUCTION

The need for more flexible electric systems, changing regulatory and economic scenarios, energy savings and environmental impact are providing impetus to the development of Distributed Generation (DG), which is predicted to play an increasing role in the electric power system of the near future. Undeniably, DG offers an alternative that utility planners should explore in their search for the best solution to electric supply problems. In fact, if DG units are correctly placed in a MV distribution network, they can reduce the cost of power losses and defer utility investments for reinforcing the system. For these reasons, and in order to provide useful tools for the medium-term planning of distribution networks, efficient software packages based on heuristic optimisation techniques and able to take into consideration the presence of DG units have been developed in recent years [1]. These packages use deterministic procedures to find the optimal network configuration, starting from specific data (annual average power requested by loads and generated by DG, power demand growth rate, etc.). This way of performing planning studies may be acceptable as long as the DG penetration level is low compared with the total power demand. However, studies have estimated that DG may account for up to 20% of all new generation going online by the year 2010 [2]. Therefore, with so much new distributed generation being installed, the level of uncertainty that characterises the planning environment increases, particularly with regard to the energy production of the DG units. These uncertainties are often so relevant that traditional deterministic paradigms can easily lead to uneconomical or unreliable solutions. This conclusion implies the necessity of developing new planning tools, based on probabilistic methodologies, able to correctly deal with all these uncertainties.

In the paper, a probabilistic load flow has been developed and implemented inside the heuristic optimisation algorithm used to find the best MV distribution network architecture that minimises the overall cost (i.e. the costs of building, maintenance, losses and disruptions). This procedure takes into account the probability density function (pdf) of the loads and of the annual power production associated with each generating unit. Generally, in order to guarantee the correctness of the results, it is important to consider the existing correlation between DG units, between generators and loads, and also among loads. In the paper the following assumptions have been made:
− if it exists, correlation between loads is linear;
− complex correlations between generators have not been considered, due to the present absence of any dispatching action at the distribution level; only linear correlation has been taken into account for renewable resources (e.g. wind generators);
− correlation between generators and loads, if it exists, is linear; more precisely, some local linear correlations between generators and loads can reasonably be hypothesised (e.g. the correlation between industrial loads and CHP plants installed nearby), but linear correlation cannot be applied everywhere, especially if DG penetration is very high and mainly based on renewable resources.
These hypotheses simplify the resolution of the probabilistic load flow problem. Once the currents flowing in each branch and the voltage of each node are known through their corresponding pdf, it is possible to choose the correct size of each conductor and to verify all the technical constraints, taking into account the uncertainties associated with these electrical variables. Application examples are presented to illustrate the effectiveness of the algorithm, and comparative results between deterministic and probabilistic approaches are also discussed.

The structure of the paper is the following: in Section 2 the deterministic approach to network planning and optimisation is briefly described, in Section 3 some generalities on probabilistic load flow are provided, in Section 4 the proposed procedure is presented and, finally, in Section 5 some results are shown and discussed.


2 DETERMINISTIC OPTIMISATION OF MV DISTRIBUTION NETWORK

The main objective of planning studies for large MV networks is to establish the strategies necessary to achieve the optimal network configuration, taking into account expansion over time and the usual technical constraints. The function to be minimised, the generalised cost of the network, has to take into consideration all the terms (building, maintenance, losses and disruption costs) that have a great impact on the choice of the network configuration, both in normal and in emergency conditions [3]. As the problem is combinatorial, mathematically non-linear with mixed integers, constrained, and must be tackled with dynamic or pseudo-dynamic procedures, it is not possible to employ non-linear programming techniques. Heuristic methods are a more convenient option: although they do not guarantee the absolute optimum, they consider all the facets of the problem and yield sufficiently good near-optimum solutions when a suitable search strategy is adopted [4].

2.1 Distribution network structure

Distribution networks always have a radial structure and are often subdivided into two different levels: trunk feeders and lateral branches. The degree of reliability obtainable with this arrangement is limited by the fact that a fault in one part of the network results in an outage for a large number of load points. To improve service reliability, emergency ties provide alternative routes for power supply in case of outages or scheduled interruptions. Emergency ties end with an open switch, so that the radial structure is maintained during normal conditions; furthermore, trunks are subdivided into segments by means of normally closed switches, generally positioned in MV/LV nodes. During emergencies, segments can be reswitched to isolate damaged sections and route power around outaged equipment to customers who would otherwise have to remain out of service until repairs were made [5]. An important class of such networks is the "open loop networks", usually employed in urban power distribution systems (fig. 1). If there are no laterals ("pure open loop networks"), service restoration is ensured through the emergency tie that connects the ends of the feeder. An intermediate alternative is to install laterals ("spurious open loop networks"), in which top priority customers are supplied through the main feeder and can be completely re-energised in the event of a fault. The main characteristic of both "open loop networks" is that only two branches can converge in a trunk node (topological constraint) [6].


[Figure 1: "Open loop network". (A) Pure "open loop network" with no lateral MV nodes; (B) spurious "open loop network" with lateral MV nodes. In both cases an emergency connection closes the loop.]

2.2 Network optimisation algorithm

The optimisation algorithm finds "open loop" networks in scenarios with several hundreds of trunk nodes in a reasonable computing time. A further optimisation of the radial network supplying the lateral nodes is also performed in a negligible time. The main advantage of the proposed methodology is the capability of finding simultaneously both the normal-state network and the number and position of the emergency connections. By so doing, the most important service quality indicators can be correctly evaluated during the optimisation phase and used as constraints to guide the search towards more reliable structures.

The heuristic algorithm implemented can be classified as a "hill-climbing" strategy: starting from an initial structure, new network configurations are generated by means of local perturbations. Only cheaper solutions are accepted during the search and, for this reason, the algorithm stops when no further improving solution can be found. The success of such a procedure depends strongly on the rules employed to generate the new network configurations evaluated during the optimisation. In fact, an exhaustive combinatorial search, which could find the absolute optimum, is not feasible due to the exponential growth of the computing time; on the other hand, if too few configurations are examined there is the risk of being trapped in local minima. Summing up, the choice of suitable heuristic rules that allow good quality solutions, possibly near the absolute optimum, to be found in a reasonable computing time is of the greatest importance. The most important phases into which the proposed algorithm can be subdivided are (a sketch of the overall search loop is given below):
− clustering of lateral nodes into the corresponding trunk nodes;
− "open loop" network optimisation for trunk nodes;
− radial network optimisation for lateral nodes.
In the following, each phase is described in more detail.
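As a rough illustration only, the following Python sketch shows the shape of such a hill-climbing loop. The functions generate_moves, total_cost and feasible are hypothetical placeholders for the paper's local perturbations, generalised cost evaluation and technical-constraint checks; they are not the authors' implementation.

```python
# Hypothetical sketch of a hill-climbing search of the kind described
# above: apply local perturbations, accept only cheaper feasible
# configurations, stop when a full pass yields no improvement.

def hill_climb(initial_network, generate_moves, total_cost, feasible):
    best = initial_network
    best_cost = total_cost(best)
    improved = True
    while improved:
        improved = False
        for candidate in generate_moves(best):   # local perturbations of the current network
            if not feasible(candidate):          # conductor capacity, voltage drops, ...
                continue
            cost = total_cost(candidate)         # building + maintenance + losses + disruptions
            if cost < best_cost:                 # only cheaper solutions are accepted
                best, best_cost = candidate, cost
                improved = True
    return best, best_cost
```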


2.3 Clustering techniques

To study networks comprising many hundreds, or even thousands, of nodes with modern computers and in reasonable computing times, the size of the problem may be reduced by combining the optimisation algorithm with accelerating techniques based on clustering [4-6]. In the paper, clustering procedures have been applied to group the lateral nodes into the geographically nearest trunk node. At the end of this procedure, each trunk node is characterised by a mean power and a standard deviation obtained by combining the power pdfs of its closest lateral nodes.

2.4 Heuristic algorithm

Choice of link candidates. To alleviate the computational effort, only those links that have a reasonable probability of generating low cost configurations should be examined. Consequently, in a preliminary step a set of candidate links is automatically generated: for each node, only the connections with nodes comprised within a circular area, whose centre lies in the node in question and whose radius equals the distance to the nearest HV/MV substation, are explored. A maximum number of candidates is fixed beforehand for each node, and manual changes to the set of candidates are allowed before the optimisation process starts. Existing branches are automatically used as candidate links because they are available at no cost.

Branch and bound optimisation. After an initial "open loop" network is generated (widely reusing the existing network, if there is one), an iterative optimisation of the starting topology is performed. The optimisation is of the pseudo-dynamic type: it is not a fully dynamic procedure, in that it determines the optimum configuration of one network (the target network), with both existing and extended portions, at the end of the planning period. This static feature of the procedure is nevertheless able to make provision for the dynamic expansion of the network, insofar as the whole planning period is divided into sub-periods, each beginning when one or more substations or loads are added. As a result, the final optimum topology represents the sequence of network extensions, which must be followed by gradually adding new links without modifying or reconstructing the portions of network previously built.

The optimisation algorithm is based on a special kind of branch and bound procedure, whereby the network structure is optimised by changing small portions at a time. For each node xi, a new edge uij is added, chosen among the candidate links of xi. Then, to restore the "open loop" topological constraint, a set of possible actions is performed, obtaining different configurations (fig. 2). It should be noticed that each new branch is chosen in the set of candidate links, so that during the optimisation the algorithm is forced to build a network constituted by the most convenient paths [6]. For each network structure obtained, before calculating the costs, the feeder cross-sections are sized and the constraints imposed on conductor capacity and voltage drops in the nodes are verified with reference to the planning horizon, both in the normal operating state and in emergency situations. In this phase the emergency connections are found, considering that in an "open loop" network the most convenient branch to sectionalise is the one with minimum current.


In addition, the network structures examined are also required to comply with the constraints imposed by the short circuit currents, including the peak homopolar current. The disruption cost is also taken into consideration by means of the cost of the energy not supplied, which is one of the most important terms in the objective function. Service quality indicators, i.e. the average number or duration of service interruptions, can be used in order to privilege network reliability over economy. It should be noticed that a special algorithm developed by the authors, able to find the optimal number and position of automatic sectionalising switching devices, is also employed [5,7]. At this point, the cost evaluation criteria are applied to calculate the value of the objective function for each new structure, and the structure with the least cost is chosen. Once all the nodes have been explored, the resulting distribution network structure can be used as the starting network for a new iterative cycle. A procedure of this type generates networks with increasingly lower cost; it terminates when, at the end of a complete iteration (all nodes examined), the network remains unchanged and hence its cost can be reduced no further.

Radial network optimisation. The network supplying the lateral nodes is of the radial type. All lateral nodes supplied by a trunk node are connected with a tree having the trunk node as root. In this case the network optimisation is achieved with a specialised algorithm particularly well suited for radial networks [4].

2.5 DG impact on technical constraints

The most significant impact of DG is the need for suitable procedures for correctly determining the voltage profile along the network trunks and for calculating the three-phase short circuit currents in the network nodes [8]. The presence of generation nodes in the distribution system can cause a voltage drop or an overvoltage in some points of the network. This situation depends particularly on the transformer control system used. Generally speaking, the connection of a generator to a network can result in an increase in the voltage that depends on the power supplied by the generator.

[Figure 2: Some possible moves implemented in order to restore the topological constraints after a candidate link uij is added to the starting network.]


For this reason, in the network optimisation procedure the voltage profile is checked and only those network arrangements able to maintain the voltage within prefixed ranges, both in normal and in emergency situations, are evaluated. In the proposed methodology DG does not contribute to voltage regulation, according to Italian standards, but it can nevertheless be useful to improve the voltage profile. Calculations are performed by determining the impedance matrix Z of each feeder examined and by calculating the voltage in each node of the network. The calculation of Z can be noticeably simplified by considering the particular network architecture ("open loop" network) and by observing that only one or two feeders are modified for each configuration examined.

The presence of DG changes the magnitude, duration and direction of the fault current, since the connection of rotating generators modifies the characteristics (impedance) of the distribution network. In this context, one needs to verify that the alteration of the fault current due to DG units does not affect the selectivity of the protection devices; in fact, the selectivity must be checked for each connection of a new generator to the distribution network. Fault currents are calculated for each configuration examined by using the diagonal elements of the short circuit matrix and the voltages in each node. All those situations which do not comply with this technical constraint imply an increase of the network cost, due to the need to update the protection devices.
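As a simplified numerical illustration of this check (not the authors' procedure), the sketch below estimates the three-phase fault current at each node from the nominal voltage and the diagonal elements of an assumed short-circuit impedance matrix, and charges a penalty cost where the rating of the installed protection is exceeded; all names and values are assumptions.

```python
# Simplified three-phase fault-current screening: the fault current at
# node i is estimated as V_nominal / |Z_ii|, using the diagonal of an
# assumed short-circuit impedance matrix; nodes whose fault current
# exceeds the interrupting rating of the installed protection add a
# device-update cost to the network cost.
import numpy as np

def fault_current_penalty(z_sc_diag, v_nominal, breaker_rating, update_cost):
    i_fault = v_nominal / np.abs(np.asarray(z_sc_diag, dtype=complex))
    violations = int(np.count_nonzero(i_fault > breaker_rating))
    return violations * update_cost, i_fault
```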

3 PROBABILISTIC APPROACH

The most common and widely used analysis in power system engineering is the load flow calculation. In the past decades various algorithms have been developed for solving this problem; they differ very much in their characteristics and performances, as well as in their mathematical foundations. Generally, they are very accurate and allow a detailed modelling of the system. However, most of them deal with deterministic conditions and accept only fixed input variables, so their accuracy depends on the knowledge of the input variables, which are almost always unknown and estimated from past and present data. In the case of statistical uncertainty this problem can be overcome with a stochastic approach, by using random variables and applying methods from probability theory. In this paper a probabilistic load flow (PLF) has been developed, taking into account the pdf of the loads and of the annual power production associated with each generating unit. The possible correlations between DG units and between generators and loads have also been considered, in addition to the correlations between the loads themselves. It is known that considerable correlation can exist between the various nodal powers, particularly when the time scale of interest is associated with operational planning. Omission of this correlation can lead to misleading results, and produce a density function of the output parameters that is either narrower or broader than the true density function.


There are various reasons for correlation between nodal powers to exist, and these reasons depend on whether load/load, generation/generation or generation/load correlation is being considered. In operational planning problems the probabilistic variation of the loads is associated with time, and a group of loads existing in the same area will tend to increase and decrease in a like manner, so a certain degree of correlation exists between them. When the loads rise and fall together, this correlation is positive; similarly, in the event of a load falling while another rises, the correlation is negative. Frequently, in the operation of a power system, a group of generators is controlled to meet the load within a certain load area, this being known as area control. In such cases there must be correlation between the generators assigned to the load area and the load itself: as the load rises and falls, the output of the relevant group of generators is increased and decreased likewise, and the correlation is again positive.

It was shown that linear dependence (perfect positive correlation) is a valid assumption when considering load-to-load relations [9]. Even when the correlation is not perfect, it can still be modelled in this way with the addition of another independent random variable with zero mean and appropriate standard deviation. Consider the case of n linearly dependent random variables Xi, represented by normal distributions (with expected values µX and standard deviations σX), that are to be combined to give another random variable W, such that:

W = c1 X1 + c2 X2 + … + ci Xi + … + cn Xn + cn+1    (1)

where the coefficients ci are the sensitivity coefficients and may themselves be positive or negative. For the purposes of this example, it is assumed that the variables X1 and X2 have positive linear dependence, X3 and X4 have negative linear dependence, and the others are independent of all other nodal powers. From these relationships, the equivalent random variable W is normally distributed with expected value µW and variance σW² given by:

µW = c1 µX1 + c2 µX2 + c3 µX3 + c4 µX4 + … + cn µXn + cn+1    (2)

σW² = (c1 σX1 + c2 σX2)² + (c3 σX3 − c4 σX4)² + ∑i=5..n ci² σXi²    (3)
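As a small illustration of eqns. (1)-(3), the following Python sketch combines normally distributed nodal powers under the dependence pattern assumed in the example (X1, X2 positively dependent, X3, X4 negatively dependent, the remaining variables independent); it is a didactic sketch with hypothetical names, not part of the planning tool.

```python
# Sketch of eqns. (1)-(3): mean and standard deviation of
# W = c1*X1 + ... + cn*Xn + c_{n+1}, with X1,X2 positively dependent,
# X3,X4 negatively dependent and X5..Xn independent.
import math

def combine_nodal_powers(c, mu, sigma, offset=0.0):
    """c, mu, sigma: sensitivity coefficients, expected values and standard
    deviations of X1..Xn (n >= 4); offset plays the role of c_{n+1}."""
    n = len(mu)
    mu_w = sum(ci * mi for ci, mi in zip(c, mu)) + offset            # eqn. (2)
    var_w = ((c[0] * sigma[0] + c[1] * sigma[1]) ** 2                # eqn. (3)
             + (c[2] * sigma[2] - c[3] * sigma[3]) ** 2
             + sum(c[i] ** 2 * sigma[i] ** 2 for i in range(4, n)))
    return mu_w, math.sqrt(var_w)
```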

If different groups, or all nodal powers, were considered dependent, the above mathematical considerations would be equally applicable, although the final result and therefore the derived conclusions would be different. From these results it is clearly evident that dependence between random variables can affect the resulting probability density curve and standard deviation, and therefore any such dependence should be taken into account in probabilistic analysis.


The dependencies between the generators in the system, which exist because of the need to balance the active power, are more difficult to model and depend on the utility's operating policy. One way to overcome this difficulty is to adopt one slack bus responsible for the system power balance. When considering distribution networks the problem is simplified: it is acceptable to assume no generation/generation relations, and the concept of a slack bus corresponds to the real situation. If this is not a valid assumption, a "pure" analytical solution of the PLF is infeasible and the only option is to apply a Monte Carlo Simulation (MCS). The problem of the non-linearity of the load flow equations has been overcome by their linearisation around the expected operating point. As a result, when there are no complex generation/generation relations, the solution of the PLF becomes a sum of independent random variables weighted by sensitivity coefficients [10]. In the generic case, the density function of a variable W may be obtained by convolving the various density functions of the Xi following eqn. (1) and using one of the available numerical techniques; the most accurate and appropriate ones are those based on Fast Fourier Transform (FFT) algorithms [10]. It should be noted that the convolution process can be avoided only if all the random variables are normally distributed: since all input variables are normal, the results were obtained directly by combining the parameters of the pdfs without any convolution [10]. Considering that a simplified PLF is performed for each network alternative examined by the heuristic algorithm, resorting to a more precise representation of loads and generation with non-normal distributions is not feasible, due to the dramatic increase of the computation time. For this reason, all random variables are described with a simple normal distribution. When the optimisation algorithm finds the optimal network, the real pdfs are used and the convolution is performed. By so doing, it is possible to plan the network taking into consideration the randomness of loads and generation, even in a simplified manner, and then to perform a more correct and precise design once the optimal scheme is known.
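For the non-normal case mentioned above, a generic FFT-based convolution of two discretised densities could look like the sketch below; this is a textbook numerical illustration under the assumption of a common uniform grid, not the implementation used in the tool.

```python
# Generic FFT-based convolution of two probability densities sampled on
# a common uniform grid with step dx (illustrative only): the result
# approximates the density of the sum of the two independent variables.
import numpy as np

def convolve_pdfs(pdf_a, pdf_b, dx):
    n = len(pdf_a) + len(pdf_b) - 1
    nfft = 1 << (n - 1).bit_length()          # zero-pad to the next power of two
    fa = np.fft.rfft(pdf_a, nfft)
    fb = np.fft.rfft(pdf_b, nfft)
    pdf_sum = np.fft.irfft(fa * fb, nfft)[:n] * dx
    return np.clip(pdf_sum, 0.0, None)        # clip small negative round-off values
```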

4 PROBABILISTIC DISTRIBUTION NETWORK PLANNING

This paper presents a modified analytical PLF intended for use in distribution networks. Introducing some appropriate approximations and considering the radial operating structure of the system significantly simplifies the general techniques used in transmission systems. In this case the following assumptions can be made:
• correlation between loads, if present, is linear;
• there is no correlation between generators, since there is one slack bus responsible for the system power balance and no dispatching action for the distribution network is considered;
• local correlations between generators and loads are modelled with linear dependence.


When the power in each node is known, it is possible to calculate the nodal currents by using the nominal voltage. Thus the voltage in every node can be determined directly, by using the nodal currents as random variables, with the following equation:

[V] = [Z] · [I]    (4)

where Z denotes the impedance matrix. This probabilistic approach considers the real and imaginary parts of the nodal currents by means of their expected value and standard deviation. By substituting the current expected values in system (4), the expected values of the real and imaginary parts of the nodal voltages are determined. In order to calculate the nodal voltage variances, it is necessary to substitute in (4) the current standard deviations instead of the mean values, paying attention to combine them as shown in (3). The branch currents are then obtained simply by dividing the corresponding branch voltage drop by the branch impedance; the real and imaginary parts of a branch voltage drop have to be calculated separately by combining the expected values and the standard deviations of the voltages at the branch ends. Once the currents flowing in each branch and the voltage of each node are known through their corresponding pdf, it is possible to choose the correct size of each conductor and to verify all the technical constraints (such as the voltage profile), taking into account the uncertainties associated with these electrical variables. In order to perform the branch sizing, the verification of the voltage profile and the evaluation of the losses and disruption costs, it is necessary to use the mean values and standard deviations of the modules of both node voltages and branch currents, starting from the corresponding real and imaginary parts. As a consequence, the module is a non-linear function of two random variables, and its expected value and standard deviation can be evaluated by resorting to the approximated formulae reported in the appendix.
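A minimal sketch of this calculation is given below, assuming independent nodal currents described by the means and standard deviations of their real and imaginary parts; the function names and the independence assumption are illustrative, not the authors' code.

```python
# Sketch of eqn. (4), V = Z*I, treated probabilistically: expected node
# voltages from the expected currents, and variances of the real and
# imaginary voltage parts combined as weighted sums of squares
# (cf. eqn. (3)), assuming independent nodal currents.
import numpy as np

def probabilistic_voltages(z, mu_i, sigma_i):
    """z: complex impedance matrix; mu_i, sigma_i: complex vectors whose
    real/imag parts hold means and standard deviations of the currents."""
    a, b = z.real, z.imag
    mu_v = z @ mu_i                                           # expected voltages
    var_re = a**2 @ sigma_i.real**2 + b**2 @ sigma_i.imag**2  # Var(Re V)
    var_im = b**2 @ sigma_i.real**2 + a**2 @ sigma_i.imag**2  # Var(Im V)
    return mu_v, np.sqrt(var_re) + 1j * np.sqrt(var_im)
```

Branch currents would then follow by applying the same treatment to the voltage drop across each branch divided by its impedance.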

4.1 Voltage profile constraint

In order to verify the voltage profile, only those configurations that violate this technical constraint with a probability lower than a prefixed tolerable value are accepted. Knowing the expected value and standard deviation of each node voltage module, the probability p(Vmax_∆V)i of having a voltage drop in the i-th node greater than the maximum limit Vmax_∆V is immediately evaluated (dashed area in fig. 3a). This value is then compared with the acceptable probability p*(Vmax_∆V) and, if p(Vmax_∆V)i > p*(Vmax_∆V), the branch is resized. It is important to notice that if the branch feeds a lateral node, only that branch is resized; if instead the branch connects two trunk nodes, all the feeder branches are resized, in order to preserve a constant section. Similarly, the upper limit of the node voltage (maximum accepted overvoltage) is checked and, if the probability of exceeding this limit is greater than the accepted one (white area in fig. 3b), the branch section is updated.
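A minimal sketch of this check, assuming a normal pdf for the node voltage module (the function names are illustrative, not the authors' code), is:

```python
# Probabilistic voltage-profile check: with a normal pdf assumed for the
# node-voltage module, the branch is flagged for resizing when the
# probability of violating a voltage limit exceeds the tolerated value p*.
import math

def normal_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

def violates_voltage_drop(mu_v, sigma_v, v_min_allowed, p_star):
    return normal_cdf(v_min_allowed, mu_v, sigma_v) > p_star          # P(V < Vmin) > p*

def violates_overvoltage(mu_v, sigma_v, v_max_allowed, p_star):
    return (1.0 - normal_cdf(v_max_allowed, mu_v, sigma_v)) > p_star  # P(V > Vmax) > p*
```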


[Figure 3: Pdf curves of the node voltage module, used to verify the voltage profile technical constraint: (a) maximum voltage drop, (b) maximum overvoltage.]

4.2 Probabilistic network sizing

At the beginning of the study, the planner defines an acceptable probability of overload occurrence, poverload (e.g. 10%). Then, once the pdf of each branch current is known, the current value I* that has probability poverload of being exceeded (white area in fig. 4) is evaluated. Finally, this value I* is compared with the thermal limit current ITL of a branch with a given section: if I* < ITL the selected section is adequate, whereas if I* > ITL it is necessary to resize the branch.

[Figure 4: Determination of the maximum allowed current I* once the overload probability is fixed.]

4.3 Evaluation of losses and energy not supplied

Joule losses are evaluated by means of the mean values of the branch current modules. The energy not supplied due to scheduled interruptions or line faults is directly evaluated considering the mean value of the current module in each load node.
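As a minimal sketch of the sizing test of section 4.2 (assuming a normal pdf for the branch current; the names are illustrative), I* can be obtained as the (1 − poverload) quantile of the current pdf:

```python
# Probabilistic branch sizing: I* is the current exceeded with probability
# p_overload; the branch must be resized when I* exceeds the thermal
# limit I_TL of the chosen conductor section.
from statistics import NormalDist

def branch_needs_resizing(mu_i, sigma_i, i_thermal_limit, p_overload=0.10):
    i_star = NormalDist(mu_i, sigma_i).inv_cdf(1.0 - p_overload)
    return i_star > i_thermal_limit
```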

5 RESULTS AND DISCUSSION

In order to show the capabilities of the proposed methodology, a simple case derived from a real Italian distribution network is presented. As shown in figs. 5 and 6, a single distribution feeder has been considered, constituted by 21 MV/LV trunk nodes, 4 MV/LV lateral nodes, 2 primary substations and 3 DG units. The period taken into consideration for the planning study is 20 years, with all nodes existing at the beginning of the period. For each MV/LV node, a constant power demand growth rate of 3% per year has been assumed; the size of the installed transformers ranges from 100 kVA to 630 kVA. The majority of the branches are of the overhead type, but some buried cables exist. The mean annual active power delivered to the MV nodes at the beginning of the planning period is 3.1 MW. The three DG units have different sizes: 320 kW for generator A and 290 kW for generators B and C (fig. 5). Three cases have been examined:
1) DG is not considered in the network design, due to its intrinsic random nature (e.g. wind turbines);
2) DG is considered in the network design assuming that its power production is known with certainty and equal to its expected value;
3) DG is considered in the network design according to the probabilistic approach.
Table 1 summarises the capital costs (network building or upgrading) as well as the cost of the Expected Energy Not Supplied (EENS) and of the losses over the time period studied, for the three cases examined.

Table 1: Comparison between the costs for the MV distribution network of fig. 5.

                      Case 1       Case 2       Case 3
Cost of investments   3427.2 k€    2373.4 k€    2621.3 k€
Cost of losses          37.4 k€      26.7 k€      25.2 k€
Cost of EENS            25.6 k€      28.6 k€      28.6 k€
Total cost            3490.2 k€    2428.7 k€    2675.1 k€

[Figure 5: MV test network, showing the primary substations, the MV/LV trunk and lateral nodes, the DG nodes A, B and C, and the emergency connections.]

In case 1), due to the natural growth of the energy demand, some branches in the network must be upgraded, leading to a more expensive solution. Even by adopting conductors with the maximum allowed section (150 mm² copper) for all the upgraded branches, the loads are such that power losses are still relevant. In cases 2) and 3) the impact of DG is highlighted. By considering the power production of the DG units as known with certainty, the investments for network upgrading can be considerably reduced; in this case, the same modified lines of case 1) are upgraded with 70 mm² copper conductors.


Unfortunately, the assumption of knowing exactly the production of small private generators is too optimistic, especially if renewable sources are adopted. A more realistic assumption is the one considered in case 3), where DG production and energy demand are both random variables. In this case, the number of modified branches is the same as in the previous cases, but the 150 mm² section is necessary only for two of them. It is worth noticing that the cost of power losses in both cases 2) and 3) is smaller than in case 1), even though smaller lines can be used thanks to DG; in case 3) the losses are smaller because the new conductors with the largest section are not heavily loaded. Finally, it should be pointed out that the cost of EENS is not affected by the presence of DG, because intentional islanding in the event of network faults or scheduled interruptions is not allowed, according to the majority of international standards. Actually, service interruptions due to overloads should also be taken into account and, among the benefits introduced by DG, the reduction of overload costs, treated like EENS, should be considered. In this paper, however, overload costs are disregarded because the implemented power flow considers only an annual growth of the load energy demand.

Thus, even in this very simple and small example, the benefits of the probabilistic approach in optimal MV network planning are clearly shown. It should be noticed that the optimal solution of case 2) is less reliable even if it is the cheapest. This is a general conclusion, because planning alternatives which do not consider DG at all, or consider it in a deterministic manner, can easily lead to unreliable schemes with a deterioration of service quality.


6 CONCLUSIONS

DG is predicted to play an increasing role in the electric power system of the near future. With so much new distributed generation being installed, the level of uncertainty that characterises the planning environment increases, particularly with regard to the energy production of the DG units. To overcome this problem, a probabilistic load flow has been developed, taking into account the probability density function of the loads and of the annual power production associated with each generating unit. This probabilistic load flow has been implemented in an efficient heuristic MV network optimisation algorithm developed by the authors. The results of the proposed methodology have proved the need for tools better suited to face the new challenges introduced by a high level of DG penetration. Indeed, even in a very simple and small case, the advantages of considering DG in the planning of the network are clearly evident (network upgrade deferment), as well as the risks related to a deterministic approach (unreliable architectures or more expensive solutions).

ACKNOWLEDGEMENT

This activity has been sponsored by CESI in the context of the Italian Government Research Project on Electrical Power Systems.

REFERENCES

[1] G. Celli, F. Pilo, "Optimal Distributed Generation Allocation in MV Distribution Networks", Proc. of 22nd PICA Conference, Sydney, Australia, 20-24 May 2001, pp. 81-86.
[2] P. P. Barker, R. W. De Mello, "Determining the Impact of Distributed Generation on Power Systems: Part 1 - Radial Distribution Systems", Proc. of IEEE PES Summer Meeting, Seattle, USA, vol. 3, 2000, pp. 1645-1656.
[3] A. Invernizzi, F. Mocci, M. Tosi, "Planning and Design Optimization of MV Distribution", Proc. of T&D World '95 Conference, New Orleans, USA, 1995, pp. 549-557.
[4] C. Muscas, F. Pilo, W. Palenzona, "Expansion of large MV networks: a methodology for the research of optimal network configuration", Proc. of CIRED 96 Conference, Buenos Aires, Argentina, pp. 69-74.
[5] G. Celli, F. Pilo, "Optimal Sectionalizing Switches Allocation in Distribution Networks", IEEE Trans. on Power Delivery, Vol. 14, No. 3, 1999, pp. 1167-1172.
[6] B. Cannas, G. Celli, F. Pilo, "Optimal MV distribution networks planning with heuristic techniques", Proc. of Africon 99, Cape Town, South Africa, 1999, pp. 995-1000.
[7] B. Cannas, G. Celli, F. Pilo, "Heuristic Optimization Algorithms for Distribution Network Planning with Reliability Criteria", Proc. of SCI 2001 Conference, Orlando, USA, July 22-25, 2001.
[8] N. Hadjsaid, J. F. Canard, F. Dumas, "Dispersed generation impact on distribution networks", IEEE Computer Applications in Power, Vol. 12, No. 2, April 1999.
[9] R. N. Allan, M. R. G. Al-Shakarchi, "Linear dependence between nodal powers in probabilistic a.c. load flow", Proc. IEE, Vol. 124, No. 6, June 1977, pp. 529-534.
[10] A. Dimitrovski, R. Ackovski, "Probabilistic Load Flow in Radial Distribution Networks", Proc. of IEEE T&D Conference, 1996, pp. 102-107.
[11] A. Papoulis, "Probability, Random Variables and Stochastic Processes", McGraw-Hill, New York, 1965.

APPENDIX

Let z be a complex variable whose real and imaginary parts are random quantities. The module of z is the well-known function expressed by (5), where x and y are random variables defined by means of their expected values (µx and µy), variances (σx² and σy²) and covariance (σxy²).

z = √(x² + y²)    (5)

The module function is a non-linear function of two random variables, and its expected value and variance can be calculated by using the following approximated equations [11]:

µz ≅ √(µx² + µy²) + (σx² µy² + σy² µx² − 2 σxy² µx µy) / [2 (µx² + µy²)^(3/2)]    (6)

σz ≅ √[ (σx² µx² + σy² µy² + 2 σxy² µx µy) / (µx² + µy²) − (σx² µy² + σy² µx² − 2 σxy² µx µy)² / (4 (µx² + µy²)³) ]    (7)
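A direct transcription of eqns. (6)-(7) into Python, useful as a sketch for checking the formulae (the covariance σxy² is passed as cov_xy), is:

```python
# Approximate mean and standard deviation of |z| = sqrt(x^2 + y^2),
# following the appendix formulae (6)-(7).
import math

def modulus_moments(mu_x, mu_y, var_x, var_y, cov_xy):
    m2 = mu_x**2 + mu_y**2
    num = var_x * mu_y**2 + var_y * mu_x**2 - 2.0 * cov_xy * mu_x * mu_y
    mu_z = math.sqrt(m2) + num / (2.0 * m2**1.5)                       # eqn. (6)
    var_z = ((var_x * mu_x**2 + var_y * mu_y**2 + 2.0 * cov_xy * mu_x * mu_y) / m2
             - num**2 / (4.0 * m2**3))                                 # eqn. (7)
    return mu_z, math.sqrt(max(var_z, 0.0))
```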
