This is a draft version for personal use only - For other use, please address yourself to the publisher

Discrete Optimization via Simulation to Determine Reliable Network Investments

Tomás Lagos, Fernando Ordóñez, Rafael Sacaan, Hugh Rudnick, Alejandro Navarro-Espinosa, Rodrigo Moreno*

Department of Industrial Engineering, Universidad de Chile, Santiago, Chile
Department of Electrical Engineering, Pontificia Universidad Católica de Chile, Santiago, Chile
Department of Electrical Engineering, Universidad de Chile / Imperial College London*, Santiago, Chile / London, UK

Abstract— Solving optimization problems in power systems planning often imposes a compromise between an accurate representation of power system operation and the simplifications required by the mathematical methodologies used to find the optimal solution. Classic heuristic algorithms model operational details in depth but offer no clear statement about the quality of the solution found, whereas mathematical programming approaches find the optimal solution of a (significantly) simplified model of system operation. In this vein, this article proposes the utilization of Discrete Optimization via Simulation algorithms to solve optimization problems when both a detailed representation of the system and information about the quality of the solution are required. In particular, the Industrial Strength COMPASS algorithm is applied to find the optimal set of new transmission lines that maximizes power system reliability given a certain budget, considering a detailed power system model in which a full unit commitment with network constraints and an hourly sequential Monte Carlo simulation are implemented.

Index Terms — optimization, sequential Monte Carlo simulation, discrete optimization via simulation, reliability.

I. INTRODUCTION

Large power systems are complex dynamic entities, and their complexity keeps increasing due to the large-scale adoption of intermittent generation, additional reliability requirements, and the need to develop a more resilient system able to face low-probability, high-impact events (e.g., earthquakes, floods) [1]. Hence, developing detailed models that mimic real power system operation is complicated, and incorporating such models into optimization frameworks is very challenging. In fact, to take optimal investment decisions in large power systems, some compromise must be made between the accuracy with which system operation is modelled and the optimality of the methodology implemented to find the best solution. Thus, if full mathematical programming is used, some simplification is made in the operation modelling [2], such as a reduction in the number of buses or in the number of constraints in the unit commitment model, or a simplification of the chronological demand behavior through clustering or "block" representation [3]. On the other hand, if power system operation is modelled in detail, then heuristic methodologies must be used to find a good solution. For instance, in [4] and [5], the authors solve the transmission network expansion planning problem by using Genetic Algorithms and Tabu Search, respectively.

Considering this tradeoff between modelling accuracy and optimality, this article analyses the utilization of Discrete Optimization via Simulation (DOvS) to solve optimization problems in power system planning in those cases that require a detailed representation of operational aspects and where the candidate solutions are integer variables. DOvS finds the best integer solution according to the expected performance of a stochastic system that is represented by a computer simulation model [6]. These methodologies move beyond classic heuristics because they can mathematically guarantee the correctness of the solution (i.e., the solution found is at least the best visited local optimum, where a local optimum is a solution that is the best among all its neighbors) and they converge to the global optimum if a large budget of simulations is available [6]. Important developments have been made in this area since [7] proposed a DOvS algorithm that delivered true optimal convergence guarantees. Reference [8] proposes the Convergent Optimization via Most Promising Area Stochastic Search (COMPASS) algorithm, which converges to a local optimal solution with probability one. That work enabled the development of the Industrial Strength COMPASS (ISC) algorithm in [9], which delivers correctness guarantees.

To highlight the benefits of DOvS in power systems, the ISC algorithm is implemented in this article to find the optimal set of new transmission lines to increase the reliability of a power system given a certain budget. The assessment of reliability needs a full representation of the system components and of the chronological time dependence between loads, component failures and recovery times. This interdependence is modelled through Sequential Monte Carlo Simulations (SMCs) [10] to represent the stochastic behavior of power systems. Furthermore, to assure a realistic power system operation, a full unit commitment with generation and network constraints is formulated and included in the analysis.

The rest of the article is structured as follows. Section II describes the Industrial Strength COMPASS (ISC) implementation. Section III presents the considerations developed to model power system reliability, and Section IV analyses the ISC application in a case study. Finally, the main conclusions are drawn in Section V.

This work was supported by the UK Research Council and Conicyt through the grant Newton-Picarte (MR/N026721/1), a consortium that includes Pontificia Universidad Católica de Chile, Universidad de Chile, and The University of Manchester.

II. INDUSTRIAL STRENGTH COMPASS

Discrete optimization via simulation (DOvS) finds the best solution according to the expected performance of a stochastic system that is represented by a computer simulation model. DOvS algorithms explore the most promising solution sets while using some form of randomization to escape locally optimal regions. This feature allows the algorithm to visit all feasible solutions if the computational budget available for simulations is large enough [6]. In this work, the feasible region is integer valued (e.g., the decision of constructing a new transmission line) and the problem can be formulated as:

$$ \min_{x \in \Theta} \left\{ g(x) = E_{\xi}[F(x,\xi)] \right\}, \qquad \Theta = \Phi \cap \mathbb{Z}^d, \qquad (1) $$

where $\Phi \subset \mathbb{R}^d$ is a set that may be bounded or unbounded. The expected value is taken with respect to the random variable $\xi$, which represents a scenario realization given the decision variable $x$. In this procedure, the output of a single simulation, $F(x, \xi)$, is treated as a "black box" whose internal structure remains unknown to the solving scheme. Since no assumption is made on the structure of the objective function, this procedure allows modeling complex problems that cannot be solved by means of other modeling approaches. Specifically, Industrial Strength COMPASS (ISC) is an algorithm for optimizing the expected value of a performance measure of a stochastic simulation with respect to integer-ordered decision variables in a finite feasible region defined by linear integer constraints. ISC offers correctness guarantees while remaining competitive with commercial DOvS solvers [9]. Correctness matters because, when stochastic noise is present and no such guarantee exists, an inferior solution may be selected and its actual performance may be poorly estimated. The ISC algorithm has three main phases (Fig. 1): the first assures global convergence, the second assures local convergence, and the third is a clean-up phase (selection of the best solutions). Each of these phases is explained in more detail in the following subsections.
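To fix ideas, the following minimal Python sketch shows the black-box interface that a DOvS procedure such as ISC requires: a single-replication simulator F(x, ξ) and a sample-mean estimator of g(x). The quadratic toy simulator is purely illustrative and not part of the authors' implementation; in this article, one replication corresponds to one simulated day of the SMC framework of Section III.

```python
import random
from statistics import mean

def F(x, rng):
    """One replication of the simulation at integer solution x.
    Toy stand-in: a quadratic plus Gaussian noise playing the role of xi."""
    noise = rng.gauss(0.0, 1.0)
    return sum(v ** 2 for v in x) + noise

def g_hat(x, n_reps, seed=0):
    """Sample-mean estimate of g(x) = E[F(x, xi)] over n_reps replications."""
    rng = random.Random(seed)
    return mean(F(x, rng) for _ in range(n_reps))
```

Only this interface is visible to the optimizer; everything else about the system stays inside the black box.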

Figure 1. ISC algorithm flowchart.

A. Niching Genetic Algorithm (NGA)
Fig. 1 shows the steps followed by the global convergence phase. The role of the NGA is to serve as a global search engine. It forms niches (Fig. 2) that always have a local optimal solution at their center. The transition rules to the local convergence phase are:
 Niche rule: if at any time there is only one niche.
 Budget rule: if the allowed number of samples is exceeded.
 Improvement rule: if no new solutions better than the best already found appear within a given number of consecutive iterations.
 Dominance rule: if the solutions within one niche dominate all other niches (in total, not just the best individual).
Let $x_{ij}$ be the $j$-th solution in niche $i$ ($i \in \{1,\dots,q\}$, $j \in \{1,\dots,q_i\}$), let $n_{ij}$ be the number of evaluations already taken of $x_{ij}$, and let $Y_{ij\ell}$ be the $\ell$-th replication of $F(x_{ij}, \xi)$. Denote by $t_{\nu,\beta}$ the $\beta$ quantile of the $t$ distribution with $\nu$ degrees of freedom. Further define:

$$ SSE_i = \sum_{j=1}^{q_i}\sum_{\ell=1}^{n_{ij}} \left(Y_{ij\ell} - \bar{Y}_{ij}\right)^2, \qquad n_i = \sum_{j=1}^{q_i} n_{ij}, \qquad \nu_i = n_i - q_i, $$
$$ \hat{\mu}_i = \frac{1}{n_i}\sum_{j=1}^{q_i}\sum_{\ell=1}^{n_{ij}} Y_{ij\ell}, \qquad \beta = (1-\alpha_{GA})^{\frac{1}{q-1}}, \qquad (2) $$
$$ \omega_{ij} = \sqrt{\,t_{\nu_i,\beta}^2\,\frac{SSE_i}{\nu_i\, n_i} + t_{\nu_j,\beta}^2\,\frac{SSE_j}{\nu_j\, n_j}\,} $$

Figure 2. Most promising area around two niches.
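As an illustration of how the screening quantities in (2) feed the niche-dominance test described next, a minimal Python sketch is given below. The nested-list data layout and function names are assumptions for illustration; $t_{\nu,\beta}$ is obtained from SciPy's Student-t quantile function.

```python
import numpy as np
from scipy import stats

def niche_stats(Y, alpha_ga):
    """Y[i][j] = list of replications Y_ij1..Y_ijn of solution j in niche i.
    Returns the niche means mu_hat_i and the dominance set I from (2)."""
    q = len(Y)
    beta = (1.0 - alpha_ga) ** (1.0 / (q - 1))
    mu, sse, nu, n = [], [], [], []
    for niche in Y:
        reps = [np.asarray(y, dtype=float) for y in niche]
        n_i = sum(len(y) for y in reps)
        sse.append(sum(((y - y.mean()) ** 2).sum() for y in reps))  # SSE_i
        mu.append(sum(y.sum() for y in reps) / n_i)                 # mu_hat_i
        nu.append(n_i - len(reps))                                  # nu_i = n_i - q_i
        n.append(n_i)

    def omega(i, j):
        # Half-width omega_ij of the comparison between niches i and j, per (2)
        return np.sqrt(stats.t.ppf(beta, nu[i]) ** 2 * sse[i] / (nu[i] * n[i])
                       + stats.t.ppf(beta, nu[j]) ** 2 * sse[j] / (nu[j] * n[j]))

    # Niches that are not statistically worse than any other niche
    I = [i for i in range(q)
         if all(mu[i] <= mu[j] + omega(i, j) for j in range(q) if j != i)]
    return mu, I
```

If the returned set I contains a single niche, the dominance rule fires and the NGA stops.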

Let $I = \{i : \hat{\mu}_i \le \hat{\mu}_j + \omega_{ij}, \; \forall j \ne i\}$ be the set of niches that are statistically no worse than the others in terms of the average quality of the individuals they contain. If $|I| = 1$, there is a dominant niche and the NGA stops.

Within the grouping procedure, a fitness sharing scheme is implemented in the algorithm. The idea is that if a niche is populated with too many solutions, these solutions should be given less chance to reproduce than they would have in an ordinary GA, thus allowing solutions in less populated niches a higher probability of being selected to generate new solutions. A grouping procedure forms groups of solutions that are similar in fitness value. For each group, an average probability of selection $m_i = \frac{1}{N_i}\sum_{j=1}^{N_i} s_j^i$ is calculated, where $s_j^i$ is the probability of selection of individual $j$ within the group population enumerated by $[N_i]$. The selection probability of the $i$-th ranked individual over all current solutions is

$$ s_i = \frac{1}{m_G}\left(\eta - 2(\eta - 1)\,\frac{i-1}{m_G - 1}\right), $$

where $\eta$ is a parameter in $[1,2]$ and $m_G$ is the number of solutions initially sampled in the NGA, as shown in Fig. 1. Then, the Stochastic Universal Sampling (SUS) step constructs a roulette wheel (i.e., a circle divided into slices) where the area assigned to each individual is proportional to its selection probability. The roulette wheel is spun once (i.e., a uniform number between 0 and 1 selects the position of a pointer on the circle) and an individual is selected as a parent. The remaining individuals are selected by advancing the pointer at a regular spacing of $2/m_G$ until it wraps back to its starting point.

For each solution $x_i$ selected by the SUS, a Mating Restriction scheme is used to select its partner: a sample of $m$ individuals is taken from the population and the best one among those in the same group as $x_i$ is selected; if there is no such individual, the one closest to $x_i$ is selected. A Crossover scheme is applied to generate two new solutions from $x_i$ and $x_k$; if some of the resulting values are infeasible, the parent values are kept. For each new solution, a Mutation scheme is implemented, randomly changing the value of one coordinate of the solution.

B. COMPASS
The next step is to converge locally within each niche. The procedure starts with the population of individuals in a niche. Denote by $V_k$ the set of all solutions visited up to iteration $k$. Let
$$ C_k = \{x \in \Theta : \|x - \hat{x}_k\| \le \|x - y\|, \; \forall y \in V_k, \, y \ne \hat{x}_k\} $$
be the most promising area at iteration $k$ around the current optimal solution $\hat{x}_k$, and let $A_k = \{x : x \text{ defines an active (non-redundant) constraint in } C_k\}$ be the set of neighbors that define active constraints. For instance, the starting point C in Fig. 2 has a better performance measure than all the solutions in $A_k$. The point with the best performance measure in a niche is referred to as the head of the niche. The upper-right point (A) lies in a non-niched structure, since at least one of its three neighbors has better performance and it does not define an active constraint for any head of niche (notice that halfspace B is not active in the most promising area of niche C). Note that the set $C_k$ of niche C can be obtained by computing only the halfspaces of the form $\|x - \hat{x}_k\| \le \|x - y\|$ with $y \in A_k$. Fig. 1 shows each step of the local improving procedure in more detail.

The sampling allocation rule (SAR) in COMPASS (Fig. 1) assigns the number of evaluations given to each solution at iteration $k$. This implementation considers the adaptive SAR described in [9], which allocates additional replications to previously visited solutions; a constant number of replications is then assigned to the newly sampled solutions. The COMPASS algorithm does not require very strong conditions to converge to a local optimal solution as $k \to \infty$ (i.e., as the number of iterations goes to infinity): essentially, each sample mean must satisfy a strong law of large numbers, and the SAR must guarantee that the number of replications allocated to every visited solution goes to infinity.

Let $N(x) = \{y \in \Theta : \|x - y\| \le 1\}$. Given a local solution $\hat{x}_k^*$ of a specific niche, the Transition Rule applies the hypothesis test:

$$ H_0: G(\hat{x}_k^*) \le \min_{y \in N(\hat{x}_k^*)} G(y) \quad \text{vs.} \quad H_1: G(\hat{x}_k^*) > \min_{y \in N(\hat{x}_k^*)} G(y) \qquad (3) $$

The type I error is set to $\alpha_L$, and the power of the test is required to be at least $1 - \alpha_L$ whenever $G(\hat{x}_k^*) \ge \min_{y \in N(\hat{x}_k^*)} G(y) + \delta_L$, where $\delta_L$ is a user-specified tolerance. If the solution passes the test, the current COMPASS iteration over the niche is stopped and $\hat{x}_k^*$ is declared a local optimum. This test can be viewed as a special case of comparison with a standard [11]. A code sketch of the most-promising-area construction is given at the end of this section.

C. Clean-up Phase
Once the COMPASS local search has exhausted all the niches found by the NGA global search, ISC enters the clean-up phase with a set $L$ of local optimal solutions. The objective of this step is to compare these local optima and select the best of them, or one within $\delta_G$ of the best, with high confidence, and also to state a $\pm\delta_G$ confidence interval on the performance of the selected solution. The user can specify the parameters $\delta_G$ and $\alpha_G$. The stages of this phase are as follows [12]:
 Screening: using whatever data are already available on the solutions in $L$, discard any solutions that can be shown to be statistically inferior to others. Let $L^C$ be the set of surviving solutions.
 Selection: acquire enough additional replications of the solutions in $L^C$ to select the best. Let $x_B$ be the selected solution.
 Estimation: with confidence level $1 - \alpha_G$, $x_B$ is the best solution, or within $\delta_G$ of the best.
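As referenced above, the following minimal sketch illustrates the most-promising-area geometry of COMPASS: membership in $C_k$ reduces to distance comparisons against the visited solutions, and new candidates can be rejection-sampled around the current best. Names are illustrative, feasibility checks against $\Theta$ are omitted, and this is a sketch rather than the authors' implementation.

```python
import numpy as np

def in_most_promising_area(x, x_best, visited):
    """True if x is at least as close to the current best x_best as to every
    other visited solution, i.e. x belongs to C_k."""
    x, x_best = np.asarray(x), np.asarray(x_best)
    d_best = np.linalg.norm(x - x_best)
    return all(d_best <= np.linalg.norm(x - np.asarray(y))
               for y in visited if not np.array_equal(y, x_best))

def sample_from_ck(x_best, visited, rng, n_samples, step=2):
    """Naive rejection sampling of integer points around x_best lying in C_k."""
    samples = []
    while len(samples) < n_samples:
        cand = np.asarray(x_best) + rng.integers(-step, step + 1,
                                                 size=len(x_best))
        if in_most_promising_area(cand, x_best, visited):
            samples.append(cand)
    return samples
```

For example, with `rng = np.random.default_rng(0)`, the call `sample_from_ck(x_best, visited, rng, 5)` returns five integer points inside the current most promising area.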

III. ISC TO POWER SYSTEM RELIABILITY

A. Framework for Reliability Analysis
A reliability framework has been developed to accurately model power system failures in terms of energy not supplied (ENS). A sequential Monte Carlo simulates, hour by hour, the continuous operation of a given power system over a 24-hour period. A probabilistic approach to transmission line failures is adopted, modelling the inherent failure rates λ (occurrences per hour); failures can occur during any hour of the day. Deterministic recovery times r (hours) are also modelled. Thus, system operation responds to the set of transmission lines available in the present hour, producing a sequential behavior throughout the day governed by these two parameters. ENS is observed on buses when transmission lines are unavailable and the remaining infrastructure capacity does not suffice (generating unit constraints and transmission line congestion are also modelled).

To obtain an accurate measure of the ENS per bus and hour, the operation of the power system must be modelled in detail. Consequently, the sequential Monte Carlo (SMC) is built upon two modules: a unit commitment (UC) and an hourly DC-OPF calculation. First, the mixed-integer linear programming UC formulation in [13] is extended to consider the transmission system; ramp limits, minimum and maximum power outputs, minimum on/off times for generation units, and line capacities are fully considered to determine the available units for each hour of the 24-hour period. Second, a DC-OPF determines the hourly production of each generating unit in order to satisfy the demand at minimum cost. The objective is to minimize the total production cost, including the cost of energy not supplied (ENS) at each bus. Thus, if an outage occurs, the DC-OPF provides a solution that minimizes this cost function, which implicitly minimizes unsupplied demand. The model considers a multi-bus network and is restricted by 1) generation constraints, 2) line capacities, 3) the UC array, 4) each generating unit's production state in the previous hour, and 5) the set of transmission lines available in the previous hour.

B. Sequential Monte Carlo Simulations
Each SMC simulation starts by receiving a set of available lines. This set is then updated according to the transmission line failure rates and recovery times: for every line in the system, a randomly generated number is compared with the line's failure rate, and the time the line has remained unrepaired is checked. The result is a new set of online/offline transmission lines. A DC-OPF is then executed for the present hour and topology. These steps are performed sequentially for 24 consecutive hours, completing one SMC simulation. For each simulation i, the hourly ENS on each bus is added up and denoted ENSi (total ENS of simulation i). Finally, the output is the expected ENS (EENS) per day, calculated as the average of ENSi over the number of simulations. A minimal code sketch of this loop is given below.
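In the sketch that follows, `solve_dc_opf` is a placeholder for the hourly network-constrained dispatch (the UC/DC-OPF modules described above) returning the total ENS of the given hour; the failure model follows the per-hour failure rate λ and the deterministic recovery time r.

```python
import random

def smc_replication(lines, fail_rate, recovery, horizon, solve_dc_opf, rng):
    """One sequential Monte Carlo day: sample line failures, track repairs,
    and accumulate hourly energy not supplied (ENS)."""
    down_until = {ln: 0 for ln in lines}   # hour at which each line returns
    ens = 0.0
    for hour in range(horizon):
        for ln in lines:
            if down_until[ln] <= hour and rng.random() < fail_rate:
                down_until[ln] = hour + recovery   # line fails, out for r hours
        available = [ln for ln in lines if down_until[ln] <= hour]
        ens += solve_dc_opf(hour, available)       # hourly ENS over all buses
    return ens

def eens(lines, fail_rate, recovery, horizon, n_sims, solve_dc_opf, seed=0):
    """Expected ENS per day: average of ENS_i over N simulations."""
    rng = random.Random(seed)
    total = sum(smc_replication(lines, fail_rate, recovery, horizon,
                                solve_dc_opf, rng) for _ in range(n_sims))
    return total / n_sims
```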

IV. ISC APPLICATION

An example is presented on the IEEE 14-bus test system (Fig. 3), completed with the data shown in Table I. In addition, the cost of unsupplied load is assumed to be 10,000 $/MWh at each bus, and existing transmission lines have a maximum capacity of 100 MW. New lines have a reactance of 0.05 p.u. and a nominal capacity of 200 MW. Generating units U4 and U5 have the same characteristics as U3. Four parameters need to be set for the SMC: 1) line failure rate λ (occurrences per hour), 2) recovery time r (hours), 3) number of Monte Carlo simulations N, and 4) time horizon of each simulation T (hours). Values of 0.05, 3 and 24 are given for λ, r and T, respectively. With these parameters, the base case (Fig. 3) has an EENS equal to 2034 MWh.
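Under the assumed interface of the SMC sketch in Section III (placeholder names `existing_lines`, `N`, and `solve_dc_opf`), the baseline EENS of the unmodified system would be estimated along these lines:

```python
# lam = 0.05 failures/hour, r = 3 hours, T = 24 hours
baseline_eens = eens(lines=existing_lines, fail_rate=0.05, recovery=3,
                     horizon=24, n_sims=N, solve_dc_opf=solve_dc_opf)
# expected result for the unmodified 14-bus system: roughly 2034 MWh
```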

Figure 3. IEEE 14-bus system

TABLE I. IEEE 14-BUS SYSTEM: GENERATING UNITS DATA

Unit   Min/Max Power Output [MW]   Ramp Up/Down [MW/h]   Min. Downtime/Uptime [h]   Startup/Shutdown Cost [$]   Prod. Cost [$/MWh]
U1     100/332                     100                   2/4                        5/10                        100
U2     10/140                      100                   1/1                        6/11                        110
U3     10/100                      50                    1/1                        6/11                        110

ISC is applied to determine the optimal set of new lines that increases power system reliability given a certain budget of lines to construct. The objective function is set to minimize the expected energy not supplied:

$$ \min_{x \in \Theta} \left\{ G(x) = E_{\xi}[F(x, \xi)] \right\} \qquad (4) $$

$$ \Theta = \Big\{ x \in \mathbb{Z}^d : \sum_{a \in A} x_a \le b, \;\; 0 \le x_i \le 1, \; i = 1, \dots, d \Big\} \qquad (5) $$

Thus, one specific system simulation of configuration x returns the energy not supplied for a scenario realization ξ (i.e., a daily simulation with a certain realization of failures in the SMC framework). The decision of installing candidate lines is represented in ISC by a binary vector $x \in \{0,1\}^{|A|}$ with $A = (V \times V) \setminus E$, where $V$ is the set of existing buses and $E$ is the set of existing lines or branches. In addition, constraint (5) represents the budget constraint. In this case, b = 1 and b = 3 are used (every candidate line is assumed to have a unit investment cost, although this can be changed), so the best single line to add (b = 1) and the best set of three or fewer lines (b = 3) are determined. For the simulations, a Samsung NP870Z5G-X01CL laptop with an Intel i7 2.4 GHz processor and 8 GB of RAM is used. To assess the quality of the proposed methodology (with b = 1), the case study is solved using two methods: (i) ISC plus SMCs and (ii) full enumeration (full enumeration is tractable when b = 1). The case study assumes deterministic recovery times of two periods after the failure.
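For reference, the full-enumeration baseline used for comparison when b = 1 can be expressed against the same simulation oracle. Here `eens_with_lines` is a hypothetical wrapper that adds the candidate line(s) to the network and returns the estimated EENS over N SMC replications:

```python
from itertools import combinations

def full_enumeration_b1(buses, existing, eens_with_lines):
    """Evaluate every single candidate line (plus the no-build option)
    and return the set with the lowest estimated EENS.
    Assumes line pairs (i, j) are stored with i < j."""
    candidates = [set()] + [{(i, j)} for i, j in combinations(buses, 2)
                            if (i, j) not in existing]
    return min(candidates, key=lambda new: eens_with_lines(new))
```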

TABLE II. NGA RESULTS

Sol.                     Value [MWh]   Eval. #
[1,3] [2,8]              1603          60
[1,9] [3,14]             1716          20
[3,14] [1,9] [11,12]     1833          60
[8,13] [10,12] [11,14]   1744          60
[7,12] [8,13]            1789          20
[2,9] [2,13] [6,8]       1590          40
[1,3] [2,9] [8,12]       1530          60
[1,3] [5,8] [10,12]      1738          20
[4,8] [2,14]             1786          20

TABLE III. COMPASS RESULTS

Sol.                     Value [MWh]   Eval. #
[1,3] [1,9]              1561          1008
[1,3] [1,9]              1543          757
[1,3] [1,9] [6,10]       1543          346
[4,8] [10,12] [11,14]    1645          107
[6,8] [6,14] [10,13]     1653          764
[6,10] [6,14]            1642          166
[1,3] [1,9] [11,14]      1591          310
[4,8] [10,12] [1,3]      1599          90
[4,8] [12,14]            1734          480

As a result, the lines presented in Table IV are the optimal solutions obtained. Three potential solutions are shown because, statistically speaking, a single best solution cannot be identified (the 1 − α_G = 90% confidence requirement is binding). The time to determine the best solution (b = 3) was 141 min. It is worth noticing that ISC does not expend significant simulation effort on bad solutions, thanks to the sample allocation rule (SAR, described in Section II.B). At each COMPASS iteration, the algorithm gives new evaluations only to solutions in the set $\{\hat{x}_k^*\} \cup A_k \cup S_k(C_{k-1})$, where $\hat{x}_k^*$ is the current optimum, $A_k$ is the set of solutions that define active constraints on $C_{k-1}$, and $S_k$ is the set of newly sampled solutions from $C_{k-1}$. As convergence is approached, this allows the procedure to stop giving evaluations to solutions that are far from the most promising area, since the solutions in $S_k$ are necessarily closer to $\hat{x}_k^*$ than all the solutions in $A_k$.

TABLE IV. ISC RESULTS (b = 3)

Solution              Value [MWh]   # Evaluations   Coeff. of Var.
[1,3] [1,9]           1553.72       1765            0.242
[1,3] [1,9] [6,10]    1555.71       1325            0.250
[1,3] [1,9] [11,14]   1574.44       898             0.248

For b = 1, the number of possible integer solutions is 72 (|A| + 1: the 71 feasible candidate lines plus the option of adding no line). In this case, a full enumeration approach (which assesses every feasible solution) finds the best solution, a new line between buses 1 and 9, taking approximately 143 minutes of simulation time (2,000 evaluations per solution). The exact same solution is found by ISC in only 12 minutes. The lower simulation time is due to the lower number of simulations performed: full enumeration requires 144,000 SMCs, whereas ISC requires only 11,388.

Finally, to show the application of the ISC algorithm and to assess its performance in a more challenging case, the developed methodology was run with a budget of 3 (b = 3), for which the number of feasible solutions is $\sum_{k=0}^{3}\binom{71}{k} = 59{,}712$ (all possible combinations of zero, one, two or three lines). First, following the steps in Fig. 1, the NGA stage is executed, obtaining the results shown in Table II, where each row represents the best solution of a particular niche. As an example, the first row indicates that the lines between buses 1 and 3, [1,3], and between buses 2 and 8, [2,8], are the best for that niche. Second, the COMPASS algorithm is executed and a new set of solutions is found (Table III). These solutions evolve from the NGA stage but are not necessarily the same, because the COMPASS algorithm can escape the local structure of the niches. The number of evaluations differs among the solutions in Table III because niches require different simulation effort depending on the performance of the neighbors at distance one (recall the hypothesis test (3)). In this COMPASS stage, solution [1,3] [1,9] was reached in the convergence of two niches, and in both cases the simulation effort needed was higher than in most of the other niches (Table III). Lastly, in the clean-up phase, the first two rows of Table III were combined.

V. CONCLUSIONS

We propose the utilization of an optimization via simulation approach as the right compromise between the accuracy of the power system operation model and the optimality of the procedure used to find the best solution of a planning problem. To demonstrate the advantages of this methodology, the problem of finding an optimal portfolio of transmission lines to maximize power system reliability was solved on the IEEE 14-bus system, finding a good solution (i.e., the same optimal solution as the full enumeration method) within a reasonable time.

REFERENCES
[1] M. Panteli and P. Mancarella, "Modeling and evaluating the resilience of critical electrical power infrastructure to extreme weather events," IEEE Syst. J., pp. 1–10, 2015.
[2] G. Latorre, R. Darío Cruz, J. M. Areiza, and A. Villegas, "Classification of publications and models on transmission expansion planning," IEEE Trans. Power Syst., vol. 18, no. 2, pp. 938–946, 2003.
[3] S. de la Torre, A. J. Conejo, and J. Contreras, "Transmission expansion planning in electricity markets," IEEE Trans. Power Syst., vol. 23, no. 1, pp. 238–248, 2008.
[4] E. L. Da Silva, H. A. Gil, and J. M. Areiza, "Transmission network expansion planning under an improved genetic algorithm," IEEE Trans. Power Syst., vol. 15, no. 3, pp. 1168–1175, 2000.
[5] E. L. Da Silva, J. M. Areiza, G. C. De Oliveira, and S. Binato, "Transmission network expansion planning under a tabu search approach," IEEE Trans. Power Syst., vol. 16, no. 1, pp. 62–68, 2001.
[6] L. J. Hong and B. L. Nelson, "A brief introduction to optimization via simulation," in Proc. Winter Simul. Conf., pp. 75–85, 2009.
[7] D. Yan and H. Mukai, "Stochastic discrete optimization," SIAM J. Control Optim., vol. 30, no. 3, pp. 594–612, 1992.
[8] L. J. Hong and B. L. Nelson, "Discrete optimization via simulation using COMPASS," Oper. Res., vol. 54, no. 1, pp. 115–129, 2006.
[9] J. Xu, B. L. Nelson, and L. J. Hong, "Industrial strength COMPASS," ACM Trans. Model. Comput. Simul., vol. 20, no. 1, pp. 1–29, 2010.
[10] R. Allan and R. Billinton, "Probabilistic assessment of power systems," Proc. IEEE, vol. 88, no. 2, pp. 140–162, 2000.
[11] S.-H. Kim, "Comparison with a standard via fully sequential procedures," ACM Trans. Model. Comput. Simul., vol. 15, no. 2, pp. 155–174, 2005.
[12] J. Boesel, B. L. Nelson, and S.-H. Kim, "Using ranking and selection to 'clean up' after simulation optimization," Oper. Res., vol. 51, no. 5, pp. 814–825, 2003.
[13] M. Carrión and J. M. Arroyo, "A computationally efficient mixed-integer linear formulation for the thermal unit commitment problem," IEEE Trans. Power Syst., vol. 21, no. 3, pp. 1371–1378, 2006.
