
2006 IEEE PES Transmission and Distribution Conference and Exposition Latin America, Venezuela

Towards More Accurate Unit Commitment Performance Comparisons

Frank Leanez and Rodrigo Palma-Behnke, Senior Member, IEEE

Abstract-- The unit commitment (UC) problem is still an open research field, given the economic impact of its multiple applications in present-day operation planning. It has been solved by a wide variety of optimization and heuristic methods. However, qualitative comparisons have traditionally been limited by two factors: computer platform dependency and the use of custom test models instead of standardized benchmark problems. This paper proposes a methodology for comparing UC simulation results based on instance classification and computer performance benchmark indices. It also establishes guidelines towards the standardization of UC benchmark models. A complete numerical example using the proposed methodology is presented, and performance comparison results from various solution methodologies are discussed. The intention is to prepare the basis for future objective benchmark comparisons in the UC research field.

Index Terms-- Generation scheduling, Lagrangian relaxation, power system operation, unit commitment.

I. INTRODUCTION

Several methods have been proposed to solve the large-scale, combinatorial, mixed-integer UC optimization problem. Primal and dual methods, along with some heuristics, have seen wide application in solving the UC problem, while meta-heuristics are among the most recent approaches. There are two extensive reviews of UC solution methodologies: the first [1] is an overview of the classical methodologies available up to 1994, and the second [2] is an exhaustive compilation of 151 articles from the past 35 years. In these two papers, citations for almost any existing method can be found. This variety of methods and publications is usually presented with numerical results and execution times. However, platform dependency is usually a barrier to comparing performance among methods, while the use of different test cases is a barrier to solution quality analysis. Ideally, optimal comparisons would be made by solving the same numerical example with the various published methodologies on the same computer platform. Given the complexity of general optimization algorithms and meta-heuristics, and the wide variety of their applications to the UC problem, such a comparison seems to exceed standard research activities. This paper proposes a method that facilitates these comparisons without the need to simulate all methodologies on the same computer.

This paper has been partially supported by grant Fondecyt #1020801 and the Facultad de Ciencias Físicas y Matemáticas of Universidad de Chile.

1-4244-0288-3/06/$20.00 ©2006 IEEE

Reviewing some of the existing methodologies: priority lists, dynamic programming and Lagrangian relaxation are, according to the classic reference [3], the most talked-about methods. They all have enhanced versions that promise better computing times, convergence characteristics or solution quality (e.g. fast priority lists [4] and 15 different DP proposals for search-space reduction [1]).

Lagrangian relaxation (LR) has been the target of several research efforts over the past 15 years. It certainly suffers from the drawback of parameter tuning, irregular convergence (when a small number of units is committed) and infeasible solutions (when units with similar costs exist). The Lagrange multiplier update has been identified as the most critical part of the LR process, and several updating methods have been evaluated: adaptive subgradient [5], penalty bundles and interior point [6], and genetic algorithms [7]. Reference [8] proposes solving the individual unit commitments with a genetic algorithm while the subgradient method takes care of the Lagrange multiplier updating process. The augmented Lagrangian relaxation [9,10] was proposed to handle the duality gap between primal and dual solutions more efficiently; however, it destroys the characteristic separability property of LR. To restore it, the quadratic term is linearized around the solution of the previous iteration using a decomposition and coordination technique [9]. Regarding infeasible solutions, the unit decommitment method and three-phase algorithms [11,12] seem to provide mathematical solutions, but not necessarily in a fair way [13]. An adaptive UC that gathers several enhanced propositions and heuristics can be found in [14]. Additionally, LR has been used to solve UC problems including ramping constraints [15], transmission constraints [16,17,18], security and voltage constraints [15,17,18] and even OPF constraints [18,19].

Primal methods, like Branch and Bound (B&B) or linear programming, are among the earliest approaches for solving the UC problem [1]. According to [20], dual methods have proven to be faster but yield wider accuracy bounds than primal B&B or LP. Logic programming combined with B&B optimization [21] can be used to build an efficient UC solution method known as constraint logic programming.

The complex optimization problem that UC represents seems to be very attractive for non-exact mathematical models and meta-heuristics. There are applications solved with genetic algorithms [22-24], evolutionary programming [25], fuzzy logic approaches [26,27], a unit classification heuristic combined with GA [28], artificial neural networks (ANN) [29,30], simulated annealing [31-33] and Tabu search [34,35].


They promise higher probabilities of obtaining near-optimal solutions. However, in practical applications, the time consumed in obtaining these quality solutions is usually reported as too expensive, especially for large-scale systems. There are even more methods for solving the UC [1,2]: risk analysis, decision analysis, expert systems, separable programming, network flow programming, ant colony search algorithms, etc.

The paper is organized as follows: Section II presents standard UC problem formulations and a discussion of the relevant UC performance factors. In Section III, the proposed methodology is described in detail. Section IV presents a numerical example of the performance comparison of various methodologies using similar UC models. Finally, Section V briefly summarizes conclusions and future developments.

II. PROBLEM FORMULATION

A. UC Optimization Problem

The Basic Unit Commitment Formulation (BUCF) includes one of the following objective functions:
- Cost-based schedule: minimize [Operational Cost + Start-up Cost + Shut-down Cost].
- Profit-based schedule: maximize Income - [Operational Cost + Start-up Cost + Shut-down Cost].
- Social benefit maximization: maximize Demand bids - [Operational Cost + Start-up Cost + Shut-down Cost] (or Generator bids).
The fundamental constraints for a BUCF are [3]:
- Demand balance constraints,
- Unit capability limits,
- Unit status,
- Minimum up-time and down-time,
- Ramp limits.
(A compact mathematical sketch of the cost-based formulation is given at the end of this subsection.) The periodic form of the energy demand sets the UC time frame between 24 hrs and 168 hrs (one day to one week), with 15-minute to one-hour time steps. The constraints above are generally included within common algorithm formulations; however, some algorithms, like LR, need important modifications to include ramping constraints. Chapter 8 in [15] discusses this issue. There are also higher-order constraints not covered by most of the methodologies, among them:
- Reserve constraints [15],
- Transmission capacity limits [16],
- N-1 security criteria [17,18],
- Frequency regulation [36],
- Voltage constraints [15],
- OPF constraints [18,19].
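As an illustration only (the notation below is not taken from the paper), the cost-based BUCF sketched above can be written compactly as:

minimize  Σ_t Σ_i [ C_i(p_{i,t}) u_{i,t} + SU_i y_{i,t} + SD_i z_{i,t} ]

subject to, for every period t:
Σ_i p_{i,t} = D_t                                  (demand balance)
P_i^min u_{i,t} ≤ p_{i,t} ≤ P_i^max u_{i,t}        (capability limits)
u_{i,t} ∈ {0,1}, with minimum up/down-time and ramp constraints linking u_{i,t} and p_{i,t} across consecutive periods,

where u_{i,t} is the on/off status of unit i in period t, p_{i,t} its dispatched power, y_{i,t} and z_{i,t} its start-up and shut-down indicators, C_i(·) its operating cost function, SU_i and SD_i its start-up and shut-down costs, and D_t the system demand. The profit- and social-benefit-based variants replace or complement the cost terms with income or bid terms, as listed above.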

B. Relevant Performance Factors

Several UC-related papers consider the number of generating units as the exclusive relevant factor for scalability analysis. However, it is important to emphasize that many other factors are influential. Specifically, some of the most evident (and also relevant) for UC problems are:
- Number of available units (as mentioned before),
- Schedule time frame,
- System detail level,
- Operating constraints of individual system elements,
- Stopping (convergence) criteria,
- Contingency inclusions.
Each factor influences the total computing time with an independent, monotonically increasing behavior. Thus, UC scalability analysis should follow a ceteris paribus criterion (studying the influence of one factor at a time on the objective function value). Consequently, benchmarking UC methodologies must be treated like any other sensitivity analysis: all scalability variables except one have to be fixed, as well as the computer platform, and the total computing time and solution quality (performance) must be registered for each methodology. However, comparisons along every mentioned dimension are not common and are seldom found in the respective bibliography. Moreover, based on the bibliographical survey presented above, historical computational experiments with UC fall into three main categories: basic UC formulation with the number of units as the main scalability parameter, basic UC formulation with the time frame as the main scalability parameter, and heavily constrained UC problems. Figure 1 shows a 3D representation of these common UC problem dimensions.

Fig. 1. Dimension increment directions in UC problems.

Nevertheless, there are two additional, obvious external factors influencing UC performance which, in general, further complicate standard result comparisons:
- Computer specifications (CPU, RAM, HDD),
- Solving methodology.
Execution times are "computer specific", and results obtained on different computers are therefore traditionally difficult to compare. However, it is possible to set margins of error using adequate computer benchmark parameters [37]. These parameters can be obtained from international organizations devoted to measuring and comparing computer platform performance: the Standard Performance Evaluation Corporation (SPEC), the Business Applications Performance Council (BAPCo), the EDN Embedded Microprocessor Benchmark Consortium (EEMBC) and the Transaction Processing Performance Council (TPC).


In general, different solving methodologies (suboptimal search routines and meta-heuristics) are certainly difficult to compare. Many attempts can be found in the literature, which usually suggest some of the following comparison indices [38]:
- Computational time: for the average solution, the best solution, a fixed number of iterations, etc.
- Solution quality: closeness to the global optimum, relaxed bound gap.
- Others: robustness, flexibility, etc.
Of course, these results can only be compared if they refer to the same, or at least a similar, numerical problem.

C. Acronym List

The following acronyms and abbreviations are used in the following sections:
- ALR: Augmented Lagrangian Relaxation
- APP: Auxiliary Problem Principle
- BCD: Block Coordinate Descent
- BCGA: Binary-Coded Genetic Algorithm
- B&B: Branch and Bound
- CCA: Cooperative Co-evolutionary Algorithm
- CLP: Constrained Logic Programming
- DP: Dynamic Programming
- DPLR: Dynamic Programming with ELR
- ELR: Enhanced Lagrangian Relaxation
- EP: Evolutionary Programming
- EPL: Enhanced Priority List
- GAUC: Genetic Algorithm with Unit Classification
- ICGA: Integer-Coded Genetic Algorithm
- LR: Lagrangian Relaxation
- LRGA: Lagrangian Relaxation with Genetic Algorithm
- SA: Simulated Annealing

III. UC PERFORMANCE COMPARISON METHODOLOGY

A. General Methodology Description

Based on the previous analysis, the following guidelines are proposed for comparing and reporting results concerning UC methodologies:
1) Define the computer platform to be used and extract or calculate its performance index and error.
2) Define a BUCF to be studied, based on the general structure presented in Section II.A.
3) Define the number of generating units, the time frame and the time steps for the BUCF.
4) Select the type of unit and related data randomly from [44]. Define the system demand for each time step accordingly. Details about this procedure can be found in [43].
5) Execute the UC optimization and fill in a report table with the following results: average solution time, time for the best solution, best solution found, average solution, distance to the global optimum (if available), relative gap between the best feasible and relaxed solutions (if applicable).

6) Select only one of the following scalability parameters with respect to the predefined BUCF:
- Increase in the number of generating units: bigger problems are constructed from an increasing number of available units, which should also be randomly selected as defined in Step 4.
- Changes in the time frame: only a few changes in the total time frame are recommended. Common evaluation periods and divisions can be used for simulations, i.e. 24-168 hr or 1-6 month schedules; others are seldom used in real-life scheduling.
- Higher-order constraints (if considered): different reserve requirements, transmission grid effects, voltage limits, security and contingencies. Referring to standard synthetic systems, i.e. the IEEE 14-, 118- and 300-busbar standard systems, is encouraged for analyzing transmission network effects on the overall UC methodology performance.
7) Repeat Steps 5 and 6 until the last scalability parameter selection is reached.
8) Use the calculated computer performance indexes to correct the execution times in the result table to a reference computer platform. This final table represents the performance map of the methodology, giving an overview of its behavior for different sets of scalability parameters.
9) Identify the computer platform used in other reported results and use the corresponding computer performance indexes to calculate the equivalent execution time ranges. A complete numerical example of this procedure is fully developed in the next section.
10) Carry out a statistical performance comparison study including as many reported UC outputs as possible, for example: weighted execution times, objective function values, constraint inclusion effects, relative savings/expenses, etc.
The proposed methodology is useful both for historical comparisons with previously reported research results and for future benchmarking performance comparison maps. A schematic sketch of Steps 3 to 8 is given below.
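The following minimal sketch (not the authors' code; the solver run_uc, the unit library and the demand shape are hypothetical placeholders) illustrates how Steps 3 to 8 can be organized, with unit data ideally drawn from the IEEE RTS-96 [44]:

import random
import time

# Hypothetical stand-ins: a tiny unit library and a normalized 24-h demand shape.
UNIT_LIBRARY = [{"pmax": 100.0}, {"pmax": 250.0}, {"pmax": 400.0}]
DEMAND_SHAPE = [0.6 + 0.3 * (8 <= h < 20) for h in range(24)]

def run_uc(units, demand):
    """Placeholder for the UC algorithm under test; returns (cost, feasible)."""
    return sum(u["pmax"] for u in units) * sum(demand), True

def benchmark(unit_counts, perf_index):
    """Steps 3-8: build cases of growing size, solve them, and record weighted times."""
    report = []
    for n in unit_counts:                                           # Step 6: one scalability axis
        units = [random.choice(UNIT_LIBRARY) for _ in range(n)]     # Step 4: random unit selection
        demand = [0.8 * sum(u["pmax"] for u in units) * f for f in DEMAND_SHAPE]
        start = time.perf_counter()
        cost, feasible = run_uc(units, demand)                      # Step 5: solve and record
        elapsed = time.perf_counter() - start
        report.append({"units": n, "cost": cost, "feasible": feasible,
                       "time_s": elapsed,
                       "weighted_time_s": elapsed * perf_index})    # Step 8: platform correction
    return report

print(benchmark([10, 20, 40], perf_index=0.9372))  # e.g. the index of the P4-1.6 GHz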

B. Computer Platform Performance Index and Error

This section presents a suggested procedure for estimating relative computer performance indexes. The reviewed computer benchmarks are:
- SPECint®95, SPECfp®95, SPECint®2000 and SPECfp®2000 [37],
- Linpack®, Towards Peak Performance (TPP) and Theoretical Peak Performance [40],
- Dhrystone [41] and Whetstone [42] (obtained using SiSoftware Sandra).
Details of the benchmark characteristics and the scope of the results can be found in [37], [40], [41] and [42]. Table I shows the scores obtained by the computers according to the SPECint®95, SPECfp®95, SPECint®2000 and SPECfp®2000 benchmarks [37]. The computers are limited to the reduced set that will be used in the next section to compare different performance results.

TABLE I
RANKING USING SPECINT® AND SPECFP® BENCHMARKS [37]
(Columns two to five: SPECint95, SPECfp95, SPECint2000 and SPECfp2000 scores; columns six and seven: the derived per-unit int and fp ranks. Rows: 486 DX2/66, P-60 MHz, HP 9000/720, HP 9000/735, P-200 MHz, Sun Ultra 2 2200, PII-266 MHz, HP C160, P4-1.5 GHz, P4-1.6 GHz, Intel Xpress 133 MHz and Dell Precision Workstation 420 (733 MHz).)

TABLE II
RANKING USING LINPACK® BENCHMARK [39]
(Columns two to four: Mflops for Linpack N=100, TPP N=1000 and theoretical peak performance; columns five to seven: the corresponding per-unit ranks.)


According to SPEC's "Fair Use" policy, also available in [37], there is no linear equivalence between CPU92, CPU95 and CPU2000 benchmark results. However, there are no other available results besides the ones shown in Table I. In order to accomplish our objective, the Dell Precision Workstation 420 and the Intel Xpress 133 MHz will be used as references for translating both 92 and 95 results to 2000 CPU results. Thus, the following relationship is used:

int(i) = int95(i) × int2000(ref) / ( int95(ref) × max_j{int2000(j)} )          (1)

where i is the row index, j is the column index and (ref) denotes the computer used as reference. Equation (1) was used to calculate columns 6 and 7 (replacing int with fp as needed). The results shown in Table II correspond to the benchmarks presented in [39]. Columns two to four show the average Mflops obtained from these three different benchmarks. In this case, the per-unit scaling factor was simply computed as the actual result divided by the highest one; these results are presented in columns five to seven. Dhrystone and Whetstone synthetic benchmark results are used as a final reference in order to further widen the total accuracy bounds. The results in Table III were obtained using SiSoftware Sandra 2004.2.9.104.
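A small helper (not from the paper; the numbers below are hypothetical, not Table I entries) showing how the translation in (1) can be computed:

def rank_from_spec95(spec95_i, spec95_ref, spec2000_ref, spec2000_max):
    """Per-unit rank of machine i according to Eq. (1); valid for int or fp scores."""
    return spec95_i * spec2000_ref / (spec95_ref * spec2000_max)

# Hypothetical values: a machine with SPECint95 = 8.0, a reference machine with
# SPECint95 = 35.0 and SPECint2000 = 600.0, and a table maximum SPECint2000 of 650.0.
print(rank_from_spec95(8.0, 35.0, 600.0, 650.0))  # ~0.21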

TABLE III
RANKING USING SYNTHETIC BENCHMARKS [41,42]
(Columns: Dhrystone ALU (MIPS) and Whetstone (MFLOPS) results with their per-unit ranks. Rows: 486 DX2/66, P-60 MHz, P-200 MHz, PII-266 MHz, P4-1.5 GHz and P4-1.6 GHz; the P4-1.6 GHz defines the rank-1.0 reference.)

Finally, the correction factor and its standard deviation were computed using the rankings from Tables I, II and III. The following table shows the results.

TABLE IV
FINAL PERFORMANCE INDEX AND ERROR

Name               Proc #  Clock (MHz)  Clock rel.  C.Factor (pu)  Error (pu)  Error (%)
486 DX2/66         1       66           0.041       0.0089         0.0050      56.0
P-60 MHz           1       60           0.038       0.0265         0.0159      60.2
HP 9000/720        1       50           0.031       0.0238         0.0067      28.0
HP 9000/735        1       99           0.062       0.0649         0.0145      22.4
P-200 MHz          1       200          0.125       0.1133         0.0621      54.8
Sun Ultra 2 2200   2       250          0.156       0.1322         0.0364      27.5
PII-266 MHz        1       266          0.166       0.1919         0.0755      39.4
HP C160            1       160          0.1         0.2117         0.0554      26.2
P4-1.5 GHz         1       1500         0.938       0.7862         0.1933      24.6
P4-1.6 GHz         1       1600         1.0         0.9372         0.1394      14.9

Column 2 in Table IV shows the number of processors in the PC or workstation, and column 3 gives the processor clock speed. The "Clock rel." column is the ratio between the actual processor speed and that of the fastest machine (1600 MHz). The average correction factor and its error, in per unit and in percent, fill the last three columns. This scalar number (or ranking) will be used as a unique performance index: computational times (generally in seconds) are weighted by multiplying them by this scalar factor between zero and one, thereby "artificially translating" results from older computers to the fastest one and creating a performance index for each system.
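A minimal sketch (not the authors' code) of this weighting step, using correction factors and errors taken from Table IV:

# Correction factor and absolute error (per unit) for some platforms of Table IV.
PLATFORM_INDEX = {
    "P-60 MHz": (0.0265, 0.0159),
    "Sun Ultra 2 2200": (0.1322, 0.0364),
    "P4-1.6 GHz": (0.9372, 0.1394),
}

def translate_time(seconds, platform):
    """Weight a reported execution time towards the reference (fastest) platform.

    Returns the weighted time and the lower/upper bounds implied by the
    benchmark-derived error of the platform's correction factor."""
    factor, error = PLATFORM_INDEX[platform]
    return seconds * factor, seconds * (factor - error), seconds * (factor + error)

# Example: a run reported as 120 s on a P-60 MHz machine.
weighted, low, high = translate_time(120.0, "P-60 MHz")
print(f"weighted time: {weighted:.2f} s (range {low:.2f}-{high:.2f} s)")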


IV. METHODOLOGY APPLICATION EXAMPLE FOR HISTORICAL COMPARISONS

In this section, a numerical example using the proposed methodology from Section III is presented. It is important to notice that this is a historical UC result comparison; in other words, no simulations were performed in the present work, and only results from the reviewed bibliography were extracted and compared. Given the difficulty of finding a specific common BUCF across a wide range of reported results in the literature (there is no standard test case in this field), the selection of the application example was reduced to the following shared characteristics:
- A BUCF composed of 10 generating units and a 24-hr schedule.
- Scalability factor: increasing the number of units in the range 10 to 100.

Fig. 2. Execution times and errors for fast methodologies in 24hr schedule.

A. Execution Time Comparisons

Tables V and VI show execution times for different numbers of available units over a 24-hr period. The execution times, in seconds, are already multiplied by the performance index from Table IV.

TABLE V
EXECUTION TIMES - FAST METHODOLOGIES

Units  ALR-APP [9]  ALR-BCD [9]  EPL [4]
       (Sun Ultra 2 2200)        (P4-1.6 GHz)
10     0.30         0.304        0.67
20     1.04         1.044        2.78
30     0.71         0.714        -
40     1.47         1.467        11.15
50     2.58         2.578        -
60     4.81         4.812        21.56
70     8.99         8.989        -
80     12.99        12.995       41.61
90     6.60         6.597        -
100    9.85         9.849        60.45

(The remaining columns of Table V report times for DP [21], LR [21] and CLP [21] on a P-200 MHz machine, for LR [6] on a P-60 MHz machine and for CCA [8] on a 486 DX2/66.)

TABLE VI
EXECUTION TIMES - SLOW METHODOLOGIES

Units  LRGA [7]  BCGA [22]  ICGA [24]  EP [25]  ELR [14]  DPLR [14]  SA [34]
10     4.6       5.3        6.9        21.2     3.1       84.9       7.2
20     10.2      17.4       21.0       72.0     12.6      235.1      17.2
40     19.3      64.1       54.6       249.0    40.9      943.4      52.7
60     21.6      138.8      109.9      479.9    88.8      2514.9     124.4
80     30.2      238.5      165.0      758.8    164.3     6640.7     253.4
100    36.1      397.7      227.3      1295.7   271.2     9777.5     427.5

(Reported platforms for Table VI include a P4-1.5 GHz, a 486 DX2/66, an HP 9000/720, a P4-1.6 GHz, an HP C160 and a PII-266 MHz machine.)

Fig. 2 and Fig. 3 show a graphical representation of the results previously presented in Tables V and VI. The shaded areas represent, for each curve, the maximum and minimum error bounds according to column 6 of Table IV.

Fig. 3. Execution times and errors for slow methodologies in 24hr schedule.

Two things are evident from these figures: first, it is almost impossible to keep the promise of linear time growth as the number of units increases; and second, primal and dual methods show execution times well below those of meta-heuristics. Note also that improving the solutions delivered by LR, as ELR [14] does, considerably increases the execution times. The EPL time results are shown in both figures, so they can be used as a reference for comparing methodologies across the two figures. The measurement of solution quality, or "improvements", is introduced in the next section.

B. Solution Quality

It is also a difficult task to measure how close a solution gets to the global optimum. Among others, the main reasons are: total costs are not always referred to the same numerical example; there is usually poor information about the convergence criteria; and it is almost impossible to find globally optimal results by exhaustive enumeration, even for small test systems [3]. However, relative savings can still be used as a measure of the solution quality achieved by a given methodology with respect to another one. Costs obtained by LR have commonly been used in several references as the point of comparison. As a consequence, the best way to establish a relationship between solution quality and total savings seems to be to use the LR total cost results (objective function) as the reference.

6

Tables VII and VIII present the total savings, in percent (%), obtained by methods other than LR with respect to the LR results.

TABLE VII
TOTAL SAVINGS (%) COMPARED TO LR (PART I)

Units  DP [21]  CLP [21]  ELR [14]  EPL [4]  ICGA [24]
10     0        0         0.327     0.327    -0.102
20     0        0         0.651     0.556    0.302
30     -0.433   0.433     -         -        -
40     -0.752   0.376     0.632     0.531    0.194
50     -        0.328     -         -        -
60     -        0.235     0.901     0.821    0.470
70     -        0.396     -         -        -
80     -        0.161     0.892     0.811    0.598
100    -        -         0.912     0.863    0.467

TABLE VIII
TOTAL SAVINGS (%) COMPARED TO LR (PART II)

Units  BCGA [22]  EP [25]  LRGA [7]  GAUC [28]  DPLR [14]
10     0          0.225    0.181     0.327      0.314
20     0.391      0.457    0.711     0.455      0.227
40     0.292      0.417    0.723     0.389      0.102
60     0.514      0.662    0.677     0.560      0.288
80     0.466      0.609    0.534     0.451      0.301
100    0.527      0.590    0.780     0.544      0.297
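For clarity, the relative saving reported in Tables VII and VIII is presumably computed as

saving_method (%) = 100 × (Cost_LR − Cost_method) / Cost_LR,

so that positive entries indicate a schedule cheaper than the LR one, and negative entries (e.g. DP at 30 and 40 units) a more expensive one.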

Fig. 4, presented next, was built using the data from Tables VII and VIII.

Fig. 4. Cost reduction by different methodologies with respect to LR.

The total savings presented in this section [4,7,14,22,25,28], with the exception of the CLP and DP results, which are both taken from [21], refer to the same numerical example. This means there is a margin of error in the DP and CLP savings of Fig. 4, while for the rest the comparison is direct. All the data needed for this common numerical UC problem can be found in the next section. From Fig. 4, it is noted that no single methodology surpasses 1% in relative savings for the 10 to 100 unit examples. In fact, the solution enhancement with respect to LR seems to remain roughly constant as the dimension increases. Results in [33] show some total costs for scalable 10-100 unit cases, but they are not compared with any existing method; as a consequence, no relative savings could be computed.

C. 38-Unit Detailed Analysis

With the results presented above, it is possible to interpolate average execution times and relative savings for any number of available generating units in the range 1-100. It is therefore interesting to compare, for a specific system dimension, the time increase needed to obtain a better solution quality than LR. The 38-unit example in Fig. 5 illustrates the behavior of execution time versus relative savings (with respect to LR) for that specific number of generators. Values were calculated by interpolating the results of Tables V to VIII (a small interpolation sketch is given at the end of this subsection). There is no loss of generality in this example, since the results of the previous section could be used for comparisons at any other desired dimension. As a final remark, a small 38-unit example was chosen because it is representative of two real-life power systems: it has nearly the dimension of the main generating units of the northern Chilean interconnected system (SING) and of the practical Taiwan electrical system (Taipower) [21].

Fig. 5. Weighted Times and Reduced Costs for a 38 unit example.

Unfortunately, this comparison could not include the methodologies that promise total saving ranges of 0.79%-2.1% and 1.05%-2.15%, as reported in [32] and [34] respectively, due to the absence of hardware or relative cost information. For this 38-unit example, it is confirmed that LR is a very efficient methodology, since increases of as much as 1500% in execution time are needed to obtain cost reductions of less than 1%. That seems to be a high price for optimality; however, 1% can represent millions of dollars for a particular utility, so it might be worth waiting that long. ELR, as proposed in [14], takes over 200 weighted seconds to improve by 0.63% the solution originally obtained by the classic LR presented in [21]; again, improving LR solutions takes considerable time. Results for SA were taken from [21] because the numerical examples in the original publication [31] were obtained on a very old 8-MHz IBM computer, and results in [32] were also discarded because the workstation information was insufficient.
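A minimal interpolation sketch (assuming simple piecewise-linear interpolation, which the paper does not specify) for estimating intermediate points such as the 38-unit case from the tabulated results:

def interp(x, xs, ys):
    """Piecewise-linear interpolation of y(x) from sorted sample points."""
    for (x0, y0), (x1, y1) in zip(zip(xs, ys), zip(xs[1:], ys[1:])):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    raise ValueError("x outside the tabulated range")

# Example: EPL [4] weighted times from Table V at 20 and 40 units.
print(interp(38, [20, 40], [2.78, 11.15]))  # ~10.3 s for 38 units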


V. CONCLUSIONS

Guidelines for comparing and reporting UC results were presented in detail. There are two major motivations for their use in future UC reports: more accurate performance comparisons with historical results, and standards for further UC publications. Computer benchmark indices, in combination with statistical analysis, may also provide powerful comparison tools.

A bibliographical result comparison example using the proposed methodology has been developed in detail, showing the following interesting results: it is almost impossible for any method to keep the promise of execution times growing linearly with the number of units; savings relative to LR for any of the reviewed methods tend to remain constant as the number of units increases; and primal and dual methods show execution times well below those of meta-heuristics, even when considering the wide error bounds introduced by computer benchmark diversity. Obviously, these results are valid in the context of the basic UC formulations and scalability defined in Sections III and IV.

The IEEE RTS-96 [44] should be the reference test system in current and future developments. Nevertheless, the example in [22] cannot be disregarded, because it is very useful for unit-number scalability and, especially, for historical comparisons, since many references have already referred to it in the past. Both are encouraged as benchmarks for BUCF evaluations.

Future work in this area will focus on experimental evaluation of the proposed methodology with different UC solution algorithms. The proposed analysis can also be easily extended to other power system optimization tasks.

VI. ACKNOWLEDGMENT

This paper has been partially supported by grant Fondecyt #1020801 and the Facultad de Ciencias Físicas y Matemáticas of Universidad de Chile.

VII. REFERENCES

[1] G. B. Sheblé and G. N. Fahd, "Unit Commitment Literature Synopsis", IEEE Transactions on Power Systems, Vol. 9, No. 1, pp. 128-135, Feb 1994.
[2] N. Prasad Padhy, "Unit Commitment - A Bibliographical Survey", IEEE Transactions on Power Systems, Vol. 19, No. 2, pp. 1196-1205, May 2004.
[3] A. J. Wood and B. F. Wollenberg, "Power Generation, Operation and Control", Second Edition, New York: Wiley & Sons, 1984-1996, Chapter 5.
[4] T. Senjyu, K. Shimabukuro, K. Uezato and T. Funabashi, "A Fast Technique for Unit Commitment Problem by Extended Priority List", IEEE Transactions on Power Systems, Vol. 18, No. 2, pp. 882-888, May 2003.
[5] F. Zhuang and F. D. Galiana, "Towards a More Rigorous and Practical Unit Commitment by Lagrangian Relaxation", IEEE Transactions on Power Systems, Vol. 3, No. 2, pp. 763-773, May 1988.
[6] M. Madrigal and V. H. Quintana, "An Interior-Point/Cutting-Plane Method to Solve Unit Commitment Problems", IEEE Transactions on Power Systems, Vol. 15, No. 3, pp. 1022-1027, Aug 2000.
[7] C.-P. Cheng, C.-W. Liu and C.-C. Liu, "Unit Commitment by Lagrangian Relaxation and Genetic Algorithms", IEEE Transactions on Power Systems, Vol. 15, No. 2, pp. 707-714, May 2000.
[8] H. Chen and X. Wang, "Cooperative Coevolutionary Algorithm for Unit Commitment", IEEE Transactions on Power Systems, Vol. 17, No. 1, pp. 128-133, Feb 2002.
[9] S. Wang, M. Shahidehpour, D. Kirschen, S. Mokhtari and G. Irisarri, "Short-Term Generation Scheduling with Transmission and Environmental Constraints Using an Augmented Lagrangian Relaxation", IEEE Transactions on Power Systems, Vol. 10, No. 3, pp. 1294-1301, Aug 1995.
[10] C. Beltran and F. J. Heredia, "Unit Commitment by Augmented Lagrangian Relaxation: Testing Two Decomposition Methods", Journal of Optimization Theory and Applications, Vol. 112, No. 2, pp. 295-314, Feb 2002.
[11] C.-A. Li, R. B. Johnson and A. J. Svoboda, "A New Unit Commitment Method", IEEE Transactions on Power Systems, Vol. 12, No. 1, pp. 113-119, Feb 1997.
[12] C.-L. Tseng, S. S. Oren, A. J. Svoboda and R. B. Johnson, "A Unit Decommitment Method in Power System Scheduling", Electrical Power & Energy Systems, Vol. 19, No. 6, pp. 357-365, 1997.
[13] S. Dekrajangpetch, G. B. Sheblé and A. J. Conejo, "Auction Implementation Problems Using Lagrangian Relaxation", IEEE Transactions on Power Systems, Vol. 14, No. 1, pp. 82-88, Feb 1999.
[14] W. Ongsakul and N. Petcharaks, "Unit Commitment by Enhanced Adaptive Lagrangian Relaxation", IEEE Transactions on Power Systems, Vol. 19, No. 1, pp. 620-628, Feb 2004.
[15] M. Shahidehpour, H. Yamin and Z. Li, "Market Operations in Electric Power Systems", New York: John Wiley & Sons, 2002.
[16] C.-L. Tseng, S. S. Oren, C. S. Cheng, C.-A. Li, A. J. Svoboda and R. B. Johnson, "A Transmission-Constrained Unit Commitment Method", Proceedings of the Thirty-First Hawaii International Conference on System Sciences, Vol. 3, pp. 71-80, Jan 1998.
[17] J. Shaw, "A Direct Method for Security-Constrained Unit Commitment", IEEE Transactions on Power Systems, Vol. 10, No. 3, pp. 1329-1342, Aug 1995.
[18] Y. Fu, M. Shahidehpour and Z. Li, "Security-Constrained Unit Commitment with AC Constraints", IEEE Transactions on Power Systems, Vol. 20, No. 3, pp. 1538-1550, Aug 2005.
[19] C. Murillo-Sánchez and R. J. Thomas, "Thermal Unit Commitment Including Optimal AC Power Flow Constraints", Proceedings of the 31st Hawaii International Conference on System Sciences, Jan 1998.
[20] R. Gollmer, A. Möller, M. P. Nowak, W. Römisch and R. Schultz, "Primal and Dual Methods for Unit Commitment in a Hydro-Thermal Power System", Proceedings of the 13th Power Systems Computation Conference, pp. 724-730, Trondheim, 1999.
[21] K.-Y. Huang, H.-T. Yang and C.-L. Huang, "A New Thermal Unit Commitment Approach Using Constraint Logic Programming", IEEE Transactions on Power Systems, Vol. 13, No. 3, pp. 936-945, Aug 1998.
[22] A. Kazarlis, A. G. Bakirtzis and V. Petridis, "A Genetic Algorithm Solution to the Unit Commitment Problem", IEEE Transactions on Power Systems, Vol. 11, pp. 83-92, Feb 1996.
[23] K. S. Swarup and S. Yamashiro, "Unit Commitment Solution Methodology Using Genetic Algorithm", IEEE Transactions on Power Systems, Vol. 17, No. 1, pp. 87-91, Feb 2002.
[24] A. Kazarlis, A. G. Bakirtzis and V. Petridis, "A Solution to the Unit-Commitment Problem Using Integer-Coded Genetic Algorithm", IEEE Transactions on Power Systems, Vol. 19, No. 2, pp. 83-92, May 2004.
[25] K. A. Juste, H. Kita, E. Tanaka and J. Hasegawa, "An Evolutionary Programming Solution to the Unit Commitment Problem", IEEE Transactions on Power Systems, Vol. 14, No. 4, pp. 1452-1459, Nov 1999.
[26] C. Su and Y.-Y. Hsu, "Fuzzy Dynamic Programming: An Application to Unit Commitment", IEEE Transactions on Power Systems, Vol. 6, No. 3, pp. 1231-1237, Aug 1991.
[27] S. Saneifard, N. R. Prasad and H. A. Smolleck, "A Fuzzy Logic Approach to Unit Commitment", IEEE Transactions on Power Systems, Vol. 12, No. 2, pp. 988-995, May 1997.
[28] T. Senjyu, H. Yamashiro, K. Uezato and T. Funabashi, "A Unit Commitment Problem by Using Genetic Algorithm Based on Characteristic Classification", Proc. IEEE/PES Winter Meeting 2002, Vol. 1, pp. 58-63, 2002.
[29] H. Sasaki, M. Watanabe, J. Kubokawa, N. Yorino and R. Yokoyama, "A Solution Method of Unit Commitment by Artificial Neural Networks", IEEE Transactions on Power Systems, Vol. 7, No. 3, pp. 974-981, Aug 1992.
[30] S.-J. Huang and C.-L. Huang, "Application of Genetic-Based Neural Networks to Thermal Unit Commitment", IEEE Transactions on Power Systems, Vol. 12, No. 2, pp. 654-660, May 1997.
[31] F. Zhuang and F. D. Galiana, "Unit Commitment by Simulated Annealing", IEEE Transactions on Power Systems, Vol. 5, No. 1, pp. 311-317, Feb 1990.
[32] A. H. Mantawy, Y. Abdel-Magid and S. Z. Selim, "A Simulated Annealing Algorithm for Unit Commitment", IEEE Transactions on Power Systems, Vol. 13, No. 1, pp. 197-204, Feb 1998.
[33] G. K. Purushothama and L. Jenkins, "Simulated Annealing with Local Search - A Hybrid Algorithm for Unit Commitment", IEEE Transactions on Power Systems, Vol. 18, No. 1, pp. 273-278, Feb 2003.
[34] A. H. Mantawy, Y. Abdel-Magid and S. Z. Selim, "Integrating Genetic Algorithm, Tabu Search and Simulated Annealing for the Unit Commitment Problem", IEEE Transactions on Power Systems, Vol. 14, No. 3, pp. 829-836, Aug 1999.
[35] C. Christober Asir Rajan and M. R. Mohan, "An Evolutionary Programming-Based Tabu Search Method for Solving the Unit Commitment Problem", IEEE Transactions on Power Systems, Vol. 19, No. 1, pp. 577-585, Feb 2004.
[36] J. F. Restrepo and F. D. Galiana, "Unit Commitment with Primary Frequency Regulation Constraints", IEEE Transactions on Power Systems, Vol. 20, No. 4, pp. 1836-1842, Nov 2005.
[37] Standard Performance Evaluation Corporation (SPEC®), http://www.spec.org/, visited July 2004.
[38] R. S. Barr, B. L. Golden, J. P. Kelly, M. G. C. Resende and W. R. Stewart, "Designing and Reporting Computational Experiments with Heuristic Methods", Journal of Heuristics, Vol. 1, pp. 9-32, 1995.
[39] J. J. Dongarra, "Performance of Various Computers Using Standard Linear Equations Software", University of Tennessee Technical Report CS-89-85, June 2004.
[40] J. Dongarra, J. Bunch, C. Moler and G. W. Stewart, "LINPACK User's Guide", SIAM, Philadelphia, PA, 1979.
[41] R. P. Weicker, "Dhrystone Benchmark: Rationale for Version 2 and Measurement Rules", SIGPLAN Notices, Vol. 23, No. 8, pp. 49-62, Aug 1988.
[42] H. J. Curnow and B. A. Wichmann, "A Synthetic Benchmark", The Computer Journal, Vol. 19, No. 1, Feb 1976.
[43] N. G. Hall and M. E. Posner, "Generating Experimental Data for Computational Testing with Machine Scheduling Applications", Operations Research, Vol. 49, No. 6, pp. 854-865, Nov-Dec 2001.
[44] C. Grigg, P. Wong, P. Albrecht, R. Allan, M. Bhavaraju, R. Billinton, Q. Chen, C. Fong, S. Haddad, S. Kuruganty, W. Li, R. Mukerji, D. Patton, N. Rau, D. Reppen, A. Schneider, M. Shahidehpour and C. Singh, "The IEEE Reliability Test System - 1996. A Report Prepared by the Reliability Test System Task Force of the Application of Probability Methods Subcommittee", IEEE Transactions on Power Systems, Vol. 14, No. 3, pp. 1010-1020, Aug 1999.

VIII. BIOGRAPHIES Frank J. Leanez was born in 1975 in Caracas, Venezuela. He received the B.Sc. degree in electrical engineering from Simon Bolivar University of Venezuela, Caracas. He is currently pursuing the M.Sc. degree at the University of Chile. He is also a Research Assistant at the University of Chile. Rodrigo Palma-Behnke was born in Chile. He received his B.Sc. and M.Sc. in electrical engineering from the Catholic University of Chile, Santiago, Chile, and his Ph.D. in 1999 from University of Dortmund, Germany. He is now working as a professor at the University of Chile. His research field is the planning and operation of electrical systems in competitive power markets and new technologies.
