IEEE TRANSACTIONS ON POWER SYSTEMS, VOL. 17, NO. 3, AUGUST 2002


Generation Expansion Planning: An Iterative Genetic Algorithm Approach Heloisa Teixeira Firmo and Luiz Fernando Loureiro Legey

Abstract—The generation expansion-planning problem (GEP) is a large-scale stochastic nonlinear optimization problem. To handle the problem complexity, decomposition schemes have been used. Usually, such schemes divide the expansion problem into two subproblems: one related to the construction of new plants (investment subproblem) and another dealing with the task of operating the system (operation subproblem). This paper proposes an iterative genetic algorithm (IGA) to solve the investment subproblem. The basic idea is to use a special type of chromosome, christened pointer-based chromosome (PBC), and the particular structure of that subproblem, to transform an integer constrained problem into an unconstrained one. IGA's results were compared to those of a Branch and Bound (B&B) algorithm—provided by a commercial package—in three different case studies of growing complexity, respectively, containing 144, 462, and 1845 decision variables. These results indicate that the IGA is an effective alternative for the solution of the investment subproblem.

Index Terms—Genetic algorithms (GAs), integer programming, optimization methods, planning, power systems, uncertainty.

I. INTRODUCTION

THE generation expansion-planning problem (GEP) is of the nonlinear large-scale mixed programming type. Its main objective is to determine a minimum cost expansion plan, constituted by hydroelectric and/or thermal power plants, as well as transmission lines, which is able to reliably meet future electricity demand. In the past, there have been many attempts to deal with this problem, in a variety of ways [1]. Some of them used deterministic criteria, while others incorporated analyses of uncertainties. These uncertainties could be either technical, such as hydrological conditions and generator stoppages, or economical, like fuel prices and interest rates [2]. There are difficulties, however, in taking into account too many aspects of the problem, because of the overwhelming complexity that rapidly arises. To handle this complexity, decomposition schemes have been used. Usually, such schemes divide the expansion problem into two subproblems: one related to the construction of new plants (investment subproblem) and another dealing with the task of operating the system (operation subproblem). When hydroplants are present, this decomposition scheme is all the more useful, because the system's optimal operation depends on hydrological conditions, which are stochastic.

Manuscript received April 30, 2001. H. T. Firmo and L. F. L. Legey are with the Energy Planning Program/Federal University of Rio de Janeiro, Rio de Janeiro, RJ 21945-970, Brazil (e-mail: [email protected], [email protected]). Publisher Item Identifier 10.1109/TPWRS.2002.801036.

Some authors have experimented with genetic algorithms (GAs) in solving the GEP, but their analyses were limited to purely thermal power plant systems [3]–[5]. The major problem facing GA implementation in solving real-world GEPs is finding ways to represent candidate solutions so as to be able to manipulate them in relatively simple terms [6]–[8]. This may be especially difficult when the problem to be solved has many constraints. The challenge is to construct a representation for candidate solutions which guarantees that new solutions obtained through the GA's operators (selection, crossover, and mutation) will remain feasible. Unfortunately, this is seldom easily attainable, and therefore one has to resort to other mechanisms to sustain feasibility.¹ A first attempt by the authors to use GAs in solving the GEP can be seen in [10]. In that paper, the problem solved was just an illustrative one, but the idea of trying to find a way of building chromosomes so as to represent the problem as an unconstrained one was already present. On the other hand, the approach of solving the operation subproblem by means of a GA had to be abandoned. The reasons for that were not only the problem's enormous complexity, but also the fact that its characteristics lend it naturally to a linear programming approach. This paper focuses on the investment subproblem, basically for two reasons: 1) the large-scale integer programming nature of this problem (which makes it particularly difficult to solve by classical methods); 2) the fact that the operation subproblem, as mentioned above, is amenable to usual linear programming algorithms. The decomposition scheme which supports this approach is based on Benders' cuts [11]. It consists of an iterative procedure that computes a lower bound to the GEP by solving a relaxed investment subproblem.
The solution to this problem is then used as an input to the operation subproblem, whose solution constitutes an upper bound to the GEP and provides sensitivity vectors with which to form a new investment subproblem. The procedure goes on up to a point where the gap between the upper and lower bounds is within a specified range [1], [2]. This paper describes an alternative approach, based on an iterative genetic algorithm (IGA). In each step of the Benders' iteration described above, the IGA uses a special type of chromosome, christened the pointer-based chromosome (PBC). The PBC is able to make use of the particular structure of the investment subproblem and to transform it from an integer constrained problem into an unconstrained one.

¹See, for example, [9] for an interesting survey of how to deal with this task.

0885-8950/02$17.00 © 2002 IEEE


The proposed algorithm was tested in three different case studies, from Costa Rica and Brazil, of growing complexity, respectively containing 144, 462, and 1845 decision variables. Comparisons between the IGA results and those of a Branch and Bound (B&B) algorithm provided by a commercial package are also shown.

II. PROBLEM FORMULATION

As discussed in the previous section, the traditional objective of the GEP is to determine an investment schedule which, besides meeting load demand, minimizes the present value of both investment and operation costs. Thus, it can be formulated as follows [12].

A. The Generation Expansion Problem

    \min \sum_{t=1}^{T} (1 + r_t)^{-t} \left( c_t^\top x_t + d_t^\top y_t \right)    (1)

subject to

    A_t x_t \ge b_t,            t = 1, \dots, T    (2)
    E_t x_t + F_t y_t \ge h_t,  t = 1, \dots, T    (3)

where
    T          number of stages in the planning horizon;
    N          number of candidate plants for expansion;
    r_t        discount rate at stage t;
    c_t        vector of investment costs at stage t;
    x_t        vector of (binary) expansion options at stage t (when x_{it} = 1, plant i will be constructed in stage t; otherwise, x_{it} = 0);
    d_t        vector of operation costs at stage t;
    y_t        vector of operation variables at stage t (usually the amount of energy generated by plant i in stage t, or load curtailment at t, when y_{it} represents energy deficits);
    A_t, E_t, F_t    transformation matrices at stage t;
    b_t, h_t   resource vectors at stage t.

The inequalities (2) represent constraints on investment decisions, and (3) represents operation constraints (generation limits, energy demands, etc.) and financial constraints. Without any loss of generality, it is possible to rewrite the problem in Section II-A, by conveniently redefining variables and parameters in order to avoid a cumbersome notation,² as in Section II-B.

²To see how this can be done, suppose, for instance, that the variable x is represented as a long vector whose components are indexed from 1 to M, where M = N \times T.

B. An Alternative Formulation of the Generation Expansion Problem

    \min \; c^\top x + d^\top y    (4)

subject to

    A x \ge b          (5)
    E x + F y \ge h    (6)

Using a decomposition technique based on Benders' cuts [13], it is possible to solve the problem above by means of an iterative procedure (the so-called Benders' algorithm) which combines the solutions of two subproblems, until an approximation to the solution of the original problem is within a specified range. These subproblems are shown below.

C. The Investment Subproblem

The solution x^* to this subproblem defines a candidate GEP to be used in Section II-D:

    \min \; c^\top x + \alpha    (7)

subject to

    A x \ge b    (8)
    \alpha \ge w(x^j) + (\lambda^j)^\top (x - x^j),    j = 1, \dots, J    (9)

where
    \lambda^j    vector of dual multipliers associated with the optimal solution of the operation subproblem for plan x^j;
    J            number of iterations performed so far by the Benders' algorithm.

D. The Operation Subproblem

This subproblem estimates the operation costs w(x^*) associated with plan x^* and supplies a new set of dual multipliers used in Section II-C:

    w(x^*) = \min \; d^\top y    (10)

subject to

    F y \ge h - E x^*    (11)
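The interplay of (7)–(11) can be sketched in code. The following toy, hypothetical single-stage instance (not from the paper) chooses an operation subproblem with a closed form, a pure load-curtailment penalty, so that no LP solver is needed; the master problem is solved by brute-force enumeration, and the subgradient of the curtailment cost plays the role of the dual multipliers in the Benders cuts. All data and names are illustrative.

```python
from itertools import product

# Toy single-stage data: three candidate plants, curtailment penalty p, demand D.
c   = [50.0, 30.0, 20.0]   # investment costs
cap = [60.0, 30.0, 25.0]   # capacities
p, D = 2.0, 70.0           # curtailment cost per unit of unmet demand, demand

def operation(x):
    """Operation subproblem in closed form: pay p per unit of unmet
    demand.  Returns the cost w(x) and a subgradient (the 'duals')."""
    short = D - sum(ci * xi for ci, xi in zip(cap, x))
    if short > 0:
        return p * short, [-p * ci for ci in cap]
    return 0.0, [0.0] * len(cap)

cuts = []                  # accumulated Benders cuts: (w_j, lambda_j, x_j)
ub, lb = float("inf"), -float("inf")
x_best = None
while ub - lb > 1e-6:
    # Master (investment) problem: enumerate binary x; alpha is the
    # maximum of the cuts (0 is a valid floor since w is nonnegative).
    best = None
    for x in product((0, 1), repeat=len(c)):
        alpha = max([0.0] + [w + sum(l * (xi - xj) for l, xi, xj in zip(lam, x, xo))
                             for w, lam, xo in cuts])
        obj = sum(ci * xi for ci, xi in zip(c, x)) + alpha
        if best is None or obj < best[0]:
            best = (obj, x)
    lb, x_now = best                      # master objective = lower bound
    w, lam = operation(x_now)             # solve the operation subproblem
    total = sum(ci * xi for ci, xi in zip(c, x_now)) + w
    if total < ub:                        # true cost of x_now = upper bound
        ub, x_best = total, x_now
    cuts.append((w, lam, x_now))          # add a new Benders cut

print(x_best, ub)
```

On this instance the loop closes the gap in two iterations: the first cut reveals the curtailment cost of building nothing, and the second master solution (build the large plant only) is already optimal.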

The challenge in solving the problem in Section II-A (or Section II-B) is its huge size, especially when hydroplants are present and the system being modeled is large, as in Brazil's case. Even with the decomposition scheme presented above, the magnitude of the two subproblems is so large that powerful algorithms have to be used. Despite its size, the problem in Section II-D does not present much difficulty, because it can be solved by the very efficient methods of linear programming. On the other hand, the problem in Section II-C is much more difficult, because of its mixed-integer programming character. The method usually employed to solve it is a B&B algorithm. This paper presents an alternative approach, by means of the IGA. One interesting point to note is that, contrary to the statement made by Bäck et al. that "in evolutionary algorithms, environmental conditions are often static" [8], the IGA has a dynamic nature which is more akin to the process of natural evolution. As in the latter, the solution of the problem interacts with the environment, modifying it. This process is dynamic because the objective function varies in each iteration of the Benders' algorithm. This feature of the IGA opens still more possibilities for tuning the algorithm's parameters, since in each iteration of the Benders' algorithm a different set of parameters can be used. In the version of the IGA employed here, several tests were performed with the main concern of achieving a better convergence of the Benders' algorithm.

III. THE ITERATIVE GENETIC ALGORITHM AND THE POINTER-BASED CHROMOSOME

In attempting to solve the problem in Section II-C by using GAs, the first question to be answered concerns the structure of candidate solutions. How could those solutions be represented? The natural approach appears to be the construction of a chromosome in which a particular expansion plan is a string of genes representing a group of different plants to be built along the planning horizon. Fukuyama and Chiang [4] utilize a formulation along those lines. They define the length of the chromosome as equal to the total number of plants to be built, with each position on the string representing the time interval in which the unit begins operation. This representation implies strings of variable size, since not all candidate plants are always built. In addition, crossover and mutation operations create many infeasible individuals, which will somehow have to be dealt with. Because Fukuyama and Chiang were dealing with only thermal plants and a relatively small system, computational difficulties could be overcome without much loss of efficiency. However, in hydrothermal systems, because of (8) and (9), this approach generates an enormous number of infeasible solutions. The correction of those infeasibilities creates significant inefficiency, which can be fatal in large-scale problems. Davis [6] has suggested a representation called the order-based genetic algorithm (OBGA). In the GEP, the OBGA could possibly be associated with the order of construction of candidate plants. However, the OBGA representation, when confronted with crossover and mutation operations, would lead to many infeasible solutions as well.
Constraints (8), besides ensuring that a given plant will not be built more than once (singleness), also guarantee that the following conditions are satisfied: the beginning of operation of a particular plant lies within a minimum and/or maximum period; two or more mutually exclusive plants are never built in the same expansion plan; mandatory plants are always constructed; and so on. The analysis of these particular characteristics of the GEP leads to the following considerations.
1) Because x is binary, when a variable in an (in)equality corresponding to a singleness constraint in (8) is equal to one, then all the other variables in the same (in)equality are necessarily equal to zero.
2) The objective of the investment subproblem is the minimization of c^\top x + \alpha, where \alpha should be greater than or equal to each w(x^j) + (\lambda^j)^\top (x - x^j), j = 1, \dots, J. To satisfy these conditions while guaranteeing that the minimum is achieved,


TABLE I EXAMPLE OF THE POINTER-BASED CHROMOSOME STRUCTURE

it suffices that, for a given x, \alpha be equal to the greatest of the values on the right-hand side of (9). Taking these considerations into account, the approach developed here proposes the PBC for representing the investment subproblem's candidate solutions. The PBC is constructed as follows: a gene is associated with each singleness constraint, in such a way that if a variable in that constraint is equal to one, the (integer) value of the gene points to the position of that variable in the constraint (in)equality. If all variables are zero, the value of the gene is also zero. Therefore, with the whole chromosome, it is possible to know which variables are one and which are zero in a particular solution of the problem in Section II-C, i.e., which plants will be built at which stage in a particular expansion plan. For this plan, materialized by the different values of the x variables, it is possible to compute the maximum value of the terms w(x^j) + (\lambda^j)^\top (x - x^j), j = 1, \dots, J, and then the value of \alpha, which is equal to this maximum. With x and \alpha, the objective function can be computed, and so the fitness function of the GA as well. It is important to note that the PBC allows for the transformation of a problem with many constraints into an equivalent one amenable to being solved by an evolutionary algorithm (EA) searching through a space of feasible solutions. The following example attempts to clarify how the PBC works. Suppose an optimization problem with six singleness constraints containing, respectively, 3, 2, 5, 1, 6, and 5 binary variables, the fifth constraint being an equality.
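A minimal sketch of the PBC decoding for this six-constraint example, using the Table I chromosome, together with a feasibility-preserving mutation that applies the size-corrected probability described in this section. Function names and the random-redraw details are illustrative assumptions, not the authors' implementation.

```python
import random

# Sizes of the six singleness constraints in the example; the fifth is
# an equality, so its gene must never be 0.
sizes    = [3, 2, 5, 1, 6, 5]
is_equal = [False, False, False, False, True, False]

def decode(pbc):
    """Expand a pointer-based chromosome into the underlying binary
    variables, one group per singleness constraint: gene g == 0 leaves
    the group all-zero, g == k sets its k-th variable to one."""
    groups = []
    for g, n in zip(pbc, sizes):
        group = [0] * n
        if g:                       # g in 1..n points at the chosen variable
            group[g - 1] = 1
        groups.append(group)
    return groups

def mutate(pbc, pm, rng=random):
    """Feasibility-preserving mutation: gene i is redrawn with
    probability pm scaled by its constraint size (the 'corrected'
    mutation probability), and equality genes are kept nonzero."""
    out = list(pbc)
    for i, n in enumerate(sizes):
        if rng.random() < pm * n:
            low = 1 if is_equal[i] else 0
            out[i] = rng.randint(low, n)
    return out

pbc = [2, 0, 0, 1, 5, 0]            # the chromosome of Table I
print(decode(pbc))                  # every decoded string is feasible
mutant = mutate(pbc, 0.05, random.Random(42))
assert all(0 <= g <= n for g, n in zip(mutant, sizes)) and mutant[4] >= 1
```

Note that any mutant produced this way decodes to a feasible plan by construction, which is the point of the representation: no repair step is ever needed.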

Table I shows how the PBC would represent a solution to this problem. The string [2 0 0 1 5 0] shown (vertically) corresponds to the solution in which the second variable of the first constraint, the only variable of the fourth, and the fifth variable of the fifth are equal to one, with the remaining variables equal to zero. The data structure of the PBC allows for the utilization of any of the usual crossover operators (one, two, or many points) without generating infeasible solutions. This happens because, in the crossover operation, the position of each gene in the string is maintained. The mutation operator, however, needed some reflection and had to be reelaborated to avoid the generation of infeasible solutions. For this purpose, an auxiliary data structure, which informs the maximum and minimum admissible values for each gene in the PBC, had to be created. The maximum value of a gene is given by the number of variables in the corresponding (in)equality of the singleness constraints. In the previous example, for instance, the maximum admissible values would be [3 2 5 1 6 5], whereas the minimum would be zero for all genes but the fifth, which should be one, because it corresponds to an equation (equality constraint). In addition, in order to guarantee that each variable (i.e., the construction of a given plant at a certain time) has the same probability of being chosen in a mutation operation, it is necessary to "correct" the probability of mutation (p_m). This means multiplying p_m by the number of variables in each of the problem's constraints (or, equivalently, by the maximum admissible value of a gene). In the example above, this "corrected" probability of mutation for each gene in the PBC would be, respectively, 3p_m, 2p_m, 5p_m, p_m, 6p_m, and 5p_m.

IV. PLANNING UNDER UNCERTAINTY

Depending on the nature of the uncertainty, it is sometimes very difficult to assign probabilities to possible outcomes. In such a situation, one way to deal with it is to use the so-called Savage criterion [1], which can handle several future scenarios through the minimization of the maximum regret (MMR). "Regret" in Savage's sense is the difference between the optimum achievable when the future is unknown and that achievable when the future is known with certainty or, to put it another way, when a particular scenario is assumed to occur. Solving a problem by using Savage's criterion has, therefore, two "phases." In the first phase, one finds the optimum for each scenario that one thinks possible to occur; in the second phase, the problem is to find a solution that minimizes the distance (regret) between this solution and the different optima found in phase 1.
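The two phases can be illustrated numerically with an invented cost table (three candidate plans, three scenarios); the figures below are made up solely to show the computation.

```python
# Illustrative cost table (made up): cost[p][s] = cost of plan p if
# scenario s occurs.
cost = [
    [100, 140, 120],   # plan 0
    [110, 115, 125],   # plan 1
    [130, 110, 115],   # plan 2
]

# Phase 1: the per-scenario optimum, i.e., the best achievable cost if
# that scenario were known in advance.
best = [min(cost[p][s] for p in range(3)) for s in range(3)]

# Phase 2: a plan's regret is its worst gap to those optima; Savage's
# criterion picks the plan minimizing the maximum regret.
regret = [max(cost[p][s] - best[s] for s in range(3)) for p in range(3)]
mmr_plan = min(range(3), key=regret.__getitem__)
print(best, regret, mmr_plan)
```

Here plan 1 is never the cheapest under any single scenario, yet it wins under the MMR criterion: its worst-case regret (10) is the smallest, which is precisely the kind of robust compromise the criterion is meant to find.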
Because of this, the dimension of the problem in phase 2 is much bigger than in phase 1.³ Phase 2 also poses a new problem when a GA is solving the investment subproblem. Due to the structure of the scenario tree (see Fig. 1), an "over specification" of the PBC may happen. To see the meaning of this "over specification," once again an example will be used: the one presented in Fig. 1. In this figure, the decision variables are supposed to refer to the same plant, the only difference between them being that the plant could be built in a certain stage, within a particular scenario. Thus, for example, one variable means the plant will be built in stage 1, for all scenarios; another, that the plant will be built in stage 2, for scenario 1; yet another, that the plant will be built in stage 2, for scenarios 2 and 3; and so on. Therefore, in the singleness constraints corresponding to the example, the same variable appears in more than one (in)equality.

³One should not forget, as well, the uncertainties brought by possibly different hydrological conditions, which, despite being amenable to the usual probabilistic methods, add an additional complexity to the problem.

Fig. 1. Scenario tree for phase 2 of the GEP.

The over specification of the PBC (corresponding to these constraints) means the following: if the first gene is equal to 1, the next two genes must also be equal to 1. The same reasoning holds for any variable present in more than one inequality. Otherwise, there would be a representation failure in the PBC, since the value of a variable ought to be the same in whichever inequality it is present. To correct any over specification generated by crossover or mutation operations, a "filtering algorithm" was devised. This algorithm "corrects" the PBC by assuming that the "correct" value of a variable is the first one found in a top-to-bottom, left-to-right scan of the inequalities representing the singleness constraints.

V. CASE STUDY RESULTS

As mentioned before, the real challenge in applying the IGA to the solution of the investment subproblem of the GEP is its huge size. To give an idea of the magnitude of that problem, it suffices to say that in the "smaller" case studied (Costa Rica), the search space had a dimension of 2^144, which is only slightly smaller than the dimension of the problem referred to by Goldberg et al. [14] as a large-scale GA.

A. Case I: Costa Rica

In the case of Costa Rica, both phases 1 and 2 of the MMR approach were studied. Four scenarios were analyzed, and the following parameters were used in all runs of the IGA:
• population size: 80;
• number of generations in each iteration of the Benders' algorithm: 30;
• mutation rate: 0.038 per gene;
• initialization with zero in all the genes;
• crossover rate (one point): 0.90;
• tournament selection.
The problem in phase 1 had 144 decision variables, and the IGA always obtained the same optimal value of the objective function as the one found by the B&B algorithm. In phase 2 of the MMR, the number of variables in the investment subproblem increased to 462, and the objective became that of minimizing the maximum regret associated with each of the four scenarios.
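For reference, the tournament selection used in all the runs above can be sketched generically as follows; the population, cost function, and tournament size are invented for illustration.

```python
import random

def tournament(pop, cost, k=2, rng=random):
    """Binary (k = 2) tournament: sample k individuals at random and
    keep the one with the lowest cost; selection pressure grows with k."""
    return min(rng.sample(pop, k), key=cost)

rng = random.Random(1)
# Toy population of PBC-like integer strings and a made-up cost.
pop = [[rng.randint(0, 3) for _ in range(6)] for _ in range(8)]
cost = lambda ind: sum(ind)          # illustrative fitness only
parent = tournament(pop, cost, rng=rng)
assert parent in pop
```

Tournament selection needs only pairwise comparisons of fitness values, which suits the IGA well: the Benders cuts change the objective function at every outer iteration, but relative rankings within a generation remain meaningful.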
It is worth mentioning that because of the problem’s size, the B&B algorithm had to make use of some


TABLE II COSTA RICA'S RESULTS

TABLE III BRAZIL’S RESULTS

heuristic rules to achieve a solution [15], while the IGA did not need such rules. The parameters of the IGA used in phase 2 were as follows:
• population size: 100;
• number of generations in each iteration of the Benders' algorithm: 20;
• mutation rate: 0.024 per gene;
• initialization with zero in all the genes;
• crossover rate (one point): 0.90;
• tournament selection.

B. Case II: Brazil

The Brazilian case is enormous. It has 1845 decision variables in phase 1, which means a search space of dimension 2^1845. In this case, only three scenarios were analyzed. The best results found in the course of several runs are shown in Table III. CPU times were equivalent for the two methods: for each scenario, the total CPU time (investment plus operation subproblems) was about 1 h and 30 min on a Pentium II 333 MHz. The parameters utilized in the Brazilian case were as follows:
• population size: 200;
• number of generations in each iteration of the Benders' algorithm: 40;
• mutation rate: 0.002 per gene;
• initialization with zero in 50% of the genes;
• crossover rate (one point): 0.95;
• elitism: 20% of the population;
• tournament selection.
However, it should be remarked that those parameters were fine-tuned for each scenario, so that they vary slightly in different runs. Without this refinement, the results were at least 10% worse than the best ones.

VI. CONCLUDING REMARKS

EAs are not ready-to-use tools [8]. Depending on the landscape of the search space, they can always be refined and fine-tuned to obtain better results. This feature implies two consequences which are, to a certain extent, contradictory. On the one hand,


it demands more work and reflection from the analyst trying to apply them to a particular problem. On the other hand, the possibility of fine-tuning is a source of flexibility and, therefore, of robustness of the method. EA results should also be analyzed from another perspective [10]. Since populations are in fact sets of alternative solutions which, in terms of the value of the objective function, do not differ much one from another (at least in principle), it is possible to use them as alternatives for the final decision. This is an important source of flexibility in real-world situations. In the case of the power systems expansion planning problem, this flexibility is very welcome, as it means the possibility of taking into account other aspects of the problem besides cost minimization. These aspects might include, for instance, environmental and social concerns, among others. In hydropower-based generation systems, where construction times are usually longer and, consequently, uncertainties are greater, additional flexibility of the sort provided by the set of the best IGA solutions is even more valuable, as for example in the formulation of "indicative plans." In such a situation, a regulatory agency operating in a decentralized environment would be able to assess alternative plans by taking into consideration long-term consumer benefits and strategic aspects vis-à-vis short-term views. An individual agent, as well, operating in a competitive electricity market and faced with many uncertainties (not only technical but also economic and political) would profit from being able to choose among similar-cost alternative solutions. A possible development of the GEP model presented here would be the incorporation of multicriteria optimization, through an adaptation of the MMR method. In this way, comparisons could be made among alternatives that privilege, for instance, investment costs and those that consider more important the development of a certain area of the country. Theoretically, this can be done quite easily. The problem, however, is how to cope with the additional increase in the dimension of an already large problem. Another promising development would be to improve the efficiency of the algorithm by performing a local search [16] at the end of the iterative convergence process. The basic idea that guided the methodology presented here was to tune parameters for a smaller case, in order to gain insights into the problem, and then proceed to deal with a larger case. This is an interesting way of working with the scale-up problem referred to by Goldberg [17]. The Costa Rica case served as a stepping-stone to the Brazilian case. However, the parameters used in the smaller case had to be adjusted, yielding an improvement of up to 15% in the performance of the IGA. The results in Tables II (Costa Rica) and III (Brazil) also show that the IGA's computational efficiency improves, relative to the B&B algorithm, as the problem's dimension increases. This suggests that the IGA (and the PBC) may be an appropriate tool to deal with large-scale problems. Another point worth mentioning is that the PBC's data structure might be used in other types of problems involving integer programming of the sort studied here, i.e., those in which decision variables are either zero or one. The PBC method is particularly efficient when the structure of the problem is such


that the same variable is present in only a reduced number of constraints. In addition, it would be worth trying to identify the building blocks of the IGA [18] or, to put it another way, which plants have a positive relationship of dependence among them. By knowing this, it would be possible to devise more robust decisions and development policies for the electric sector. One final note should be added. In the same way the IGA evolves by interacting with the environment, one also gets more familiar with the problem at hand and is able to notice possible improvements in the method. Although the challenge described in the present research has been very involving and stimulating, at the end of the day two clear impressions remain. The first is that knowing one is dealing with a problem without a known ex ante global optimum leaves the uncomfortable sensation of unfinished work, because the algorithm employed can, theoretically at least, always be improved. On the other hand, that same possibility of "constant refinement" brings forward the certainty of being part of a continuous process of learning and development, which is characteristic of evolution and, consequently, of life.

ACKNOWLEDGMENT

The authors wish to express their gratitude to P. de Novella and J. P. da Costa, both from CEPEL, for their help in running the MODPIN model and for providing valuable suggestions. Of course, any remaining errors are the authors' sole responsibility.

REFERENCES

[1] B. G. Gorenstin, N. M. Campodonico, J. P. Costa, and M. V. Pereira, "Power system expansion planning under uncertainty," in Proc. IEEE/PES Winter Meeting, New York, 1992.
[2] OLADE-BID, "Module of planning under uncertainty" (in Spanish), Reference Manual for the Super Olade-BID Model, Quito, Ecuador, 1993.
[3] Y. Park, J. Won, J. Park, and D. Kim, "An improved genetic algorithm for generation expansion planning," IEEE Trans. Power Syst., vol. 15, pp. 916–922, Aug. 2000.
[4] Y. Fukuyama and H. Chiang, "A parallel genetic algorithm for generation expansion planning," IEEE Trans. Power Syst., vol. 11, pp. 955–961, Apr. 1996.
[5] J. Zhu and M. Chow, "A review of emerging techniques on generation expansion planning," IEEE Trans. Power Syst., vol. 12, pp. 1722–1728, Nov. 1997.
[6] L. Davis, Handbook of Genetic Algorithms. New York: Van Nostrand, 1991, p. 61.
[7] R. Hinterding, Z. Michalewicz, and A. E. Eiben, "Adaptation in evolutionary computation: A survey," in Proc. 4th IEEE Int. Conf. Evolutionary Computation, Indianapolis, IN, 1997, pp. 65–69.
[8] T. Bäck, U. Hammel, and H. Schwefel, "Evolutionary computation: Comments on the history and current state," IEEE Trans. Evol. Comput., vol. 1, pp. 3–15, Apr. 1997.
[9] Z. Michalewicz, "A survey of constraint handling techniques in evolutionary computation methods," in Proc. 4th Annu. Conf. Evolutionary Programming, 1995, pp. 135–155.
[10] L. F. L. Legey and H. Kazay, "Generation expansion planning: A genetic algorithm approach," in Proc. Int. Conf. Intelligent System Application to Power Systems (ISAP), 1999, pp. 239–243.
[11] L. S. Lasdon, Optimization Theory for Large Systems. New York: Macmillan, 1970, pp. 370–392.
[12] EPRI, "Mathematical decomposition techniques for power system expansion planning," Tech. Rep. EL-5209, vol. 1–5, Feb. 1988.
[13] H. F. Kazay, "Generation expansion planning in the Brazilian electric sector employing genetic algorithms," Ph.D. dissertation (in Portuguese), Energy Planning Program, Graduate Programs in Engineering, Federal Univ. of Rio de Janeiro, Rio de Janeiro, Brazil, 2001.
[14] D. E. Goldberg, K. Deb, H. Kargupta, and G. Harik, "Rapid, accurate optimization of difficult problems using fast messy genetic algorithms," Illinois Genetic Algorithms Lab., Dept. General Eng., Univ. Illinois, Urbana-Champaign, IlliGAL Rep. 93004, Feb. 1993.
[15] Brazilian Electric Power Research Center (CEPEL), "MODPIN: Methodology's manual" (in Portuguese), Tech. Rep., preliminary version, 1999.
[16] D. E. Goldberg and S. Voessner, "Optimizing global-local search hybrids," Illinois Genetic Algorithms Lab., Dept. General Eng., Univ. Illinois, Urbana-Champaign, IlliGAL Rep. 99001, Jan. 1999.
[17] D. E. Goldberg, "Genetic and evolutionary algorithms in the real world," Illinois Genetic Algorithms Lab., Dept. General Eng., Univ. Illinois, Urbana-Champaign, IlliGAL Rep. 99013, Mar. 1999.
[18] G. R. Harik, "Learning linkage to efficiently solve problems of bounded difficulty using genetic algorithms," Ph.D. dissertation, Dept. Computer Science and Engineering, Univ. Michigan, Ann Arbor, 1997.

Heloisa Teixeira Firmo received the B.Sc. degree in civil engineering from the Federal University of Rio de Janeiro, Rio de Janeiro, Brazil. She is currently with the Energy Planning Program at the Federal University of Rio de Janeiro.

Luiz Fernando Loureiro Legey received the B.Sc. degree in electrical engineering (telecommunications) from the Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil, the M.Sc. degree in electrical engineering (systems) from the Federal University of Rio de Janeiro (UFRJ), and the Ph.D. degree from the University of California, Berkeley. He is currently a Full Professor in the Energy Planning Program at the Federal University of Rio de Janeiro.