IEEE TRANSACTIONS ON MAGNETICS, VOL. 49, NO. 5, MAY 2013


A Surrogate Genetic Programming Based Model to Facilitate Robust Multi-Objective Optimization: A Case Study in Magnetostatics

Marcus H. S. Mendes, Gustavo L. Soares, Jean-Louis Coulomb, and João A. Vasconcelos

Evolutionary Computation Laboratory, PPGEE, Federal University of Minas Gerais, Belo Horizonte, 31270-901, Brazil
Pontifical Catholic University of Minas Gerais, Belo Horizonte, 30535-901, Brazil
Grenoble Electrical Engineering Lab, Grenoble, 38402, France

Abstract—A common drawback of robust optimization methods is the effort expended to compute the influence of uncertainties, because the objective and constraint functions must be re-evaluated many times. This disadvantage is aggravated if time-consuming methods, such as the boundary element or finite element method, are required to calculate the optimization functions. To overcome this difficulty, we propose the use of genetic programming to obtain high-quality surrogate functions that are quickly evaluated. Such functions can be used to compute the values of the optimization functions in place of the burdensome methods. The proposal has been tested on a version of the TEAM 22 benchmark problem with uncertainties in decision parameters. The performance of the methodology has been compared with results in the literature, confirming its suitability, significant CPU time savings, and a substantial reduction in the number of computational simulations.

Index Terms—Finite element method, genetic programming, robust optimization, surrogate model, TEAM 22 problem.

I. INTRODUCTION

The solutions achieved by a conventional optimization method that does not consider uncertainties in decision parameters should be adequate in a controlled environment. However, real-world optimization problems inevitably involve imprecision, and consequently the performance of these solutions may be lower than expected, or the solutions may even be impracticable. To address this problem, different robust optimization algorithms have been introduced [1]–[3]. The major drawback of most of them is computational effort, principally in engineering optimization under uncertain conditions. This occurs for two main reasons: i) the values of the objective and constraint functions associated with the optimization problem are usually computed by time-consuming simulations (e.g., in electrical engineering, they are typically performed by the Finite Element Method (FEM) or the boundary element method), and ii) the uncertainties can be treated as additional parameters, and simulating their effect by a finite set of samples requires a large number of evaluations of the optimization functions. To tackle these hurdles, we propose an approach that uses Genetic Programming (GP) to assemble surrogate objective and constraint functions that: i) are quickly evaluated by robust multi-objective evolutionary algorithms, and ii) reduce the number of computational simulations during the optimization process. This methodology has been applied to a robust version of the multi-objective TEAM 22 problem proposed in [1] to demonstrate its suitability for engineering optimization.

Manuscript received October 29, 2012; revised December 31, 2012; accepted December 31, 2012. Date of current version May 07, 2013. Corresponding author: M. H. S. Mendes (e-mail: [email protected]). Digital Object Identifier 10.1109/TMAG.2013.2238615

The primary contributions of this paper are: i) to assess the appropriateness of surrogate models based on GP for engineering; ii) to improve the Multi-Objective Genetic Algorithm (MOGA) presented in [1], making it faster in solving multi-objective robust optimization problems; and iii) to present a case study in magnetostatics to validate the proposal.

The paper is organized as follows. GP is briefly reviewed in Section II, and Section III describes the proposed methodology. Section IV shows and discusses the results. Finally, the conclusions are given in Section V.

II. GENETIC PROGRAMMING

GP is an evolutionary computation technique that allows computers to solve problems automatically. It is based on the Darwinian principle of survival of the fittest and consists of the automated learning of computer programs. Through selection, crossover, and mutation operations, computer programs evolve into new, hopefully better, programs. During the evolutionary process, the structures dynamically change size and shape. This flexibility gives GP a powerful symbolic expressiveness [4].

In Symbolic Regression (SR), GP is used to build an empirical mathematical model of data acquired from a system. The focus of SR, unlike traditional regression techniques, is not to find the coefficients or weights that best fit a function defined a priori; rather, the shape of the empirical model is itself revealed by the evolutionary process. The aim of SR is to discover a mathematical expression in symbolic form that best associates input and output in a given finite set of training samples [5]. That is, given training samples x_1, ..., x_n and the corresponding reference values y_1, ..., y_n, the objective is to find a function f such that f(x_i) = y_i for i = 1, ..., n. Such an f is often utopic, so an approximation of f is usually obtained instead.

In SR, each mathematical expression is typically represented by a tree structure, where internal nodes are functions and leaf nodes are terminals appropriate to the problem domain [4]. The functions may be arithmetic operators, and the terminals are variables and constants.
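To make the tree representation concrete, the following is a minimal Matlab sketch (our own illustration, not code from the paper): internal nodes are function handles, leaves are variable names or constants, and the encoded expression is purely illustrative.

% A GP-style expression tree: {function, child, child}; leaves are variable
% names (terminals) or numeric constants. The tree below encodes the purely
% illustrative expression x1*x2 + 0.5*x3.
tree   = {@plus, {@times, 'x1', 'x2'}, {@times, 0.5, 'x3'}};
sample = struct('x1', 3.0, 'x2', 1.0, 'x3', 0.25);
y      = evalTree(tree, sample)        % -> 3.125

function v = evalTree(node, vars)
% Recursively evaluate an expression tree for one sample of terminal values.
    if iscell(node)
        v = node{1}(evalTree(node{2}, vars), evalTree(node{3}, vars));
    elseif ischar(node)
        v = vars.(node);               % leaf: variable (terminal)
    else
        v = node;                      % leaf: numeric constant
    end
end

During evolution, crossover and mutation act directly on such trees, exchanging or replacing subtrees rather than adjusting the coefficients of a fixed model.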



Fig. 1. Robust configuration of the TEAM 22 problem [1].
Fig. 2. Case study methodology flowchart.

GP algorithms start with an initial population of randomly generated mathematical expressions composed of functions and terminals. Each expression in the population is evaluated by how well it performs in the particular problem environment. A fitness measure is computed as the sum or average of the error between the output produced by the expression and the reference value over n samples called fitness cases. In this paper, the fitness measure is the Mean Squared Error (MSE) with the fitness scaling proposed in [6]. The output of the GP algorithm is the best obtained analytical expression, composed of basic functions, constants, and the pertinent variables associated with the problem.

To ensure that GP can be used in a specific problem, the closure and sufficiency properties must be satisfied [5]. The first property requires that each function be able to handle all values it might receive as input. The second states that combinations of functions and terminals must be capable of representing solutions to the problem.

The primary control parameters in GP are the population size; the maximum number of generations; the maximum height of trees for expressions created during the run and for the initial random expressions; the generative method for the initial random population; the selection method; and the probabilities of crossover and mutation. The criterion used to terminate a GP run is typically the maximum number of generations or the maximum number of fitness evaluations.
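As a concrete illustration of this fitness measure, the following Matlab sketch computes the MSE after linear scaling in the spirit of [6]; it is our own minimal implementation with illustrative names, in which the raw program output is rescaled by the least-squares slope and intercept before the error is computed.

function fit = scaledMSE(y, t)
% Fitness of one GP individual: mean squared error after linear scaling [6].
% y: raw outputs of the evolved expression on the fitness cases; t: targets.
    b = sum((t - mean(t)) .* (y - mean(y))) / sum((y - mean(y)).^2);  % slope
    a = mean(t) - b * mean(y);                                        % intercept
    fit = mean((t - (a + b * y)).^2);    % lower is better (minimization)
end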

III. METHODOLOGY

In this paper, the TEAM 22 problem is used as a case study [7], [8]. There are different formulations of this problem in the literature. We consider the robust constrained multi-objective model presented in [1], which takes the uncertainties in the design parameters into account. Fig. 1 shows the robust TEAM 22 problem configuration. The inner-coil parameters are fixed, and the outer-coil design parameters (radius, height, and thickness) must be optimized by considering the worst case of additive uncertainties, which represent, for example, rounding or measurement errors. The mathematical description of the problem is given in [1]. It is modeled as a minimization problem with two objective functions and one constraint. The first objective function computes the stray field, evaluated along 22 points on lines a and b as shown in Fig. 1 [7]. The second objective function measures the percent deviation of the computed energy from 180 MJ. The constraint ensures that the magnetic field does not violate a certain physical condition that guarantees superconductivity.

In [1], the two objective functions and the constraint are computed by FEM simulations. Our proposal is to use GP to obtain surrogate functions for them in terms of the design parameters, and then to replace the FEM simulations by these surrogates. This is carried out in the same MOGA proposed in [1] to solve the robust version of the TEAM 22 problem, so that we can compare the performance of the MOGA using FEM (the baseline algorithm) with that of the MOGA assisted by surrogate functions. The case study methodology flowchart is shown in Fig. 2.

The MOGA and GP algorithms were implemented in Matlab. The sample data were obtained using the finite element method (first-order triangular elements). We performed 450 simulations with random values for the three design parameters drawn from the ranges [2.6, 3.4], [0.408, 2.2], and [0.1, 0.4], respectively. The data were divided into training (360 samples) and validation (90 samples) sets. All computational experiments were performed on a Core i3 CPU running at 2.13 GHz with 4 GB of RAM.

To assess the statistical significance and reliability of the surrogate model, we used the Wilcoxon rank sum test. The Matlab function ranksum, with a confidence level of 95%, was used to perform the hypothesis test; it indicates that there is no significant difference between two sets of data if the p-value of the test is greater than or equal to 0.05.

IV. RESULTS

A. Assembling the Surrogate Model

The GP algorithm was run ten times using the parameters listed in Table I. The training and validation results for the best individual (lowest training MSE) of the best and worst runs are presented in Tables II, III, and IV, together with the mean and standard deviation over the best individuals of the ten runs. For each case, the MSE, the coefficient of determination R², and the p-value of the test are reported.
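For concreteness, the following Matlab lines sketch how such validation statistics can be computed for one candidate surrogate; the variable names and the dummy data are ours (yVal stands for the FEM reference values on the 90 validation samples and yHat for the corresponding surrogate predictions).

% Dummy data so the example runs stand-alone; in the study, yVal comes from
% FEM and yHat from the GP surrogate evaluated on the validation samples.
yVal = rand(90, 1);
yHat = yVal + 0.01 * randn(90, 1);

mseVal = mean((yVal - yHat).^2);                                  % mean squared error
R2     = 1 - sum((yVal - yHat).^2) / sum((yVal - mean(yVal)).^2); % coefficient of determination
p      = ranksum(yVal, yHat);   % Wilcoxon rank sum test (Statistics Toolbox)
ok     = p >= 0.05;             % no significant difference at the 95% confidence level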


TABLE I GP PARAMETERS


TABLE IV GP RESULTS FOR

TABLE II GP RESULTS FOR

Fig. 3. The original and surrogate robust frontiers.

TABLE III GP RESULTS FOR

We highlight that all of the obtained expressions are statistically valid according to the Wilcoxon rank sum test (p-value ≥ 0.05) and have R² very close to 1.

B. Solving the Robust TEAM 22 Problem

In this study, the MOGA proposed in [1] is used to solve the robust TEAM 22 problem. It uses binary coding, one-cut-point crossover, bit-change mutation, roulette selection, a niche technique, and elitism. The crossover and mutation rates are 1 and 0.001, respectively, and there are 50 individuals in the population and 50 generations, as in [1].

The experiments solved the robust TEAM 22 problem using the MOGA with 20 samples for the uncertainty parameters. These samples are used to compute the worst-case approximation for a particular solution; for more details, see [1]. Since the worst case induced by the uncertainty parameters can be better computed using a larger number of samples, we chose 20, which is the maximum value used in [1]. The additive uncertainties on the three design parameters are drawn from the ranges [0, 0.03], [0, 0.01], and [0, 0.01], respectively.

In [1], only FEM simulations were used to calculate the optimization functions in the MOGA. In this paper, the GP surrogate models are used in all generations of the MOGA except the last one, in which the individuals are evaluated by FEM. The best surrogate models obtained by GP were selected to assist the MOGA in the computation of the objective and constraint functions. In our proposal, the MOGA assisted by GP is run several times to achieve the surrogate robust frontier. After each run, all nondominated solutions (regarding the objective values) are stored in an archive; during the archive update, any dominated individuals are removed (sketched below). The surrogate robust frontier is formed by the archive solutions after all runs. In this study, five runs were performed.

Fig. 3 shows the comparison between the surrogate and original frontiers obtained for the robust TEAM 22 problem. The results indicate that our approach attains a surrogate robust frontier close to that obtained by the baseline algorithm. We emphasize that, even though most of the surrogate robust solutions are dominated by solutions in the original robust frontier, both frontiers lead to good-quality robust minimizers. To verify this point, at the end of this section we show that the solutions selected from both frontiers are comparable in terms of relevant criteria such as the volume of material and the stored energy. The selection of the solutions is performed using the same filtering procedure described in [1] and [10] to identify one robust minimizer as the solution to the TEAM 22 problem.
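As a minimal Matlab sketch of the archive update mentioned above (our own illustration, with illustrative names): after each run the new nondominated solutions are merged into the archive and any dominated rows are discarded.

function archive = updateArchive(archive, newSolutions)
% Merge new solutions into the archive and keep only nondominated rows.
% Each row holds one solution's objective values [f1 f2]; both are minimized.
    P = [archive; newSolutions];
    keep = true(size(P, 1), 1);
    for i = 1:size(P, 1)
        for j = 1:size(P, 1)
            % j dominates i: no worse in all objectives, strictly better in one
            if j ~= i && all(P(j,:) <= P(i,:)) && any(P(j,:) < P(i,:))
                keep(i) = false;
                break;
            end
        end
    end
    archive = P(keep, :);
end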


Another relevant issue to analyze is the computational cost, which includes the cost of the finite element computations, the construction of the surrogate model, and the optimization process. The first cost is incurred by the 450 FEM simulations needed to feed the GP algorithm; each evaluation takes 3.5 seconds on average, so this cost is estimated at 1575 s. The second cost involves the GP algorithm, which takes 2700 s on average to assemble each surrogate optimization function, giving an estimated cost of 8100 s. The third cost is the time spent by the MOGA. According to [9], this cost is usually negligible when surrogate functions are used; however, our approach performs FEM simulations in the last generation, and the cost of these simulations must be counted. Our approach performs 5000 such FEM evaluations (50 individuals, 20 uncertainty samples, and 5 runs), for an estimated cost of 17500 s. Hence, the total computational cost of the MOGA assisted by GP surrogate functions is estimated at 27175 s, and a total of 5450 FEM evaluations were required. For the baseline MOGA, which uses only FEM to compute the optimization functions, the first and second costs do not apply, but the third cost involves 50000 FEM simulations (50 individuals, 20 uncertainty samples, and 50 generations), for an estimated cost of 175000 s. As a result, the MOGA assisted by the GP surrogate model spends about 15.5% of the CPU time and requires 10.9% of the FEM evaluations needed by the baseline MOGA. Table V summarizes these results.
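For readers who want to trace these figures, a short Matlab-style bookkeeping of the estimates above; all numbers are taken from the text, with 3.5 s as the average time of one FEM evaluation.

t_fem      = 3.5;                    % average time of one FEM simulation [s]
cost_data  = 450 * t_fem;            % sampling for GP training/validation -> 1575 s
cost_gp    = 3 * 2700;               % assembling the three surrogates     -> 8100 s
fem_last   = 50 * 20 * 5;            % last-generation FEM calls, 5 runs   -> 5000
cost_moga  = fem_last * t_fem;       %                                     -> 17500 s
cost_total = cost_data + cost_gp + cost_moga;   % -> 27175 s
fem_total  = 450 + fem_last;                    % -> 5450 FEM evaluations

cost_base  = 50 * 20 * 50 * t_fem;   % baseline MOGA: 50000 FEM calls -> 175000 s
ratio_time = cost_total / cost_base;       % ~0.155 (about 15.5% of the CPU time)
ratio_fem  = fem_total / (50 * 20 * 50);   % 0.109 (10.9% of the FEM evaluations)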

TABLE V COMPARISON SUMMARY

Finally, the results of the optimization process must be filtered to identify one robust minimizer as the solution to the TEAM 22 problem. As in [1] and [10], we discarded the solutions that violate the magnetic flux density condition and chose, among the remaining ones, the one with the smallest volume of material. Thus, from the surrogate robust frontier, we selected the robust minimizer

(1)

which yields a stored energy of 181.41 MJ. For comparison, from the original robust frontier in [1], and using the filtering procedure explained previously, the selected solution was

(2)

which yields a stored energy of 180.36 MJ. The robust minimizers (1) and (2) result in very similar volumes of material, with a slight advantage to solution (1). In addition, the stored energy of solutions (1) and (2) deviates by less than 1% from the 180 MJ target: 181.41 MJ (0.78%) and 180.36 MJ (0.2%), respectively.
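A minimal Matlab sketch of this filtering step (our own illustration; the variable names, the dummy data, and the flux density limit Blimit are placeholders, since the actual quench condition is the one given in [1] and [10]):

% Pick one robust minimizer from a frontier of candidate solutions.
Bmax   = [4.1 4.8 5.2 4.5];     % peak flux density of each frontier solution [T] (dummy)
volume = [12.0 10.5 9.8 11.2];  % material volume of each solution (dummy values)
Blimit = 5.0;                   % placeholder threshold, not taken from this paper

vol = volume;
vol(Bmax > Blimit) = Inf;       % discard solutions violating the flux density condition
[~, selected] = min(vol);       % index of the chosen robust minimizer (smallest volume)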

V. CONCLUSION

Our proposal was tested on the robust TEAM 22 benchmark problem, and its performance was compared with results presented in the literature. The GP technique proved useful for constructing reliable surrogates that assisted the MOGA in the optimal design of an electromagnetic device under uncertainty. The MOGA assisted by GP was able to achieve a robust frontier close to the original frontier and to provide a robust solution in agreement with that found in the literature. In addition, the use of the GP surrogate model achieved significant CPU time savings and a considerable reduction in the number of computational simulations. The presented methodology is extensible to other problems in diverse fields, and we hope that our findings will motivate further research in robust approaches assisted by GP.

ACKNOWLEDGMENT

This work was supported by the Brazilian agencies CAPES and CNPq.

REFERENCES

[1] G. L. Soares, R. L. S. Adriano, C. A. Maia, L. Jaulin, and J. A. Vasconcelos, "Robust multi-objective TEAM 22 problem: A case study of uncertainties in design optimization," IEEE Trans. Magn., vol. 45, no. 3, pp. 1028–1031, Mar. 2009.
[2] G. Avigad and J. Branke, "Embedded evolutionary multi-objective optimization for worst case robustness," in Proc. GECCO'08, 2008, pp. 617–624.
[3] G. L. Soares, F. G. Guimarães, C. A. Maia, J. A. Vasconcelos, and L. Jaulin, "Interval robust multi-objective evolutionary algorithm," in Proc. IEEE Congress on Evolutionary Computation, Trondheim, Norway, 2009, pp. 1637–1643.
[4] J. R. Koza, Genetic Programming—On the Programming of Computers by Means of Natural Selection. Cambridge, MA: MIT Press, 1992.
[5] R. Poli, W. B. Langdon, and N. F. McPhee, A Field Guide to Genetic Programming, 2008. [Online]. Available: http://www.gp-field-guide.org.uk
[6] M. Keijzer, "Scaled symbolic regression," Genetic Programming and Evolvable Machines, vol. 5, pp. 259–269, 2004.
[7] P. Alotto, A. V. Kuntsevich, C. Magele, G. Molinari, C. Paul, M. Repetto, and K. Richter, "Multiobjective optimization in magnetostatics: A proposal for a benchmark problem," IEEE Trans. Magn., vol. 32, pp. 1238–1241, 1996.
[8] P. Alotto, U. Baumgartner, F. Freschi, M. Jaindl, A. Kostinger, C. Magele, W. Renhart, and M. Repetto, "SMES optimization benchmark extended: Introducing Pareto optimal solutions into TEAM 22," IEEE Trans. Magn., vol. 44, pp. 1066–1069, 2008.
[9] L. Lebensztajn, C. A. R. Marretto, M. C. Costa, and J.-L. Coulomb, "Kriging: A useful tool for electromagnetic device optimization," IEEE Trans. Magn., vol. 40, no. 2, pp. 1196–1199, Mar. 2004.
[10] F. G. Guimarães, F. C. F. Pinto, R. R. Saldanha, H. Igarashi, and J. A. Ramírez, "A multi-objective proposal for the TEAM benchmark problem 22," IEEE Trans. Magn., vol. 42, no. 4, pp. 1471–1474, 2006.