A Hybrid Local Search Operator for Multiobjective Optimization

Alan Díaz-Manríquez, Gregorio Toscano-Pulido and Ricardo Landa-Becerra
Information Technology Laboratory, CINVESTAV-Tamaulipas
Parque Científico y Tecnológico TECNOTAM, Km. 5.5 carretera Cd. Victoria-Soto La Marina, Cd. Victoria, Tamaulipas 87130, MÉXICO
({adiazm,gtoscano}@tamps.cinvestav.mx)

Abstract—In recent years, the development of hybrid approaches to solve multiobjective optimization problems has become an important trend in the evolutionary computation community. Although hybrids of mathematical programming techniques and multiobjective evolutionary algorithms are not very popular, when both fields are successfully coupled the results are impressive. A main drawback of this sort of hybridization, however, is that the mathematical approach must be executed several times in order to obtain a sample of the Pareto front, which raises the number of fitness function evaluations. The use of surrogate models has become a recurrent approach to diminish the number of function evaluations. In this work, a hybrid operator that transforms the original multiobjective problem into a set of modified goal programming models is proposed. Furthermore, a local surrogate model is used instead of the real function inside the hybrid operator, and the goal programming model with the surrogate is optimized by a direct search method. Additionally, a standalone algorithm that uses the hybrid operator is proposed. The new algorithm is validated using several test problems and performance measures commonly adopted in the specialized literature. Results indicate that the proposed operator gives rise to an effective algorithm, which produces results competitive with those obtained by two well-known multiobjective evolutionary algorithms.

I. INTRODUCTION

The simultaneous satisfaction of a set of conflicting objective functions arises in many engineering and science areas. For this reason, the operations research community has developed several approaches to tackle this sort of problem [1]. These approaches can produce high-quality solutions under certain conditions, even on some difficult problems [2, 3]. However, they also have some known issues: they usually produce a single solution, so the algorithm must be executed several times in order to produce a desired number of trade-off solutions (requiring a high number of objective function evaluations). Additionally, they usually assume conditions that are rarely met in practice. Evolutionary Algorithms (EAs) are population-based techniques that perform a multidimensional search, finding more than one solution within a single execution. Furthermore, EAs are able to locate solutions close to the global optimum even in highly rugged search spaces. It is therefore no coincidence that EAs have been successfully applied to an important variety of difficult multiobjective optimization problems (MOPs). However, despite their success, when optimizing computationally expensive real-world problems even these approaches can

fail when the number of fitness function evaluations required to give acceptable results exceeds the available computation time. Therefore, the EA community has proposed several ways to improve the efficiency of existing approaches. Some of the major developments toward this goal are hybrid algorithms [4]–[9], informed operators [10]–[14] and the use of surrogate models [15]–[19]. Among the hybrid approaches, it is well worth noting that coupling mathematical programming with MOEAs has produced successful approaches. For example, Landa et al. [20] successfully dealt with difficult bi-objective optimization problems by coupling an ε-constraint approach with an evolutionary algorithm. Ranjithan et al. proposed CMEA [21], which also hybridizes the ε-constraint method with an evolutionary algorithm; however, the authors reduce the computational cost of each independent optimization by recycling the final population of each optimization as the initial population of the next one. Baykasoglu et al. coupled a basic tabu search algorithm with a goal programming method [22]; they state that every optimization technique which works with more than one solution can effectively be used for solving MOPs, and their results were similar to previously reported ones. Gen et al. showed that a genetic algorithm can be used for solving goal programming models [23]. One common issue of the previous approaches is the excessive number of evaluations needed to obtain a set of trade-off solutions. Therefore, better approaches may be obtained by adopting surrogate models in order to diminish the number of function evaluations needed to produce acceptable results. In this work, we propose a hybrid local search approach that can be seen as an asexual hybrid operator that transforms the original multiobjective problem into a set of modified goal programming models. These models are then optimized using a numerical method (Nelder-Mead). The use of Nelder-Mead would, however, drastically increase the number of function evaluations; therefore, we decided to replace the real function with local surrogate models. Finally, to validate the hybrid operator, a standalone algorithm is proposed. The remainder of this work is organized as follows. Details of the hybrid operator and the standalone algorithm are given in Section II. A comparative study with respect to other

algorithms is presented in Section III. Finally, conclusions and future work are given in Section IV.

II. HYBRID OPERATOR

The hybrid operator proposed here can be an alternative or a complement to the traditional genetic variation operators. Additionally, we call our approach a local search approach, since our original motivation was to develop an algorithm that, after receiving a single solution, could generate new nondominated solutions. The single solution thus enters an improvement cycle conducted by a modified goal programming technique.

This distance, however, needs to be penalized if the obtained solution lies on the opposite side of the desired direction. For example, Figure 2 supposes that the solution to be improved is A and the goal direction is d1. If B is evaluated, this solution will not be penalized because it is located on the same side as the search direction, so the evaluation of B is only its distance to the search direction, distB. Conversely, if the solution to be evaluated is C, it needs to be penalized because it is located on the opposite side, and its evaluation is its distance to the search direction, distC, plus its distance to the perpendicular of the search direction, dist'C; that is, F(C) = distC + dist'C.
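This evaluation scheme can be sketched for the bi-objective case as follows; the function names and the sign convention chosen for the directed distance are illustrative assumptions, not taken from the paper:

```python
def directed_distance(p, a, u):
    """Signed distance from point p to the line through a with unit direction u
    (z-component of the 2-D cross product u x (p - a))."""
    return u[0] * (p[1] - a[1]) - u[1] * (p[0] - a[0])

def modified_gp_eval(p, a, u):
    """Evaluate objective vector p against the goal direction u anchored at a.
    Points on the 'good' side pay only their distance to the direction line;
    points on the opposite side also pay the perpendicular penalty."""
    # Perpendicular direction chosen (assumption) so that points ahead of `a`
    # along u get a negative signed distance and are therefore not penalized.
    perp = (-u[1], u[0])
    d = abs(directed_distance(p, a, u))
    d_perp = directed_distance(p, a, perp)
    return d if d_perp < 0 else d + d_perp

# A = (1, 1) with an upward goal direction d1 = (0, 1), as in Figure 2:
A, d1 = (1.0, 1.0), (0.0, 1.0)
print(modified_gp_eval((1.5, 2.0), A, d1))  # B, same side: only distB = 0.5
print(modified_gp_eval((1.5, 0.0), A, d1))  # C, opposite side: 0.5 + 1.0 = 1.5
```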

A. Modified goal programming

The original goal programming (GP) method was expressly created to handle multiobjective problems in which the Decision Maker (DM) specifies optimistic aspiration levels (goals) for each objective function that he/she wishes to attain. Thus, the GP formulation tries to minimize the absolute deviation from the goals to the objectives. The simplest form of this method may be formulated as follows:

min Σ_{i=1}^{k} |fi(x) − Gi|,  subject to x ∈ Q    (1)

where Gi denotes the goal set by the decision maker for the ith objective function fi(x), the goal vector is denoted by G = [G1, G2, . . . , Gk]^T, and Q represents the feasible region. The criterion is thus to minimize the sum of the absolute differences between the goal values and the currently achieved values. However, we would like to direct the search toward specific regions of the search space. Therefore, a modified goal programming (modified-GP) method is here proposed. In the modified-GP, the DM assigns a goal that can be seen as a direction in which any solution represents an improvement with respect to convergence and/or diversity in the original multiobjective problem. Figure 1 shows the difference between the GP and the modified-GP: the goal vector restricts the search to a defined location (goal) in the objective space, while the goal direction allows the approach to search for a solution in a suitable direction.

Fig. 1. Difference between the goal vector and the goal direction
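For reference, the classical GP objective of Eq. (1) is simply a sum of absolute deviations; a minimal sketch (the objective and goal values below are hypothetical):

```python
def gp_objective(objectives, goals):
    """Classical goal programming objective, Eq. (1): sum of absolute
    deviations between achieved objective values and their goals."""
    return sum(abs(f - g) for f, g in zip(objectives, goals))

# Hypothetical bi-objective evaluation f(x) = (f1, f2) with goals (0, 0):
print(gp_objective((1.0, 1.0), (0.0, 0.0)))  # 2.0
```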

In the modified-GP, the goal direction can be seen as a straight line; therefore, the objective function tries to minimize the distance to the goal direction, penalizing solutions that lie on the opposite side of the desired direction.

Fig. 2. Example of evaluation of solutions in modified-GP

The modified-GP may be formulated as follows: min g(x), where:

g(x) = { |DIST(f(x), di)|,                     if DIST(f(x), d'i) < 0
       { |DIST(f(x), di)| + DIST(f(x), d'i),   otherwise

where DIST refers to the directed distance from a point to a line, di is the goal direction, and d'i is the direction perpendicular to the goal direction di.

B. Goal directions

The key feature required for the hybrid operator is the improvement of existing solutions. To achieve this purpose, four goals for the classical GP are proposed; they are shown in Figure 3. Goals G2 and G3 clearly improve convergence, whilst goals G1 and G4 can improve diversity. However, we would like to direct the search in specific directions. Therefore, we propose to replace the goal vector with goal directions. Figure 4 shows the four directions proposed in the modified-GP. The goal directions d2 and d3 were proposed in order to surpass G2 and G3, respectively. It is worth noticing, however, that we are not interested in merely reaching G2 and G3, but in using their directions to go beyond their positions. Therefore, these directions emphasize the search for solutions that improve the algorithm's convergence. It is certainly more difficult to create directions that pass through G1 and G4, because the degree of inclination of each direction would be needed. In order to avoid this inclination parameter, the up

(d1) and reverse (d4) directions were proposed. The goal G1 can then be achieved with the collaboration of the up direction (d1) and d2, and the goal G4 with the collaboration of the reverse direction (d4) and d3. Figure 5 shows how a combination of d4 and d3 can achieve the G4 goal.

Fig. 3. Goals proposed for a classical Goal Programming

Fig. 4. Search directions of the hybrid operator

Fig. 5. Example of the combination of goal directions

Since each direction is seen as a straight line, it can be represented with an equation of the form Ax + By + C = 0. Hence, we have:

d1 = d3 = y − f2(x)    (2)

d2 = d4 = x − f1(x)    (3)

where f1(x) and f2(x) are the objective values of the solution to be improved. Since d3 lies along the same line as d1, and similarly d2 along the same line as d4, only two equations are needed; the modified-GP then penalizes solutions according to their search direction. With this procedure, the hybrid operator creates four new solutions on each execution.

C. Optimization of the modified-GP with Nelder-Mead

EAs have been successfully applied to solve difficult optimization problems. However, since we intend to use the proposed operator as a local search operator, an additional search approach is needed to reach the aspiration points. In this work, we decided to use Nelder-Mead [24], since it is a reliable and efficient direct search method [25]. The main disadvantage of this technique, however, is the number of evaluations performed on each execution. For this reason, we replaced the original function f(x) with a local surrogate model s(x), and only the new solutions resulting from the optimization process (conducted with Nelder-Mead) are evaluated with the original function f(x). To create the local surrogate model, it is necessary to maintain a repository R of previously evaluated solutions. Then, in order to select the solutions used to build the surrogate model, we apply a simple clustering algorithm (k-means)¹ to R, so that we can find the solutions that belong to the same cluster (region) as the parent solution x. Algorithm 1 shows the procedure of the hybrid operator. Díaz et al. [26] performed a comparative study of several surrogate models; according to their results, the Radial Basis Function (RBF) is a surrogate model with robust behavior, so we adopted it for the experimentation in this article.

Algorithm 1 Hybrid-Operator(x, R)
Input: Solution x, repository R
Output: New solutions NS
NS = ∅
s(x) = Local-Surrogate-Model(x, R)
d = Create search directions
for each search direction di do
  Y ← Solve optimization problem for di in s(x)
  NS ← NS ∪ Y  {Add the solution to the new solutions created}
end for

¹The clustering algorithm is performed in the variable space.

D. A standalone algorithm for the hybrid operator

In order to evaluate the effectiveness of the proposed operator, a simple standalone algorithm based on the hybrid operator was proposed. The algorithm works as follows: first, an initial population P0 of size |P| is created using a latin hypercube² [27]. Then, the main procedure of the algorithm begins with the selection of the solutions that our hybrid operator will try to improve: β solutions are selected in P according to their dominance rank and their crowding distance, and the hybrid operator is applied to them. The resulting solutions are stored in P. Since P is a fixed-size archive, dominance rank and crowding distance are applied again in order to maintain the archive size. The standalone algorithm pseudocode is shown in Algorithm 2.

²A latin hypercube is a statistical method for stratified sampling that can be applied to multiple variables.
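The selection by dominance rank and crowding distance described above can be sketched as follows; the naive O(n²) ranking and the helper names are simplifications for illustration, not the paper's implementation:

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def dominance_ranks(objs):
    """Naive O(n^2) nondominated sorting: rank 0 is the nondominated front."""
    remaining, ranks, r = set(range(len(objs))), {}, 0
    while remaining:
        front = {i for i in remaining
                 if not any(dominates(objs[j], objs[i]) for j in remaining if j != i)}
        for i in front:
            ranks[i] = r
        remaining -= front
        r += 1
    return ranks

def crowding_distances(objs, front):
    """Crowding distance of each index in `front` (larger = less crowded)."""
    dist = {i: 0.0 for i in front}
    for k in range(len(objs[0])):
        order = sorted(front, key=lambda i: objs[i][k])
        lo, hi = objs[order[0]][k], objs[order[-1]][k]
        dist[order[0]] = dist[order[-1]] = float("inf")  # keep the extremes
        if hi == lo:
            continue
        for a, b, c in zip(order, order[1:], order[2:]):
            dist[b] += (objs[c][k] - objs[a][k]) / (hi - lo)
    return dist

def select_parents(objs, beta):
    """Pick beta indices, preferring low rank, then high crowding distance."""
    ranks = dominance_ranks(objs)
    dist = {}
    for r in set(ranks.values()):
        dist.update(crowding_distances(objs, [i for i in ranks if ranks[i] == r]))
    return sorted(range(len(objs)), key=lambda i: (ranks[i], -dist[i]))[:beta]

# Three mutually nondominated points plus one dominated by (0.5, 0.5):
objs = [(0.1, 0.9), (0.5, 0.5), (0.9, 0.1), (0.6, 0.6)]
print(select_parents(objs, 3))  # the three rank-0 indices {0, 1, 2}
```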

Algorithm 2 Standalone Algorithm
Input: |P| (population size), tmax (number of iterations), β (number of solutions to which the operator is applied)
Output: Ptmax (final population)
1: P0 ← GenerateRandomPopulation()
2: for t ← 0 to tmax do
3:   S ← Choose β solutions of Pt (with dominance rank and crowding distance)
4:   for j ← 1 to |S| do
5:     NewSolutions ← Hybrid-Operator(Sj, Pt)
6:     NS ← NS ∪ NewSolutions
7:   end for
8:   Pt ← Pt ∪ NS
9:   F ← fast-non-dominated-sorting(Pt)
10:  Pt+1 = ∅
11:  repeat
12:    Assign crowding distance on Fi
13:    Pt+1 ← Pt+1 ∪ Fi
14:  until |Pt+1| + |Fi| ≤ |Pt|
15:  Assign crowding distance on Fi
16:  Pt+1 ← Pt+1 ∪ (Choose the first (|P| − |Pt+1|) individuals of Fi)
17: end for

III. DESIGN OF EXPERIMENTS AND COMPARISON OF RESULTS

The proposed approach was evaluated using six test functions: the well-known ZDT test suite [28] and the Kursawe test problem [29]. In order to assess its performance, we adopted three performance measures from the specialized literature.

1) The Averaged Hausdorff Distance (∆p) [30] between the obtained Pareto front (PF) and the true Pareto front PF* is composed of slight modifications of the generational distance and the inverted generational distance (called GDp and IGDp), which leads to a fairer indicator. The GDp performance indicator is stated as follows:

GDp = ( (1/n) Σ_{i=1}^{n} d_i^p )^{1/p}    (4)

The IGDp is similar to the GDp but uses the true Pareto front as reference; that is, it compares each element of the true Pareto front with respect to the front produced by an algorithm. Finally, the ∆p indicator is the maximum of GDp and IGDp:

∆p = max(GDp, IGDp)    (5)

where p handles the outlier trade-off: the smaller p, the stronger the averaging effect and the lower the influence of single outliers; as p increases, outliers influence the value of ∆p more. In this work, p = 2 was used. For this metric, smaller values of ∆p are preferred, since they represent a result closer to the true Pareto front.

2) Hypervolume (HV). The HV computes the area covered by all the solutions in the approximated Pareto front Q with the help of a reference point W. Equation (6) shows the mathematical definition of HV:

HV = volume( ⋃_{i=1}^{|Q|} v_i )    (6)

where, for each solution i ∈ Q, a hypercube v_i is constructed using the reference point W; the HV is thus the volume of the union of all the hypercubes. Larger values are preferred.

3) Spacing (S). The spacing is computed from the relative distance between consecutive solutions in the approximated Pareto front Q. Equation (7) gives its mathematical definition:

S = sqrt( (1/|Q|) Σ_{i=1}^{|Q|} (d_i − d̄)² )    (7)

where d_i = min_{j∈Q, j≠i} Σ_{m=1}^{k} |f_m^i − f_m^j| is the minimal distance between two solutions and d̄ is the mean of those distances. Small values are preferred for this indicator.

The parameters of the standalone algorithm were hand-tuned using a trial-and-error approach: an initial population of |P| = 100, β = 5 solutions per iteration to which the operator is applied, and 4 clusters for k-means. In order to validate our proposal, we chose the two following approaches: 1) NSGA-II, which is, by far, the most popular Pareto-based MOEA, and 2) MOEA/D, a more recent decomposition-based MOEA which has been found to be very effective. The parameters of NSGA-II and MOEA/D were taken from their original proposals. Both approaches use a population size of 100, a mutation rate of 1/N (where N is the number of variables), a crossover rate of 0.8, a crossover index of 15, and a mutation index of 20. Additionally, MOEA/D uses a neighborhood of size T = 20. Since we would like to investigate the behavior of our proposal with respect to the compared approaches, we measured the online convergence: we performed 31 independent executions and measured ∆p every 100 evaluations during 2,000 function evaluations. Figure 6 shows the results obtained by the compared approaches on all the adopted problems. In these figures, the x-axis shows the number of evaluations whilst the y-axis shows the average performance according to ∆p.
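As an illustration, the ∆p indicator of Eqs. (4)-(5) can be sketched as follows (assuming Euclidean distance to the nearest point of the other set; a sketch, not the benchmarking code used here):

```python
def dist(a, b):
    """Euclidean distance between two objective vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def gd_p(A, R, p=2):
    """GD_p, Eq. (4): p-averaged distance from each point of A to its
    nearest neighbour in the reference set R."""
    return (sum(min(dist(a, r) for r in R) ** p for a in A) / len(A)) ** (1.0 / p)

def delta_p(A, R, p=2):
    """Averaged Hausdorff distance, Eq. (5): max of GD_p and IGD_p,
    where IGD_p is GD_p with the roles of the two sets swapped."""
    return max(gd_p(A, R, p), gd_p(R, A, p))

# A front covering only part of a two-point reference front is penalized
# by the IGD_p side of the indicator:
print(delta_p([(0.0, 0.0)], [(0.0, 0.0), (3.0, 4.0)]))  # sqrt(25/2) ~ 3.5355
```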
On problems ZDT1, ZDT2, ZDT3 and ZDT4, it can be observed that our standalone algorithm outperformed NSGA-II and MOEA/D. From this figure, it is also clear that our standalone algorithm required approximately 400 evaluations on most of the problems to obtain good results, while NSGA-II and MOEA/D failed to approximate the true Pareto front within

TABLE I
HYPERVOLUME (HV) OF THE 3 MOEAS ON THE 6 TEST FUNCTIONS

Problem   NSGA-II       MOEA/D        Standalone
ZDT1      113.79±1.13   107.88±2.78   120.59±0.04
ZDT2      102.95±2.43   101.41±2.80   118.92±2.85
ZDT3      120.51±1.06   114.78±3.04   126.90±1.62
ZDT4      1.94±8.14     41.10±24.35   113.68±13.17
ZDT6      72.81±2.63    102.82±4.40   114.97±1.54
KURSAWE   656.90±3.44   657.16±3.90   661.71±0.36

2,000 evaluations. Moreover, in ZDT6, the performance of MOEA/D was slightly better than that of the standalone algorithm; however, the proposed approach converged faster than MOEA/D. Finally, in Kursawe the performance of all the algorithms was similar. Additionally, the three approaches were executed for 2,000 evaluations and the hypervolume was measured. In order to obtain more reliable results, each MOEA was executed 31 times. The hypervolume values obtained by the three approaches are shown in Table I. From these results, it is easy to see that the proposed approach outperformed the others in all the test problems. It is important to note that, even though ZDT4 is a difficult test problem, the proposed approach managed to outperform the other algorithms. We adopted a third performance measure in order to reach more general conclusions. Spacing (S) is a performance indicator that has been widely used in the specialized literature. Table II shows the results of applying this indicator to the algorithms' outputs. These results show that the dispersion obtained by our proposal was competitive with respect to that obtained by the other approaches in most of the problems. In ZDT6, the spacing reached by our proposal was the worst; however, it should be noted that our algorithm achieved the best convergence with respect to the hypervolume (see Table I). Since a visual comparison can also be helpful, we plotted the median execution with respect to the hypervolume performance measure. The obtained Pareto fronts are shown in Figure 7. From this figure, it is easy to see that the proposed approach could successfully reach the true Pareto front. Additionally, even though ZDT4 was a very difficult test function for MOEA/D and NSGA-II, our proposal was capable of converging to its true Pareto front using only 2,000 evaluations. Finally, the standalone algorithm behaved well on the rest of the problems (including Kursawe). For these reasons, it can be said that the proposed hybrid operator helped the standalone algorithm to accelerate its convergence and to explore different regions of the search space.

TABLE II
SPACING (S) OF THE 3 MOEAS ON THE 6 TEST FUNCTIONS

Problem   NSGA-II     MOEA/D         Standalone
ZDT1      0.05±0.02   0.07±0.03      0.02±0.00
ZDT2      0.08±0.04   0.03±0.05      0.06±0.07
ZDT3      0.06±0.02   0.07±0.04      0.04±0.02
ZDT4      4.62±4.31   41.69±112.49   0.17±0.63
ZDT6      0.15±0.10   0.33±0.25      0.34±0.23
KURSAWE   0.12±0.02   0.30±0.13      0.18±0.08

A. Statistical analysis

Additionally, a statistical analysis was performed in order to verify the statistical differences among the proposed approach, NSGA-II, and MOEA/D. The comparison methodology is as follows: first, the Shapiro-Wilk test was performed to verify the normality of the data distributions. If both samples are normally distributed, the homogeneity of their variances is verified with Bartlett's test; if both variances are homogeneous, an ANOVA test is performed to verify the statistical significance, otherwise Welch's test is used. For non-normally distributed data, the Kruskal-Wallis test was performed. In all the tests, a significance level of 0.05 was used. The results are presented in Tables III and IV. The values marked with "+" indicate a statistically significant difference in favor of our proposal, while the values marked with "-" indicate a difference in favor of the state-of-the-art algorithms; blank values indicate similar performance. The obtained results confirm the above discussion: the proposed approach showed good overall performance on all the instances when both the HV and ∆p performance indicators were measured. In Kursawe, however, NSGA-II and MOEA/D outperformed our proposal in ∆p when few evaluations were performed, because Kursawe can easily be solved within 2,000 evaluations by these algorithms.

IV. CONCLUSIONS

The purpose of this paper was to propose a hybrid operator that combines a mathematical programming technique with a classical numerical method. The adopted mathematical programming technique was the goal programming method; however, to control the search direction, a modified-GP was proposed. This modified-GP was solved using the Nelder-Mead algorithm. In order to decrease the number of evaluations needed by the Nelder-Mead algorithm, the real function was replaced with local surrogate models.
The proposed operator was embedded into a simple standalone algorithm in order to evaluate its performance. The algorithm was compared on six test problems borrowed from the literature. Results showed that the new operator was able to guide the search to the true Pareto front in all the adopted problems. Moreover, the proposed algorithm was compared with NSGA-II and MOEA/D (two popular MOEAs); results showed that while these state-of-the-art algorithms could not converge within 2,000 evaluations, the standalone algorithm reached the Pareto fronts of most of the adopted problems within only approximately 400 evaluations. As future work, we would like to explore the hybridization of the operator here proposed with an evolutionary algorithm.

Fig. 6. ∆p of NSGA-II, MOEA/D and the standalone algorithm in the six test problems (x-axis: number of evaluations, 0–2,000; y-axis: average ∆p).

TABLE III
HV: STATISTICAL ANALYSIS
(Rows: Standalone algorithm/NSGA-II and Standalone algorithm/MOEA/D; columns: 400, 800, 1200, 1600 and 2000 evaluations for each of ZDT1–ZDT6 and KUR. All entries for ZDT1–ZDT6 are "+"; for KUR, seven of the ten entries are "+" and the rest are blank.)

Fig. 7. Pareto fronts for (a) ZDT1, (b) ZDT2, (c) ZDT3, (d) ZDT4, (e) ZDT6 and (f) Kursawe using 2,000 evaluations. Each panel plots f2 against f1 and shows the true Pareto front together with the fronts obtained by NSGA-II, MOEA/D and the standalone algorithm.

TABLE IV
∆p: STATISTICAL ANALYSIS
(Rows: Standalone algorithm/NSGA-II and Standalone algorithm/MOEA/D; columns: 400, 800, 1200, 1600 and 2000 evaluations for each of ZDT1–ZDT6 and KUR. All entries for ZDT1–ZDT6 are "+"; the KUR entries mix "+", "-" and blanks.)

REFERENCES

[1] K. Miettinen, Nonlinear Multiobjective Optimization. Kluwer Academic Publishers, Boston, Massachusetts, 1998.
[2] R. Landa Becerra, "Use of domain information to improve the performance of an evolutionary algorithm," Ph.D. dissertation, Computer Science Department, CINVESTAV-IPN, Mexico City, Mexico, June 2007.
[3] R. L. Becerra and C. A. C. Coello, "Epsilon-constraint with an efficient cultured differential evolution," in GECCO (Companion), 2007, pp. 2787–2794.
[4] X. Hu, Z. Huang, and Z. Wang, "Hybridization of the multi-objective evolutionary algorithms and the gradient-based algorithms," in Proceedings of the 2003 Congress on Evolutionary Computation (CEC'2003), vol. 2. Canberra, Australia: IEEE Press, December 2003, pp. 870–877.
[5] G. Xiaofang, "A hybrid simplex multi-objective evolutionary algorithm based on preference order ranking," in Computational Intelligence and Security (CIS), 2011 Seventh International Conference on, Dec. 2011, pp. 29–33.
[6] A. G. Hernández-Díaz, L. V. Santana-Quintero, C. Coello Coello, R. Caballero, and J. Molina, "A new proposal for multi-objective optimization using differential evolution and rough sets theory," in 2006 Genetic and Evolutionary Computation Conference (GECCO'2006), vol. 1. Seattle, Washington, USA: ACM Press, July 2006, pp. 675–682.
[7] L. V. Santana-Quintero, N. Ramírez-Santiago, C. A. Coello Coello, J. Molina Luque, and A. G. Hernández-Díaz, "A new proposal for multiobjective optimization using particle swarm optimization and rough sets theory," in Parallel Problem Solving from Nature - PPSN IX, 9th International Conference. Reykjavik, Iceland: Springer, Lecture Notes in Computer Science vol. 4193, September 2006, pp. 483–492.
[8] K. Harada, J. Sakuma, and S. Kobayashi, "Uniform sampling of local Pareto-optimal solution curves by Pareto path following and its applications in multi-objective GA," in 2007 Genetic and Evolutionary Computation Conference (GECCO'2007), vol. 1. London, UK: ACM Press, July 2007, pp. 813–820.
[9] O. Schuetze, A. Lara, and C. A. Coello Coello, "Evolutionary continuation methods for optimization problems," in 2009 Genetic and Evolutionary Computation Conference (GECCO'2009). Montreal, Canada: ACM Press, July 2009, pp. 651–658.
[10] S. Jun and K. Shigenobu, "Extrapolation-directed crossover for real-coded GA: overcoming deceptive phenomena by extrapolative search," in Proceedings of the 2001 Congress on Evolutionary Computation (CEC 2001), vol. 1. Seoul, Korea: IEEE Press, 2001, pp. 655–662.
[11] K. Anderson and Y. Hsu, "Genetic crossover strategy using an approximation concept," in Proceedings of the 1999 Congress on Evolutionary Computation (CEC 99), vol. 1. Washington D.C., USA, 1999, p. 533.
[12] Q. Zhou and Y. Li, "Directed variation in evolution strategies," IEEE Transactions on Evolutionary Computation, vol. 7, no. 4, pp. 356–366, Aug. 2003.
[13] Q. Zhang, J. Sun, and E. Tsang, "An evolutionary algorithm with guided mutation for the maximum clique problem," IEEE Transactions on Evolutionary Computation, vol. 9, no. 2, pp. 192–200, April 2005.
[14] A. Talukder, M. Kirley, and R. Buyya, "The Pareto-following variation operator as an alternative approximation model," in IEEE Congress on Evolutionary Computation (CEC '09), May 2009, pp. 8–15.
[15] A. Ratle, "Optimal sampling strategies for learning a fitness model," in Proceedings of the 1999 Congress on Evolutionary Computation, vol. 3, Washington D.C., July 1999, pp. 2078–2085.
[16] M. El-Beltagy and A. Keane, "Evolutionary optimization for computationally expensive problems using Gaussian processes," in Proceedings of the International Conference on Artificial Intelligence. CSREA, 2001, pp. 708–714.
[17] M. Emmerich, A. Giotis, M. Özdemir, T. Bäck, and K. Giannakoglou, "Metamodel-assisted evolution strategies," in Parallel Problem Solving from Nature, ser. Lecture Notes in Computer Science, no. 2439. Springer, 2002, pp. 371–380.
[18] D. Bueche, N. Schraudolph, and P. Koumoutsakos, "Accelerating evolutionary algorithms with Gaussian process fitness function models," IEEE Transactions on Systems, Man, and Cybernetics, Part C, 2004.
[19] R. G. Regis and C. A. Shoemaker, "Local function approximation in evolutionary algorithms for the optimization of costly functions," IEEE Transactions on Evolutionary Computation, vol. 8, no. 5, pp. 490–505, 2004.
[20] R. L. Becerra and C. A. C. Coello, "Solving hard multiobjective optimization problems using epsilon-constraint with cultured differential evolution," in PPSN, 2006, pp. 543–552.
[21] S. R. Ranjithan, S. K. Chetan, and H. K. Dakshina, "Constraint method-based evolutionary algorithm (CMEA) for multiobjective optimization," in Proceedings of the First International Conference on Evolutionary Multi-Criterion Optimization (EMO '01). London, UK: Springer-Verlag, 2001, pp. 299–313.
[22] A. Baykasoglu, S. Owen, and N. Gindy, "A taboo search based approach to find the Pareto optimal set in multiple objective optimization," Engineering Optimization, vol. 31, no. 6, pp. 731–748, 1999.
[23] M. Gen and R. Cheng, Genetic Algorithms and Engineering Optimization. Wiley-Interscience, 1999.
[24] J. C. Lagarias, J. A. Reeds, M. H. Wright, and P. E. Wright, "Convergence properties of the Nelder-Mead simplex method in low dimensions," SIAM Journal on Optimization, vol. 9, pp. 112–147, 1998.
[25] G. E. Box and N. R. Draper, Evolutionary Operation: A Statistical Method for Process Improvement. Wiley, New York, 1969.
[26] A. Díaz-Manríquez, G. Toscano-Pulido, and W. Gomez-Flores, "On the selection of surrogate models in evolutionary optimization algorithms," in Evolutionary Computation (CEC), 2011 IEEE Congress on, June 2011, pp. 2155–2162.
[27] M. D. McKay, R. J. Beckman, and W. J. Conover, "A comparison of three methods for selecting values of input variables in the analysis of output from a computer code," Technometrics, vol. 21, no. 2, pp. 239–245, 1979.
[28] E. Zitzler, K. Deb, and L. Thiele, "Comparison of multiobjective evolutionary algorithms: empirical results," Evolutionary Computation, vol. 8, no. 2, pp. 173–195, Summer 2000.
[29] F. Kursawe, "A variant of evolution strategies for vector optimization," in Parallel Problem Solving from Nature, 1st Workshop (PPSN I), Lecture Notes in Computer Science vol. 496. Springer-Verlag, 1991, pp. 193–197.
[30] O. Schütze, X. Esquivel, A. Lara, and C. Coello, "Using the averaged Hausdorff distance as a performance measure in evolutionary multi-objective optimization," IEEE Transactions on Evolutionary Computation, vol. 16, no. 4, pp. 504–522, Aug. 2012.
