Adaptive Differential Evolution for Multi-objective Optimization

Zai Wang, Zhenyu Yang, Ke Tang⋆⋆, and Xin Yao⋆⋆⋆

Nature Inspired Computation and Applications Laboratory (NICAL), Department of Computer Science and Technology, University of Science and Technology of China, Hefei, Anhui 230027, China
{wangzai,zhyuyang}@mail.ustc.edu.cn, [email protected], [email protected]
http://nical.ustc.edu.cn

Abstract. No existing multi-objective evolutionary algorithms (MOEAs) have ever been applied to problems with more than 1000 real-valued decision variables. Yet the real world is full of large and complex multi-objective problems. Motivated by the recent success of SaNSDE [1], an adaptive differential evolution algorithm that is capable of dealing with more than 1000 real-valued decision variables effectively and efficiently, this paper extends the ideas behind SaNSDE to develop a novel MOEA named MOSaNSDE. Our preliminary experimental studies have shown that MOSaNSDE outperforms state-of-the-art MOEAs significantly on most problems we have tested, in terms of both convergence and diversity metrics. Such encouraging results call for a more in-depth study of MOSaNSDE in the future, especially regarding its scalability.

1 Introduction

Multi-objective optimization problems (MOPs) often involve several incommensurable and competing objectives that need to be considered simultaneously. In the past decade, using evolutionary techniques to tackle MOPs has attracted increasing interest, and a number of effective multi-objective evolutionary algorithms (MOEAs) have been proposed [3,4]. For MOEAs, how to generate new individuals (i.e., what reproduction operator to use) is one of the most important issues. One general approach to devising effective reproduction operators for MOEAs is to adapt advanced single-objective optimization algorithms to MOPs, and there have been several successful attempts in this direction [3,11].

 

⋆ This work is partially supported by the National Natural Science Foundation of China (Grant No. 60428202), the Fund for Foreign Scholars in University Research and Teaching Programs (Grant No. B07033) and an EPSRC Grant (EP/D052785/1) on "SEBASE: Software Engineering By Automated SEarch".
⋆⋆ Corresponding author.
⋆⋆⋆ Xin Yao is also with CERCIA, the School of Computer Science, University of Birmingham, Edgbaston, Birmingham B15 2TT, U.K.


Differential evolution (DE) is a simple yet effective algorithm for single-objective global optimization problems [5]. It conventionally involves several candidate mutation schemes and control parameters, e.g., the population size NP, the scale factor F and the crossover rate CR. These control parameters, as well as the mutation schemes, are usually problem dependent and highly sensitive, which often makes DE difficult to use in practice. To overcome such disadvantages, we proposed a DE variant, namely self-adaptive differential evolution with neighborhood search (SaNSDE), in [1]. Three adaptation mechanisms are utilized in SaNSDE: adaptation of the selection of mutation schemes, and adaptations for controlling the scale factor F and the crossover rate CR. As a result, no parameter fine-tuning is needed in the algorithm. Empirical studies showed that SaNSDE not only significantly outperformed the original DE on standard benchmark problems [1], but also obtained promising performance on large-scale problems with 1000 dimensions [2].

Given SaNSDE's outstanding performance in single-objective optimization [1], it is natural to ask whether it can benefit MOPs as well. For this purpose, we extend SaNSDE in this paper by introducing the Pareto dominance concept into its fitness evaluation. An external archive is also adopted in the proposed algorithm, namely MOSaNSDE, in order to boost its performance. The effectiveness of MOSaNSDE was evaluated by comparing it to three well-known MOEAs on nine benchmark problems.

The rest of this paper is organized as follows. Section 2 summarizes multi-objective optimization problems and the SaNSDE algorithm. Section 3 describes the new MOSaNSDE algorithm. Section 4 presents the simulation results of MOSaNSDE and the comparison with three other competitive MOEAs. Section 5 concludes this paper briefly.

2 Preliminaries

2.1 Multi-objective Optimization Problem

A general multi-objective optimization problem with m conflicting objectives can be described as follows:
\[
\begin{aligned}
\max / \min \quad & y = f(x) = (f_1(x), f_2(x), \ldots, f_m(x)) \\
\text{subject to} \quad & x = (x_1, x_2, \ldots, x_n) \in X, \\
& y = (y_1, y_2, \ldots, y_m) \in Y
\end{aligned}
\tag{1}
\]

where x is the decision vector, X is the decision space, y is the objective vector, and Y is the objective space. As the objectives of an MOP conflict with one another, there may not exist a single solution that is optimal with respect to all objectives. Instead, there is usually a set of Pareto-optimal solutions that are nondominated with respect to one another. These Pareto-optimal solutions together make up the so-called Pareto front. In the context of MOPs, we aim at finding a set of nondominated solutions that achieve both good convergence to the Pareto front and a good distribution along it.
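To make the dominance relation used throughout this paper concrete, the following sketch checks Pareto dominance and extracts the nondominated subset of a set of objective vectors; the minimization convention and the function names are our own illustration, not part of the original formulation:

```python
import numpy as np

def dominates(y_a, y_b):
    """Return True if objective vector y_a Pareto-dominates y_b (minimization).

    y_a dominates y_b if it is no worse in every objective and strictly
    better in at least one objective.
    """
    y_a, y_b = np.asarray(y_a), np.asarray(y_b)
    return bool(np.all(y_a <= y_b) and np.any(y_a < y_b))

def nondominated_indices(points):
    """Indices of the nondominated objective vectors in `points` (N x m array)."""
    points = np.asarray(points)
    return [i for i, p in enumerate(points)
            if not any(dominates(q, p) for j, q in enumerate(points) if j != i)]
```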

2.2 Self-adaptive Differential Evolution with Neighborhood Search (SaNSDE)

Differential evolution (DE), proposed by Storn and Price in [5], is a population-based algorithm which employs random initialization and three reproductive operators (i.e., mutation, crossover and selection) to evolve its population until a stopping criterion is met. Individuals in DE are represented as D-dimensional vectors x_i, ∀i ∈ {1, 2, ..., NP}, where D is the number of decision variables and NP is the population size. The classical DE can be summarized as follows:

– Mutation:
\[
v_i = x_{i_1} + F \cdot (x_{i_2} - x_{i_3}) \tag{2}
\]
where i_1, i_2, i_3 are distinct integers randomly selected from [1, NP], all different from the vector index i, and F is a positive scaling factor.

– Crossover:
\[
u_i(j) = \begin{cases} v_i(j), & \text{if } U_j(0,1) < CR \\ x_i(j), & \text{otherwise,} \end{cases} \tag{3}
\]
where u_i(j) is the value of the j-th dimension of the offspring vector u_i, U_j(0,1) is a uniform random number between 0 and 1, and CR ∈ (0,1) is the crossover rate.

– Selection:
\[
x_i' = \begin{cases} u_i, & \text{if } f(u_i) < f(x_i) \\ x_i, & \text{otherwise,} \end{cases} \tag{4}
\]
where x_i' is the offspring of x_i for the next generation.

Although the original DE performs well on a large variety of problems, it lacks a neighborhood search (NS) operator. Thus, Yang et al. borrowed the idea of neighborhood search from another major branch of evolutionary algorithms, evolutionary programming, and proposed SaNSDE. SaNSDE is similar to the original DE except that Eq. (2) is replaced by the following Eq. (5):
\[
v_i = \begin{cases} x_{i_1} + d_i \cdot |N(0.5, 0.3)|, & \text{if } U(0,1) < SC \\ x_{i_1} + d_i \cdot |\delta|, & \text{otherwise,} \end{cases} \tag{5}
\]
where d_i = (x_{i_2} - x_{i_3}) is the differential variation, N(0.5, 0.3) is a Gaussian random number with mean 0.5 and standard deviation 0.3, and δ denotes a Cauchy random variable with scale parameter t = 1. SC is the selection criterion that decides which random number (Gaussian or Cauchy) should be used; thus the main parameters of SaNSDE are SC and CR, instead of the F and CR of the original DE.

The idea behind SaNSDE is to adapt SC and CR throughout the optimization process via a learning scheme. Concretely, SaNSDE divides the optimization process into several learning periods, each of which consists of a predefined number of generations. Assume the k-th learning period has finished and we need to update SC and CR for the next learning period. Let the numbers of offspring generated with the Gaussian distribution and with the Cauchy distribution that successfully replaced their parents during the k-th learning period be nsg and nsc, respectively, and let the numbers of offspring generated with the Gaussian distribution and with the Cauchy distribution that failed to replace their parents be nfg and nfc. Then SC is updated according to Eq. (6):
\[
SC = \frac{nsg \cdot (nsc + nfc)}{nsc \cdot (nsg + nfg) + nsg \cdot (nsc + nfc)} \tag{6}
\]

In SaNSDE, the value of CR is randomly drawn from a Gaussian distribution with mean CRm (initialized as 0.5) and standard deviation 0.1. Within each learning period, the CR value of each individual is regenerated every five generations. At the end of the learning period, CRm is updated to the average of the CR values of the offspring that successfully survived to the next generation, and the next learning period begins. In [1], the length of each learning period was set to 20 generations; we adopt the same setting in this paper.
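The following sketch illustrates Eq. (5), Eq. (6) and the CR adaptation described above; the function names and the random-number-generator interface are our own, and details such as bound handling and the bookkeeping of nsg, nsc, nfg and nfc over a learning period are omitted:

```python
import numpy as np

def sansde_mutation(pop, i, SC, rng):
    """SaNSDE mutation for individual i, following Eq. (5).

    With probability SC the differential variation is scaled by |N(0.5, 0.3)|,
    otherwise by the absolute value of a Cauchy variate with scale t = 1.
    Returns the mutant vector and the distribution flag (True = Gaussian).
    """
    NP = len(pop)
    i1, i2, i3 = rng.choice([j for j in range(NP) if j != i], size=3, replace=False)
    d_i = pop[i2] - pop[i3]                       # differential variation d_i
    use_gaussian = rng.random() < SC
    scale = abs(rng.normal(0.5, 0.3)) if use_gaussian else abs(rng.standard_cauchy())
    return pop[i1] + scale * d_i, use_gaussian

def update_SC(nsg, nsc, nfg, nfc):
    """Update the selection criterion SC at the end of a learning period, Eq. (6)."""
    return nsg * (nsc + nfc) / (nsc * (nsg + nfg) + nsg * (nsc + nfc))

def sample_CR(CRm, rng):
    """Draw a crossover rate from a Gaussian N(CRm, 0.1), clipped into (0, 1)."""
    return float(np.clip(rng.normal(CRm, 0.1), 1e-6, 1.0))
```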

3 New Algorithm: MOSaNSDE

Algorithm 1. Pseudo-code of MOSaNSDE

1: Set the parent population P = ∅, the external archive A = ∅, and the generation counter t = 0.
2: Initialize the population P with NP individuals, P = {p_1, p_2, ..., p_NP}, and set A = P.
3: while t < t_max (i.e., the termination generation number) do
4:   Update the parameters SC and CR after each learning period.
5:   for i = 1 : NP do
6:     Use SaNSDE to generate an offspring individual c_i based on p_i.
7:     If p_i dominates c_i, c_i is rejected.
8:     If p_i is dominated by c_i, p_i is replaced by c_i and the archive A is updated.
9:     If p_i and c_i are nondominated by each other, the one lying in the less crowded region with respect to A is selected as the new p_i.
10:   end for
11:   Update the archive A.
12:   Set t = t + 1.
13: end while
14: The nondominated solutions in A constitute the final solution set.
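As an informal illustration of steps 5-10 of Algorithm 1, one generation of MOSaNSDE could be organized as follows; `sansde_offspring`, `evaluate`, `dominates`, `less_crowded` and the `archive` object are assumed helpers for this sketch, not part of the original pseudo-code:

```python
def mosansde_generation(pop, objs, archive, sansde_offspring, evaluate,
                        dominates, less_crowded):
    """One MOSaNSDE generation (a sketch of Algorithm 1, steps 5-10).

    pop holds decision vectors and objs their objective vectors.  For each
    parent p_i an offspring c_i is generated with SaNSDE, and the survivor is
    chosen by Pareto dominance, with crowding as the tie-breaker.
    """
    for i in range(len(pop)):
        child = sansde_offspring(pop, i)
        f_child = evaluate(child)
        if dominates(objs[i], f_child):
            continue                                  # offspring rejected
        if dominates(f_child, objs[i]):
            pop[i], objs[i] = child, f_child          # parent replaced
            archive.insert(child, f_child)            # archive updated
        else:
            # mutually nondominated: keep the one in the less crowded region
            pop[i], objs[i] = less_crowded((pop[i], objs[i]),
                                           (child, f_child), archive)
    archive.update()                                  # step 11 of Algorithm 1
    return pop, objs
```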

From the experimental studies in [1], it can be observed that SaNSDE outperforms not only the original DE but also several state-of-the-art DE variants on a set of test problems. The advantages of SaNSDE have also been explained in detail in [1]. Hence, following the success of similar approaches in [3,11], we extend SaNSDE to multi-objective optimization problems. Algorithm 1 presents the pseudo-code of MOSaNSDE. Next, we briefly summarize the major steps of the algorithm. First of all, an initial population is randomly generated according to a uniform distribution, and an external archive A is established to store the nondominated solutions found so far. SaNSDE serves as the reproduction operator to generate new solutions. Both the population and the external archive evolve throughout the optimization process. In each generation, an offspring individual replaces its parent if the former dominates the latter; otherwise, the parent individual is preserved. In case the two individuals are nondominated by each other, the crowding distances [4] of the two solutions with respect to those in the external archive are calculated, and the one with the larger crowding distance survives. The external archive is updated according to several rules.


If a new solution is not dominated by any solution in the archive, it is inserted into the archive. At the same time, those solutions (if any) in the archive that are dominated by the new solution are removed. When the size of the archive exceeds a predefined value, truncation is required: we first calculate the crowding distance of each individual in the archive, then sort the individuals in descending order of crowding distance, and discard those with the smallest crowding distances.
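A possible sketch of the crowding-distance computation (as defined in [4]) and of the archive truncation rule just described is given below; the function names are ours, and the boundary solutions are kept by assigning them infinite distance, following the usual convention:

```python
import numpy as np

def crowding_distance(objs):
    """Crowding distance of each member of an (N x m) objective matrix, as in [4]."""
    objs = np.asarray(objs, dtype=float)
    N, m = objs.shape
    dist = np.zeros(N)
    for k in range(m):
        order = np.argsort(objs[:, k])
        f = objs[order, k]
        dist[order[0]] = dist[order[-1]] = np.inf     # keep boundary solutions
        span = f[-1] - f[0]
        if span > 0 and N > 2:
            dist[order[1:-1]] += (f[2:] - f[:-2]) / span
    return dist

def truncate_archive(archive_objs, max_size):
    """Indices of the members kept after crowding-distance-based truncation."""
    if len(archive_objs) <= max_size:
        return list(range(len(archive_objs)))
    order = np.argsort(-crowding_distance(archive_objs))  # descending distance
    return sorted(order[:max_size].tolist())              # discard the most crowded
```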

4 Simulation Results

4.1 Experimental Settings

We evaluated the performance of the new algorithm on nine test problems (seven bi-objective problems and two 3-objective problems) widely used in the MOEA literature. Three of the bi-objective problems, i.e., SCH, FON and KUR, were first proposed by Schaffer, Fonseca and Kursawe in [6,7,8], respectively. The other four bi-objective problems (ZDT1-3, ZDT6) were proposed by Zitzler et al. in [9]. The two 3-objective problems, DTLZ1 and DTLZ2, were proposed by Deb et al. in [10]. Due to space constraints, we do not list their detailed characteristics in this paper; readers can find their explicit formulations in the original publications.

In the experiments, we compared MOSaNSDE with three other well-known MOEAs: the Nondominated Sorting Genetic Algorithm II (NSGA-II) [4], the Multi-Objective Particle Swarm Optimization (MOPSO) [11], and the Pareto Archived Evolution Strategy (PAES) [12]. These algorithms have been widely used in the MOP literature and provide a good basis for our comparative study. For each compared algorithm, 250 generations were simulated per run on all of the test problems. The parameters of MOSaNSDE were set as follows: population size NP = 50 for bi-objective problems and NP = 150 for 3-objective problems, archive size Nmax = 100 for bi-objective problems and Nmax = 300 for 3-objective problems, and the "learning period" for SC and CR was set to 20 generations for all test problems. NSGA-II uses the real-coded format with a population size of 100 for bi-objective problems and 300 for 3-objective problems, a crossover rate of 0.9 and a mutation rate of 1/n (n is the number of decision variables); the distribution indices were set to ηc = 20 and ηm = 20, the same as the settings in [4]. For MOPSO, the number of particles was set to 50 for bi-objective problems and 150 for 3-objective problems, the size of the repository was 100 for bi-objective problems and 300 for 3-objective problems, and the number of divisions was set to 30. PAES adopted the (1 + λ) scheme with an archive size of 100 for bi-objective problems and 300 for 3-objective problems, and the grid depth was set to 4 for all test problems.

The goals of solving MOPs are twofold: 1) the obtained solutions should converge as closely as possible to the true Pareto-optimal set; 2) the solutions should maintain a certain degree of diversity. Based on these two goals, two metrics have been proposed to measure MOEAs' performance [4]:


– Convergence Metric (γ). This metric calculates the average distance between the obtained nondominated solutions and the actual Pareto-optimal set:
\[
\gamma = \frac{\sum_{i=1}^{N} d_i}{N}
\]
where d_i is the Euclidean distance between the i-th of the N obtained solutions and its nearest neighbor on the actual Pareto-optimal front. A smaller value of γ indicates better convergence.

– Spread Metric (Δ). This metric was proposed by Deb et al. in [4] and measures how well the obtained nondominated solutions are distributed:
\[
\Delta = \frac{\sum_{m=1}^{M} d_m^{e} + \sum_{i=1}^{N-1} |d_i - \bar{d}|}{\sum_{m=1}^{M} d_m^{e} + (N-1)\,\bar{d}}
\]
where d_m^e is the Euclidean distance between the extreme solutions of the obtained set and the boundary solutions of the actual Pareto-optimal set, d_i is the Euclidean distance between two neighboring solutions, and \bar{d} is the mean of all the d_i. As with γ, a smaller value of Δ indicates better performance.
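As an illustration, both metrics can be computed as in the sketch below, given the obtained nondominated objective vectors and a dense sampling of the true Pareto-optimal front; the sorting step in the spread metric assumes a bi-objective front, and all names are our own:

```python
import numpy as np

def convergence_metric(solutions, pareto_front):
    """Convergence metric: mean distance from each obtained solution to its
    nearest neighbor on (a dense sampling of) the true Pareto-optimal front."""
    S = np.asarray(solutions, dtype=float)
    P = np.asarray(pareto_front, dtype=float)
    d = np.linalg.norm(S[:, None, :] - P[None, :, :], axis=2).min(axis=1)
    return float(d.mean())

def spread_metric(solutions, extremes):
    """Spread metric of Deb et al. [4] for a bi-objective front.

    `extremes` holds the two boundary solutions of the true Pareto-optimal set.
    """
    S = np.asarray(sorted(map(tuple, solutions)), dtype=float)  # order along the front
    d = np.linalg.norm(np.diff(S, axis=0), axis=1)              # neighboring gaps d_i
    d_mean = d.mean()
    d_e = (np.linalg.norm(np.asarray(extremes[0]) - S[0]) +
           np.linalg.norm(np.asarray(extremes[-1]) - S[-1]))
    return float((d_e + np.abs(d - d_mean).sum()) / (d_e + (len(S) - 1) * d_mean))
```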

4.2 Results

We ran all the MOEAs 30 times independently and then calculated the means and variances of the two metrics (i.e., the convergence and diversity metrics). The results are presented in Tables 1 and 2. Furthermore, to assess the statistical differences between the results obtained by MOSaNSDE and those obtained by the other three algorithms, we employed the nonparametric Wilcoxon rank-sum test. For each test problem, the Wilcoxon test was carried out between MOSaNSDE and the best of the three compared algorithms. The h values presented in the last rows of Tables 1 and 2 are the results of the Wilcoxon tests, where "1" indicates that the performances of the two algorithms are statistically different with 95% confidence, while h = 0 means they are not statistically different.

From Tables 1 and 2, we can see that MOSaNSDE converged as well as or slightly better than the other three representative algorithms on the two simple test functions (SCH, FON) and a 3-objective problem (DTLZ1). MOSaNSDE achieved the same best result as MOPSO and PAES on ZDT2 with respect to the convergence metric, and all three were much better than NSGA-II. On the other five test functions (KUR, ZDT1, ZDT3, ZDT6 and DTLZ2), MOSaNSDE significantly outperformed the other three algorithms with respect to the convergence metric. Concerning the diversity metric, MOSaNSDE spread significantly better than the other three algorithms on all test functions except SCH and DTLZ1, on which the performances of the four algorithms are comparable.
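For reference, the per-problem significance test could be carried out as in the following sketch, using SciPy's rank-sum test; the 30 per-run metric values of each algorithm are assumed to be available as arrays, and the function name is ours:

```python
import numpy as np
from scipy.stats import ranksums

def wilcoxon_h(metric_runs_a, metric_runs_b, alpha=0.05):
    """Return h = 1 if the two samples of metric values differ significantly
    at the (1 - alpha) confidence level under the Wilcoxon rank-sum test,
    and h = 0 otherwise."""
    _, p_value = ranksums(np.asarray(metric_runs_a), np.asarray(metric_runs_b))
    return int(p_value < alpha)
```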


Table 1. MOEAs compared based on the Convergence Metric (γ) (for each algorithm, the first row gives the mean and the second row the variance over the 30 runs; h is the result of the Wilcoxon test against the best of the three compared algorithms)

            SCH        FON        KUR        ZDT1       ZDT2       ZDT3       ZDT6       DTLZ1      DTLZ2
MOSaNSDE    0.006923   0.002248   0.023275   0.001161   0.001432   0.003181   0.006405   0.075387   0.023694
            2.31E-07   6.47E-08   5.08E-06   2.92E-08   4.54E-08   2.95E-08   6.78E-07   3.67E-03   4.85E-07
NSGA-II     0.008068   0.003165   0.061022   0.072084   0.052184   0.310079   0.415028   0.065297   0.042863
            4.32E-07   6.29E-08   5.75E-05   9.20E-04   3.67E-04   7.34E-03   6.38E-02   6.57E-02   1.36E-05
MOPSO       0.007322   0.002454   0.030052   0.018577   0.0017045  0.130576   0.330672   0.378510   0.186092
            4.28E-07   5.37E-08   2.73E-05   7.23E-05   5.92E-04   5.54E-05   7.73E-01   4.18E-02   7.35E-06
PAES        0.008004   0.002221   0.984901   0.004046   0.001612   0.021562   1.450573   0.096420   0.05796
            5.93E-07   4.67E-08   2.84E-01   7.10E-05   5.39E-07   7.22E-05   3.02E-01   3.26E-03   5.62E-06
h           0          0          1          1          0          1          1          0          1

Table 2. MOEAs compared based on the Diversity Metric (Δ) (for each algorithm, the first row gives the mean and the second row the variance over the 30 runs; h is the result of the Wilcoxon test against the best of the three compared algorithms)

            SCH        FON        KUR        ZDT1       ZDT2       ZDT3       ZDT6       DTLZ1      DTLZ2
MOSaNSDE    0.344681   0.230678   0.382288   0.246235   0.261846   0.497681   0.325468   1.169830   0.553207
            1.33E-03   2.63E-04   1.17E-04   2.83E-03   7.46E-04   2.86E-04   6.07E-02   3.72E-02   9.20E-04
NSGA-II     0.423661   0.397886   0.632615   0.675597   0.957422   0.791458   1.064076   1.569836   0.953721
            4.65E-03   2.12E-03   6.67E-03   1.73E-03   7.82E-02   1.54E-03   3.32E-02   3.92E-02   5.14E-02
MOPSO       0.557639   0.568493   0.586673   0.580741   0.650889   0.543900   0.963582   0.852471   1.352095
            6.10E-04   6.74E-03   2.57E-03   3.65E-03   7.97E-02   1.88E-03   5.22E-04   4.7E-03    6.24E-03
PAES        0.802243   0.571838   0.675707   0.821802   0.839597   0.750043   0.873567   1.069328   0.772964
            2.45E-03   7.05E-03   9.58E-03   6.06E-02   3.39E-02   3.95E-03   9.25E-02   8.17E-03   7.28E-02
h           0          1          1          1          1          1          1          0          1

5 Conclusions

In this paper, we extended our previous single-objective algorithm, SaNSDE, to the multi-objective optimization field and proposed a new MOEA, namely MOSaNSDE. The self-adaptation utilized in SaNSDE makes it possible to control the sensitive parameters of DE via statistical learning experience during evolution. Consequently, MOSaNSDE is also capable of adapting its control parameters effectively. Experimental studies on nine benchmark problems showed that MOSaNSDE performed comparably to or significantly better than three well-known MOEAs, in terms of both convergence and diversity metrics. Recently, scaling up MOEAs to large problems has emerged as the most challenging research topic in the field of evolutionary multi-objective optimization [13,14]. Given SaNSDE's superior performance on high-dimensional single-objective optimization problems, MOSaNSDE might also be a promising tool for MOPs with many decision variables. This issue, as well as scaling up MOSaNSDE to MOPs with many objectives, will be the major focus of our future investigation.


References

1. Yang, Z., Tang, K., Yao, X.: Self-adaptive Differential Evolution with Neighborhood Search. In: Proceedings of the 2008 Congress on Evolutionary Computation, pp. 1110-1116. IEEE Press, Hong Kong (2008)
2. Yang, Z., Tang, K., Yao, X.: Large Scale Evolutionary Optimization Using Cooperative Coevolution. Information Sciences 178, 2985-2999 (2008)
3. Knowles, J.D., Corne, D.W.: Approximating the Nondominated Front Using the Pareto Archived Evolution Strategy. Evolutionary Computation 8, 149-172 (2000)
4. Deb, K., Pratap, A., Agarwal, S., Meyarivan, T.: A Fast and Elitist Multiobjective Genetic Algorithm: NSGA-II. IEEE Transactions on Evolutionary Computation 6, 182-197 (2002)
5. Storn, R., Price, K.: Differential Evolution - A Simple and Efficient Heuristic Strategy for Global Optimization over Continuous Spaces. Journal of Global Optimization 11, 341-359 (1997)
6. Schaffer, J.D.: Multiple Objective Optimization with Vector Evaluated Genetic Algorithms. In: Proceedings of the First International Conference on Genetic Algorithms, pp. 93-100 (1987)
7. Fonseca, C.M., Fleming, P.J.: Multiobjective Optimization and Multiple Constraint Handling with Evolutionary Algorithms - Part II: Application Examples. IEEE Transactions on Systems, Man and Cybernetics, Part A 28, 38-47 (1998)
8. Kursawe, F.: A Variant of Evolution Strategies for Vector Optimization. In: Schwefel, H.-P., Männer, R. (eds.) PPSN 1990. LNCS, vol. 496, pp. 193-197. Springer, Heidelberg (1991)
9. Zitzler, E., Deb, K., Thiele, L.: Comparison of Multiobjective Evolutionary Algorithms: Empirical Results. Evolutionary Computation 8, 173-195 (2000)
10. Deb, K., Thiele, L., Laumanns, M., Zitzler, E.: Scalable Multiobjective Optimization Test Problems. In: Proceedings of the Congress on Evolutionary Computation, pp. 825-830 (2002)
11. Coello, C.A.C., Pulido, G.T., Lechuga, M.S.: Handling Multiple Objectives with Particle Swarm Optimization. IEEE Transactions on Evolutionary Computation 8, 256-279 (2004)
12. Knowles, J.D., Corne, D.W.: The Pareto Archived Evolution Strategy: A New Baseline Algorithm for Pareto Multiobjective Optimization. In: Proceedings of the Congress on Evolutionary Computation, pp. 98-105 (1999)
13. Praditwong, K., Yao, X.: How Well Do Multi-objective Evolutionary Algorithms Scale to Large Problems. In: Proceedings of the 2007 IEEE Congress on Evolutionary Computation (CEC 2007), Singapore, pp. 3959-3966 (2007)
14. Khare, V., Yao, X., Deb, K.: Performance Scaling of Multi-objective Evolutionary Algorithms. In: Fonseca, C.M., Fleming, P.J., Zitzler, E., Deb, K., Thiele, L. (eds.) EMO 2003. LNCS, vol. 2632, pp. 376-390. Springer, Heidelberg (2003)
