Cite this paper as: Garza-Fabre M., Pulido G.T., Coello C.A.C. (2009) Ranking Methods for Many-Objective Optimization. In: Aguirre A.H., Borja R.M., García ...
Ranking Methods for Many-Objective Optimization

Mario Garza-Fabre¹, Gregorio Toscano Pulido¹, and Carlos A. Coello Coello²

¹ CINVESTAV-Tamaulipas, Km. 6 carretera Cd. Victoria-Monterrey, Cd. Victoria, Tamaulipas, 87261, Mexico
  [email protected], [email protected]
² CINVESTAV-IPN, Departamento de Computación, Av. IPN No. 2508, Col. San Pedro Zacatenco, México, D.F. 07360, Mexico
  [email protected]

Abstract. An important issue in Evolutionary Algorithms (EAs) is how to identify the best solutions in order to guide the search process. Comparing the fitness of solutions in single-objective optimization is straightforward, but when dealing with multiple objectives it becomes a non-trivial task. Pareto dominance has been the relation most commonly adopted to compare solutions in a multiobjective optimization context. However, it has been shown that as the number of objectives increases, the convergence ability of approaches based on Pareto dominance decreases. In this paper, we propose three novel fitness assignment methods for many-objective optimization. We also perform a comparative study in order to investigate how effective the proposed approaches are at guiding the search in high-dimensional objective spaces. Results indicate that our approaches behave better than six state-of-the-art fitness assignment methods.

1 Introduction

Evolutionary algorithms (EAs) are metaheuristics inspired by the mechanism of natural selection, and they have been found to be very effective at solving optimization problems. In order to emulate natural selection, it is imperative to implement a function that measures the fitness of the individuals, such that the emulation can favor those individuals that compete most effectively for resources. Usually, when dealing with a single-objective optimization problem, such a fitness function is directly related to the function to be optimized. However, when the problem involves the simultaneous satisfaction of two or more objectives (the so-called "multiobjective optimization problems", or MOPs for short), such a relation is not straightforward, since an additional mechanism is required to map the multiobjective space into a single dimension and thus allow a direct comparison among solutions; this mechanism is known as the fitness assignment process¹ [12]. Pareto dominance (PD) has been the most commonly adopted relation to establish preferences among solutions in Multiobjective Optimization (MOO). Multiobjective Evolutionary Algorithms (MOEAs) based on PD have been successfully used to

¹ In this study, we use the terms fitness and rank interchangeably to refer to the value which expresses the quality of solutions and allows them to be compared with respect to each other.

A. Hernández Aguirre et al. (Eds.): MICAI 2009, LNAI 5845, pp. 633–645, 2009. © Springer-Verlag Berlin Heidelberg 2009


solve problems with two or three objectives. However, when the number of objectives increases, the convergence ability of such approaches can be negatively affected [17,13,11]. The main reason is that the proportion of nondominated solutions (i.e., equally good solutions in the multiobjective context) increases exponentially with the number of objectives. As a consequence, it is not possible to impose preferences among individuals for selection purposes, and the search process weakens since it is performed practically at random. Despite the considerable volume of research on evolutionary multiobjective optimization (EMO, for short), most of the MOEAs proposed so far have been validated using only two or three objectives. However, since real-world applications may involve more than 3 objectives, it should be clear how important it is to devise alternative approaches to rank solutions when dealing with a higher number of objectives. MOPs having more than 3 objectives are referred to as many-objective optimization problems in the specialized literature [8]. In this paper, three novel fitness assignment methods for multiobjective optimization are proposed. We performed a comparative study in order to investigate how effective the proposed approaches are at guiding the search process in high-dimensional objective spaces. We also included six state-of-the-art methods in this study. The remainder of this document is structured as follows: the statement of the problem and the drawbacks of PD are described in Section 2. Section 3 describes some state-of-the-art alternatives for fitness assignment in MOO. We present our proposed techniques in Section 4. In Section 5, the results of an empirical study of the convergence properties of the studied approaches are detailed. Finally, Section 6 provides our conclusions as well as some possible directions for future research.

2 Background

In this study, we assume that all the objectives are to be minimized and are equally important. Here, we are interested in solving many-objective problems of the following form:

Minimize f(Xi) = (f1(Xi), f2(Xi), ..., fM(Xi))^T subject to Xi ∈ F    (1)

where Xi is a decision vector (containing our decision variables), f(Xi) is the M-dimensional objective vector (M > 3), fm(Xi) is the m-th objective function, and F is the feasible region delimited by the problem's constraints.

2.1 Pareto Dominance (PD)

PD was proposed by Vilfredo Pareto [16] and is defined as follows: given two solutions Xi, Xj ∈ F, we say that Xi Pareto-dominates Xj (Xi ≺P Xj) if and only if:

∀m ∈ {1, 2, ..., M} : fm(Xi) ≤ fm(Xj)  ∧  ∃m ∈ {1, 2, ..., M} : fm(Xi) < fm(Xj)    (2)


otherwise, we say that Xj is nondominated with respect to Xi. In this study, we use Goldberg's nondominated sorting method [10] to rank solutions according to PD. The general idea of this approach is the following: first, identify and isolate the nondominated solutions and assign the best rank position to them. Next, from the remainder of the population, identify and isolate the new nondominated solutions and assign the next rank position to them. This process is repeated until the entire population has been ranked. Even though PD has been widely used to develop MOEAs, the performance of such approaches is known to deteriorate as the number of objectives is increased [17,13,11]. The main reason is that the proportion of nondominated solutions increases with the number of objectives. With the aim of clarifying this point, Figure 1 shows how the number of nondominated solutions grows in a MOEA while the search progresses. This experiment was performed using two well-known scalable test problems, namely DTLZ1 and DTLZ6 [5]. The data in Figure 1 corresponds to the mean of 31 runs of a basic MOEA (described later in Section 5.2) using a population of 100 individuals.

Fig. 1. Proportion of Pareto-nondominated solutions in a randomly generated population. Panels (a) DTLZ1 and (b) DTLZ6 plot the percentage of Pareto-nondominated solutions against the generation number, with curves for 5, 10, 15, 20, 30 and 50 objectives.

From Figure 1, we can clearly see that increasing the number of objectives raises the number of nondominated individuals. In other words, PD is useful when there are few objectives, but it becomes less suitable as the dimensionality of the objective space increases. For example, let Xi and Xj be candidate solutions for a 20-objective MOP. If Xi is better than Xj in 19 objectives, but Xj is superior to Xi in the remaining objective, both would be equally good solutions if PD is used to compare them. However, a decision maker could say that Xi performs better than Xj. PD takes into consideration neither the number of improved (or worsened) objectives nor the magnitude of such improvements (or deteriorations). Nevertheless, these issues are crucial in the human decision-making process and may lead to several degrees of dominance [9,14].
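The Pareto-dominance check of equation (2) and Goldberg's layer-by-layer nondominated sorting can be sketched as follows. This is a minimal, unoptimized illustration assuming minimization; the function names are ours, not from the paper.

```python
# Sketch of Pareto dominance (Eq. 2) and Goldberg's nondominated sorting,
# assuming all objectives are minimized. Solutions are objective-value tuples.

def dominates(a, b):
    """True if objective vector a Pareto-dominates b (Eq. 2)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def nondominated_sort(population):
    """Assign each solution the 0-based index of its nondominated layer."""
    ranks = {}
    remaining = list(range(len(population)))
    layer = 0
    while remaining:
        # Current front: solutions not dominated by any remaining solution.
        front = [i for i in remaining
                 if not any(dominates(population[j], population[i])
                            for j in remaining if j != i)]
        for i in front:
            ranks[i] = layer
        remaining = [i for i in remaining if i not in front]
        layer += 1
    return [ranks[i] for i in range(len(population))]
```

For example, in the population [(1,1), (2,2), (1,2), (3,0)], the solutions (1,1) and (3,0) form the first layer, (1,2) the second, and (2,2) the third.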

3 State-of-the-Art Approaches

Weighted Sum Approach (WS). The most natural way to assign fitness in the multiobjective context is to combine all objectives into a single one using some combination


of arithmetical operations. WS is the most commonly used of such approaches. This method expresses the quality of solutions by adding all the objective functions together using weighting coefficients. Thus, the fitness of a solution Xi is given by:

WS(Xi) = ∑_{m=1}^{M} wm fm(Xi)    (3)
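A minimal sketch of the WS fitness of equation (3), with per-population min-max normalization as one possible way to handle WS's range dependence; equal weights are used by default, as assumed throughout the paper. The function name is ours.

```python
# Weighted-sum fitness (Eq. 3) over min-max-normalized objectives;
# lower is better under minimization. Equal weights assumed by default.

def weighted_sum(population, weights=None):
    M = len(population[0])
    weights = weights or [1.0] * M
    lo = [min(p[m] for p in population) for m in range(M)]
    hi = [max(p[m] for p in population) for m in range(M)]

    def norm(value, m):
        # Normalize objective m into [0, 1] over the current population.
        return 0.0 if hi[m] == lo[m] else (value - lo[m]) / (hi[m] - lo[m])

    return [sum(w * norm(p[m], m) for m, w in enumerate(weights))
            for p in population]
```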

where wm is the weighting coefficient denoting the relative importance of the m-th objective². Since WS is a range-dependent method, it requires the normalization of the objective values before being applied.

Average Ranking (AR). This method was proposed by Bentley and Wakefield [1]. AR selects one objective and builds a ranking list using the value of each solution for that objective. This procedure is performed for all objectives. The ranking positions of a solution Xi are given by the vector R(Xi) = (r1(Xi), r2(Xi), ..., rM(Xi))^T, where rm(Xi) is the rank of Xi for the m-th objective. Once the vector R(Xi) has been calculated for each solution Xi, its global rank is given by:

AR(Xi) = ∑_{m=1}^{M} rm(Xi)    (4)

Maximum Ranking (MR). MR was also proposed by Bentley and Wakefield [1]. As described for the AR method above, the vector of ranking positions R(Xi) is calculated for each individual Xi in the population. Then, MR ranks individuals according to their best ranking position. Formally:

MR(Xi) = min_{m ∈ {1, ..., M}} rm(Xi)    (5)
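Both AR (Eq. 4) and MR (Eq. 5) can be derived from the same per-objective rank vectors, as in this sketch. Ranks start at 1; tied values share the smallest applicable rank here, which is one of several possible tie-handling choices (the paper does not specify one). Function names are ours.

```python
# Per-objective rank vectors R(Xi), plus AR (sum of ranks, Eq. 4) and
# MR (best rank, Eq. 5). Minimization assumed; ties share the smallest rank.

def rank_vectors(population):
    M = len(population[0])
    vectors = [[0] * M for _ in population]
    for m in range(M):
        column = sorted(p[m] for p in population)
        for i, p in enumerate(population):
            vectors[i][m] = column.index(p[m]) + 1  # 1-based rank in objective m
    return vectors

def average_ranking(population):
    return [sum(r) for r in rank_vectors(population)]

def maximum_ranking(population):
    return [min(r) for r in rank_vectors(population)]
```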

MR tends to favor extreme solutions, i.e., it prefers solutions with the best performance in some objectives without taking into account their performance in the rest of the objectives. This is a drawback during the search. For example, if in a 20-objective MOP Xi is the solution with the best performance with respect to the first objective, but it is the worst solution for the remaining 19 objectives, Xi would still be assigned the best rank by MR.

Favour Relation (FR). In this alternative dominance relation, proposed by Drechsler et al. [7], a solution Xi is said to dominate another solution Xj (Xi ≺f Xj) if and only if:

|{m : fm(Xi) < fm(Xj)}| > |{n : fn(Xj) < fn(Xi)}|  for m, n ∈ {1, 2, ..., M}    (6)
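The favour relation of equation (6) reduces to counting the objectives in which each solution strictly improves on the other. A minimal sketch (the function name is ours):

```python
# Favour relation (Eq. 6): a is preferred to b when a is strictly better
# in more objectives than b is. Minimization assumed.

def favours(a, b):
    wins_a = sum(1 for x, y in zip(a, b) if x < y)
    wins_b = sum(1 for x, y in zip(a, b) if y < x)
    return wins_a > wins_b
```

Applying it to the intransitivity example from the text, (8,7,1) is favoured over (1,9,6), which is favoured over (7,0,9), which in turn is favoured over (8,7,1) — a cycle.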

Since FR is not a transitive relation (consider the solutions Xi = (8,7,1), Xj = (1,9,6) and Xk = (7,0,9); it is clear that Xi ≺f Xj ≺f Xk ≺f Xi), its authors proposed to rank solutions as follows: a graph representation is used for the relation, where each solution is

² Since we assume that all objectives are equally important, the weighting coefficients of equation (3) were omitted in this study.


a node and the preferences are given by edges, in order to identify the Strongly Connected Components (SCCs). An SCC groups all elements which are not comparable to each other (such as the cycle of solutions in the above example). A new, cycle-free graph is constructed from the obtained SCCs, so that an order can be established by assigning the same rank to all solutions that belong to the same SCC.

Preference Order Ranking (PO). di Pierro et al. [6] proposed a strategy that ranks a population according to the order of efficiency of the solutions. An individual Xi is considered efficient of order k if it is not Pareto-dominated by any other individual in any of the C(M, k) subspaces in which only k objectives are considered at a time. Efficiency of order M for a MOP with exactly M objectives simply corresponds to the original Pareto optimality definition. If Xi is efficient of order k, then it is also efficient of order k + 1. Analogously, if Xi is not efficient of order k, then it is not efficient of order k − 1. Given these properties, the order of efficiency of a solution Xi is the minimum value of k for which Xi is efficient. Formally:

order(Xi) = min { k ∈ {1, ..., M} | isEfficient(Xi, k) }    (7)

where isEfficient(Xi, k) is true if Xi is efficient of order k. The order of efficiency can be used as the rank of the solutions: the smaller the order of efficiency, the better an individual is. The authors proposed to use this strategy in combination with Goldberg's nondominated sorting [10] described in Section 2.1. The resulting ranking scheme is as follows [6]: first, nondominated sorting is applied to the population. Next, the solutions forming each of the obtained nondominated layers are ranked according to their order of efficiency, in such a way that solutions in the second nondominated layer are penalized by adding the worst rank obtained in the first nondominated layer to their ranks, and so on.
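The order of efficiency of equation (7) can be computed by brute force over all k-objective subsets, as in this sketch. It assumes minimization; `dominates` repeats the Pareto check of equation (2), and both function names are ours. A Pareto-dominated solution has no order of efficiency at all (PO applies the measure within nondominated layers), so `None` is returned in that case.

```python
# Brute-force order of efficiency (Eq. 7): smallest k such that the solution
# is nondominated in every one of the C(M, k) k-objective subspaces.
from itertools import combinations

def dominates(a, b):
    """Pareto dominance (Eq. 2) on objective tuples; minimization assumed."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def order_of_efficiency(population, i):
    """Smallest k for which solution i is efficient of order k;
    None if i is Pareto-dominated in the full objective space."""
    M = len(population[0])
    xi = population[i]
    for k in range(1, M + 1):
        efficient = True
        for subset in combinations(range(M), k):
            proj_i = tuple(xi[m] for m in subset)
            # Efficiency of order k requires nondominance in every k-subspace.
            if any(dominates(tuple(population[j][m] for m in subset), proj_i)
                   for j in range(len(population)) if j != i):
                efficient = False
                break
        if efficient:
            return k
    return None
```

The C(M, k) subset enumeration is what makes PO expensive for large M, which is why the paper only applies it up to 20 objectives.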

4 Proposed Approaches

Global Detriment (GD). In order to calculate the quality of solutions, we propose a method which takes into account how significant the differences among solutions are. Individuals are compared pairwise in the objective space. The global fitness of a solution is obtained by accumulating the difference by which it is inferior to every other solution, with respect to each objective. The fitness of a solution Xi is then calculated as follows:

GD(Xi) = ∑_{Xj ≠ Xi} ∑_{m=1}^{M} max(fm(Xi) − fm(Xj), 0)    (8)

Using GD, Xi is said to be better than Xj if it holds that GD(Xi) < GD(Xj). This method requires the objective values to be normalized within the same range.

Profit (PF). The proposed PF method expresses the solutions' quality in terms of their profit. Formally, the fitness of a solution Xi is given by:

PF(Xi) = max_{Xj ≠ Xi} gain(Xi, Xj) − max_{Xj ≠ Xi} gain(Xj, Xi)    (9)


where gain(Xi, Xj) is the gain of Xi with respect to Xj:

gain(Xi, Xj) = ∑_{m=1}^{M} max(fm(Xj) − fm(Xi), 0)    (10)

PF is a maximization criterion and thus we say that Xi outperforms Xj if and only if PF(Xi) > PF(Xj). This method requires the objective values to be normalized within the same range.

Distance to the Best Known Solution (GB). This method uses an ideal reference point in order to rank the population. Such a reference point, referred to as GBEST, must dominate the whole population; it is composed of the best known objective value for each dimension. Once GBEST has been calculated, the fitness of a solution is assigned according to its closeness to the GBEST point (we used the Euclidean distance for this purpose). Thus, we prefer individuals with a small GB fitness value. This method requires the objective values to be normalized within the same range.
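The three proposed rankings can be sketched directly from equations (8)–(10), assuming minimization and objective values already normalized to a common range. GB's GBEST is taken here as the per-objective minimum over the current population; function names are ours. Lower is better for GD and GB, higher for PF.

```python
# Sketches of the three proposed rankings: GD (Eq. 8), PF (Eqs. 9-10)
# and GB (distance to the ideal point GBEST). Minimization assumed.
import math

def gain(a, b):
    """gain(Xi, Xj) of Eq. (10): total amount by which a improves on b."""
    return sum(max(y - x, 0.0) for x, y in zip(a, b))

def global_detriment(population):
    """GD (Eq. 8): amount by which each solution is inferior to all others."""
    n = len(population)
    return [sum(gain(population[j], population[i]) for j in range(n) if j != i)
            for i in range(n)]

def profit(population):
    """PF (Eq. 9): best gain over any rival minus worst loss to any rival."""
    n = len(population)
    return [max(gain(population[i], population[j]) for j in range(n) if j != i)
            - max(gain(population[j], population[i]) for j in range(n) if j != i)
            for i in range(n)]

def distance_to_best(population):
    """GB: Euclidean distance to GBEST, the per-objective best known values."""
    M = len(population[0])
    gbest = [min(p[m] for p in population) for m in range(M)]
    return [math.dist(p, gbest) for p in population]
```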

5 Experimental Results

In order to investigate the properties of each of the approaches described in Sections 3 and 4, we performed a series of comparisons. The problems DTLZ1, DTLZ3 and DTLZ6 [5] were selected for our experimental study; due to space limitations, we only include these three test cases. These test functions can be scaled to any number of objectives and decision variables. In these test problems, the total number of variables is n = M + k − 1, where M is the number of objectives. The k values used were 5 for DTLZ1 and 10 for DTLZ3 and DTLZ6. In this study, we tested them with 5, 10, 15, 20, 30 and 50 objectives. However, since PO becomes computationally expensive as the number of objectives increases, we only applied it to instances with up to 20 objectives.

5.1 Ranking Distribution

Corne and Knowles [2] proposed measuring the relative entropy of the distribution of ranks induced by a method in order to analyze its effectiveness. They described this measure as follows: consider a population of N ranked solutions (there are at most N ranks, and at least 1). The relative entropy of the distribution of ranks D is given by:

re(D) = ( ∑_r (D(r)/N) log(D(r)/N) ) / log(1/N)    (11)

where D(r) denotes the number of solutions with rank r. re(D) tends to 1 as we approach the ideal situation in which each solution has a different rank. On the other hand, when all individuals share the same ranking position, re(D) is equal to zero. Thus, a ranking method providing a richer ordering is expected to lead to a better-performing optimization scheme.
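Equation (11) can be sketched as follows, assuming N ≥ 2 (for N = 1 the denominator log(1/N) vanishes). The function name is ours.

```python
# Relative entropy of a rank distribution (Eq. 11). `ranks` holds one rank
# per solution; D(r) is recovered by counting. Assumes len(ranks) >= 2.
import math

def relative_entropy(ranks):
    N = len(ranks)
    counts = {}
    for r in ranks:
        counts[r] = counts.get(r, 0) + 1
    return (sum((c / N) * math.log(c / N) for c in counts.values())
            / math.log(1 / N))
```

All-distinct ranks yield 1 (each of the N terms equals (1/N) log(1/N)), while a single shared rank yields 0 (the only term is log(1) = 0), matching the two limit cases described in the text.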



López Jaimes et al. [15] performed a similar study. To measure the scalability of ranking methods, they proposed analyzing whether the shape and range of the histograms of the ranking distributions are maintained when the number of objectives is increased. Like Corne and Knowles, López Jaimes et al. argued that a ranking method will favor the selection process if it is able to generate a richer range of ranks. Figure 2 shows the relative entropy of the ranking distribution produced by the studied methods. For each experiment, a population of 1000 decision vectors was randomly generated and evaluated for the DTLZ1, DTLZ3 and DTLZ6 test problems. Each experiment was run 31 times. Due to space limitations, and given the similarity of the results obtained for all test problems, the data was averaged to show the global performance over all test functions using 5, 20 and 50 objectives.

Fig. 2. Average relative entropy of the rank distributions of 1000 randomly generated solutions, for the DTLZ1, DTLZ3 and DTLZ6 problems, using (a) 5, (b) 20 and (c) 50 objectives

From Figure 2, we can see that our proposed methods, as well as the WS and AR approaches, present a stable behavior as the number of optimization criteria grows. As expected, we can clearly see how the performance of PD deteriorates with the increase in the number of objectives. Even though we only include PO results for instances with up to 20 objectives, the smaller instances show that its entropy is only slightly higher than that of PD. FR performed the worst in this experiment: its entropy was the lowest in all cases. This behavior is due to FR's mechanism for cycle identification (needed because the relation is not transitive): it is usually the case that almost all individuals in the population become part of a cycle and thus share the same ranking position.

5.2 A Generic Multiobjective Evolutionary Algorithm

We implemented a generic multiobjective evolutionary algorithm (MOEA) in order to evaluate the behavior obtained with each of the studied ranking mechanisms. The implemented MOEA uses binary tournament selection based on the ranking position of solutions. Simulated binary crossover (ηc = 15) and polynomial mutation (ηm = 20) are used as variation operators. The crossover and mutation rates were set to 1.0 and 1/n, respectively, where n is the number of decision variables. The combination of the parent and offspring populations is ranked (according to the method being used) in order to select the best solutions to form the new parent population (elitism) [3]. If the next ranking position contains more individuals than the available capacity, then


the required individuals are randomly selected. We used a population size of 100 and 300 generations for all our experiments. In order to avoid altering the behavior of the studied methods, this MOEA does not incorporate any additional mechanism to maintain diversity in the population. Figure 3 shows the described MOEA's workflow.


Fig. 3. Base algorithm workflow

5.3 Convergence Properties

All ranking methods were incorporated into the MOEA described in Section 5.2 in order to investigate their convergence ability as the number of objectives increases. The studied approaches were applied to normalized objective values: fm'(Xi) = (fm(Xi) − GMINm) / (GMAXm − GMINm) for m = 1, 2, ..., M, where GMAXm and GMINm are the maximum and minimum known values for the m-th objective. Since we know a priori that GMINm = 0 for the adopted set of test problems, we simply normalized the objectives as follows: fm'(Xi) = fm(Xi) / GMAXm for m = 1, 2, ..., M. Furthermore, this latter normalization method maintains the proportionality of the data.

Deb et al. [4] proposed a convergence measure which computes the average distance of the obtained approximation set from the Pareto-optimal front. Rather than using the average distance, we computed the distance from the best converged solution (the solution with the minimum distance) to the Pareto-optimal front. Since the equations defining the true Pareto front are known for all the adopted test problems, this measure was determined analytically. Tables 1, 2 and 3 show the results obtained when the different MOEA configurations were applied to the problems DTLZ1, DTLZ3 and DTLZ6, respectively. The data in these tables corresponds to the average and standard deviation of the convergence measure, at the end of the search, over 31 trials of each experiment.

From Table 1, we can highlight that, for DTLZ1, our proposed GB, GD and PF methods, as well as the WS approach, achieved the best results for most instances of this problem. These approaches remained robust as the number of objectives was increased. AR reached good convergence levels for small instances of this problem; however, its performance gradually deteriorated as the dimensionality of the problem was increased.
The MR results confirm some of the drawbacks mentioned in Section 3, since MR was the method that performed the worst for this problem.


Table 1. Final convergence for the DTLZ1 problem

      5 Obj.          10 Obj.         15 Obj.         20 Obj.         30 Obj.         50 Obj.
PD    0.637 ± 0.555   3.313 ± 2.545   2.357 ± 1.489   2.147 ± 1.160   1.130 ± 0.789   1.458 ± 0.925
WS    0.000 ± 0.000   0.000 ± 0.000   0.004 ± 0.023   0.000 ± 0.000   0.003 ± 0.016   0.014 ± 0.052
AR    0.000 ± 0.000   0.000 ± 0.000   0.005 ± 0.023   0.019 ± 0.041   0.107 ± 0.117   0.353 ± 0.259
MR    37.85 ± 14.64   27.10 ± 7.857   22.81 ± 6.854   21.91 ± 7.283   14.28 ± 7.615   10.77 ± 5.471
FR    0.001 ± 0.001   0.016 ± 0.047   0.102 ± 0.145   0.198 ± 0.256   0.634 ± 0.710   2.543 ± 2.713
PO    0.496 ± 0.331   0.824 ± 0.731   0.841 ± 0.755   0.657 ± 0.628   —               —
GD    0.000 ± 0.000   0.000 ± 0.000   0.000 ± 0.000   0.000 ± 0.000   0.006 ± 0.023   0.018 ± 0.030
PF    0.000 ± 0.000   0.000 ± 0.000   0.000 ± 0.000   0.000 ± 0.001   0.006 ± 0.022   0.011 ± 0.026
GB    0.000 ± 0.000   0.000 ± 0.000   0.000 ± 0.000   0.000 ± 0.000   0.000 ± 0.000   0.030 ± 0.043

Table 2. Final convergence for the DTLZ3 problem

      5 Obj.          10 Obj.         15 Obj.         20 Obj.         30 Obj.         50 Obj.
PD    59.71 ± 19.96   260.6 ± 82.82   244.3 ± 103.2   204.9 ± 86.09   191.7 ± 103.2   164.9 ± 81.59
WS    0.078 ± 0.245   0.140 ± 0.333   0.144 ± 0.332   0.239 ± 0.489   0.823 ± 1.277   1.507 ± 1.010
AR    0.920 ± 1.199   3.470 ± 2.344   9.891 ± 6.485   18.21 ± 8.368   50.08 ± 15.78   61.35 ± 22.84
MR    496.1 ± 145.3   290.8 ± 243.5   175.2 ± 189.4   88.15 ± 111.0   45.24 ± 41.57   38.24 ± 11.55
FR    14.11 ± 34.73   25.27 ± 21.41   179.9 ± 85.30   179.6 ± 88.61   328.3 ± 129.2   416.3 ± 133.7
PO    35.77 ± 15.75   33.46 ± 18.47   24.74 ± 10.84   22.50 ± 9.913   —               —
GD    0.171 ± 0.369   0.205 ± 0.471   0.309 ± 0.535   0.336 ± 0.530   0.695 ± 0.686   1.695 ± 1.206
PF    0.094 ± 0.261   0.043 ± 0.176   0.173 ± 0.371   0.236 ± 0.489   0.340 ± 0.529   1.388 ± 1.086
GB    0.110 ± 0.295   0.041 ± 0.176   0.098 ± 0.265   0.151 ± 0.334   0.693 ± 0.891   1.419 ± 1.300

Table 3. Final convergence for the DTLZ6 problem

      5 Obj.          10 Obj.         15 Obj.         20 Obj.         30 Obj.         50 Obj.
PD    5.787 ± 0.503   8.011 ± 0.404   8.036 ± 0.472   8.154 ± 0.483   8.466 ± 0.410   8.529 ± 0.323
WS    0.080 ± 0.030   0.081 ± 0.025   0.089 ± 0.029   0.085 ± 0.031   0.087 ± 0.034   0.091 ± 0.029
AR    0.081 ± 0.032   10.00 ± 0.000   10.00 ± 0.000   10.00 ± 0.000   10.00 ± 0.000   10.00 ± 0.000
MR    6.505 ± 2.207   8.481 ± 0.859   8.494 ± 0.837   8.558 ± 0.832   8.289 ± 0.745   8.074 ± 0.586
FR    5.347 ± 2.238   9.821 ± 0.195   9.380 ± 0.245   9.821 ± 0.084   9.693 ± 0.108   9.573 ± 0.137
PO    4.152 ± 0.631   9.773 ± 0.154   9.939 ± 0.036   9.976 ± 0.018   —               —
GD    0.085 ± 0.027   0.082 ± 0.028   0.089 ± 0.027   0.085 ± 0.029   0.081 ± 0.033   0.096 ± 0.027
PF    0.080 ± 0.029   0.076 ± 0.030   0.079 ± 0.029   0.081 ± 0.033   0.089 ± 0.036   0.089 ± 0.033
GB    0.086 ± 0.035   0.085 ± 0.027   0.084 ± 0.034   0.078 ± 0.028   0.088 ± 0.034   0.090 ± 0.022

Most of the methods failed to converge to DTLZ3's optimal front (Table 2). Our proposed GB and PF methods obtained the best results, followed by WS and GD. For this test function, MR and FR obtained the worst results. Finally, from Table 3 we can observe that DTLZ6's optimal front was also difficult to reach. Our PF approach showed a remarkable performance for most instances

Fig. 4. Online convergence for the 20-objective DTLZ1, DTLZ3 and DTLZ6 problems. Panels (a) DTLZ1, (b) DTLZ3 and (c) DTLZ6; in each panel, rows correspond to the methods PD, WS, AR, MR, FR, PO, GD, PF and GB, and columns span generations 0 to 300.

(objectives) of this problem, followed by GB, WS and GD. The method that performed the worst on this test problem was AR, since it obtained the poorest convergence value in most cases; this method seems to be the most affected by the problem's difficulties.

As general observations, we can highlight that most of the proposed methods were able to guide the search, since they remained robust as the dimensionality of the different test problems was increased. The results obtained by WS indicate that this simple and efficient approach performs better than some more elaborate ones, since its achieved convergence was nearly the best in most cases. As expected, the results confirm that PD's performance deteriorates with the increase in the number of objectives. We consider that MR and FR had the worst performance among the studied approaches.

Additionally, we decided to investigate the convergence speed achieved by the MOEA of Section 5.2 when using the different ranking schemes of interest. Figure 4 shows the average convergence as the search progresses for the different DTLZ problems included, considering 31 independent trials of each experiment.


Due to space limitations, we only show results for the 20-objective instances, since 20 is the maximum number of objectives for which we performed experiments with all the studied approaches. The data shown in Figure 4 is plotted on a logarithmic scale in order to highlight the differences among the results obtained with each method. These figures are interpreted as follows: each row corresponds to one of the studied methods, and each column represents the search progress in 10-generation steps. The convergence performance is indicated by a grey-scale square, where darker squares refer to better achieved convergence values. From Figure 4, we can conclude that the proposed GD, PF and GB methods, as well as the WS approach, presented accelerated convergence, since they reached relatively good values of the convergence measure during the first 150 generations. These results are in agreement with the ranking distributions discussed in Section 5.1, since the methods with the best convergence properties are those which were identified there as inducing a fine-grained discrimination among solutions.

6 Conclusions and Future Work

Since the performance of Pareto-based MOEAs deteriorates as the number of objectives increases, it is necessary to identify alternative approaches to establish preferences among solutions in many-objective scenarios. In this paper, three novel approaches of this sort were proposed. We performed a comparative study in order to investigate how effective the proposed approaches are at guiding the search in high-dimensional objective spaces; we also included six state-of-the-art methods in this study. Our experimental results indicate that the proposed methods were able to guide the search process towards the optimum for three well-known scalable test functions. From these results, we can clearly see that our proposed approaches GD, PF and GB were the ones that performed the best; however, it is not possible to single out one approach as the best. Also, one of our main findings is that a simple and efficient aggregation approach was able to perform better than some more sophisticated ranking schemes, since its results were nearly the best in almost all of our experiments.

In this paper, we focused on the convergence properties of different multiobjective ranking approaches. However, an important requirement for MOEAs is to converge to a set of well-spread solutions. As part of our future work, we want to extend our experiments to investigate how good the distribution of the solutions achieved by each of the studied methods is. We would also like to characterize our methods in terms of the kind of solutions that they tend to prefer, since some methods could favor extreme solutions while other approaches could focus on solutions lying on the knee of the Pareto front.

Acknowledgements The first author acknowledges support from CONACyT through a scholarship to pursue graduate studies at the Information Technology Laboratory at CINVESTAV-IPN. The


second author gratefully acknowledges support from CONACyT through project 90548. This research was also partially funded by project number 51623 from the "Fondo Mixto CONACyT-Gobierno del Estado de Tamaulipas". Finally, we would like to thank the Fondo Mixto de Fomento a la Investigación Científica y Tecnológica CONACyT - Gobierno del Estado de Tamaulipas for their support to publish this paper.

References

1. Bentley, P.J., Wakefield, J.P.: Finding Acceptable Solutions in the Pareto-Optimal Range using Multiobjective Genetic Algorithms. In: Chawdhry, P.K., Roy, R., Pant, R.K. (eds.) Soft Computing in Engineering Design and Manufacturing, Part 5, June 1997, pp. 231–240. Springer, London (1997) (Presented at the 2nd On-line World Conference on Soft Computing in Design and Manufacturing (WSC2))
2. Corne, D., Knowles, J.: Techniques for Highly Multiobjective Optimisation: Some Nondominated Points are Better than Others. In: Thierens, D. (ed.) 2007 Genetic and Evolutionary Computation Conference (GECCO 2007), July 2007, vol. 1, pp. 773–780. ACM Press, London (2007)
3. Deb, K., Agrawal, S., Pratab, A., Meyarivan, T.: A Fast Elitist Non-Dominated Sorting Genetic Algorithm for Multi-Objective Optimization: NSGA-II. KanGAL report 200001, Indian Institute of Technology, Kanpur, India (2000)
4. Deb, K., Mohan, R.S., Mishra, S.K.: Towards a Quick Computation of Well-Spread Pareto-Optimal Solutions. In: Fonseca, C.M., Fleming, P.J., Zitzler, E., Deb, K., Thiele, L. (eds.) EMO 2003. LNCS, vol. 2632, pp. 222–236. Springer, Heidelberg (2003)
5. Deb, K., Thiele, L., Laumanns, M., Zitzler, E.: Scalable Test Problems for Evolutionary Multiobjective Optimization. In: Abraham, A., Jain, L., Goldberg, R. (eds.) Evolutionary Multiobjective Optimization. Theoretical Advances and Applications, pp. 105–145. Springer, USA (2005)
6. di Pierro, F., Khu, S.T., Savić, D.A.: An Investigation on Preference Order Ranking Scheme for Multiobjective Evolutionary Optimization. IEEE Transactions on Evolutionary Computation 11(1), 17–45 (2007)
7. Drechsler, N., Drechsler, R., Becker, B.: Multi-objective Optimisation Based on Relation favour. In: Zitzler, E., Deb, K., Thiele, L., Coello Coello, C.A., Corne, D. (eds.) EMO 2001. LNCS, vol. 1993, pp. 154–166. Springer, Heidelberg (2001)
8. Farina, M., Amato, P.: On the Optimal Solution Definition for Many-criteria Optimization Problems. In: Proceedings of the NAFIPS-FLINT International Conference 2002, June 2002, pp. 233–238. IEEE Service Center, Piscataway (2002)
9. Farina, M., Amato, P.: A fuzzy definition of optimality for many-criteria optimization problems. IEEE Transactions on Systems, Man, and Cybernetics Part A—Systems and Humans 34(3), 315–326 (2004)
10. Goldberg, D.E.: Genetic Algorithms in Search, Optimization and Machine Learning. Addison-Wesley Publishing Company, Reading (1989)
11. Hughes, E.J.: Evolutionary Many-Objective Optimisation: Many Once or One Many? In: 2005 IEEE Congress on Evolutionary Computation (CEC 2005), September 2005, pp. 222–227. IEEE Service Center, Edinburgh (2005)
12. Hughes, E.J.: Fitness Assignment Methods for Many-Objective Problems. In: Knowles, J., Corne, D., Deb, K. (eds.) Multi-Objective Problem Solving from Nature: From Concepts to Applications, pp. 307–329. Springer, Berlin (2008)
13. Khare, V.R., Yao, X., Deb, K.: Performance Scaling of Multi-objective Evolutionary Algorithms. In: Fonseca, C.M., Fleming, P.J., Zitzler, E., Deb, K., Thiele, L. (eds.) EMO 2003. LNCS, vol. 2632, pp. 376–390. Springer, Heidelberg (2003)


14. Köppen, M., Yoshida, K.: Substitute Distance Assignments in NSGA-II for Handling Many-Objective Optimization Problems. In: Obayashi, S., Deb, K., Poloni, C., Hiroyasu, T., Murata, T. (eds.) EMO 2007. LNCS, vol. 4403, pp. 727–741. Springer, Heidelberg (2007)
15. López Jaimes, A., Santana Quintero, L.V., Coello Coello, C.A.: Ranking methods in many-objective evolutionary algorithms. In: Chiong, R. (ed.) Nature-Inspired Algorithms for Optimisation, pp. 413–434. Springer, Berlin (2009)
16. Pareto, V.: Cours d'Economie Politique. Droz, Genève (1896)
17. Purshouse, R.C., Fleming, P.J.: Evolutionary Multi-Objective Optimisation: An Exploratory Analysis. In: Proceedings of the 2003 Congress on Evolutionary Computation (CEC 2003), December 2003, vol. 3, pp. 2066–2073. IEEE Press, Canberra (2003)
