A Distance Function-Based Multi-Objective Evolutionary Algorithm

Wei-Chun Chang, Alistair Sutcliffe, Richard Neville
Centre for HCI Design, Department of Computation, UMIST, PO Box 88, Manchester, M60 1QD
[email protected]
Abstract
A multi-objective evolutionary algorithm (MOEA) approach is presented in this paper. The algorithm (DFBMOEA) aims to improve the convergence of Pareto-based MOEAs to the true Pareto optimal set/Pareto front and to remove decision maker interaction from the process. A novel distance function is used as the fitness function of the MOEA. A range equalisation function and a reference vector are utilised to eliminate the prior knowledge required from decision makers. A 0 & 1 knapsack problem [27] was used to demonstrate the performance of our approach compared to two leading MOEAs: the Non-dominated Sorting Genetic Algorithm II (NSGA II) and the Strength Pareto EA II (SPEA II). The results show that our approach produced a set of effective Pareto optimal solutions that are comparable to those of the two leading MOEAs.
1 INTRODUCTION
In this paper, we describe a novel MOEA that utilises a distance function-based method (DFBM) to solve multi-objective problems (MOPs). Our first goal is to optimise objective values that are incommensurable, i.e. whose ranges of possible values may vary widely; in other words, for different objectives, the ranges of possible objective values may differ by orders of magnitude. The second goal is to remove the interaction of decision makers from the process. The DFBM is a simple and effective solution for optimising MOPs. However, as stated by Coello et al. [1], this type of approach introduces some deficiencies when optimising multi-objective problems. First of all, the method may be criticised for producing a single solution instead of a Pareto optimal set. Secondly, prior knowledge is vital for the method to decide the target vector in the objective space, but this may not be available in real-world multi-objective problems. Thirdly, different ranges of objective values may bias the search path, producing solutions biased toward objectives with a large range of possible values or higher maxima.
For example, reliability and cost are two objectives that cover different ranges. Normally, reliability is bound to the range [0, 1], and cost is bound to the range [0, N] (N >> 1). The optimisation process will then be driven toward the individuals that provide the greatest improvement in cost.

1.1 DISCUSSIONS
Several questions are discussed here to identify the main objectives of this research. Some of these questions are commonly studied in MOO.
• First of all, the main goals of this research are to improve the convergence of Pareto-based MOEAs to the true Pareto optimal set/Pareto front and to remove decision maker interaction from the process.
• The performance measurement of MOEAs is a potential research area when dealing with MOPs. Due to time limitations, we cannot yet study and apply suitable performance measurements to compare our approach with others; we will improve on this in a future study.
• Another good question considers the dynamic size of the Pareto vector used to collect optimal solutions: how do we maintain the dynamic size, and what sort of data structure do we use? The answer is that a dynamic array is used to implement the dynamic size policy when collecting Pareto solutions during the evolutionary process. This question also raises another interesting issue, the overflow problem of the dynamic size policy.
• A question regarding a well-distributed Pareto set is also raised. In this preliminary design, the question has not been addressed; we shall discuss the issue in a future study.
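The dynamic-array policy discussed above can be sketched as follows. This is a minimal illustration, not the paper's exact implementation: the helper names are ours, and minimisation of every objective is assumed.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (every objective
    is assumed to be minimised in this sketch)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def update_pareto(archive, candidate):
    """Maintain a dynamically sized Pareto archive: reject the candidate
    if any archived point dominates it; otherwise remove the archived
    points it dominates and append it."""
    if any(dominates(p, candidate) for p in archive):
        return archive
    return [p for p in archive if not dominates(candidate, p)] + [candidate]
```

Because the archive is an ordinary list, its size grows and shrinks with the number of non-dominated points, which is the "dynamic size" policy described above; the overflow concern corresponds to the archive growing without bound.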
1.2 RELATED WORK
Multi-objective problems (MOPs) were first discussed by Rosenberg in 1967 [16]. Schaffer was then the first to implement multi-objective optimisation (MOO) in a Vector Evaluated Genetic Algorithm (VEGA) [17]. Two types of classification are available to identify MOEA approaches. First, classifications of MOO techniques use a hierarchy based on Operations Research studies, where MOEAs can be divided into a priori, interactive, and a posteriori applications; the main focus is on the decision maker providing the vital information before, during, or after the MOO process [1, 23]. Alternatively, the classification presented by Fonseca and Fleming provides a possible way to group current MOEA approaches [6]. Based on their study, current evolutionary approaches to MOO can be divided into three groups: plain aggregation, population-based non-Pareto, and Pareto-based approaches. These three are the main approaches used to handle MOO problems. Coello Coello et al. also provided a more detailed review and criticism of MOEA approaches [1]. Most recently, Corne et al. argued that there are major differences between single-objective and multi-objective optimisation [2]; the advantages of MOO compared to single-objective optimisation are discussed, and the history and current state of the field are reviewed.

1.2.1 Distance Function Based Methods (DFBM)
Hans [9] proposed a distance function to optimise multi-objective problems in highly accurate systems. The pure distance function (eq. 1) is a directional searching technique that requires a vector of goal objectives; the optimisation is driven toward the shortest distance between any candidate solution and the goal vector.

D = [ Σ_k=1..n |f_k(x) − g_k|^r ]^(1/r)    (1)

where f_k(x), k = 1..n, is the objective vector to be optimised, and g_k, k = 1..n, is the target vector of all objectives, which has to be specified by the decision makers. Usually, a Euclidean metric r = 2 is chosen in this equation. Wienke et al. [25] proposed an integration approach that combines a target vector with a GA in analytical chemistry. The target vector they proposed is similar to the goal vector in Hans' approach [9]. The advantage of Wienke's method over Hans' is improved computational efficiency, achieved by using a distance function that is applicable in on-line analytical instrumentation. A key contribution of their approach was to demonstrate the suitability of their DFBM for on-line optimisation techniques used in analytical chemistry, together with experimental results showing its advantages in computational expenditure and speed.
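As a concrete illustration, eq. 1 with the Euclidean metric r = 2 can be written as below. This is a sketch under our own naming; the goal vector g must still be supplied by the decision maker, as noted in the text.

```python
def distance_fitness(objectives, goal, r=2):
    """Pure distance function of eq. 1: the r-norm distance between a
    candidate's objective vector f(x) and the decision maker's goal
    vector g. Smaller values indicate better candidates."""
    return sum(abs(f - g) ** r for f, g in zip(objectives, goal)) ** (1.0 / r)
```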
1.2.2 Optimal Pareto Solutions
The general definition of a MOP is formulated by Osyczka [14]: find the vector x* = [x1*, x2*, ..., xn*]^T which satisfies the m inequality constraints

g_j(x) ≥ 0, j = 1, 2, ..., m,

the p equality constraints

h_j(x) = 0, j = 1, 2, ..., p,

and optimises the vector function

f(x) = [f1(x), f2(x), ..., fk(x)]^T,

where x = [x1, x2, ..., xn]^T is the vector of decision variables. Following this definition, a set of Pareto optimal solutions can be collected through the evolutionary process. Fonseca and Fleming [5] proposed an alternative approach that applies the concept of Pareto dominance as the fitness evaluation: all non-dominated individuals are assigned rank 0, and the rank of every other individual equals the number of solutions dominating it. Another method, the Non-dominated Sorting GA II (NSGA II) proposed by Deb et al. [3, 20], is based on non-dominated sorting with a niche-speciation method to explore the Pareto front simultaneously; the NSGA approach assigns fitness based on membership in one of several "fronts" found in the population. Fonseca and Fleming, and others, also used sharing in their MOEAs; the sharing method [7] was applied to the individual chromosomes. A comparative case study of MOEAs by Zitzler and Thiele [26, 27] compared the Strength Pareto EA II (SPEA II) with other MOEAs, such as NSGA II and the Pareto Envelope-based Selection Algorithm (PESA). That approach utilises several features to achieve the optimisation goal: a secondary, continuously updated population is maintained for storing non-dominated solutions; the fitness of each individual depends on the number of external non-dominated points that dominate it; the Pareto dominance relationship is used to preserve population diversity; and a clustering procedure is utilised to reduce the non-dominated set. Jaszkiewicz proposed a multiple-objective genetic local search (MOGLS) approach [11]. One of the most important mechanisms in MOGLS that is similar to our approach is the normalisation of the different ranges of objective values. The original equations were formulated by Steuer [21], whose book gives three philosophies for rescaling objective functions:

1. Normalisation [22]: the family of Lp-norms is applied to normalise the scale.
The equations are given as follows:

||v||_p = ( Σ_i=1..n |v_i|^p )^(1/p),  p ∈ {1, 2, 3, ...} ∪ {∞}.

In the extreme case, p = ∞, the component of largest magnitude totally dominates the normalisation:

||v||_∞ = max_{i=1,...,n} |v_i|.

2. Use of 10 raised to an appropriate power: rescale the values by 10 raised to an appropriate power.

3. The application of range equalisation factors:

π_i = (1/R_i) [ Σ_j=1..k (1/R_j) ]^(−1),

where R_i is the range width of the i-th criterion value over the efficient set. The drawback of these three methods of normalising or rescaling objective functions is that prior knowledge is necessary to obtain the largest magnitude of the objective function, which is impossible in most real-world cases. Although the distance function is computationally efficient [25], several problems have been noted. We try to overcome these problems by proposing a new MOEA approach that directs the optimisation search toward either a single solution or the Pareto optimal set effectively. The paper is structured in four sections. The algorithm of this approach is described in section two. Section three presents the experimental evaluation. Finally, brief conclusions, the significance of the research, and future work are discussed in section four.
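Steuer's third philosophy can be sketched numerically as follows (illustrative helper name; the R_i are the range widths over the efficient set):

```python
def range_equalisation_factors(range_widths):
    """Compute pi_i = (1/R_i) * (sum_j 1/R_j)^(-1) for each criterion.
    Criteria with wide ranges receive small factors, equalising their
    influence; the factors always sum to 1."""
    inv_total = sum(1.0 / r for r in range_widths)
    return [(1.0 / r) / inv_total for r in range_widths]
```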
2 A DISTANCE FUNCTION-BASED MOEA (DFBMOEA)
The DFBMOEA combines the characteristics of distance function-based and Pareto-based methods. The distance function (eq. 1), associated with a weighting vector, serves as the fitness assessment function that optimises the shortest distance between the candidate chromosome and the reference vector (see section 2.3) in the objective space. The weighting vector adjusts the direction of the search path by favouring certain objectives. One of the key components of DFBMOEA is its use of a range equalisation function that binds the ranges of all objective values to a [0, 1] scale. The optimum value of each dimension in the reference vector is then defined as either 0 or 1, depending on whether each objective requires minimisation (0) or maximisation (1). This effectively normalises all the objectives so that they span the same scale [0, 1], and hence makes it simple to select a minimum or a maximum. The other key component is a secondary population that collects the Pareto optimal solutions.

2.1 THE DFBMOEA ALGORITHM
The equation used in the DFBMOEA to evaluate the fitness of each individual is given in eq. 2:

D = [ Σ_k=1..n w_k |f_k(x) − g_k|^r ]^(1/r)    (2)

where w_k, k = 1..n, is the weighting vector for adjusting the search direction; f_k(x), k = 1..n, is the objective vector to be optimised; and g_k, k = 1..n, is the target vector of all objectives, which has to be specified by the decision makers. Usually, a Euclidean metric r = 2 is chosen in this equation. The difference in our equation compared to the traditional DFBM is the weighting vector w_k, which is used to control the search direction. The effect of adjusting the weighting vector when searching for the Pareto optimal solutions is analysed in section 3.3. The algorithm of DFBMOEA is presented below:
BEGIN
1   gen = 0;
2   P(gen) = Initialise(N);
3   Transformation(O(gen));
4   F(gen) = Evaluate(O(gen));
5   VPareto[] = P(gen);
6   while (Criterion not met) do {
7     P'(gen) = Crossover(P(gen));
8     P'(gen) = P'(gen) + Mutation(P(gen));
9     Transformation(O'(gen));
10    F(gen) = Evaluate(O'(gen));
11    P''(gen) = P(gen) + P'(gen);
12    P(gen+1) = EliteSelect(P''(gen));
13    Pareto(VPareto[], P(gen+1));
14    gen = gen + 1;
    } END while
END

where "gen" is the generation number, N is the population size, P(gen) is the population vector of the current generation, O(gen) is the objective vector of P(gen), and F(gen) is the fitness of P(gen). VPareto[] is the Pareto vector, which maintains the Pareto solutions with a dynamic size; P'(gen) and P''(gen) are temporary population vectors that store the offspring generated by the GA operators. After the random initialisation of a population of size N (line 2), the equalisation function transforms each objective value of P(gen) onto a [0, 1] scale (line 3). The fitness evaluation of the initial population takes place before the evolutionary process is entered (line 4). A Pareto vector VPareto[] is used to collect the Pareto optimal set and is initialised with the initial population (line 5). The evolutionary process then enters an iteration phase to search for optimal solutions; termination depends on the generation number (line 6). The new generation is selected using an elite policy (line 12). To maintain the Pareto optimal set, VPareto[] is updated after the next generation has been selected, using the definition of Pareto dominance (line 13). The size of VPareto[] varies dynamically in order to accommodate an unknown number of Pareto optimal solutions, provided those solutions survive the Pareto dominance test given in section 1.2.

2.2 THE RANGE EQUALISATION FUNCTION
The sizes of the ranges the objectives cover may vary greatly, and this can cause a biased search for an optimal solution when a distance function is applied. An example is given below.

Example: Suppose a multi-objective problem contains two objectives, x and y, to be optimised by the evolutionary process, where objective x is bound to the range [0, ℜ+] and objective y is bound to the range [0, 1]. Because of the aggregation in the distance function, improvements in objective y will be constrained by the numerically larger objective x. The example with two objectives is illustrated in Figure 1. To overcome the problem, the objective values need to be normalised to the same range in order to evolve all solutions equitably.
Figure 1. An example of biased search in objective space with the distance function approach. Since D1 < D2, chromosome 1 survives due to its smaller distance value compared to chromosome 2; in reality, chromosome 2 provides a better solution than chromosome 1 with respect to objective 2.

The range equalisation function [8, 10] we applied in DFBMOEA is expressed in eq. 3; it can transform a real number into the range [0, 1]. The function is known as the squashing function or sigmoid function in neural networks. It is used to "squash" different ranges of objective values to the same range: it allows the input variable to be unbounded (x ∈ (−∞, ∞)) while the output variable is bounded in a specified range (T(x) ∈ [0, 1]). The sigmoid function is listed below:

T(x) = ( 1 / (1 + e^(−x/ρ)) − C ) · D    (3)

where T(x) ∈ [0, 1] is the bounded range after the normalisation and x ∈ ℜ is the value to be transformed. ρ, C, and D are a set of coefficients that control the normalisation process. ρ is used to control the slope of the normalisation curve; it can be determined by sampling objective values through several generations of pre-processing runs, so there is no need to request prior knowledge from users. C is a range transformation coefficient, determined by the range of x: C = 0.5 if x ∈ ℜ+, or C = 0 if x ∈ ℜ. D is a re-scaling coefficient, determined by C: D = 1 if C = 0, or D = 2 if C = 0.5.

The two scale ranges, x ∈ ℜ and x ∈ ℜ+, are illustrated in Figures 2 and 3; the values of all coefficients are presented in the diagrams. The interaction of decision makers can be eliminated by using the sigmoid function. The goal vectors that are vital in DFBMs are discussed next.

Figure 2. Range equalisation example I, x ∈ ℜ+ (ρ = 1000, C = 0.5, D = 2).

Figure 3. Range equalisation example II, x ∈ ℜ (ρ = 1000, C = 0, D = 1).

2.3 THE GOAL VECTORS VS. THE REFERENCE VECTORS

The range equalisation function guides the stochastic search through a search space with normalised ranges in all objectives. A reference vector (either 0 for minimisation or 1 for maximisation in each dimension) is automatically obtained, and this reduces the prior knowledge required from decision makers. The reference vector simply replaces the goal vector that is required in the DFBM and other similar approaches [9, 11, 25]. Minimisation of an objective drives its value towards 0, and maximisation drives it towards 1. In the case of two objectives, four reference vectors appear in the search space after the range transformation (see Figure 4). When optimising two objectives, we use one of the four reference vectors in the objective space; the selection depends on the requirements of the objectives. For example, if objective 1 requires minimisation and objective 2 requires maximisation, then the reference vector (0, 1) is selected.
Figure 4. Four reference vectors appear in 2-objective space
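Putting section 2 together, the range equalisation of eq. 3, the reference vector construction, and the weighted distance of eq. 2 can be sketched as follows. The helper names and the sample value of ρ are our own assumptions; the reference vector takes one maximise/minimise flag per objective, as described above.

```python
import math

def equalise(x, rho=1000.0, positive_only=True):
    """Sigmoid range equalisation (eq. 3): T(x) = (1/(1+e^(-x/rho)) - C)*D,
    with C = 0.5, D = 2 for x in R+ and C = 0, D = 1 for x in R."""
    C, D = (0.5, 2.0) if positive_only else (0.0, 1.0)
    return (1.0 / (1.0 + math.exp(-x / rho)) - C) * D

def reference_vector(maximise_flags):
    """Reference vector replacing the goal vector: 1 for objectives to
    maximise, 0 for objectives to minimise."""
    return [1.0 if m else 0.0 for m in maximise_flags]

def dfbmoea_fitness(equalised_objs, reference, weights, r=2):
    """Weighted distance of eq. 2 over equalised objective values;
    smaller distances indicate fitter chromosomes."""
    return sum(w * abs(f - g) ** r
               for w, f, g in zip(weights, equalised_objs, reference)) ** (1.0 / r)
```

With this normalisation, every objective spans [0, 1], so the distance to the reference vector no longer favours objectives with numerically larger ranges.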
3 EXPERIMENT EVALUATION
The experiments we undertook utilise the multi-objective 0 & 1 knapsack problem as a benchmark [11, 15, 27]. The details of the experimental settings of the EA operators are given in Table 1. The benchmark is described in section 3.1, the results are presented in section 3.2, and further observations on weighting adjustment and subspecies are discussed in section 3.3. The configurations of the EA parameters are selected for simplicity. For example, crossover mating selection applies a roulette wheel policy, the empirical studies of the crossover and mutation roles by Spears are applied to decide the rates [18, 19], and next-generation selection utilises an elite policy to keep the best-performing chromosomes.

Table 1: The Configurations of EA Operators

Operators            Parameters           Policy
Population           size: 750            Initialisation: Random
Crossover            mating rate: 0.9     Selection: Roulette Wheel
Mutation             probability: 0.01
Next Gen. Selection                       Elite
Termination          criterion            1000 generations

3.1 BENCHMARK 0 & 1 KNAPSACK PROBLEM

A single-objective 0 & 1 knapsack problem with a capacity constraint is known to be an NP-hard problem [13]. This problem, which has been used as a benchmark in a number of EC publications [11, 27], involves placing the most profitable items into the knapsack without exceeding its capacity. This is a single-objective (profit) optimisation with a constraint rule (capacity). It can easily be transformed into a multi-objective optimisation by adding the physical weight as another objective to be minimised. An extra knapsack is added in order to increase the complexity of the problem. Two knapsacks with 750 items each means that there are four objectives (two profit objectives and two physical weight objectives over the two knapsacks). The optimisation goal is to maximise the profit of both knapsacks while keeping the physical weight under capacity. The data set and comparison results were selected from Zitzler and Thiele's papers [26, 27]; permission to use the test data and comparison results was granted by the authors.

3.2 EXPERIMENTAL RESULTS

The results of optimising the 0 & 1 knapsack problem set are shown in Figure 5. Two algorithms, NSGA II and SPEA II, were selected and compared to DFBMOEA in searching for the Pareto optimal set. In order to adjust the searching direction, we applied three sets of weightings (see Table 2) in the DFBMOEA. The analysis of the weighting effect is discussed in section 3.3. The results of DFBMOEA outperform the other two MOEAs in finding the Pareto optimal set.

Figure 5. The experimental results of the multi-objective 0 & 1 knapsack problem (2 knapsacks, 750 items; profit k1 vs. profit k2, both axes 26000 to 30000; series: DFBMOEA, NSGA2, SPEA2), obtained by running 1000 generations.

3.3 CONCLUSIONS

The results from the benchmark have demonstrated the validity of our approach when handling multi-objective optimisation in practical problems; DFBMOEA outperforms the other two MOEA approaches in the benchmark testing. Two interesting points were discovered after the analysis of the results.

3.3.1 Sensitivity Analysis of Weighting Adjustment

The first point we discovered is the effect of weighting in DFBMOEA. In our approach, a conventional weighted vector is implemented to adjust the search direction toward the Pareto front. Different sets of weightings (see Table 2) were applied in order to explore wider angles of searching direction. Each set of weightings contributed a subset of Pareto optimal solutions to the overall Pareto optimal set in DFBMOEA (see Figure 6). This approach may be improved by introducing the method of dynamic weighted aggregation proposed by Jin et al. [12].

Table 2. Weighting sets applied in the 0 & 1 knapsack benchmark (Σ W_k = 1, k = 1..4)

            Profit (K1)   Weight (K2)   Profit (K3)   Weight (K4)
Wset I      0.30          0.23          0.24          0.23
Wset II     0.27          0.23          0.27          0.23
Wset III    0.24          0.23          0.30          0.23

Figure 6. The sensitivity analysis with respect to different sets of weightings when searching for the Pareto optimal set (knapsack 1 vs. knapsack 2, both axes 27000 to 30000). Three sets of weightings for the four objectives: W1: k1=0.24, k2=0.23, k3=0.3, k4=0.23; W2: k1=0.27, k2=0.23, k3=0.27, k4=0.23; W3: k1=0.3, k2=0.23, k3=0.24, k4=0.23.
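For reference, the four-objective evaluation of the two-knapsack benchmark can be sketched as follows (the names are illustrative; the actual benchmark uses the 750-item data set of Zitzler and Thiele [27]):

```python
def knapsack_objectives(selection, profits, weights):
    """Objective vector (profit_1, weight_1, profit_2, weight_2) for a
    0/1 selection of items: each knapsack contributes a total profit
    (to be maximised) and a total physical weight (to be minimised)."""
    objs = []
    for p, w in zip(profits, weights):  # one (p, w) pair per knapsack
        objs.append(sum(pi for pi, s in zip(p, selection) if s))
        objs.append(sum(wi for wi, s in zip(w, selection) if s))
    return objs
```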
Table 3. Example patterns of subspecies identified in the Pareto set using weighting set I

Subspecies   Starting bit no. (from left)   Bit stream identified
Pattern 1    1     110111001010111001010101001011001110100101010101101011111100010100000111011111101011111101011010101101111111
Pattern 2    121   000011011101010001111111111101
Pattern 3    195   101111100011001001111110011011011100111010110011100000111011011111111111101001000100111111000100011010111101000110111011111101110010101100101011111011111110011011111011111111110110010111101000001001111101110110000011101101111101011111111101111
The dynamic weighting adjustment enables the search to adapt the weightings during the evolutionary process; we intend to investigate this in order to improve our DFBMOEA algorithm. Through observation at the gene level of the survivors, we discovered that some subspecies patterns existed in most of the Pareto optimal solutions. Example patterns of subspecies identified in the Pareto optimal solutions using weighting set I (see Table 2) are presented in Table 3. For example, in pattern 1, the length is 102 bits starting from bit one. The issue is related to building block-based EAs, as highlighted by Van Veldhuizen and Lamont when extending the concept to MOPs in their Multi-Objective mGA (MOMGA) [23, 24]. A potential future investigation would be to identify and preserve these subspecies during the evolutionary process, which may lead to a more efficient approach.
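The subspecies observation can be checked mechanically. The sketch below, with a helper of our own naming and 1-based bit positions as in Table 3, reports whether all chromosomes share the same bit pattern at a given locus:

```python
def shared_pattern(chromosomes, start, length):
    """Return the common bit substring at positions [start, start+length)
    if every chromosome (a bit string) carries it; start is 1-based,
    counted from the left. Return None if the pattern is not shared."""
    segment = chromosomes[0][start - 1:start - 1 + length]
    if all(c[start - 1:start - 1 + length] == segment for c in chromosomes):
        return segment
    return None
```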
4 CONCLUDING REMARKS
The new DFBMOEA approach combines a distance function, a range equalisation function, a secondary Pareto population with dynamic size, and a weighting vector to overcome the limitations of previous MOEAs. Our results illustrate that DFBMOEA optimises multi-objective problems effectively, and outperforms the other comparison MOEAs in collecting the Pareto optimal set.

4.1 THE SIGNIFICANCE OF DFBMOEA
In summary, the DFBMOEA uses the characteristics of distance function-based techniques and the concept of Pareto dominance. The significant contributions of DFBMOEA to MOPs are as follows.
• In DFBMOEA, we preserved the computational advantages of conventional distance function methods, and overcame the problem for which they are criticised, i.e. the acquisition of a target vector from the decision makers.
• We identified the problem of biased search and proposed a solution. The range equalisation function, by transforming all objectives onto the scale [0, 1], avoids the need to acquire prior knowledge from decision makers. Moreover, the transformation also eliminates the problem of biased searching (see section 2.2).
• To prevent the loss of search diversity when using the distance function, the weighting vector used in DFBMOEA increases the search diversity towards the Pareto front. The effect of adjusting the weightings is discussed in section 3.3.1.
Finally, the DFBMOEA approach is a simple but effective MOEA that does not require prior knowledge of target vectors to collect the Pareto optimal set in MOO problems.

4.2 FUTURE STUDY
For future study, several research directions are worth exploring to make further improvements to the performance of DFBMOEA. A self-adapting weighting technique is a potential area of study for performance improvement. The analysis of the Pareto optimal set revealed some subspecies patterns; this information could be used to modify the architecture of our approach, leading to more efficient computation and improved performance. More benchmarks are needed to test the functionality of our approach; Deb [4] has suggested some useful benchmarks and methods for testing new MOEAs. Another important issue is to select suitable performance measurements to validate our research on MOPs. Moreover, enhanced population diversity is needed in future improvements of this approach. Finally, an empirical study of different policies for the EA operators in DFBMOEA may also contribute to more promising results and performance.

References

[1] C. A. Coello Coello, D. A. Van Veldhuizen, and G. B. Lamont, Evolutionary Algorithms for Solving Multi-Objective Problems (Genetic Algorithms and Evolutionary Computation). Plenum Pub Corp, 2002.
[2] D. W. Corne, K. Deb, P. J. Fleming, and J. D. Knowles, "The good of the many outweighs the good of the one: Evolutionary multi-objective optimization," in Newsletter of the IEEE Neural Network Society, 2003.
[3] K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, "A fast and elitist multiobjective genetic algorithm: NSGA-II," IEEE Transactions on Evolutionary Computation, vol. 6, 2002.
[4] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable multi-objective optimization test problems," The 2002 IEEE World Congress on Computational Intelligence, Congress on Evolutionary Computation, Honolulu, Hawaii, USA, 2002.
[5] C. M. Fonseca and P. J. Fleming, "Genetic algorithms for multiobjective optimization: Formulation, discussion and generalization," Genetic Algorithms: Proceedings of the Fifth International Conference, San Mateo, CA, 1993.
[6] C. M. Fonseca and P. J. Fleming, "An overview of evolutionary algorithms in multiobjective optimization," Evolutionary Computation, vol. 3, pp. 1-16, 1995.
[7] D. E. Goldberg and J. Richardson, "Genetic algorithms with sharing for multimodal function optimization," Proceedings of the Second International Conference on Genetic Algorithms, Genetic Algorithms and Their Applications, San Mateo, CA, 1987.
[8] K. Gurney, An Introduction to Neural Networks. University College London Press, 1997.
[9] A. E. Hans, "Multicriteria optimization for highly accurate systems," in Multicriteria Optimization in Engineering and Sciences, W. Stadler, Ed. New York: Plenum Press, 1988, pp. 309-352.
[10] S. S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed. Prentice Hall, 1998.
[11] A. Jaszkiewicz, "On the performance of multiple-objective genetic local search on the 0/1 knapsack problem - a comparative experiment," IEEE Transactions on Evolutionary Computation, vol. 6, pp. 402-412, 2002.
[12] Y. Jin, M. Olhofer, and B. Sendhoff, "Dynamic weighted aggregation for evolutionary multi-objective optimization: Why does it work and how?," Proceedings of the Genetic and Evolutionary Computation Conference (GECCO'2001), 2001.
[13] J. Lee, S. Sahni, and E. Shragowitz, "A hypercube algorithm for the 0/1 knapsack problem," Journal of Parallel & Distributed Computing, vol. 5, pp. 438-456, 1988.
[14] A. Osyczka, Multicriterion Optimization in Engineering. Chichester: Ellis Horwood Limited, 1984.
[15] G. R. Raidl, "An improved genetic algorithm for the multiconstrained 0-1 knapsack problem," Proceedings of the 5th IEEE International Conference on Evolutionary Computation, 1998.
[16] R. S. Rosenberg, "Simulation of genetic populations with biochemical properties," PhD thesis, University of Michigan, 1967.
[17] J. D. Schaffer, "Some experiments in machine learning using vector evaluated genetic algorithms," PhD thesis, Electrical Engineering, Vanderbilt University, Nashville, TN, 1984.
[18] W. M. Spears, "Recombination parameters," in The Handbook of Evolutionary Computation. IOP Publishing and Oxford University Press, 1997.
[19] W. M. Spears, "The role of mutation and recombination in evolutionary algorithms," PhD thesis, Computer Science, George Mason University, Fairfax, VA, USA, 1998, p. 240.
[20] N. Srinivas and K. Deb, "Multiobjective optimization using nondominated sorting in genetic algorithms," Evolutionary Computation, vol. 2, pp. 221-248, 1994.
[21] R. E. Steuer, Multiple Criteria Optimization: Theory, Computation, and Application. New York: John Wiley & Sons, 1986.
[22] R. E. Steuer, "Norms and metrics," in Multiple Criteria Optimization: Theory, Computation, and Application. New York: John Wiley & Sons, 1986, pp. 42-45.
[23] D. A. Van Veldhuizen, "Multiobjective evolutionary algorithms: classifications, analyses, and new innovations," PhD thesis, Department of Electrical and Computer Engineering, Air Force Institute of Technology, Wright-Patterson AFB, Ohio, 1999.
[24] D. A. Van Veldhuizen and G. B. Lamont, "Multiobjective evolutionary algorithm test suites," Proceedings of the 1999 ACM Symposium on Applied Computing, San Antonio, Texas, USA, 1999.
[25] D. Wienke, C. Lucasius, and G. Kateman, "Multicriteria target vector optimization of analytical procedures using a genetic algorithm. Part I. Theory, numerical simulations and application to atomic emission spectroscopy," Analytica Chimica Acta, vol. 265, pp. 211-225, 1992.
[26] E. Zitzler, M. Laumanns, and L. Thiele, "SPEA2: Improving the strength Pareto evolutionary algorithm for multiobjective optimization," EUROGEN 2001: Evolutionary Methods for Design, Optimisation and Control with Applications to Industrial Problems, Athens, Greece, 2001.
[27] E. Zitzler and L. Thiele, "Multiobjective evolutionary algorithms: a comparative case study and the strength Pareto approach," IEEE Transactions on Evolutionary Computation, vol. 3, pp. 257-271, 1999.