Soft Computing. DOI 10.1007/s00500-016-2286-8

METHODOLOGIES AND APPLICATION

Many-objective optimization with dynamic constraint handling for constrained optimization problems

Xi Li^{1,2} · Sanyou Zeng^1 · Changhe Li^1 · Jiantao Ma^2

© Springer-Verlag Berlin Heidelberg 2016
Abstract In real-world applications, optimization problems are usually subject to various constraints. To solve constrained optimization problems (COPs), this paper presents a new methodology that incorporates a dynamic constraint handling mechanism into many-objective evolutionary optimization. First, a COP is converted into an equivalent dynamic constrained many-objective optimization problem (DCMaOP); then the proposed many-objective optimization evolutionary algorithm with dynamic constraint handling, called MaDC, is used to solve the DCMaOP. MaDC uses differential evolution (DE) to generate individuals and a reference-point-based nondominated sorting approach to select them. The effectiveness of MaDC is verified on 22 test instances. The experimental results show that MaDC is competitive with several state-of-the-art algorithms and has better global search ability than its peer algorithms.

Keywords Constrained optimization · Many-objective optimization · Dynamic constraint optimization · Reference-point-based nondominated sorting

Communicated by V. Loia.
Sanyou Zeng: [email protected]
Changhe Li: [email protected]
Xi Li: [email protected]

1 School of Computer Science, China University of Geosciences, Wuhan 430074, Hubei, People's Republic of China
2 School of Information Engineering, Shijiazhuang University of Economics, Shijiazhuang 050031, Hebei, People's Republic of China
1 Introduction

Constrained optimization problems (COPs) are commonly encountered in science and engineering. During the past decades, researchers have widely adopted evolutionary algorithms (EAs), effective stochastic search techniques inspired by nature, to solve COPs (Montes and Coello 2011; Kramer 2010; Cai et al. 2013; Sarker et al. 2014). In recent years, with the development of multi-objective and adaptive evolutionary theories and methodologies, more and more new kinds of methods have been developed to solve COPs.

Pareto-dominance-based selection is one of the most important strategies in multi-objective evolutionary optimization, and Coello (2000) used it early on to deal with constrained optimization problems. A new version of the niched-Pareto genetic algorithm (NPGA) was proposed by Coello and Montes (2002). This approach adopts a dominance-based selection scheme to integrate the constraint violation into the fitness value and introduces an additional parameter, called the selection ratio (Sr), to control the diversity of the population. Venkatraman and Yen (2005) proposed a genetic-algorithm-based two-phase framework for solving COPs. Phase one, named the constraint satisfaction algorithm, focuses on finding feasible solutions; in this phase, the objective function is entirely disregarded. Phase two, named the constrained optimization algorithm, is activated once a feasible solution is located. In this phase, a COP is treated as a bi-objective problem in which the original objective and the violation objective are optimized simultaneously. The authors also elaborated on the importance of balancing exploration and exploitation for COPs, arguing that dominance-based ranking helps exploration while the elitism-preserving strategy helps exploitation. Hsieh
et al. (2011) presented a new method to solve COPs based on the well-known nondominated sorting genetic algorithm II (NSGA-II) (Deb and Meyarivan 2002). The algorithm uses the ε-comparison (Takahama and Sakai 2006) to select individuals and a penalty method to guide the solutions toward the ε-feasible space. The ε-comparison is a dominance relationship that constructs a relaxed constraint condition for solutions: when the violation values of both solutions are less than ε, general Pareto dominance is employed; otherwise, the solution with the smaller violation value dominates the other. A hybrid constrained optimization EA (HCOEA) was presented by Wang et al. (2007) and later extended to a dynamic version (Wang and Cai 2012). The algorithm employs two search models: a global search and a local search. In the global model, a Pareto-dominance-based tournament selection among parents and offspring is employed to promote population diversity. In the local model, a parallel search in subpopulations is implemented to accelerate convergence. The penalty function is a classical method for solving COPs, but determining the penalty parameters is difficult. To address this issue, Deb and Datta (2010) proposed another kind of hybrid algorithm: the combined bi-objective optimization and penalty function method. The algorithm first constructs a bi-objective optimization problem and computes an appropriate value for the penalty parameter by solving it; then a local search problem with the estimated penalty parameter is solved to locate the global optimal solution. Zeng et al. (2011) used not only multi-objective optimization technologies but also dynamic constrained mechanisms to deal with constraints for COPs. The authors converted a COP to a dynamic constrained bi-objective optimization problem and adopted a dynamic constrained bi-objective optimization algorithm to solve it.
As presented above, multi-objective optimization technology has been introduced into constrained evolutionary algorithms for solving COPs in many studies. In these studies, problems are generally treated as bi-objective optimization problems: an original objective and a violation objective formulated from the constraint functions. The two objectives are evolved simultaneously to keep a balance between optimizing the objective and reducing the constraint violation. In this paper, we convert a COP into a many-objective optimization problem with m + 1 objectives (m is the number of constraints of the original problem), i.e., each constraint is converted into a violation objective function, so many-objective optimization techniques can be incorporated into our method to maintain diversity. Besides, we adopt the dynamic constraint handling mechanism to deal with constraints (Zeng et al. 2011; Li et al. 2015). In the beginning, the constraint boundaries are broadened, and they are then gradually shrunk back to the original boundaries as the evolution proceeds. In this way the problem
appears to be unconstrained, so the many-objective optimization algorithm is able to work well without being affected by the constraints. The proposed many-objective optimization evolutionary algorithm with dynamic constraint handling, called MaDC, uses DE as the search engine and the reference-point-based nondominated sorting approach (Deb and Jain 2014) for the selection of individuals. The paper is organized as follows. In Sect. 2, we discuss how to convert a constrained optimization problem into an equivalent dynamic constrained many-objective optimization problem. Section 3 describes the implementation of MaDC. The experimental results are provided in Sect. 4. Finally, Sect. 5 presents the conclusions and future work.
2 Problem conversion

This paper presents an improved version of the method proposed in Li et al. (2015), which converted a COP into a dynamic constrained multi-objective optimization problem with m objectives, where m is no more than four. This paper first converts a COP into a constrained many-objective optimization problem (CMaOP) whose number of objectives equals the number of constraints of the problem plus one. Subsequently, the CMaOP is converted into a dynamic constrained many-objective optimization problem (DCMaOP), i.e., a series of CMaOPs. In this way, the original COP is solved by solving the equivalent DCMaOP.

2.1 Constrained optimization problem

A constrained optimization problem (COP), without loss of generality, is formulated as follows (for minimization):

    minimize    y = f(x)
    subject to  g(x) = (g_1(x), g_2(x), ..., g_p(x)) ≤ 0
                h(x) = (h_{p+1}(x), h_{p+2}(x), ..., h_m(x)) = 0        (1)
    where       x = (x_1, x_2, ..., x_n) ∈ X,  X = {x | l ≤ x ≤ u},
                l = (l_1, l_2, ..., l_n),  u = (u_1, u_2, ..., u_n)

Here x = (x_1, x_2, ..., x_n) ∈ X ⊂ R^n is the decision vector containing n variables, X is a space in R^n called the decision space, and l and u are the lower and upper boundaries of the decision space. There are p inequality constraints g(x) ≤ 0 and m − p equality constraints h(x) = 0. If an inequality constraint holds with equality at a point x ∈ X, it is considered active there; all equality constraints are active constraints. Usually, equality constraints are transformed into inequality constraints:

    g_j(x) = |h_j(x)| − δ ≤ 0,   j = p + 1, ..., m        (2)
where δ is a positive tolerance value; in this paper δ is set to 0.0001. If a solution vector x = (x_1, x_2, ..., x_n) satisfies all constraints g(x) ≤ 0, it is feasible; otherwise it is infeasible. The feasible set SF contains all feasible solutions of a constrained optimization problem:

    SF = {x | x is feasible, x ∈ X}        (3)
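A minimal sketch of Eqs. (2)-(3), assuming the constraint values have already been evaluated at a candidate x; function names are illustrative, not from the paper:

```python
# Sketch of Eqs. (2)-(3): transform equality constraints into
# inequalities with tolerance delta, then test feasibility.
DELTA = 1e-4  # the paper sets delta = 0.0001

def to_inequalities(g_vals, h_vals, delta=DELTA):
    """Return one list of inequality-constraint values g_j(x) <= 0."""
    return list(g_vals) + [abs(h) - delta for h in h_vals]

def is_feasible(g_vals, h_vals, delta=DELTA):
    """x is feasible iff every transformed constraint value is <= 0."""
    return all(g <= 0.0 for g in to_inequalities(g_vals, h_vals, delta))
```

After this transformation the COP has only inequality constraints, which is the form assumed throughout the rest of the paper.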
2.2 Multi-objective optimization problem

In this paper, a COP is converted to a CMaOP and solved with many-objective optimization technology. This section describes the multi-objective optimization problem and the relevant concepts. Generally, a multi-objective optimization problem (MOP) with n decision variables and M objectives can be formulated as:

    minimize  y = f(x) = (f_1(x), f_2(x), ..., f_M(x))
    where     x = (x_1, x_2, ..., x_n) ∈ X ⊂ R^n,
              y = (f_1, f_2, ..., f_M) ∈ Y ⊂ R^M        (4)

In multi-objective optimization, the comparison between two solutions depends primarily on the Pareto dominance relationship. Given two vectors a = (a_1, a_2, ..., a_p) and b = (b_1, b_2, ..., b_p), a is said to Pareto dominate b if and only if a_i ≤ b_i for every index i and a_j < b_j for at least one index j. For two solutions x1 and x2, if the objective vector f(x1) Pareto dominates f(x2), then x1 Pareto dominates x2 (denoted x1 ≺ x2). A solution x* is a Pareto optimal (nondominated) solution if there exists no other solution that Pareto dominates x*. The concept of nondominance usually yields a set of solutions for a MOP rather than a single solution; this set is called the Pareto set (nondominated set), and the corresponding set of objective vectors in the objective space is called the Pareto front (PF) (Coello 2006). A MOP with a large number of objectives, in general more than three, is known as a many-objective optimization problem (Bader and Zitzler 2011). Many-objective optimization is challenging, and its difficulty increases severely with the number of objectives. There have been a number of attempts to solve many-objective optimization problems (Deb and Jain 2014; Ma et al. 2014; Asafuddoula et al. 2015).

2.3 Converting a COP to a CMaOP

In recent years, many studies have taken advantage of multi-objective optimization techniques to deal with constraints (Zeng et al. 2011; Wang and Cai 2012; Cai et al. 2013). Generally, the constraints are added together as one objective, so a COP is converted to a bi-objective problem. However, the constraints usually have different characteristics; for example, some constraints are easy to meet while others are not. If we only use their sum, some information is lost. Thus, in this paper, each constraint is converted to an objective. A CMaOP converted from a COP is stated as follows:

    minimize    y = (f(x), ϕ_1(x), ϕ_2(x), ..., ϕ_m(x))
    subject to  g = g(x) = (g_1(x), g_2(x), ..., g_m(x)) ≤ 0        (5)

where ϕ_i(x) (i = 1, 2, ..., m) is the constraint violation function of g_i(x), defined as:

    ϕ_i(x) = max{g_i(x), 0},   i = 1, 2, ..., m        (6)
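The conversion of Eqs. (5)-(6) can be sketched as follows, assuming the objective value f(x) and the m transformed inequality-constraint values g_i(x) are available; names are illustrative, not from the paper's code:

```python
# A COP with objective f and m (transformed) inequality constraints
# g_1..g_m becomes an (m+1)-objective problem (f, phi_1, ..., phi_m).

def violation_objectives(g_vals):
    """phi_i(x) = max{g_i(x), 0}, Eq. (6)."""
    return [max(g, 0.0) for g in g_vals]

def cmaop_objectives(f_val, g_vals):
    """Objective vector of the CMaOP in Eq. (5)."""
    return [f_val] + violation_objectives(g_vals)
```

Note that a solution is feasible exactly when all m violation objectives equal zero, which is why the transformed problem keeps the same feasible set as Eq. (1).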
The transformed problem defined in Eq. (5) has the same feasible set SF and the same optimal solution x* as the original problem defined in Eq. (1). The CMaOP is therefore equivalent to the COP, and the COP can be solved by solving the CMaOP with a constrained many-objective optimization algorithm.

2.4 Converting a CMaOP to a DCMaOP

Many-objective evolutionary algorithms (MaOEAs) for solving CMaOPs face the same difficulties in handling constraints as EAs solving COPs. MOEAs are able to solve unconstrained MOPs well, as has been demonstrated in many studies (Zhang and Li 2007; Wang et al. 2013; Jara 2014). If the constraints are made so loose that there are effectively no constraints, a MaOEA can be directly applied and retains its efficient search ability as long as the population remains feasible. In this paper, the dynamic constraint handling method proposed by Zeng et al. (2011) is adopted to address this issue. The method changes the constraint boundaries to make most individuals feasible. At the beginning, the original boundary of the CMaOP in Eq. (5) is broadened to e^(0), which is large enough to guarantee that all individuals in the initial population P(0) are feasible:

    e^(0) = (e_1^(0), e_2^(0), ..., e_m^(0)),
    e_i^(0) = max_{x∈P(0)} {g_i(x)},   i = 1, 2, ..., m        (7)
Subsequently, the broadened boundary e^(0) is gradually reduced as the population evolves. The reduction at each step is small enough that the population always remains nearly feasible. Eventually, the boundary shrinks to the original boundary 0. This process constructs a DCMaOP, which is a sequence of CMaOPs {CMaOP^(τ)}, τ = 0, 1, 2, ..., S. The DCMaOP is formulated as follows:
    CMaOP^(0):  min y = (f(x), ϕ_1(x), ϕ_2(x), ..., ϕ_m(x))   s.t. g(x) ≤ e^(0)
    CMaOP^(1):  min y = (f(x), ϕ_1(x), ϕ_2(x), ..., ϕ_m(x))   s.t. g(x) ≤ e^(1)
    ......
    CMaOP^(S):  min y = (f(x), ϕ_1(x), ϕ_2(x), ..., ϕ_m(x))   s.t. g(x) ≤ e^(S) = 0        (8)

where e^(0) ≥ e^(1) ≥ ··· ≥ e^(S) = 0. Here e^(τ) (τ = 0, 1, 2, ..., S) is called the elastic constraint boundary, and τ is called the environment state (Li et al. 2015). The boundary e^(τ) at every environment state can be stated as follows:

    e^(τ) = (e_1^(τ), e_2^(τ), ..., e_m^(τ)),
    e_i^(τ) = A_i e^{−(τ/B_i)^2} − ε,   i = 1, 2, ..., m        (9)
where ε is a positive close-to-zero parameter that controls the convergence speed of the elastic boundary, and A_i and B_i are constants that can be computed from e_i^(0) and e_i^(S). At the initial state τ = 0, the elastic boundary satisfies:

    e_i^(0) = max_{x∈P(0)} {g_i(x)} = A_i − ε,   i = 1, 2, ..., m        (10)

At the final state τ = S, the elastic boundary goes back to 0:

    e_i^(S) = 0 = A_i e^{−(S/B_i)^2} − ε,   i = 1, 2, ..., m        (11)
According to Eqs. (10) and (11), A_i and B_i can be calculated as:

    A_i = max_{x∈P(0)} {g_i(x)} + ε,   i = 1, 2, ..., m
    B_i = S / sqrt( ln( ( max_{x∈P(0)} {g_i(x)} + ε ) / ε ) ),   i = 1, 2, ..., m        (12)
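Under the assumption that the initial maximum violation max_{x∈P(0)} g_i(x) is positive, the schedule of Eqs. (9)-(12) for one constraint can be sketched as:

```python
import math

# Elastic boundary schedule, Eqs. (9)-(12): at tau = 0 the boundary
# equals the initial maximum violation; at tau = S it reaches 0.
def elastic_boundary(max_g0, tau, S, eps=1e-8):
    """e_i(tau) = A_i * exp(-(tau/B_i)^2) - eps."""
    A = max_g0 + eps                      # Eq. (12)
    B = S / math.sqrt(math.log(A / eps))  # Eq. (12)
    return A * math.exp(-(tau / B) ** 2) - eps
```

Substituting tau = 0 gives A_i − ε = max g_i (Eq. 10), and tau = S gives A_i · (ε/A_i) − ε = 0 (Eq. 11), so the schedule interpolates smoothly between the two endpoint conditions.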
Regarding the elastic constraint boundary, if a solution satisfies the inequality g(x) ≤ e, it is said to be e-feasible; otherwise, it is said to be e-infeasible. Pareto domination is defined without considering the constraints (see Sect. 2.2). Given two solutions x1 and x2, x1 is said to e-constrained Pareto dominate x2 (denoted x1 ≺e x2) in a DCMaOP if and only if one of the following three cases holds:

– x1 is e-feasible and x2 is e-infeasible;
– both are e-feasible, and y(x1) Pareto dominates y(x2);
– both are e-infeasible, and ϕ(x1) Pareto dominates ϕ(x2).

3 Algorithm description

This section introduces the MaDC algorithm in detail.
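The e-constrained Pareto dominance defined in Sect. 2.4 can be sketched in pure Python as follows; all function names and argument conventions are illustrative (g1/g2 are constraint vectors, y1/y2 the full objective vectors, p1/p2 the violation vectors):

```python
def pareto_dominates(a, b):
    """a Pareto dominates b: a_i <= b_i for all i, strict for at least one."""
    return all(u <= v for u, v in zip(a, b)) and any(u < v for u, v in zip(a, b))

def e_feasible(g, e):
    """Componentwise test g(x) <= e against the elastic boundary e."""
    return all(gi <= ei for gi, ei in zip(g, e))

def e_dominates(g1, y1, p1, g2, y2, p2, e):
    """x1 e-constrained Pareto dominates x2 (the three cases above)."""
    f1, f2 = e_feasible(g1, e), e_feasible(g2, e)
    if f1 and not f2:
        return True                      # case 1: feasible beats infeasible
    if f1 and f2:
        return pareto_dominates(y1, y2)  # case 2: compare objective vectors
    if not f1 and not f2:
        return pareto_dominates(p1, p2)  # case 3: compare violation vectors
    return False
```

This relation collapses to ordinary Pareto dominance once the whole population is e-feasible, which is the regime MaDC tries to maintain.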
3.1 Framework of MaDC

In this subsection, we present the main framework of MaDC (Algorithm 1). The algorithm randomly generates an initial population, conducts the genetic operators of DE to generate the offspring population, and then chooses individuals from the parent and offspring populations to construct the next generation based on the reference-point-based nondominated sorting approach (Deb and Jain 2014). The iterative process continues until the stopping condition is met. Note that the constraint boundary shrinks during evolution, but it does not change until the whole population is e-feasible. If some individuals remain e-infeasible at a state, the algorithm could iterate indefinitely trying to achieve an e-feasible population; MaxG, the maximal number of generations, is set to end the iteration in that case. The algorithm combines two selection strategies: nondominated-rank-based selection and reference-point-based selection. The former ensures convergence, while the latter enhances the distribution of the population with the aid of a set of well-distributed reference points.

Algorithm 1 Framework of MaDC
Input:
  Z: reference points
  MaxG: maximal number of generations
  S: number of environment states
  F, CR, Pm: DE parameters
Output:
  x*: optimal solution
Step 1: Set population size N = size(Z).
Step 2: Randomly generate an initial population P0 = {x1, x2, ..., xN}.
Step 3: Set e = e^(0), τ = 0, g = 0.
Step 4: If all individuals in the current population are e-feasible, then change the elastic constraint boundary: e = e^(τ+1), τ = τ + 1.
Step 5: Use DE to generate an offspring population Sg from Pg.
Step 6: Use the reference-point-based nondominated sorting approach to select individuals from Sg and Pg to construct the next population Pg+1.
Step 7: g = g + 1; if τ < S and g < MaxG, then go to Step 4.
Step 8: Output x*.
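The control flow of Algorithm 1 can be sketched as a compact skeleton, under the assumption that offspring generation (Algorithm 2), environmental selection (Algorithm 3) and the e-feasibility test are supplied as functions; all names are illustrative stand-ins, not the authors' code:

```python
# Sketch of the MaDC main loop (Algorithm 1). `boundaries` is the
# precomputed sequence e(0) >= e(1) >= ... >= e(S) = 0 from Eq. (9);
# the final population (rather than a single x*) is returned here.
def madc(pop, boundaries, max_g, offspring_fn, select_fn, feasible_fn):
    tau, S = 0, len(boundaries) - 1
    g = 0
    while tau < S and g < max_g:
        # Step 4: the boundary shrinks only once the whole
        # population is e-feasible under the current boundary.
        if all(feasible_fn(x, boundaries[tau]) for x in pop):
            tau += 1
        offspring = offspring_fn(pop)                   # Step 5 (Algorithm 2)
        pop = select_fn(pop, offspring, boundaries[tau])  # Step 6 (Algorithm 3)
        g += 1
    return pop
```

The `g < max_g` guard mirrors the MaxG safeguard: if the population never becomes e-feasible at some state, the loop still terminates.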
3.2 Generating offspring population

The offspring population in Algorithm 1 is generated with three genetic operators: an affine mutation, a crossover and a uniform mutation. Algorithm 2 presents the details of the three operators.

Algorithm 2 Procedure for generating offspring
Input:
  P: parent population
  F, CR, Pm: DE parameters
Output:
  S: offspring population
Step 1: Set S = ∅.
Step 2: i = 1.
Step 3: Select the ith individual vector xi from P and conduct three genetic operators:
  Step 3.1 (affine mutation): vi = xa + F(xb − xc), where xa, xb, xc ∈ P are three randomly selected individual vectors.
  Step 3.2 (crossover): generate the trial vector ui = (u_i1, u_i2, ..., u_in) from the base vector xi = (x_i1, x_i2, ..., x_in) and the mutant vector vi = (v_i1, v_i2, ..., v_in):

      u_ij = v_ij   if r_rnd < CR or j = j_rnd
      u_ij = x_ij   otherwise        (13)

  where j_rnd is a random integer within [1, n] and r_rnd is a random real number within [0, 1).
  Step 3.3 (uniform mutation):

      u_ij = rndReal(0, 1)   if r_rnd < Pm
      u_ij = u_ij            otherwise        (14)

Step 4: Add ui into S, i = i + 1.
Step 5: If i ≤ N, then go to Step 3.
Step 6: Output S.
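One offspring of Algorithm 2 (DE mutation, the binomial crossover of Eq. (13), and the uniform mutation of Eq. (14)) can be sketched as follows; the `bounds` argument and the final clipping to the decision space are illustrative assumptions, not stated in the paper:

```python
import random

# One offspring for individual i: DE/rand/1 affine mutation, binomial
# crossover (Eq. 13), uniform mutation (Eq. 14). `bounds` is a list of
# (low, high) pairs for the decision variables (an assumption here).
def de_offspring(pop, i, bounds, F=0.5, CR=0.9, Pm=0.01):
    n = len(pop[i])
    a, b, c = random.sample([j for j in range(len(pop)) if j != i], 3)
    v = [pop[a][k] + F * (pop[b][k] - pop[c][k]) for k in range(n)]  # Step 3.1
    j_rnd = random.randrange(n)
    u = [v[k] if (random.random() < CR or k == j_rnd) else pop[i][k]
         for k in range(n)]                                          # Eq. (13)
    for k in range(n):                                               # Eq. (14)
        lo, hi = bounds[k]
        if random.random() < Pm:
            u[k] = lo + random.random() * (hi - lo)  # uniform resample
        u[k] = min(max(u[k], lo), hi)  # keep within the decision space
    return u
```

The parameter defaults F = 0.5, CR = 0.9 and Pm = 0.01 match the experimental setup in Sect. 4.3.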
3.3 Creating the next population

Algorithm 3 presents the detailed description of Step 6 in Algorithm 1. It adopts the reference-point-based nondominated sorting method to select individuals for the next population. First, all solutions in the combined population are sorted into different nondomination levels (F1, F2, and so on). Then, the solutions are selected level by level and added into the next population Pt+1. If including the last selected level (the lth level) makes the size of Pt+1 exceed the population size N, only part of the individuals at the lth level are chosen, based on the association between solutions and the predefined reference points. The association operation of a solution s with the reference points is as follows:
Algorithm 3 Procedure for generating the next population
Input:
  C: combined population
  Z: reference points
  N: population size
Output:
  P: next population
Step 1: (F1, F2, ...) = Non-dominated-sort(C).
Step 2: Set P = ∅, i = 1.
Step 3: Psize = size(P) + size(Fi); do:
  Step 3.1: If Psize is exactly equal to N, move all individuals from Fi into P and go to Step 9;
  Step 3.2: If Psize is less than N, move all individuals from Fi into P, i = i + 1, and go to Step 3.
Step 4: Conduct the association operation between the solutions of P and the reference points. For each reference point r, count the number of associated solutions: associated-count(r).
Step 5: Select a reference point r that has the minimum number of associated solutions.
Step 6: Sr = {s | s is an associated solution of r, s ∈ Fi}; do:
  Step 6.1: If size(Sr) = 0, temporarily remove r for the current generation and go to Step 5;
  Step 6.2: If associated-count(r) = 0, select the solution s ∈ Sr that is closest to r;
  Step 6.3: If associated-count(r) > 0, randomly select a solution s ∈ Sr.
Step 7: Move s from Fi into P; associated-count(r) = associated-count(r) + 1.
Step 8: If size(P) is less than N, go to Step 5.
Step 9: Output P.
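Steps 4-6 of Algorithm 3 rely on the association between solutions and reference points, i.e., the normalization and perpendicular-distance computations of Eqs. (15)-(16). A minimal pure-Python sketch, with illustrative names:

```python
# Association operation: normalize objectives (Eq. 15), then find the
# reference line (through the origin and a reference point) with the
# minimum perpendicular distance (Eq. 16).

def normalize(objs):
    """Eq. (15): per-objective min-max normalization over a set of vectors."""
    lo = [min(col) for col in zip(*objs)]
    hi = [max(col) for col in zip(*objs)]
    return [[(f - l) / (h - l) if h > l else 0.0
             for f, l, h in zip(row, lo, hi)] for row in objs]

def perp_dist(f_hat, r):
    """Eq. (16): distance from f_hat to the line through the origin and r."""
    rr = sum(x * x for x in r)
    t = sum(fi * ri for fi, ri in zip(f_hat, r)) / rr  # projection coefficient
    return sum((fi - t * ri) ** 2 for fi, ri in zip(f_hat, r)) ** 0.5

def associate(f_hat, refs):
    """Index of the closest reference line for one normalized solution."""
    return min(range(len(refs)), key=lambda j: perp_dist(f_hat, refs[j]))
```

Each solution is thereby assigned to exactly one reference point, and the per-point counts drive the niching selection of Steps 5-8.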
(1) Normalize the objective functions of the solution s:

    f̂_i(s) = ( f_i(s) − min_{x∈Q(t)} {f_i(x)} ) / ( max_{x∈Q(t)} {f_i(x)} − min_{x∈Q(t)} {f_i(x)} ),   i = 1, 2, ..., M        (15)

(2) Create a series of reference lines by joining the origin with each of the reference points.
(3) Calculate the perpendicular distance from the normalized objective vector of solution s to each of the reference lines:

    d(f̂, r) = ‖ f̂ − ( (r^T · f̂) / ‖r‖ ) · ( r / ‖r‖ ) ‖,
    f̂ = (f̂_1(s), f̂_2(s), ..., f̂_M(s))        (16)

where ‖ · ‖ denotes the length of a vector. If the reference line joining the origin with reference point r is closest to the objective vector f̂(s), then r is considered the associated point of solution s, and s is the associated solution of reference point r.

4 Experiments and results

4.1 Test problem set
The performance of MaDC is tested on the 24 benchmark functions proposed by Liang (2006). In the test suite, problem g20 has no feasible solutions and the feasible space of problem g22 is very narrow; MaDC and the peer algorithms referred to in this paper cannot solve these two problems. The characteristics of the other 22 test cases are given in Table 1.

Table 1 Detailed characteristics of the 22 test problems

Problem | Type of fun. | n | ρ (%) | f(x*) | LI | LE | NI | NE | a | cn
g01 | Quadratic | 13 | 0.0111 | −15.000000 | 9 | 0 | 0 | 0 | 6 | 9
g02 | Nonlinear | 20 | 99.9971 | −0.80361910 | 0 | 0 | 2 | 0 | 1 | 2
g03 | Polynomial | 10 | 0.0000 | −1.00050010 | 0 | 0 | 0 | 1 | 1 | 1
g04 | Quadratic | 5 | 52.1230 | −30665.5386 | 0 | 0 | 6 | 0 | 2 | 6
g05 | Cubic | 4 | 0.0000 | 5126.496714 | 2 | 0 | 0 | 3 | 3 | 5
g06 | Cubic | 2 | 0.0066 | −6961.813875 | 0 | 0 | 2 | 0 | 2 | 2
g07 | Quadratic | 10 | 0.0003 | 24.306209 | 3 | 0 | 5 | 0 | 6 | 8
g08 | Nonlinear | 2 | 0.8560 | −0.09582504 | 0 | 0 | 2 | 0 | 0 | 2
g09 | Polynomial | 7 | 0.5121 | 680.63005737 | 0 | 0 | 4 | 0 | 2 | 4
g10 | Linear | 8 | 0.0010 | 7049.248020 | 3 | 0 | 3 | 0 | 6 | 6
g11 | Quadratic | 2 | 0.0000 | 0.74990000 | 0 | 0 | 0 | 1 | 1 | 1
g12 | Quadratic | 3 | 4.7713 | −1.00000000 | 0 | 0 | 1 | 0 | 0 | 1
g13 | Nonlinear | 5 | 0.0000 | 0.053941514 | 0 | 0 | 0 | 3 | 3 | 3
g14 | Nonlinear | 10 | 0.0000 | −47.76488845 | 0 | 3 | 0 | 0 | 3 | 3
g15 | Quadratic | 3 | 0.0000 | 961.71502228 | 0 | 1 | 0 | 1 | 2 | 2
g16 | Nonlinear | 5 | 0.0204 | −1.90515525 | 4 | 0 | 34 | 0 | 4 | 38
g17 | Nonlinear | 6 | 0.0000 | 8853.533874 | 0 | 0 | 0 | 4 | 4 | 4
g18 | Quadratic | 9 | 0.0000 | −0.86602540 | 0 | 0 | 13 | 0 | 6 | 13
g19 | Nonlinear | 15 | 33.4761 | 32.65559295 | 0 | 0 | 5 | 0 | 0 | 5
g21 | Linear | 7 | 0.0000 | 193.72451007 | 0 | 0 | 1 | 5 | 6 | 6
g23 | Linear | 9 | 0.0000 | −400.05510000 | 0 | 3 | 2 | 1 | 6 | 6
g24 | Linear | 2 | 79.6556 | −5.50801327 | 0 | 0 | 2 | 0 | 2 | 2

From Table 1, we can see that various types of functions are included in the test suite, such as linear, nonlinear, polynomial, cubic and quadratic functions. The number of decision variables (n) and the number of constraints (cn) vary across the test problems, as do the types of the constraints: linear inequality constraints (LI), linear equality constraints (LE), nonlinear inequality constraints (NI), nonlinear equality constraints (NE), active constraints (a), and inactive constraints. In Table 1, ρ is the estimated ratio of the feasible region to the search space, and f(x*) is the best known objective value.
4.2 Construction of reference points

Many algorithms adopt reference points or reference vectors to maintain diversity in multi-objective optimization, such as MOEA/D (Zhang and Li 2007) and NSGA-III (Deb and Jain 2014); they generally use the simplex-lattice design method to generate the reference points. Some new methodologies have also been introduced to construct reference vectors: Ma et al. (2014) proposed a uniform decomposition measurement for generating an arbitrary number of vectors, and Zapotecas Martínez et al. (2015) presented a low-discrepancy-sequences-based method to deal with high-dimensional objective optimization. The two methods are used in MOEA/D and perform well. In order to be consistent with NSGA-III, we use the same method to place reference points on a normalized hyperplane with an intercept of one on every objective axis. For an M-objective problem, k divisions are chosen along each objective. A generated reference point can be considered a vector of M components, where each component is a multiple of 1/k:

    r = (r_1, r_2, ..., r_M),
    s.t. r_1 + r_2 + ··· + r_M = 1 and r_i ∈ {0/k, 1/k, ..., k/k},   i = 1, 2, ..., M        (17)

The whole point set is composed of all possible combinations of the components, so the total number of reference points H for an M-objective problem with k divisions is:

    H = C_{M+k−1}^{k}        (18)
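Eqs. (17)-(18) describe the simplex-lattice construction. A small sketch, with illustrative names, that enumerates the points and checks the count against Eq. (18):

```python
from math import comb

# Eq. (17): all vectors r with components in {0/k, 1/k, ..., k/k}
# summing to 1, enumerated recursively as compositions of k into M parts.
def reference_points(M, k):
    pts = []
    def rec(prefix, left):
        if len(prefix) == M - 1:
            pts.append([p / k for p in prefix + [left]])
            return
        for i in range(left + 1):
            rec(prefix + [i], left - i)
    rec([], k)
    return pts

# For M = 3, k = 4 this yields H = C(3+4-1, 4) = 15 points, matching Eq. (18).
assert len(reference_points(3, 4)) == comb(3 + 4 - 1, 4) == 15
```

The same routine generates each layer of the two-layer scheme described below; the inside-layer points are then shrunk toward the center of the hyperplane.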
Assume that M = 3 (three-objective problems) and k = 4 (four divisions along each objective) are selected; then H = C_{3+4−1}^{4} = 15 points will be generated. Fig. 1 illustrates the reference points.

Fig. 1 Illustration of the generated reference points for three-objective problems with four divisions

When there are many objectives (M ≥ 5), one layer of reference points is not appropriate. Note that if the value of k is less than the number of objectives M, no intermediate point is created by this method. For seven-objective problems, even k = 7 would generate 1716 reference points, only one of which is an intermediate point. The majority of the points are located on the boundary, which deteriorates the ability of the guided population to maintain diversity. To address this issue, we use two layers of reference points (a boundary layer and an inside layer) for many-objective problems. A reference point on the inside layer is defined as the midpoint of the line segment from the corresponding point of the boundary layer to the center point of the normalized hyperplane. Fig. 2 shows the two-layered reference points: six points (k = 2) on the boundary layer, (0.0, 0.0, 1.0), (0.0, 0.5, 0.5), (0.0, 1.0, 0.0), (0.5, 0.0, 0.5), (0.5, 0.5, 0.0) and (1.0, 0.0, 0.0), and three points (k = 1) on the inside layer, (0.1667, 0.1666, 0.6667), (0.1666, 0.6667, 0.1667) and (0.6667, 0.1666, 0.1667).

Fig. 2 Illustration of two-layered reference points for three-objective problems (two divisions for the boundary points and one division for the inside points)

In this method, the selection is based first on the nondominated rank and second on the association with the created reference points. In this way, the diversity of reference points on the entire normalized hyperplane ensures that the obtained solutions are likely to cover a wide range of the Pareto front.

Table 2 Number of reference points (H) and population size in MaDC

M | Divi. of boun. | Divi. of insi. | H | Popsize
2 | 90 | 0 | 91 | 91
3 | 12 | 0 | 91 | 91
4 | 6 | 0 | 84 | 84
5 | 4 | 2 | 85 | 85
6 | 3 | 2 | 77 | 77
7 | 3 | 1 | 91 | 91
8 | 2 | 2 | 72 | 72
9 | 2 | 2 | 90 | 90
10 | 2 | 2 | 110 | 110
14 | 2 | 1 | 119 | 119
39 | 1 | 1 | 78 | 78

Table 3 Number of reference points (H) and population size in NSGA-III

M | Divi. of boun. | Divi. of insi. | H | Popsize
2 | 90 | 0 | 91 | 92
3 | 12 | 0 | 91 | 92
5 | 6 | 0 | 210 | 212
8 | 3 | 2 | 156 | 156
10 | 3 | 2 | 275 | 276
15 | 2 | 1 | 135 | 136

4.3 Experimental setup

In MaDC, the population size N was set to the number of reference points, i.e., popsize = H. For NSGA-III, in order to use the tournament selection, N is set to the smallest multiple of four that is not smaller than the number of reference points H (Deb and Jain 2014). Tables 2 and 3 show the number of reference points (H) for problems of various scales in the two algorithms, respectively. For the generation of the offspring, the scaling factor F was set to 0.5, the crossover probability CR was set to 0.9, and the uniform mutation probability Pm was set to 0.01. The value of ε in Eq. (9) was set to 1.0e−8. Six high-performance algorithms were chosen for comparison. The selected state-of-the-art algorithms are: (1) SAMO-DE (Elsayed et al. 2011); (2) ECHT-EP2
Table 4 Function values obtained by MaDC, SAMO-DE, ECHT-EP2, DE-DPS, DCMOEA2, HCOEA and DCMOEA (– indicates results not reported)

Pro. | Crit. | MaDC | SAMO-DE | ECHT-EP2 | DE-DPS | DCMOEA2 | HCOEA | DCMOEA
g01 | Best | −15.0000 | −15.0000 | −15.0000 | −15.0000 | −15.0000 | −15.0000 | −15.0000
g01 | Avg. | −15.0000 | −15.0000 | −15.0000 | −15.0000 | −15.0000 | −15.0000 | −15.0000
g02 | Best | −0.8036191 | −0.8036191 | −0.8036191 | −0.8036191 | −0.803619 | −0.803241 | −0.8036191
g02 | Avg. | −0.8010908 | −0.7987352 | −0.7998220 | −0.8036191 | −0.797824 | −0.801258 | −0.7969470
g03 | Best | −1.0005 | −1.0005 | −1.0005 | −1.0005 | −1.0005 | −1.0005 | −1.0005
g03 | Avg. | −1.0005 | −1.0005 | −1.0005 | −1.0005 | −1.0005 | −1.0005 | −1.0005
g04 | Best | −30665.5386 | −30665.5386 | −30665.5386 | −30665.5386 | −30665.538 | −30665.5386 | −30665.5386
g04 | Avg. | −30665.5386 | −30665.5386 | −30665.5386 | −30665.5386 | −30665.538 | −30665.5386 | −30665.5386
g05 | Best | 5126.4967 | 5126.497 | 5126.497 | 5126.497 | 5126.4967 | 5126.498 | 5126.498
g05 | Avg. | 5126.4967 | 5126.497 | 5126.497 | 5126.497 | 5126.4968 | 5148.960 | 5126.498
g06 | Best | −6961.8138 | −6961.8138 | −6961.8138 | −6961.8138 | −6961.813 | −6961.8138 | −6961.8138
g06 | Avg. | −6961.8138 | −6961.8138 | −6961.8138 | −6961.8138 | −6961.813 | −6961.8138 | −6961.8138
g07 | Best | 24.3062 | 24.3062 | 24.3062 | 24.3062 | 24.3062 | 24.3062 | 24.3062
g07 | Avg. | 24.3062 | 24.3096 | 24.3063 | 24.3062 | 24.3073 | 24.307 | 24.3064
g08 | Best | −0.095825 | −0.095825 | −0.095825 | −0.095825 | −0.095825 | −0.095825 | −0.095825
g08 | Avg. | −0.095825 | −0.095825 | −0.095825 | −0.095825 | −0.095825 | −0.095825 | −0.093491
g09 | Best | 680.630 | 680.630 | 680.630 | 680.630 | 680.630 | 680.630 | 680.630
g09 | Avg. | 680.630 | 680.630 | 680.630 | 680.630 | 680.630 | 680.630 | 680.630
g10 | Best | 7049.248 | 7049.248 | 7049.249 | 7049.248 | 7049.273 | 7049.287 | 7049.248
g10 | Avg. | 7049.304 | 7059.813 | 7049.249 | 7049.248 | 7049.292 | 7049.525 | 7049.248
g11 | Best | 0.7499 | 0.7499 | 0.7499 | 0.7499 | 0.7499 | 0.750 | 0.75
g11 | Avg. | 0.7499 | 0.7499 | 0.7499 | 0.7499 | 0.7499 | 0.750 | 0.75
g12 | Best | −1.000 | −1.000 | −1.000 | −1.000 | −1.000 | −1.000 | −1.000
g12 | Avg. | −1.000 | −1.000 | −1.000 | −1.000 | −1.000 | −1.000 | −1.000
g13 | Best | 0.05394 | 0.05394 | 0.05394 | 0.05394 | 0.05394 | 0.05395 | 0.05395
g13 | Avg. | 0.05394 | 0.05394 | 0.05394 | 0.05394 | 0.05394 | 0.05395 | 0.05395
g14 | Best | −47.76488 | −47.76488 | −47.7649 | −47.76488 | −47.7648 | – | –
g14 | Avg. | −47.76448 | −47.68115 | −47.7648 | −47.76488 | −47.7647 | – | –
g15 | Best | 961.71502 | 961.71502 | 961.71502 | 961.71502 | 961.7150 | – | –
g15 | Avg. | 961.71502 | 961.71502 | 961.71502 | 961.71502 | 961.7150 | – | –
g16 | Best | −1.905155 | −1.905155 | −1.905155 | −1.905155 | −1.90515 | – | –
g16 | Avg. | −1.905155 | −1.905155 | −1.905155 | −1.905155 | −1.90515 | – | –
g17 | Best | 8853.5338 | 8853.5397 | 8853.5397 | 8853.5397 | 8853.5348 | – | –
g17 | Avg. | 8853.5338 | 8853.5397 | 8853.5397 | 8883.7747 | 8861.0021 | – | –
g18 | Best | −0.866025 | −0.866025 | −0.866025 | −0.866025 | −0.8660 | – | –
g18 | Avg. | −0.866025 | −0.866024 | −0.866025 | −0.866025 | −0.8660 | – | –
g19 | Best | 32.65559 | 32.65559 | 32.6591 | 32.65559 | 32.6566 | – | –
g19 | Avg. | 32.65564 | 32.75734 | 32.6623 | 32.65559 | 32.6593 | – | –
g21 | Best | 193.72451 | 193.72451 | 193.7246 | 193.72451 | 193.7278 | – | –
g21 | Avg. | 193.72451 | 193.77137 | 193.7348 | 193.72451 | 193.7315 | – | –
g23 | Best | −400.0451 | −396.1657 | −398.9731 | −400.0551 | −398.4419 | – | –
g23 | Avg. | −395.8492 | −360.8176 | −373.2178 | −400.0551 | −383.6315 | – | –
g24 | Best | −5.508013 | −5.508013 | −5.508013 | −5.508013 | −5.5080 | – | –
g24 | Avg. | −5.508013 | −5.508013 | −5.508013 | −5.508013 | −5.5080 | – | –
(Mallipeddi and Suganthan 2010); (3) DE-DPS (Sarker et al. 2014); (4) DCMOEA2 (Li et al. 2015); (5) HCOEA (Wang et al. 2007); (6) DCMOEA (Zeng et al. 2011). Note that the results of all peer algorithms are taken from their original proposals. None of the algorithms involved in this paper can find feasible solutions for the two problems g20 and g22. For a fair comparison between MaDC and the peer algorithms, the number of environment states was set to S = 240,000/popsize and the maximal number of generations to MaxG = 10,000. All results are averaged over 25 independent runs.

4.4 Results and comparison

The detailed results of MaDC are provided in Table 4 along with those of the six peer algorithms, where the best values are highlighted in boldface. MaDC and the first four algorithms solved all 22 test problems listed in Table 1, while the latter two algorithms, HCOEA and DCMOEA, solved the first 13 test problems. From Table 4, MaDC is able to obtain the optimal solutions for all problems except g23. The algorithms SAMO-DE, ECHT-EP2, DE-DPS and DCMOEA2 obtain the optimal solutions for 20, 18, 21 and 17 problems, respectively; HCOEA and DCMOEA obtain the optimal solutions for 10 and 12 problems, respectively. Regarding the average function values, MaDC performs better than SAMO-DE, ECHT-EP2, DE-DPS, DCMOEA2, HCOEA and DCMOEA on nine, six, six, two, three and three test problems, respectively. In order to show the performance difference at the statistical level, we also applied a two-tailed t test with 48 degrees of freedom at a 0.05 level of significance. Table 5 presents the results, where the +, ≈ and − signs denote that the performance of MaDC is significantly better than, statistically equivalent to, and significantly worse than that of the corresponding peer algorithm, respectively. The results in Table 5 show that MaDC achieves significantly better results than most of the compared algorithms.
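As a minimal illustration of the statistical comparison (25 runs per algorithm, hence 25 + 25 − 2 = 48 degrees of freedom), the pooled two-sample t statistic can be computed as follows; names are illustrative, and the two-tailed 0.05 critical value for 48 degrees of freedom is roughly 2.01:

```python
import math

# Pooled (equal-variance) two-sample t statistic for comparing the
# 25-run result samples of two algorithms on one test problem.
def pooled_t(sample_a, sample_b):
    na, nb = len(sample_a), len(sample_b)
    ma = sum(sample_a) / na
    mb = sum(sample_b) / nb
    va = sum((x - ma) ** 2 for x in sample_a) / (na - 1)  # sample variances
    vb = sum((x - mb) ** 2 for x in sample_b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))
```

A difference is flagged as significant when |t| exceeds the critical value for 48 degrees of freedom at the 0.05 level.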
Table 5 Comparison among MaDC, SAMO-DE, ECHT-EP2, DE-DPS, DCMOEA2, HCOEA and DCMOEA (+, ≈, − denote that the performance of MaDC is significantly better than, statistically equivalent to, and significantly worse than that of the corresponding algorithm, respectively)

Comparison    SAMO-DE    ECHT-EP2    DE-DPS    DCMOEA2    HCOEA    DCMOEA
+                   7           4         1          5        2         2
≈                  15          17        16         16       11        10
−                   0           1         4          1        0         1

4.5 The influence of the number of reference points and the parameter ε

Because each solution in the population has a corresponding reference point, the population size is set approximately to the number of reference points in NSGA-III (see Table 3). We adopt the same mechanism in MaDC (see Table 2), but employ fewer reference points than NSGA-III (fewer divisions along each objective) to obtain a smaller population size. For example, the total number of reference points is 212 (six divisions along each objective) for a five-objective problem in NSGA-III (see Table 3), while it is 85 (k = 4 for the boundary layer and k = 2 for the inside layer) in MaDC (see Table 2). For a seven-objective problem, the number of reference points for the larger population size is set to 238, while it is 91 for the smaller population size.

To study the influence of the population size N on the performance of MaDC, we conducted an experiment on two test problems (the five-objective problem g17 and the seven-objective problem g23) with the two settings of the population size mentioned above. The average results obtained for g17 and g23 are 8855.7497 and −377.1657, respectively, with the large population size, while they are 8853.5338 and −396.5790, respectively, with the small population size. The latter results are significantly better than the former. The convergence plots of MaDC with the two different population sizes on g17 and g23 are shown in Figs. 3 and 4, respectively. In the figures, the X axis represents the number of fitness evaluations and the Y axis represents the average distance of the solution vectors in the normalized solution space. These results show that MaDC with the small population size converges much faster than MaDC with the large population size on the tested problems.

Fig. 3 Convergence plots of MaDC with the two different population sizes on g17

Fig. 4 Convergence plots of MaDC with the two different population sizes on g23

ε in Eq. (9) is a positive close-to-zero parameter used to control the convergence speed of the elastic constraint. To investigate the impact of ε on the proposed algorithm, we tested MaDC with two different values of ε: 1.0e−4 and 1.0e−8. The results show that MaDC with the fast convergence pattern (ε = 1.0e−8) performs better than MaDC with the slow convergence pattern (ε = 1.0e−4) for seven problems (g02, g07, g10, g14, g19, g21 and g23). For the other problems, there is no significant difference between the two values of ε. The fast and slow convergence patterns are depicted in Fig. 5. From the results, ε = 1.0e−8 is more effective than ε = 1.0e−4.

Fig. 5 Convergence plots of the different elastic constraint boundary values for the three-objective DCMaOP

5 Conclusions

To solve COPs, we have proposed a novel methodology that combines a dynamic constraint handling mechanism with a many-objective optimization technique. The method converts a COP into an equivalent dynamic constrained many-objective optimization problem (DCMaOP) and implements a many-objective optimization evolutionary algorithm with dynamic constraint handling, called MaDC, to solve the DCMaOP; in this way the COP is solved. The dynamic constraint handling provides an always-feasible population to achieve effective global search; it is realized by changing the constraint boundary at every state of the constrained problem. The many-objective optimization is mainly handled by the reference-point-based nondominated sorting approach, which maintains a balance between the optimization of the objective and that of the constraints. The proposed algorithm was tested on 24 benchmark functions and compared with six state-of-the-art algorithms. From the results, MaDC has shown its potential to deal with COPs and performs better than the other algorithms involved in this paper. Future work includes: (1) improving the selection strategy to retain more feasible solutions at each generation; (2) introducing other methods to generate reference points; (3) using the dynamic parameter selection mechanism to speed up the convergence of the algorithm; and (4) exploring other candidates for the dynamic environment.

Acknowledgments The research in this paper was supported by the National Natural Science Foundation of China (Nos. 61203306, 61271140 and 61305086).

Compliance with ethical standards
Conflict of interest The authors declare that they have no conflict of interest.

Ethical standard All procedures performed in studies involving human participants were in accordance with the ethical standards of the institutional and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards. This article does not contain any studies with animals performed by any of the authors.

Informed consent Informed consent was obtained from all individual participants included in the study.
References

Asafuddoula M, Ray T, Sarker RA (2015) A decomposition-based evolutionary algorithm for many objective optimization. IEEE Trans Evolut Comput 19(3):445–460
Bader J, Zitzler E (2011) HypE: an algorithm for fast hypervolume-based many-objective optimization. Evolut Comput 19(1):45–76
Cai X, Hu Z, Fan Z (2013) A novel memetic algorithm based on invasive weed optimization and differential evolution for constrained optimization. Soft Comput 17:1893–1910
Coello CAC (2000) Constraint-handling using an evolutionary multiobjective optimization technique. Civ Eng Environ Syst 17:319–346
Coello CAC (2006) Multi-objective optimization: a history view of the field. IEEE Comput Intell Mag 1(1):28–36
Coello CAC, Montes EM (2002) Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv Eng Inform 16(3):193–203
Deb K, Datta R (2010) A fast and accurate solution of constrained optimization problems using a hybrid bi-objective and penalty function approach. In: IEEE Congress on Evolutionary Computation, CEC 2010, Barcelona, pp 1–8
Deb K, Jain H (2014) An evolutionary many-objective optimization algorithm using reference-point-based nondominated sorting approach, part I: solving problems with box constraints. IEEE Trans Evolut Comput 18(4):577–601
Deb K, Meyarivan T (2002) A fast and elitist multiobjective genetic algorithm: NSGA-II. IEEE Trans Evolut Comput 6(2):182–197
Elsayed SM, Sarker RA, Essam DL (2011) Multi-operator based evolutionary algorithms for solving constrained optimization problems. Comput Oper Res 38(12):1877–1896
Hsieh M, Chiang T, Fu L (2011) A hybrid constraint handling mechanism with differential evolution for constrained multiobjective optimization. In: IEEE Congress on Evolutionary Computation, CEC 2011, New Orleans, pp 1785–1792
Jara EC (2014) Multi-objective optimization by using evolutionary algorithms: the p-optimality criteria. IEEE Trans Evolut Comput 18:167–179
Kramer O (2010) A review of constraint-handling techniques for evolution strategies. Appl Comput Intell Soft Comput 1:1–11
Li X, Zeng SY, Qin S, Liu KQ (2015) Constrained optimization problem solved by dynamic constrained NSGA-III multiobjective optimizational techniques. In: IEEE Congress on Evolutionary Computation, CEC 2015, Sendai, pp 2923–2928
Liang JJ (2006) Problem definitions and evaluation criteria for the CEC2006 special session on constrained real-parameter optimization. http://www.ntu.edu.sg/home/epnsugan/
Ma X, Qi Y, Li L, Liu F, Jiao L, Wu J (2014) MOEA/D with uniform decomposition measurement for many-objective problems. Soft Comput 18:2541–2564
Mallipeddi R, Suganthan PN (2010) Ensemble of constraint handling techniques. IEEE Trans Evolut Comput 14(4):561–579
Montes EM, Coello CAC (2011) Constraint handling in nature-inspired numerical optimization: past, present and future. Swarm Evolut Comput 1(4):173–194
Sarker RA, Elsayed SM, Ray T (2014) Differential evolution with dynamic parameters selection for optimization problem. IEEE Trans Evolut Comput 18(5):689–707
Takahama T, Sakai S (2006) Constrained optimization by the constrained differential evolution with gradient-based mutation and feasible elites. In: IEEE Congress on Evolutionary Computation, CEC 2006, Vancouver, pp 1–8
Venkatraman S, Yen GG (2005) A generic framework for constrained optimization using genetic algorithms. IEEE Trans Evolut Comput 9(4):424–435
Wang Y, Cai ZX, Guo G, Zhou Y (2007) Multiobjective optimization and hybrid evolutionary algorithm to solve constrained optimization problems. IEEE Trans Syst Man Cybern 37(3):560–575
Wang R, Purshouse RC, Fleming PJ (2013) Preference-inspired co-evolutionary algorithms for many-objective optimization. IEEE Trans Evolut Comput 17(4):474–494
Wang Y, Cai ZX (2012) A dynamic hybrid framework for constrained evolutionary optimization. IEEE Trans Syst Man Cybern Part B: Cybern 42(1):560–575
Zapotecas Martínez S, Aguirre HE, Tanaka K, Coello CAC (2015) On the low-discrepancy sequences and their use in MOEA/D for high-dimensional objective spaces. In: IEEE Congress on Evolutionary Computation, CEC 2015, Sendai, pp 2835–2842
Zeng SY, Chen S, Zhao J, Zhou A, Li Z, Jing H (2011) Dynamic constrained multi-objective model for solving constrained optimization problem. In: IEEE Congress on Evolutionary Computation, CEC 2011, New Orleans, pp 2041–2046
Zhang QF, Li H (2007) MOEA/D: a multiobjective evolutionary algorithm based on decomposition. IEEE Trans Evolut Comput 11(6):712–731