A Framework of Scalable Dynamic Test Problems for Dynamic Multi-objective Optimization

Shouyong Jiang and Shengxiang Yang
Centre for Computational Intelligence (CCI), School of Computer Science and Informatics,
De Montfort University, The Gateway, Leicester LE1 9BH, UK
Email: [email protected], [email protected]
Abstract—Dynamic multi-objective optimization has received increasing attention in recent years. One of the striking issues in this field is the lack of standard test suites for determining whether an algorithm is capable of solving dynamic multi-objective optimization problems (DMOPs). So far, a large proportion of the test functions commonly used in the literature have only two objectives. Scalable test problems are therefore greatly needed for developing algorithms and comparing their performance in solving DMOPs. This paper presents a framework for constructing scalable dynamic test problems, in which dynamism can be easily added and controlled, and the changing Pareto-optimal fronts are easy to understand and their landscapes are exactly known. Experiments are conducted to compare the performance of four state-of-the-art algorithms on several typical test functions derived from the proposed framework, which gives a better understanding of the strengths and weaknesses of the tested algorithms on scalable DMOPs.
I. INTRODUCTION
Due to the presence of dynamic features, evolutionary computation (EC) faces a challenging task in handling dynamic multi-objective optimization problems (DMOPs) in changing environments. Over the years, there has been a growing interest in dynamic multi-objective optimization (DMO), and a number of empirical studies have been published in this field. However, many open issues remain in DMO. Benchmark test problems, which are of great importance for developing and examining algorithms for solving DMOPs, are one of the most urgent issues to be resolved in current DMO research.

Most benchmark test functions commonly used in the literature have only two objectives. This means that they are either simple or not scalable in the number of objectives. While algorithms have been tested on dynamic bi-objective problems, their ability to solve dynamic problems with more than two objectives must also be investigated. To achieve this, a test suite of scalable dynamic problems is essential. Besides, scalable dynamic problems are also useful for a comprehensive comparison between algorithms.

In the literature, there are several different approaches to constructing benchmark test functions for DMOPs, e.g., changing the Pareto-optimal front (POF) or the Pareto-optimal set (POS). The pioneering work of Farina et al. [3] suggested a general method for developing dynamic test functions by adapting the static ZDT [19] and DTLZ [2] test suites. Farina et al. classified DMOPs, according to the effect of dynamism, into four types, leading to a clear description of the combinations of changes that appear in the POF and POS. They also suggested a suite of DMOPs, namely the FDA test functions.

Jin and Sendhoff [8] developed an open scheme for constructing DMOPs, where the weights that aggregate the different objectives of a static multi-objective test problem are dynamically changed. Guan et al. [5] studied DMOPs with objective replacement, where some objectives may be replaced with new objectives during the evolution. Mehnen et al. [11] argued that the DTLZ and ZDT test suites are already challenging in their static versions, and that simpler test functions are needed to analyse the effect of dynamics in DMOPs. Hence, they suggested the DSW functions for DMOPs. Furthermore, they proposed a new generic scheme, DTF, which generalizes the FDA functions and allows a variable scaling of the complexity of the dynamic properties. They also added scalable and dynamic constraints to DMOPs by moving circular obstacles in the objective space. Recently, Helbig and Engelbrecht [6] made a sound investigation into the DMOPs currently used in the literature and proposed characteristics that an ideal DMO benchmark suite should exhibit. Besides, after highlighting shortcomings of current DMOPs, they also provided several benchmark functions with complicated POSs and with either an isolated or a deceptive POF.

In this paper, we propose a framework for constructing scalable DMOPs. In the framework, dynamism can be easily added and controlled, and the changing POFs are easy to understand and their landscapes are exactly known beforehand. Besides, a set of test instances is provided for illustration. The performance of four state-of-the-art DMO algorithms is studied on several selected instances, which provides a better understanding of the strengths and shortcomings of these algorithms.

The rest of this paper is organized as follows. Sect. II introduces two commonly used classification rules in the field of DMO. Sect. III presents our proposed framework for scalable benchmark functions. Experimental results of the chosen algorithms on some generated instances are presented in Sect. IV. Sect. V concludes this paper.
II. CLASSIFICATION OF DMOPS
Having a sound and clear classification is of great importance, not only for a better understanding of dynamism but also for an easier way to define or construct dynamic test problems. Thus, before developing dynamic multi-objective test problems, one should answer the question of what classes can be defined for them. Fortunately, there are already two classifications in the literature, which can be roughly termed effect-based and cause-based classifications, respectively.

A. Effect-based Classification

In most real-life DMOPs, e.g., vehicle routing, the environment often changes over time. According to the induced effects on the POF/POS, Farina et al. [3] classified dynamic environments into four different types:

• Type I - the POS changes over time while the POF remains stationary.
• Type II - both the POF and POS change over time.
• Type III - the POF changes over time while the POS remains stationary.
• Type IV - both the POF and POS remain stationary, though the objective functions or the constraints may change over time.
Farina et al. further noted that, in a dynamic environment, several of the above types of change may occur simultaneously on the time scale.

B. Cause-based Classification

Tantar et al. [14] argued that Farina et al.'s classification, although of undisputed importance, does not capture or describe where the dynamic changes in DMOPs come from, i.e., the causes of the dynamic changes. Accordingly, they proposed the following intuitive classification of dynamic environments:

• First order - the decision variables change over time.
• Second order - the objective functions change over time.
• Third order - the current values of the decision variables or objective functions depend on their previous values.
• Fourth order - parts of or the entire environment changes over time.
III. PROPOSED FRAMEWORK FOR SCALABLE DMOPS

Scalable DMOPs can be constructed in many possible ways, and the easiest method is probably to adapt existing static many-objective test problems by adding some dynamic elements, e.g., the FDA4 and FDA5 test problems [3]. Bearing this in mind, we develop scalable dynamic many-objective test problems by (1) defining static many-objective problems and (2) adding different dynamic elements to them. According to the comments made by Deb et al. [2], test problems for multi-objective optimization should include the following features for adequately assessing an algorithm:

1) Test problems should be easy to construct.
2) Test problems should be scalable to have any number of decision variables.
3) The resulting POF must be easy to understand, and its shape and location should be exactly known in advance.
4) There should be schemes that hinder both convergence to the true POF and the discovery of an evenly distributed set of solutions.

As DMOPs are a generalization of static multi-objective optimization problems (MOPs), it is reasonable that the above guidelines should likewise apply to DMOPs. Many commonly used static multi-objective test problems in the literature, such as DTLZ [2], WFG [7], and Li and Zhang's framework [10], use component functions for defining the Pareto front and the Pareto set. Therefore, a static MOP can be described as

$$L_{\hat{s}}(F, S, \mathbf{x}, \delta) \tag{1}$$

where
• $F$ is the function vector that defines the Pareto front, $F = (f_1, f_2, \ldots, f_M)$;
• $M$ is the number of objectives;
• $S$ is the function vector that defines the Pareto set, $S = (s_1, s_2, \ldots, s_N)$;
• $\mathbf{x}$ is the set of decision variables, $\mathbf{x} = (x_1, x_2, \ldots, x_n)$, where $n$ is the number of decision variables;
• $\delta$ is the environmental parameter, generally defined as a constant in static MOPs.

According to Eq. (1), Li and Zhang's framework [10] can be written as $L_{static}(\alpha, \beta, \mathbf{x}, \delta)$, where $\alpha = (\alpha_1(\mathbf{x}_I), \alpha_2(\mathbf{x}_I), \ldots, \alpha_M(\mathbf{x}_I))$ and $\beta = (\beta_1(\mathbf{x}_{II} - g(\mathbf{x}_I)), \beta_2(\mathbf{x}_{II} - g(\mathbf{x}_I)), \ldots, \beta_M(\mathbf{x}_{II} - g(\mathbf{x}_I)))$. Note that $\alpha$ and $\beta$ are functions that define the shape of the POF and the shape of the POS, respectively, and remain the same as in Li and Zhang's framework [10]. To describe the dynamic behaviour of a DMOP, the static model in Eq. (1) has to be modified by adding a parameter that represents the dynamism over time. Let $t$ be the time instance and $D$ be the parameter that models the dynamic behaviour of a changing environment. Then, a DMOP can be modelled as

$$L_{\hat{d}}(D, F, S, \mathbf{x}, \delta, t) \tag{2}$$

Note that dynamic behaviours can come from dynamic transformations of $F$, $S$, $\mathbf{x}$, and $\delta$. Dynamic changes of $F$ and $S$ may give the problem a changing POF and a changing POS, respectively. The dynamism of $\mathbf{x}$ refers to changes of the variable boundaries. Unlike the static case, the environmental parameter $\delta$, e.g., the constraints, may change in dynamic environments. A case in point is the dynamic MNK-landscapes illustrated in [14], where the environmental parameter (the number of interacting bits and their respective distribution on the string) changes over time. In this paper, we concentrate on changes of $F$ and $S$, although we recognize that changes of $\mathbf{x}$ and $\delta$ may also occur in some special cases. Thus, Eq. (2) can be rewritten as

$$L_{\hat{d}}(D(F), D(S), \mathbf{x}, t) \tag{3}$$

where $D(F)$ and $D(S)$ represent possible dynamisms imposed on $F$ and $S$, respectively. If there is no change on $F$ or $S$, then $D(F) = F$ or $D(S) = S$. In order to generate clearly defined dynamic multi-objective test problems, more information must be provided for the parameters of Eq. (2). The Pareto set ($S$) can be easily constructed according to the framework of Li and Zhang [10]. The remaining work is to describe the POF-associated component $F$ and the dynamism-associated component $D$.
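For illustration, this composition can be sketched in a few lines of Python. The sketch below is purely illustrative; the type aliases and the name make_dmop are our own rather than part of the framework definition, and the (1 + S) multiplicative form anticipates the construction used by the SJY instances in Table I.

```python
from typing import Callable, Sequence

# Illustrative signatures (not from the paper): a POF-shape component F maps
# the POF-related variables x_I and the time t to M objective values; a
# distance component S maps the POS-related variables x_II and t to a scalar
# that is zero exactly on the Pareto set.
ShapeFunc = Callable[[Sequence[float], float], Sequence[float]]
DistFunc = Callable[[Sequence[float], float], float]

def make_dmop(F: ShapeFunc, S: DistFunc, M: int):
    """Compose f_i(x, t) = (1 + S(x_II, t)) * F_i(x_I, t), i = 1..M,
    the construction pattern instantiated by the SJY problems."""
    def evaluate(x: Sequence[float], t: float):
        x_I, x_II = x[:M], x[M:]      # split the decision variables
        penalty = 1.0 + S(x_II, t)    # equals 1 exactly on the POS
        return [penalty * fi for fi in F(x_I, t)]
    return evaluate
```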
where D(F ) and D(S) represent possible dynamisms imposed on F and S, respectively. If there is no change on F or S, then D(F ) = F or D(S) = S. In order to generate clearly defined dynamic multiobjective test problems, more information must be provided for the parameters of Eq. (2). The Pareto set (S) can be easily constructed according to the framework of Li and Zhang [10]. The remaining work is to facilitate descriptions of the POF-associated component F and the dynamism-associated component D. A. The POF-associated Component The DTLZ1 and DTLZ2 problems, with the linear POF and spherical POF, respectively, are two basic scalable test functions and have been adapted to construct other kinds of scalable test problems, e.g., WFG [7] and the framework of Saxena et al. [12]. Nevertheless, these scalable problems are not enough to cover features that appear in real life, and more representative problems are needed. For this reason, we suggest three POF-associated component functions in the following. 1 f1 (x) = x1 +x2 x+···+x M x2 f2 (x) = x1 +x2 +···+xM .. F1 : . xM fM (x) = x1 +x2 +···+x M x = (x1 , x2 , · · · , xM ) ∈ (0, 1]M
(4)
(5)
where F2 describes a convex POF shape and all objective function values must satisfy f1 f2 · · ·fM = 1.
F3 :
MQ −1 f (x) = cos(0.5πxj ) 1 j=1 MQ −2 cos(0.5πxj ) f2 (x) = sin(0.5πxM −1 ) j=1
.. . fM −1 (x) = cos(0.5πx1 ) sin(0.5πx2 ) γ fM (x) = 1 − cos (0.5πx1 ) xI = (x1 , x2 , · · · , xM −1 ) ∈ [0, 1]M −1
B. The Dynamism-associated Component The dynamism of a DMOP in a dynamic environment can be attributed to various possible factors, such as drifting landscapes, transformations of environmental parameters, and changes of decision variables. In this paper, we just consider two kinds of dynamisms. One is the dynamism in the decision variable space, like the time-varying boundaries and the swaps between decision variables. The other is the dynamism in the objective space, like the change of objectives. Note that the above two dynamisms can occur simultaneously in a DMOP, which can help us construct a Type-II problem. 1) Dynamism in the Decision Variable Space: As defined in [10], an arbitrary prescribed POS shape in a MOP can be modelled as follows: S(x) = S(xII − g(xI ))
where F1 defines a linear POF shape: f1 + f2 + · · · + fM = 1. Note that F1 can be extended to have a convex or concave shape by a non-linear meta-function mapping: fi → fiσ . When 0 < σ < 1, the POF is convex; when σ > 1, the POF is concave. The parameter σ can be changed over time to develop a dynamic feature. f1 (x) = M −1√xx2 1x3 ...xM f (x) = M −1√xx1 2x3 ...xM 2 .. F2 : . fM (x) = M −1√x1xxM2 ...xM −1 xI = (x1 , x2 , · · · , xM ) ∈ [1, 10]M
where the POF shape is non-convex and can be diverse due to the parameter γ (γ > 0) and all objectiveγ function values 2 2 satisfy fM = 1 − (f12 + f22 + · · · + fM −1 ) . When γ = 1, the POF shape of F3 is a right-circular conical hyper-surface. When γ = 2, the POF shape is a circular hyper-paraboloid that opens down. Besides, the POF of F3 can be defined more complicatedly by choosing different non-linear mappings for the function fM . As an example, we just use the simple F3 defined in Eq. (6) to illustrate dynamism in a dynamic environment.
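As a concrete illustration, F1 and F3 can be coded directly from Eqs. (4) and (6). The following is a minimal Python sketch under the stated domains; the function names are ours and illustrative only.

```python
import math

def F1(x_I, t=0.0):
    """Linear POF shape of Eq. (4): f_i = x_i / (x_1 + ... + x_M), so the
    objective values satisfy f_1 + ... + f_M = 1 for x_i in (0, 1]."""
    total = sum(x_I)
    return [xi / total for xi in x_I]

def F3(x_I, gamma=1.0, t=0.0):
    """Non-convex POF shape of Eq. (6) on x_I = (x_1, ..., x_{M-1}) in
    [0, 1]^{M-1}; gamma > 0 controls the curvature of f_M."""
    M = len(x_I) + 1
    f = []
    for i in range(1, M):                   # f_1, ..., f_{M-1}
        val = 1.0
        for j in range(M - i):              # product of cosine terms
            val *= math.cos(0.5 * math.pi * x_I[j])
        if i > 1:                           # leading sine from f_2 onwards
            val *= math.sin(0.5 * math.pi * x_I[M - i])
        f.append(val)
    f.append(1.0 - math.cos(0.5 * math.pi * x_I[0]) ** gamma)
    return f
```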
B. The Dynamism-associated Component

The dynamism of a DMOP in a changing environment can be attributed to various possible factors, such as drifting landscapes, transformations of environmental parameters, and changes of decision variables. In this paper, we consider two kinds of dynamism. One is dynamism in the decision variable space, such as time-varying variable boundaries and swaps between decision variables. The other is dynamism in the objective space, such as changes of the objectives. Note that the two kinds of dynamism can occur simultaneously in a DMOP, which helps us construct Type II problems.

1) Dynamism in the Decision Variable Space: As defined in [10], an arbitrarily prescribed POS shape in a MOP can be modelled as follows:

$$S(\mathbf{x}) = S(\mathbf{x}_{II} - g(\mathbf{x}_I)) \tag{7}$$

where $\mathbf{x} = (\mathbf{x}_I, \mathbf{x}_{II})$, $\mathbf{x}_I$ is the POF-associated decision variable vector, and $\mathbf{x}_{II}$ is the POS-associated decision variable vector. The minimum of $S(\mathbf{x})$ is zero. The function $S$ not only describes the correlation between decision variables but also controls the difficulty of convergence. One simple method of adding a dynamic feature to the decision space is to transform the function $S$ in Eq. (7) over time. This can be written as $D(S)$, where $D$ is the associated dynamic transformation in the decision variable space. In the following, we give some instances of $D(S)$.

$$DS_1: \begin{cases} D(S(\mathbf{x})) = S(D(\mathbf{x})) \\ S(\mathbf{x}) = \sum_{x_i \in \mathbf{x}_{II}} x_i^2 \\ D(\mathbf{x}): x_i \leftarrow x_i - G(t) \\ G(t) = \sin(0.5\pi t) \end{cases} \tag{8}$$

where $S(\mathbf{x})$ defines a simple POS shape, and $D(\mathbf{x})$ shifts each decision variable in $\mathbf{x}_{II}$ at each time instant.

$$DS_2: \begin{cases} D(S(\mathbf{x})) = S(D(\mathbf{x})) \\ S(\mathbf{x}) = \sum_{x_i \in \mathbf{x}_{II}} \left( x_i - \frac{\sum_{j=1}^{n_p} x_j}{n_p} \right)^2 \\ D(\mathbf{x}): \mathrm{swap}_t(x_i \in \mathbf{x}_I, x_j \in \mathbf{x}_I), \; i \neq j \end{cases} \tag{9}$$

where, as described by $D(\mathbf{x})$, any two decision variables randomly chosen from $\mathbf{x}_I$ are swapped at time $t$, and $S(\mathbf{x})$ defines dependencies between variables. Taking the severity of change into account, we set the number of pairs of swapped variables to $n_p = \mathrm{rand}(1, M/2)$, where $\mathrm{rand}(a, b)$ generates a random integer between $a$ and $b$, and $M$ is the number of objectives.

$$DS_3: \begin{cases} D(S(\mathbf{x})) = S(D(\mathbf{x})) \\ S(\mathbf{x}) = \sum_{x_i \in \mathbf{x}_{II}} x_i^2 \\ D(\mathbf{x}): \mathrm{swap}_t(x_i \in \mathbf{x}_I, x_j \in \mathbf{x}_{II}) \end{cases} \tag{10}$$

where $S(\mathbf{x})$ defines a simple POS shape, and $D(\mathbf{x})$ swaps a decision variable in $\mathbf{x}_I$ with another variable in $\mathbf{x}_{II}$. Again, the number of pairs of swapped variables is set to $n_p = \mathrm{rand}(1, M/2)$. Intuitively, DS3 is much more complicated than DS2, since swapping two variables from different sets is itself a kind of severe change.

Fig. 1. POF of SJY1 for three objectives.

2) Dynamism in the Objective Space: The dynamism of the objective space can be described by $D(F)$, where $D$ is the associated dynamic transformation in the objective space. For illustration, we give one example of $D(F)$, where the $F$ illustrated comes from the POF-associated component:

$$D_1(F): \begin{cases} D(F(\mathbf{x})) = D(f_{i=1:M}) \\ D(f_{i=1:M}): f_i \leftarrow f_i^{A(t)} \\ A(t) > 0 \end{cases} \tag{11}$$

Here, F1 is used for illustrating the dynamism. Each objective function in F1 changes over time, leading to the POF $f_1^{A(t)} + f_2^{A(t)} + \cdots + f_M^{A(t)} = 1$.
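These dynamics admit equally direct implementations. The following Python sketch illustrates DS1, DS3, and D1; the names are ours, the standard random number generator stands in for swap_t, and A(t) is passed in as a parameter since the framework only requires A(t) > 0.

```python
import math
import random

def G(t):
    """Shift used by DS1 in Eq. (8); the POS moves as x_i = G(t), x_i in x_II."""
    return math.sin(0.5 * math.pi * t)

def ds1_distance(x_II, t):
    """S(D(x)) for DS1: squared distance of x_II from the moving optimum G(t)."""
    return sum((xi - G(t)) ** 2 for xi in x_II)

def ds3_swap(x_I, x_II, M, rng=random):
    """DS3 of Eq. (10): swap n_p = rand(1, M/2) randomly chosen pairs of
    variables between x_I and x_II, a deliberately severe change."""
    x_I, x_II = list(x_I), list(x_II)
    n_p = rng.randint(1, max(1, M // 2))
    for _ in range(n_p):
        i, j = rng.randrange(len(x_I)), rng.randrange(len(x_II))
        x_I[i], x_II[j] = x_II[j], x_I[i]
    return x_I, x_II

def d1_objectives(f, A_t):
    """D1(F) of Eq. (11): raise every objective to the power A(t) > 0."""
    return [fi ** A_t for fi in f]
```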
Fig. 2. POF of SJY2 for three objectives.

C. Test Instances

After modelling the dynamism of DMOPs, we combine the POF-associated component, the dynamism-associated component, and the framework of arbitrary POS shapes to give some test instances. Table I lists these test instances and classifies them according to the two classification rules. The time $t$ in each problem is defined as

$$t = \frac{1}{n_t} \left\lfloor \frac{\tau}{\tau_t} \right\rfloor \tag{12}$$

where $n_t$ represents the severity of change, $\tau$ is the iteration counter, and $\tau_t$ represents the frequency of change.
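A direct implementation of Eq. (12) may read as follows; the argument names simply mirror the symbols above.

```python
def time_instant(tau: int, n_t: int = 10, tau_t: int = 10) -> float:
    """Eq. (12): t = (1 / n_t) * floor(tau / tau_t), where n_t controls the
    severity of change and tau_t the frequency of change."""
    return (tau // tau_t) / n_t
```

For example, with n_t = 10 and tau_t = 20, generation tau = 120 gives t = 0.6.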
We would like to make the following comments on the proposed test instances:

1) As shown in Fig. 1, SJY1 has a linear POF which remains stationary over time. This problem tests the ability of algorithms to track the changing POSs.
2) The overall POF shape of SJY2, shown in Fig. 2, is a convex hyper-surface, and solutions become sparser as each objective moves away from the origin. This problem not only tests the diversity performance of algorithms, as solutions are more likely to lie in the intermediate region of the POF than in its boundary region, but also assesses to what extent the approximated Pareto front covers the true POF.
3) SJY3 is an instance in which dynamism appears in both the POF and the POS. The interaction between the two variable subsets is to some extent a severe change, which makes it difficult for an algorithm to relocate the new POS. Besides, the dynamism moves the extreme members of the resulting POF far from or close to the other members, which can be clearly observed in Fig. 3.
4) SJY4, depicted in Fig. 4, is a problem in which the overall shape of the resulting POF can vary from concave to convex. It is important to realize that a change in the shape of the POF requires a change in the distribution of the POS in order to obtain well-diversified solutions on the POF.
5) SJY5 is a Type IV problem, as both the POF and the POS remain stationary; its POF is shown in Fig. 5. It is worth noting that SJY5 has a deceptive property that hinders algorithms from finding the true POF.
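To tie the pieces together, the following usage sketch, which builds on the illustrative helpers given earlier and is therefore hypothetical rather than part of the benchmark specification, assembles a three-objective SJY1 and evaluates one candidate solution.

```python
# Assemble a three-objective SJY1 from the sketches above: the shape
# component F1 yields the stationary linear POF, while DS1 moves the POS.
sjy1 = make_dmop(lambda x_I, t: F1(x_I), ds1_distance, M=3)

x = [0.2, 0.3, 0.5] + [0.0] * 7        # n = 10 variables; x_II starts at 0
t = time_instant(tau=120, n_t=10, tau_t=20)
f = sjy1(x, t)                         # here sum(f) > 1 since x_II != G(t)
print(f)                               # on the POS (x_II = G(t)), sum(f) = 1
```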
IV. EXPERIMENTAL STUDY
TABLE I. SCALABLE TEST INSTANCES OF DMOPS.

SJY1: $L_{\hat{d}}(F_1, DS_1, \mathbf{x}, t)$; Type I, Order 1; domain $[0,1]^M \times [-1,1]^{n-M}$
  $f_{i=1:M}(\mathbf{x}, t) = (1 + S(\mathbf{x}, t)) \, \dfrac{x_i}{x_1 + x_2 + \cdots + x_M}$
  $S(\mathbf{x}, t) = \sum_{x_i \in \mathbf{x}_{II}} (x_i - G(t))^2$, $G(t) = \sin(0.5\pi t)$
  where $x_{i=1:M} \in \mathbf{x}_I$, $x_{i=M+1:n} \in \mathbf{x}_{II}$
  POF: $f_1 + f_2 + \cdots + f_M = 1$; POS: $x_i = G(t)$, $\forall x_i \in \mathbf{x}_{II}$

SJY2: $L_{\hat{d}}(F_2, DS_2, \mathbf{x}, t)$; Type I, Order 1; domain $[1,10]^n$
  $f_{i=1:M}(\mathbf{x}, t) = (1 + S(\mathbf{x}, t)) \, \dfrac{x_i}{\sqrt[M-1]{\prod_{j \neq i, x_j \in \mathbf{x}_I} x_j}}$
  $\mathrm{swap}_t(x_i \in \mathbf{x}_I, x_{M-i} \in \mathbf{x}_I)$, $i = 1:n_p$
  $S(\mathbf{x}, t) = \sum_{x_i \in \mathbf{x}_{II}} \left( x_i - \frac{\sum_{j=1}^{n_p} x_j}{n_p} \right)^2$, $n_p = \mathrm{rand}(1, M/2)$
  where $x_{i=1:M} \in \mathbf{x}_I$, $x_{i=M+1:n} \in \mathbf{x}_{II}$
  POF: $f_1 f_2 \cdots f_M = 1$; POS: $x_i = \frac{\sum_{j=1}^{n_p} x_j}{n_p}$, $\forall x_j \in \mathbf{x}_I$, $\forall x_i \in \mathbf{x}_{II}$

SJY3: $L_{\hat{d}}(D_1(F_2), DS_3, \mathbf{x}, t)$; Type II, Order 1 & Order 2; domain $[1,10]^n$
  $f_{i=1:M}(\mathbf{x}, t) = (1 + S(\mathbf{x}, t)) \left( \dfrac{x_i}{\sqrt[M-1]{\prod_{j \neq i, x_j \in \mathbf{x}_I} x_j}} \right)^{H(t)}$
  $\mathrm{swap}_t(x_i \in \mathbf{x}_I, x_{M+i} \in \mathbf{x}_{II})$, $i = 1:n_p$, $n_p = \mathrm{rand}(1, M/2)$
  $S(\mathbf{x}, t) = \sum_{x_i \in \mathbf{x}_{II}} (x_i - 5)^2$, $H(t) = 0.5 + 2|\sin(0.5\pi t)|$
  where $x_{i=1:M} \in \mathbf{x}_I$, $x_{i=M+1:n} \in \mathbf{x}_{II}$
  POF: $f_1 f_2 \cdots f_M = 1$; POS: $x_i = 5$, $\forall x_i \in \mathbf{x}_{II}$

SJY4: $L_{\hat{d}}(D_1(F_3), S, \mathbf{x}, t)$; Type III, Order 2; domain $[0,1]^n$
  $f_1(\mathbf{x}, t) = (1 + S(\mathbf{x}, t)) \prod_{j=1}^{M-1} \cos(0.5\pi x_j)$
  $f_{i=2:M-1}(\mathbf{x}, t) = (1 + S(\mathbf{x}, t)) \sin(0.5\pi x_{M-i+1}) \prod_{j=1}^{M-i} \cos(0.5\pi x_j)$
  $f_M(\mathbf{x}, t) = (1 + S(\mathbf{x}, t)) (1 - \cos^{\gamma(t)}(0.5\pi x_1))$
  $S(\mathbf{x}, t) = \sum_{x_i \in \mathbf{x}_{II}} x_i^2$, $\gamma(t) = 2 + 1.8\sin(0.5\pi t)$
  where $x_{i=1:M} \in \mathbf{x}_I$, $x_{i=M+1:n} \in \mathbf{x}_{II}$
  POF: $f_M = 1 - (f_1^2 + f_2^2 + \cdots + f_{M-1}^2)^{\gamma(t)/2}$; POS: $x_i = 0$, $\forall x_i \in \mathbf{x}_{II}$

SJY5: $L_{\hat{d}}(D_1(F_3), S, \mathbf{x}, t)$; Type IV, Order 2; domain $[0,1]^n$
  $f_1(\mathbf{x}, t) = \left( \prod_{j=1}^{M-1} \cos(0.5\pi x_j) \right)^{1+S(\mathbf{x}, t)}$
  $f_{i=2:M-1}(\mathbf{x}, t) = \left( \sin(0.5\pi x_{M-i+1}) \prod_{j=1}^{M-i} \cos(0.5\pi x_j) \right)^{1+S(\mathbf{x}, t)}$
  $f_M(\mathbf{x}, t) = \left( \dfrac{1}{1 + \cos^2(0.5\pi x_1)} \right)^{(1+S(\mathbf{x}, t))B(t)}$
  $S(\mathbf{x}, t) = \sum_{x_i \in \mathbf{x}_{II}} x_i^2$, $B(t) = 1.5 + 1.2\sin(0.5\pi t)$
  where $x_{i=1:M} \in \mathbf{x}_I$, $x_{i=M+1:n} \in \mathbf{x}_{II}$
  POF: $f_M (1 + f_1^2 + \cdots + f_{M-1}^2) = 1$; POS: $x_i = 0$, $\forall x_i \in \mathbf{x}_{II}$

A. Experimental Settings

Many static multi-objective evolutionary algorithms (MOEAs) have been adapted to handle DMOPs. The algorithms used in this work are four commonly cited approaches in the literature: DMOPSO [9], dCOEA [4], dNSGA-II-A [1], and MOEA/D [17]. All the algorithms have a mechanism of change detection and response except MOEA/D. To examine the performance of MOEA/D on DMOPs, we employ the following scheme for MOEA/D: 10% of the individuals are randomly selected from the current population and used for change detection; if a change is detected, a restart scheme is used for change response. Besides, the parameter settings of each algorithm were derived from the corresponding referenced papers.
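This detection-and-restart scheme can be sketched as follows. The population representation (dictionaries with keys "x" and "f") and the helper names are our illustrative assumptions, not MOEA/D's actual interface.

```python
import random

def detect_change(population, evaluate, t, fraction=0.10, rng=random):
    """Re-evaluate a random 10% of the population; a change is flagged when
    any stored objective vector differs from its re-evaluation at time t."""
    k = max(1, int(fraction * len(population)))
    return any(ind["f"] != evaluate(ind["x"], t)
               for ind in rng.sample(population, k))

def restart(population, random_solution, evaluate, t):
    """Restart response: reinitialize and re-evaluate every individual."""
    for ind in population:
        ind["x"] = random_solution()
        ind["f"] = evaluate(ind["x"], t)
```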
All the algorithms were tested on our proposed SJY problems. For three-objective problems, the number of variables was set to 10, and the population size was set to 300 for each algorithm. To study the effect of the frequency of change ($\tau_t$) on each problem, $n_t$ was set to 10, and $\tau_t$ was set to 5, 10, 20, 30, and 50. To guarantee fairness for all the tested algorithms, the total number of changes was set to 40 during the evolution, which is adequate to cover all potential changes in the SJY problems. Besides, 100 additional generations were given to each algorithm before the first change to minimize the potential effect of static optimization. Thus, the total number of generations was $100 + 40\tau_t$.

B. Performance Metrics

Performance metrics play a fundamental role in comparing and evaluating algorithms. Nevertheless, there are no standard performance measures in the field of DMO at present. Most commonly used performance metrics in DMO are adapted versions of performance measures for static multi-objective optimization.

In this paper, three performance metrics found in the literature were used to measure the performance of the algorithms: the mean inverted generational distance (mIGD) [18], Schott's spacing metric (S) [13], and the maximum spread (MS) [4]. mIGD and S evaluate the convergence and the diversity performance of algorithms, respectively, while MS measures to what extent the approximated POF covers the true POF. All the metrics were calculated just before the next change occurred.
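For clarity, the mIGD computation can be sketched as follows, assuming sampled true POF points are available just before each change; the helper names are illustrative.

```python
import math

def igd(pof_samples, approximation):
    """IGD: mean distance from each sampled true POF point to its nearest
    neighbour in the approximation set (smaller is better)."""
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return sum(min(dist(p, q) for q in approximation)
               for p in pof_samples) / len(pof_samples)

def mean_igd(pof_per_change, approx_per_change):
    """mIGD [18]: the average of the IGD values recorded just before each
    environmental change."""
    values = [igd(p, a) for p, a in zip(pof_per_change, approx_per_change)]
    return sum(values) / len(values)
```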
Fig. 3. POFs of SJY3 at three time steps for three objectives: (a) t = 0; (b) t = 0.5; (c) t = 1.

Fig. 4. POFs of SJY4 at three time steps for three objectives: (a) t = 0; (b) t = 1; (c) t = 3.

Fig. 5. POF of SJY5 for three objectives.

C. Experimental Results

Due to the page limit, only the first three test instances are used to evaluate the performance of the compared algorithms. The experimental results reported in Tables II to IV are the average values and standard errors over 30 independent runs, and the best results are highlighted in bold.

Table II presents the statistical results for SJY1. In this table, the mean values of mIGD and S obtained by dCOEA are smaller than those obtained by the other three algorithms for $\tau_t \geq 10$, indicating that, in most cases, dCOEA is more capable of tracking the changing POSs and finding a set of well-distributed solutions than the other algorithms. In the case of rapid environmental changes, e.g., $\tau_t = 5$, MOEA/D seems to achieve better diversity performance than dCOEA. It is also clear that both DMOPSO and dNSGA-II-A obtain good results on MS, meaning that they can find wide-spread solutions that cover the true POF. Despite that, DMOPSO is more stable than dNSGA-II-A on MS, since the deviation values of the former are zero for all tested frequencies.

The results for SJY2, shown in Table III, are somewhat divergent. While dNSGA-II-A converges best to the changing POF under rapid environmental changes, it is outperformed by MOEA/D in the cases of slow changes. Besides, dNSGA-II-A achieves the best diversity performance in all cases except $\tau_t = 50$, where dCOEA obtains the smallest S value. DMOPSO again achieves the best values on MS, indicating that DMOPSO is more likely to approximate some extreme solutions on the POF than the other algorithms.
TABLE II. PERFORMANCE OF THE FOUR ALGORITHMS FOR SJY1.

Metric  τ_t  DMOPSO                  dNSGA-II-A              dCOEA                   MOEA/D
mIGD    5    8.3568e-03(6.0657e-04)  1.0930e-02(1.9552e-03)  3.1796e-03(3.3742e-04)  3.2581e-02(2.6680e-05)
        10   7.4592e-03(7.9481e-04)  6.3580e-03(6.9998e-04)  1.7723e-03(1.2276e-04)  6.3851e-03(1.1577e-05)
        20   6.3062e-03(5.9666e-04)  5.2131e-03(4.1980e-04)  1.2733e-03(9.0944e-05)  6.3797e-03(4.6508e-06)
        30   6.0469e-03(6.8873e-04)  5.4576e-03(4.0964e-04)  8.4753e-04(3.5084e-05)  6.4112e-03(4.4218e-06)
        50   5.3808e-03(5.4692e-04)  5.4719e-03(5.7427e-04)  7.5779e-04(9.0657e-05)  6.4333e-03(3.9023e-06)
S       5    1.8864e-01(1.5785e-02)  9.5682e-02(8.6249e-03)  5.4772e-02(1.1731e-02)  2.9425e-02(3.5000e-04)
        10   1.5134e-01(1.7274e-02)  6.9932e-02(7.8640e-03)  3.1875e-02(3.5718e-03)  3.1983e-02(1.9881e-04)
        20   1.1423e-01(1.1347e-02)  5.2617e-02(9.3135e-03)  2.6370e-02(6.9427e-03)  3.2297e-02(2.1854e-04)
        30   9.6088e-02(1.0032e-02)  4.2963e-02(5.4478e-03)  1.9076e-02(5.1001e-03)  3.2087e-02(1.6219e-04)
        50   7.5909e-02(1.1886e-02)  3.4532e-02(2.4303e-03)  1.4716e-02(5.4144e-04)  3.1494e-02(1.7792e-04)
MS      5    1(0)                    1.0000e+00(4.9905e-05)  9.1108e-01(1.4568e-02)  7.5328e-02(1.1166e-03)
        10   1(0)                    1.0000e+00(1.0467e-05)  9.4492e-01(9.5140e-03)  9.5953e-01(4.7504e-03)
        20   1(0)                    1.0000e+00(4.9905e-06)  9.6343e-01(6.2122e-03)  9.9184e-01(1.7406e-03)
        30   1(0)                    1.0000e+00(1.0467e-05)  9.7989e-01(3.0418e-03)  9.9877e-01(4.8372e-04)
        50   1(0)                    1.0000e+00(3.3492e-06)  9.8912e-01(2.0091e-03)  9.9993e-01(1.9073e-05)
TABLE III. PERFORMANCE OF THE FOUR ALGORITHMS FOR SJY2.

Metric  τ_t  DMOPSO                  dNSGA-II-A              dCOEA                   MOEA/D
mIGD    5    4.9320e-02(1.6682e-02)  1.0161e-02(1.4056e-03)  1.3580e-02(2.2132e-03)  1.1191e-02(4.7831e-04)
        10   3.0874e-02(9.5994e-03)  8.8957e-03(1.2303e-03)  9.6894e-03(4.8335e-04)  8.1601e-03(5.4828e-04)
        20   1.9715e-02(4.0849e-03)  8.2880e-03(5.8340e-04)  8.7607e-03(6.2425e-04)  5.5806e-03(2.8461e-04)
        30   1.7306e-02(2.3571e-03)  8.0530e-03(2.8553e-04)  8.6214e-03(5.5284e-04)  4.4442e-03(2.2287e-04)
        50   1.5955e-02(1.0113e-02)  7.9599e-03(2.0598e-04)  8.5405e-03(5.6411e-04)  3.3586e-03(1.3771e-04)
S       5    9.2438e+00(2.9313e+00)  1.3598e-01(6.2739e-02)  4.3376e-01(1.2189e-01)  7.3477e-02(1.7229e-02)
        10   4.9354e+00(2.5238e+00)  9.3017e-02(1.1901e-02)  1.7849e-01(1.4570e-02)  1.2264e-01(1.0737e-02)
        20   2.2808e+00(9.2871e-01)  8.1897e-02(7.2830e-03)  1.0319e-01(4.5112e-03)  1.6987e-01(6.2038e-03)
        30   1.7814e+00(1.1111e+00)  8.1627e-02(3.8006e-03)  8.5297e-02(3.2972e-03)  1.8923e-01(5.1148e-03)
        50   1.4884e+00(1.8359e+00)  8.1659e-02(3.3015e-03)  7.4648e-02(2.3547e-03)  2.1970e-01(4.1707e-03)
MS      5    7.2719e-01(5.4193e-02)  4.2333e-01(2.4770e-02)  3.8342e-01(3.3795e-02)  3.0499e-01(2.4518e-02)
        10   7.6386e-01(6.0467e-02)  4.3579e-01(8.4833e-03)  4.1976e-01(2.3805e-02)  3.6215e-01(2.7129e-02)
        20   7.8220e-01(4.0035e-02)  4.3972e-01(9.5369e-03)  4.6610e-01(2.7834e-02)  4.5961e-01(1.9266e-02)
        30   7.5494e-01(4.1563e-02)  4.4465e-01(3.8986e-03)  4.7906e-01(3.1770e-02)  5.3049e-01(1.9549e-02)
        50   7.2486e-01(5.8903e-02)  4.4830e-01(3.1572e-03)  4.9749e-01(3.1668e-02)  6.3755e-01(2.1997e-02)
TABLE IV. PERFORMANCE OF THE FOUR ALGORITHMS FOR SJY3.

Metric  τ_t  DMOPSO                  dNSGA-II-A              dCOEA                   MOEA/D
mIGD    5    4.9067e-01(1.6788e-01)  2.1908e-02(2.3486e-03)  1.0769e-02(7.5932e-04)  2.2816e-02(3.8846e-04)
        10   5.8050e-01(1.0295e-01)  1.8614e-02(3.4419e-03)  7.2002e-03(6.3038e-04)  2.0223e-02(3.3100e-04)
        20   4.7119e-01(1.4210e-01)  1.5289e-02(3.4028e-03)  7.8588e-03(3.8683e-04)  1.7747e-02(3.6534e-04)
        30   4.3840e-01(1.4107e-01)  1.3957e-02(2.2910e-03)  8.5549e-03(4.1054e-04)  1.5993e-02(3.5378e-04)
        50   2.7672e-01(1.3296e-01)  1.3543e-02(2.6814e-03)  9.2582e-03(2.8141e-04)  1.4033e-02(2.1317e-04)
S       5    1.7589e+02(2.6505e+01)  5.4054e-02(1.2062e-02)  3.5271e+00(1.7356e+00)  5.6856e-01(4.4796e-01)
        10   1.5020e+02(1.4264e+01)  6.4154e-02(1.5749e-02)  2.5932e+00(1.0574e+00)  7.9942e-01(3.0660e-01)
        20   1.2454e+02(2.4546e+01)  7.3982e-02(1.4029e-02)  2.1650e+00(1.1888e+00)  1.0142e+00(8.4785e-02)
        30   9.0022e+01(2.2725e+01)  7.7483e-02(8.1739e-03)  2.1841e+00(1.5795e+00)  1.1074e+00(1.2236e-01)
        50   5.3348e+01(2.5109e+01)  7.9102e-02(1.0059e-02)  1.4471e+00(6.2045e-01)  1.4599e+00(1.6099e-01)
MS      5    8.4095e-01(2.9901e-02)  1.1512e-01(2.4816e-02)  8.6277e-01(2.0206e-02)  3.8025e-01(4.2654e-02)
        10   8.3211e-01(8.0422e-03)  1.6392e-01(3.5546e-02)  9.6611e-01(9.8538e-03)  5.1678e-01(2.9432e-02)
        20   8.4730e-01(3.5053e-02)  2.1608e-01(4.0136e-02)  9.9198e-01(2.8351e-03)  6.0697e-01(1.4243e-02)
        30   8.4555e-01(1.7540e-02)  2.3786e-01(2.4221e-02)  9.9434e-01(2.0650e-03)  6.3919e-01(1.6635e-02)
        50   8.7318e-01(5.2741e-02)  2.4712e-01(3.1745e-02)  9.9644e-01(9.7479e-04)  7.0703e-01(1.5121e-02)
It can be observed from Table IV that dNSGA-II-A and dCOEA are the two outperformers for SJY3 with regard to the tested metrics: while dNSGA-II-A achieves the best diversity performance, dCOEA performs best regarding convergence and maximum spread. It should be noted that dNSGA-II-A appears to find solutions that are located in only a small region of the POF. Furthermore, the mean S values of the four algorithms tend to become large as the frequency of change becomes slow. This is because the extreme solutions on the POF are far from the centroid solutions; the more extreme solutions an algorithm finds, the larger S will be. Therefore, the S metric deteriorates as $\tau_t$ becomes large.

To summarise, the SJY problems are able to examine different performance properties of an algorithm. While DMOPSO is efficient in finding the extreme solutions on the POF, dNSGA-II-A is suitable for approximating well-diversified solutions. The dCOEA algorithm tends to have good convergence performance on the tested problems. Due to the absence of an efficient change response mechanism, MOEA/D only achieves the best convergence performance on SJY2. Thus, a restart scheme is not enough for MOEA/D to handle dynamic multi-objective problems, and other techniques should be introduced to strengthen its performance.

V. CONCLUSIONS AND FUTURE WORK

An extensive review of the DMOPs in the literature revealed that there is a lack of standard test problems for dynamic multi-objective optimization. At present, most of the DMOPs used in the literature have only two objectives, which is not enough for testing and comparing the performance of algorithms. For this reason, this paper presents a framework for developing scalable dynamic multi-objective test functions. Besides, some test instances are provided, on which experiments are conducted to assess the performance of several chosen algorithms. Hopefully, this will help researchers to further the performance analysis of algorithms in handling dynamic problems with a scalable number of objectives.

As a first step, the analysis in this paper has focused on DMOPs with a small number of objectives. Future work will include a comprehensive theoretical analysis of dynamic many-objective problems. Besides, we will also focus on handling some open issues in DMO, such as dynamic performance measures and dynamic multi-objective evolutionary algorithms.

ACKNOWLEDGEMENT

The authors would like to thank Dr. Mardé Helbig and Prof. Andries P. Engelbrecht for their kind help and for providing source code for the experiments. This work was supported by the Engineering and Physical Sciences Research Council (EPSRC) of the UK under Grant EP/K001310/1.

REFERENCES
[1] K. Deb, N. Rao U.B., and S. Karthik, "Dynamic multi-objective optimization and decision-making using modified NSGA-II: a case study on hydro-thermal power scheduling," in Proc. 4th Int. Conf. Evol. Multi-Criterion Optim., LNCS 4403, 2007, pp. 803–817.
[2] K. Deb, L. Thiele, M. Laumanns, and E. Zitzler, "Scalable test problems for evolutionary multi-objective optimization," KanGAL Report 2001001, Kanpur Genetic Algorithms Laboratory (KanGAL), Indian Institute of Technology Kanpur, 2001.
[3] M. Farina, K. Deb, and P. Amato, "Dynamic multiobjective optimization problems: test cases, approximations, and applications," IEEE Trans. Evol. Comput., vol. 8, no. 5, pp. 425–442, 2004.
[4] C. Goh and K. C. Tan, "A competitive-cooperative coevolutionary paradigm for dynamic multiobjective optimization," IEEE Trans. Evol. Comput., vol. 13, no. 1, pp. 103–127, 2009.
[5] S. Guan, Q. Chen, and W. Mo, "Evolving dynamic multi-objective optimization problems with objective replacement," Artif. Intell. Rev., vol. 23, pp. 267–293, 2005.
[6] M. Helbig and A. P. Engelbrecht, "Benchmarks for dynamic multi-objective optimisation algorithms," ACM Comput. Surv., vol. 46, no. 3, Article 37, 2014.
[7] S. Huband, P. Hingston, L. Barone, and L. While, "A review of multiobjective test problems and a scalable test problem toolkit," IEEE Trans. Evol. Comput., vol. 10, no. 2, pp. 477–506, 2006.
[8] Y. Jin and B. Sendhoff, "Constructing dynamic optimization test problems using the multi-objective optimization concept," in Proc. EvoWorkshops 2004: Appl. Evol. Comput., 2004, pp. 525–536.
[9] M. S. Lechuga, "Multi-objective optimisation using sharing in swarm optimisation algorithms," Ph.D. dissertation, University of Birmingham, Birmingham, UK, 2009.
[10] H. Li and Q. Zhang, "Multiobjective optimization problems with complicated Pareto sets, MOEA/D and NSGA-II," IEEE Trans. Evol. Comput., vol. 13, no. 2, pp. 284–302, 2009.
[11] J. Mehnen, G. Rudolph, and T. Wagner, "Evolutionary optimization of dynamic multiobjective functions," Tech. Report FI-204/06, Universität Dortmund, Dortmund, Germany, 2006.
[12] D. K. Saxena, Q. Zhang, J. A. Duro, and A. Tiwari, "Framework for many-objective test problems with both simple and complicated Pareto-set shapes," in Proc. 6th Int. Conf. Evol. Multi-Criterion Optim., LNCS 6576, 2011, pp. 197–211.
[13] J. R. Schott, "Fault tolerant design using single and multicriteria genetic algorithm optimization," M.Sc. thesis, Department of Aeronautics and Astronautics, Massachusetts Institute of Technology, Cambridge, MA, 1995.
[14] E. Tantar, A. Tantar, and P. Bouvry, "On dynamic multi-objective optimization classification and performance measures," in Proc. 2011 IEEE Congr. Evol. Comput., 2011, pp. 2759–2766.
[15] P. P. Y. Wu, D. Campbell, and T. Merz, "Multi-objective four-dimensional vehicle motion planning in large dynamic environments," IEEE Trans. Syst., Man, Cybern. B, Cybern., vol. 41, no. 3, pp. 621–634, 2011.
[16] Z. Zhang, "Multiobjective optimization immune algorithm in dynamic environments and its application to greenhouse control," Appl. Soft Comput., vol. 8, no. 2, pp. 959–971, 2008.
[17] Q. Zhang and H. Li, "MOEA/D: a multiobjective evolutionary algorithm based on decomposition," IEEE Trans. Evol. Comput., vol. 11, no. 6, pp. 712–731, 2007.
[18] A. Zhou, Y. Jin, and Q. Zhang, "A population prediction strategy for evolutionary dynamic multiobjective optimization," IEEE Trans. Cybern., vol. 44, no. 1, pp. 40–53, 2014.
[19] E. Zitzler, K. Deb, and L. Thiele, "Comparison of multiobjective evolutionary algorithms: empirical results," Evol. Comput., vol. 8, no. 2, pp. 173–195, 2000.