Soft Comput (2014) 18:1405–1424 DOI 10.1007/s00500-013-1153-0
METHODOLOGIES AND APPLICATION
A DE and PSO based hybrid algorithm for dynamic optimization problems Xingquan Zuo · Li Xiao
Published online: 31 October 2013 © Springer-Verlag Berlin Heidelberg 2013
Abstract Many real-world optimization problems are dynamic: the fitness landscape is time-dependent and the optima change over time. Such problems challenge traditional optimization algorithms, because an algorithm not only has to find the global optimum but also needs to closely track its trajectory. In this paper, a new hybrid algorithm integrating differential evolution (DE) and particle swarm optimization (PSO) is proposed for dynamic optimization problems. A multi-population strategy is adopted to enhance diversity and to try to keep each subpopulation on a different peak in the fitness landscape. A hybrid operator combining DE and PSO is designed, in which the DE and PSO operations are applied to each individual in sequence. An exclusion scheme is proposed that integrates the distance-based exclusion scheme with the hill-valley function to track adjacent peaks. The algorithm is applied to the set of benchmark functions used in the CEC 2009 competition on dynamic optimization. Experimental results show that it is more effective in terms of overall performance than the comparative algorithms.

Keywords Dynamic optimization algorithm · Differential evolution · Particle swarm optimization · Exclusion scheme
Communicated by T.-P. Hong. X. Zuo (B) Computer School, Beijing University of Posts and Telecommunications, Beijing, People’s Republic of China e-mail:
[email protected] L. Xiao Automation School, Beijing University of Posts and Telecommunications, Beijing, People’s Republic of China
1 Introduction

The optimization problems that we confront in real life are often dynamic, which means that the elements of these problems are time-varying, as in dynamic resource scheduling and dynamic vehicle routing. The purpose of optimization algorithms for such dynamic problems is no longer to find a static optimum but to detect and track the moving optimum in a dynamic environment. Currently, evolutionary algorithms (EAs) are considered to be the most widely used methods for such problems (Cruz et al. 2011; Nguyen et al. 2012).

Differential evolution (DE) (Storn and Price 1997) and particle swarm optimization (PSO) (Kennedy and Eberhart 2001), which emerged in recent years, are very popular optimization algorithms. They have drawn much attention from researchers because of their excellent performance on continuous optimization problems. Both DE and PSO are population-based algorithms. Different from other EAs, DE produces a new candidate solution (individual) using the vector difference of two randomly selected individuals from the population. It has a good global search capability but usually converges slowly in the later stage of population evolution. Mimicking the behavior of bird flocking and fish swarming, PSO evolves a population (swarm) by making each particle (solution) in the swarm be attracted by two attractors (that is, pbest and gbest). This evolutionary behavior allows PSO to converge quickly, but at the same time makes it prone to being trapped in local optima.

In recent years, hybrid algorithms have aroused the interest of researchers, since they have proved to be very effective for complex optimization problems by combining the advantages of different search strategies. Due to the complementary properties of DE and PSO, a few hybrid algorithms combining DE and PSO have been studied (Zhang and Xie 2003; Das
et al. 2005; Moore and Venayagamoorthy 2006; Omran et al. 2009); however, all of those hybrid algorithms are devised for static optimization problems. To the best of our knowledge, DE and PSO based hybrid algorithms for dynamic environments are still hard to find. In this paper, we study the hybridization of DE and PSO and devise a hybrid algorithm (Multi-DEPSO) for dynamic optimization problems. This algorithm contains several subpopulations, each of which is successively evolved by the DE and PSO operators. A new exclusion scheme is suggested that combines the distance-based exclusion scheme with the hill-valley function. The benchmark problems of the CEC 2009 competition on dynamic optimization are used to test our algorithm. Experimental results show that it achieves better performance than the comparative algorithms.

This paper is organized as follows. In Sect. 2, related work is reviewed. Section 3 describes the benchmark problems in dynamic environments. The proposed Multi-DEPSO algorithm is introduced in Sect. 4. Section 5 provides numerical analysis and a discussion of the experimental results. Finally, conclusions are drawn in Sect. 6.

2 Related literature

Evolutionary algorithms are among the most effective methods for dynamic optimization problems, and a fair number of dynamic evolutionary algorithms have been studied over the years. Branke (2001) proposed a multi-population algorithm named self-organizing scouts (SOS). The algorithm is based on the idea of the "forking genetic algorithm (GA)": it uses a parent population to continuously search for potential peaks, while a number of smaller populations track the promising peaks over time. Bui et al. (2005) applied a multi-objective GA to solve single-objective dynamic optimization problems. The fitness value of the function is used as the first objective, and the second objective is chosen from six different objectives, i.e., time-based, random, inversion of the primary objective function, distance to the closest neighbor, average distance to all individuals, and distance to the best individual in the population. Moser and Hendtlass (2007) proposed a multi-individual version of extremal optimization (EO). It performs a global search using the basic EO algorithm with a "stepwise" sampling scheme that samples each dimension at equal distances, and carries out a local search by hill climbing. Yu and Suganthan (2009) proposed an algorithm that integrates evolutionary programming with an ensemble of memories. It has two memory archives: the first one represents the short-term memory and serves for regular updating; the second one represents the long-term memory, which is activated when premature convergence is identified. Woldesenbet and Yen (2009) proposed a dynamic evolutionary algorithm that uses variable relocation to adapt already converged or currently evolving individuals to the new envi-
ronmental condition. del Amo et al. (2012) presented a comparative study of the performance of eight algorithms for continuous dynamic optimization problems.

Differential evolution and particle swarm optimization are very popular optimization algorithms. Compared to most other EAs, DE is much simpler and more straightforward to implement, and it performs well on static problems. During the last few years it has been applied to dynamic optimization problems, and it has been shown to do well for such problems too. For example, Mendes and Mohais (2005) presented a multi-population differential evolution algorithm, which evolves each subpopulation by DE operations. Brest et al. (2009) proposed a self-adaptive differential evolution for dynamic environments. The algorithm uses a self-adaptive control mechanism to change the control parameters, and adopts multiple populations and an aging mechanism. Hui and Suganthan (2012) proposed a differential evolution for dynamic optimization problems. The algorithm incorporates an ensemble of adaptive mutation strategies with a greedy tournament global search method, and keeps track of past good solutions in an archive with adaptive clearing to enhance population diversity.

Particle swarm optimization is also very effective for dynamic optimization problems because of its capability of fast convergence. Blackwell (2003) introduced charged particles: each particle has a charge of magnitude Q and experiences repulsion effects from all other charged particles. Blackwell and Branke (2004) extended the charged-particle-based PSO by constructing interacting multi-swarms. They presented a multi-population quantum swarm optimization, in which charged particles are randomized within a ball of fixed radius centered on the swarm attractor. In their further work (Blackwell and Branke 2006), an anti-convergence strategy is added to the exclusion scheme. Li and Yang (2009) proposed a clustering particle swarm optimizer for dynamic optimization. It employs a hierarchical clustering method to track multiple peaks and uses a fast local search to find near-optimal solutions in local promising regions. Daneshyari and Yen (2011) proposed a cultural-based particle swarm optimization, which incorporates the required information into the belief space and uses the stored information to detect changes and select the leading particles at three levels, namely the personal, swarm and global levels. Sharifi et al. (2012) proposed a two-phased cellular PSO to address dynamic optimization problems. The algorithm introduces two search phases to strike a balance between exploration and exploitation in cellular PSO.

It is well known that DE has a good global search capability but usually converges slowly in the later stage of evolution, while PSO is able to converge quickly but is prone to being trapped in local optima. Due to their complementary advantages, some studies have shown that the hybrid of DE and PSO can lead to a more effective algorithm for static problems (Zhang
and Xie 2003; Das et al. 2005; Moore and Venayagamoorthy 2006; Omran et al. 2009). For example, Zhang and Xie (2003) proposed a hybrid particle swarm with a differential evolution operator, which performs PSO at odd generations and DE with bell-shaped mutation at even generations. Das et al. (2005) proposed a PSO with differentially perturbed velocity (PSO-DV). It embeds a differential operator into the velocity-update scheme of PSO, and a DE-type selection strategy is also incorporated into PSO. Although there have been a few studies devoted to the hybrid of DE and PSO for static optimization problems, their integration for dynamic environments is still hard to find. In this paper, we devise an effective DE and PSO based hybrid algorithm to solve dynamic optimization problems.
Table 1  Parameters of DRPBG (F1)

  Number of peaks        P = 10 or 50
  Number of dimensions   n (fixed) = 10; n (changed) ∈ [5, 15]
  Peak heights           ∈ [10, 100]
  Peak widths            ∈ [1, 10]
  Number of changes      60
  Change frequency       10000 × n
  Search range           [−5, 5]^n
  Sampling frequency     100
3 Dynamic optimization problem

In recent years, researchers have designed a number of dynamic problems to test the performance of dynamic optimization algorithms, such as the moving peaks benchmark problem (MPB) (Branke 1999), the DF1 generator (Morrison and De Jong 1999), and single-objective and multi-objective dynamic problem generators. Among these problems, one of the most widely used is the MPB, which has an artificial multi-dimensional landscape consisting of several peaks, where the height, width and position of each peak are altered slightly every time a change in the environment occurs (Cruz et al. 2011). The above problems are generated separately by different generators. In order to construct dynamic problems for binary, real and combinatorial spaces using one generator, a generalized dynamic benchmark generator (GDBG) was proposed by Li et al. (2008) and used in the CEC 2009 competition on dynamic optimization. They (Li et al. 2008) introduced a rotation method instead of shifting the positions of peaks as is done in the MPB and DF1 generators. The GDBG system offers a greater challenge for optimization than the MPB due to the rotation method, a larger number of local optima, and higher dimensionality. In particular, two benchmark instances are created by GDBG for real space, namely the dynamic rotation peak benchmark generator (DRPBG) and the dynamic composition benchmark generator (DCBG), whose source code is available at http://www.cs.le.ac.uk/people/syang/ECiDUE/DBG.tar.gz.

3.1 Dynamic rotation peak benchmark generator

DRPBG has a peak-composition structure similar to those of MPB and DF1; however, it uses a rotation method instead of shifting the positions of peaks. The fitness function of DRPBG (namely function F1) with n dimensions and m peaks is defined by

F(\vec{x}, t) = \max_{i=1,\ldots,m} \frac{H_i(t)}{1 + W_i(t)\sqrt{\frac{1}{n}\sum_{j=1}^{n}(x_j - X_{ij}(t))^2}}    (1)

where \vec{x} = (x_1, x_2, \ldots, x_n) is a particular solution; \vec{X}_i(t) = (X_{i1}(t), X_{i2}(t), \ldots, X_{in}(t)) is the location of the ith peak; \vec{H}(t) = (H_1(t), H_2(t), \ldots, H_m(t)) and \vec{W}(t) = (W_1(t), W_2(t), \ldots, W_m(t)) are vectors storing the height and width of every peak, respectively. The height and width of each peak are time-varying and are changed by

\vec{H}(t+1) = \vec{H}(t) \oplus \Delta H    (2)
\vec{W}(t+1) = \vec{W}(t) \oplus \Delta W    (3)

where \Delta H and \Delta W are determined by the change types. The parameters of DRPBG are given in Table 1.

3.2 Dynamic composition benchmark generator

The fitness function of DCBG is defined as

F(x, \varphi, t) = \sum_{i=1}^{m} \left( \omega_i \cdot \left( f_i'\left((x - O_i(t) + O_i^{old}) / \lambda_i \cdot M_i\right) + H_i(t) \right) \right)    (4)

where m is the number of basic functions; f_i'(x) = C \cdot f_i(x)/|f_{max}^{i}|, where C is a predefined constant, f_i(x) is the ith basic function used to construct the composition function, and f_{max}^{i} is the estimated maximum value of f_i(x); M_i is the orthogonal rotation matrix for f_i(x); O_i(t) is the optimum of the changed f_i(x) at time t; O_i^{old} is the optimum of the original f_i(x) without any change; the weight value \omega_i and the stretch factor \lambda_i for f_i(x) are calculated by the rule in Li et al. (2008). H and O are changed in the same way as the parameters H and X in DRPBG. Five basic benchmark functions are used in the DCBG, namely the Sphere, Rastrigin, Weierstrass, Griewank and Ackley functions, resulting in test functions F2–F6. The parameters of DCBG are given in Table 2.
Table 2  Parameters of DCBG (F2–F6)

  Number of peaks        P = 10
  Number of dimensions   n (fixed) = 10; n (changed) ∈ [5, 15]
  Peak heights           ∈ [10, 100]
  Peak widths            ∈ [1, 10]
  Number of changes      60
  Change frequency       10000 × n
  Search range           [−5, 5]^n
  Sampling frequency     100
For each of the above six functions (F1–F6), the following change types of the system control parameters are used: small step change (T0), large step change (T1), random change (T2), chaotic change (T3), recurrent change (T4), recurrent change with noise (T5), and random change with changed dimension (T6).
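As a concrete illustration, the following minimal NumPy sketch shows how the rotation peak fitness of Eq. (1) can be evaluated for one fixed environment state; the dynamics of Eqs. (2)–(3) and the rotation of peak positions are omitted, and the function and variable names are illustrative rather than taken from the GDBG source code.

```python
import numpy as np

def drpbg_fitness(x, peak_pos, heights, widths):
    """Evaluate the rotation peak function of Eq. (1) for one environment state.

    x        : candidate solution, shape (n,)
    peak_pos : peak locations X_i(t), shape (m, n)
    heights  : peak heights H_i(t), shape (m,)
    widths   : peak widths W_i(t), shape (m,)
    """
    n = x.shape[0]
    # root-mean-square distance of x to every peak centre
    dist = np.sqrt(np.sum((x - peak_pos) ** 2, axis=1) / n)
    # each peak contributes H_i / (1 + W_i * dist_i); the landscape value is the maximum
    return np.max(heights / (1.0 + widths * dist))

# tiny usage example with the DRPBG ranges of Table 1 (m = 10 peaks, n = 10 dimensions)
rng = np.random.default_rng(0)
peaks = rng.uniform(-5, 5, size=(10, 10))
h = rng.uniform(10, 100, size=10)
w = rng.uniform(1, 10, size=10)
print(drpbg_fitness(rng.uniform(-5, 5, size=10), peaks, h, w))
```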
4 Multi-population DEPSO

4.1 Differential evolution

Differential evolution, proposed by Storn and Price (1997), is a population-based approach for function optimization. Unlike other EAs, in which mutation is applied as a random perturbation of an individual, mutation plays the main role in DE: a new individual is produced from the vector difference of two randomly selected individuals in the population. DE consists of three basic operators: mutation, crossover and selection. The steps of the basic DE are as follows:

Step 1: Initialize the population.
Step 2: Evaluate each individual in the population.
Step 3: For each individual x_i in the population:

1. Randomly select three individuals x_r1, x_r2, x_r3 from the population, which are different from x_i and from each other.
2. Create a trial vector v by

   v = x_r1 + F \cdot (x_r2 - x_r3)    (5)

   where F is the mutation parameter.
3. Perform the crossover operator to produce a candidate u_i from v and x_i. The jth dimension of the candidate is calculated by

   u_{ij} = \begin{cases} v_{ij} & \text{if } rand(j) \le CR \text{ or } j = j_{rand} \\ x_{ij} & \text{otherwise} \end{cases}    (6)

   where x_{ij} and v_{ij} are the jth dimensions of x_i and v, respectively; rand(j) \in U(0, 1) is a random real number for the jth gene; j_rand is a random integer in the range [1, n]; n is the number of dimensions of the decision variables; and CR is the crossover parameter.
4. Evaluate the candidate u_i.
5. Select the better one of x_i and u_i and put it into the population of the next generation.

Step 4: Return to Step 3 until the termination criterion is satisfied.

4.2 Particle swarm optimization

Particle swarm optimization (Kennedy and Eberhart 2001) is a stochastic optimization algorithm inspired by bird flocking and fish swarming. It is similar in some aspects to EAs, except that potential solutions move through the search space. Each particle represents a solution and experiences linear spring-like attractions towards two attractors: the best position it has attained so far (pbest) and the best position found by the swarm (gbest). In PSO, each particle in the swarm has a position x and a velocity v, which are updated in each generation by

v_i = w \cdot v_i + c_1 \cdot rand_1 \cdot (pbest_i - x_i) + c_2 \cdot rand_2 \cdot (gbest - x_i)    (7)
x_i = x_i + v_i    (8)

where w is the inertia weight, which determines how much of the previous velocity is preserved; pbest_i is the best position of particle x_i so far; gbest is the best position of the swarm so far; rand_1 and rand_2 are random numbers in (0, 1); and c_1 and c_2 are acceleration constants.
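For concreteness, a minimal NumPy sketch of the two basic operators in Eqs. (5)–(8) is given below; the function names and default parameter values (taken from Table 3) are illustrative, not part of the original description.

```python
import numpy as np

def de_trial_vector(pop, i, F=0.15, CR=0.6):
    """DE/rand/1 mutation (Eq. 5) followed by binomial crossover (Eq. 6)."""
    n_pop, n_dim = pop.shape
    candidates = [r for r in range(n_pop) if r != i]
    r1, r2, r3 = np.random.choice(candidates, 3, replace=False)
    v = pop[r1] + F * (pop[r2] - pop[r3])        # Eq. (5)
    u = pop[i].copy()
    mask = np.random.rand(n_dim) <= CR
    mask[np.random.randint(n_dim)] = True        # j_rand guarantees at least one gene from v
    u[mask] = v[mask]                            # Eq. (6)
    return u

def pso_update(x, v, pbest, gbest, w=0.1, c1=2.0, c2=2.0, vmin=-2.0, vmax=2.0):
    """PSO velocity and position update (Eqs. 7-8) with velocity clamping."""
    r1, r2 = np.random.rand(2)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (7)
    v = np.clip(v, vmin, vmax)
    return x + v, v                                             # Eq. (8)
```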
4.3 Proposed hybrid algorithm for dynamic environment

A multi-population strategy is useful for preserving search diversity, with the objective of maintaining multiple subpopulations in different subareas of the fitness landscape. Many studies have shown that this strategy is suitable for dynamic optimization problems, because locating and tracking multiple relatively good optima rather than a single global optimum is more effective in dynamic environments (Blackwell and Branke 2004; Mendes and Mohais 2005; Li and Yang 2012), so we adopt a multi-population strategy in Multi-DEPSO. Suppose Multi-DEPSO contains N subpopulations, each of which consists of S individuals. The velocity of each individual is in the range [v_min, v_max]. There is a gbest for each subpopulation, and the gbest of the kth subpopulation Pop_k (k = 1, ..., N) in the gth generation is denoted by gbest_k^g. Each subpopulation Pop_k has an inferior set Inf_k, whose size is the same as that of Pop_k and which is initialized as Pop_k. The inferior set is used to store candidate individuals produced by the DEPSO operator during the evolutionary process (Sect. 4.3.5). When the environment changes, each subpopulation is reinitialized using its inferior set to maintain the previous search experience, and then the DEPSO operator is used to re-evolve all subpopulations to track the changed global optimum. The pseudo code of Multi-DEPSO is as follows:
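In outline, reconstructed from the operators described in Sects. 4.3.1–4.3.5 (the exact ordering of the steps is our reading of the text, not a verbatim listing):

Step 1: Initialize the N subpopulations Pop_1, ..., Pop_N and set each inferior set Inf_k = Pop_k.
Step 2: Detect whether the environment has changed by re-evaluating each gbest_k (Sect. 4.3.1); if a change is detected, reinitialize every subpopulation from its inferior set (Sect. 4.3.2).
Step 3: Apply the exclusion scheme to the subpopulations (Sect. 4.3.3); any subpopulation marked for re-initialization is rebuilt with the opposite re-initialization operator (Sect. 4.3.4).
Step 4: Evolve each subpopulation for one generation with the DEPSO operator (Sect. 4.3.5), updating the inferior sets, pbests and gbests.
Step 5: Return to Step 2 until the termination criterion is satisfied.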
The detailed operators are introduced below.

4.3.1 Detection of change

An environment change is detected by recalculating the evaluation values of the gbests in the (g−1)th and gth generations. Suppose the evaluation value of gbest_k^{g−1} in the (g−1)th generation is denoted by f_{g−1}(gbest_k^{g−1}), where f_{g−1}(·) is the fitness function of the (g−1)th generation, which may change with the generation g for a dynamic optimization problem. In the gth generation, gbest_k^{g−1} is re-evaluated to obtain its evaluation value f_g(gbest_k^{g−1}). If f_{g−1}(gbest_k^{g−1}) ≠ f_g(gbest_k^{g−1}), the fitness function f_g(·) is different from f_{g−1}(·), i.e., the environment has changed during the gth generation.

4.3.2 Reaction on changes

Once a change is detected, each subpopulation is reinitialized and then evolved to track the global optimum. The individuals stored in each inferior set are sorted in descending order according to their evaluation values. Each reinitialized subpopulation is composed of the top perc × S individuals of its inferior set and (1 − perc) × S random individuals, where 0 ≤ perc ≤ 1. The velocity of each individual in the subpopulation is reinitialized randomly. Reinitializing a subpopulation from its inferior set keeps the previous search experience, allowing the subpopulation to quickly track the changed global optimum, while randomly reinitializing some individuals maintains a large search diversity.
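A minimal NumPy sketch of these two steps, assuming a maximization problem; the function names, array layout and the way the previous gbest value is stored are illustrative assumptions.

```python
import numpy as np

def environment_changed(gbest, stored_value, fitness):
    """Sect. 4.3.1: re-evaluate gbest; any difference from its stored value signals a change."""
    return fitness(gbest) != stored_value

def reinit_subpopulation(inferior, fitness, perc, low, high):
    """Sect. 4.3.2: keep the top perc*S individuals of the inferior set, fill the rest randomly."""
    S, n_dim = inferior.shape
    order = np.argsort([fitness(x) for x in inferior])[::-1]   # descending evaluation value
    n_keep = int(perc * S)
    kept = inferior[order[:n_keep]]
    random_part = np.random.uniform(low, high, size=(S - n_keep, n_dim))
    return np.vstack([kept, random_part])                      # velocities are re-drawn separately
```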
4.3.3 Exclusion scheme

It is a key issue to keep each subpopulation on a different peak in the fitness landscape. A subpopulation is represented by its best individual. Each subpopulation Pop_k performs an exclusion scheme: if Pop_k shares the same peak with any other subpopulation, the better subpopulation is kept and the worse one is marked for re-initialization. Before reinitializing the worse subpopulation, if its best individual is better than the worst individual in the better subpopulation, the latter is replaced by the former.

In the literature (Blackwell and Branke 2004), a distance-based exclusion scheme is presented: if the distance between two subpopulations is less than a threshold, the worse one is marked for re-initialization. The shortcoming of this scheme becomes obvious when two peaks are so close that their distance is less than the threshold; in this case, it is hard to find both peaks simultaneously. To overcome this shortcoming, we propose a new exclusion scheme that integrates the distance-based scheme (Blackwell and Branke 2004) with a hill-valley function (Ursem 1999). In this scheme, if the distance between two subpopulations is less than the threshold

r_{excl} = \frac{X}{2 m^{1/n}}    (9)

where n is the number of dimensions, X is the range of the solutions and m is the number of peaks, then the two subpopulations are likely to share the same peak. The idea of the hill-valley function is then used to judge whether they are actually located on the same peak. Suppose x and y are the best individuals of the two subpopulations. A solution z is constructed as the linear combination z = c · x + (1 − c) · y, where c ∈ {0.05, 0.5, 0.95}. As illustrated in Fig. 1, if there exists a z satisfying f(z) < min{f(x), f(y)} (where f(·) is the fitness function), we conclude that the two subpopulations are located on two different peaks; otherwise, they are on the same peak.

Fig. 1 Hill-valley function based exclusion scheme
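The following NumPy sketch shows one way to implement the combined test, assuming a maximization problem; names such as share_peak are illustrative.

```python
import numpy as np

def r_excl(x_range, n_peaks, n_dim):
    """Exclusion radius of Eq. (9)."""
    return x_range / (2.0 * n_peaks ** (1.0 / n_dim))

def share_peak(best_x, best_y, fitness, x_range, n_peaks, n_dim, cs=(0.05, 0.5, 0.95)):
    """Return True if the two subpopulation representatives are judged to sit on the same peak."""
    if np.linalg.norm(best_x - best_y) >= r_excl(x_range, n_peaks, n_dim):
        return False                                 # far apart: assume different peaks
    f_min = min(fitness(best_x), fitness(best_y))
    for c in cs:                                     # hill-valley test on interior points
        z = c * best_x + (1.0 - c) * best_y
        if fitness(z) < f_min:
            return False                             # a valley separates them: different peaks
    return True                                      # no valley found: same peak
```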
4.3.4 Opposite re-initialization

For each subpopulation marked for re-initialization by the exclusion scheme, a part of its individuals is produced by an opposite re-initialization operator, and the other individuals are randomly initialized. The velocity of each individual in the subpopulation is initialized to a random value in the range [v_min, v_max]. Opposite re-initialization creates a new individual by

x_{new} = x_{low} + x_{high} - x    (10)

where x is the original individual to be reinitialized and x_new is the newly produced individual; x_low and x_high are the lower and upper boundaries of the solution space, respectively. This operation is used to move the reinitialized subpopulation far away from the kept (better) one of the two closely adjacent subpopulations. Opposite re-initialization maps the original solution to the opposite position in the solution space; as a result, applying this operator to a large proportion of individuals may lead to a loss of search diversity. In our experience, applying it to 10 % of the individuals is an appropriate choice in most cases.
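A short sketch of this operator, with the 10 % rule of thumb mentioned above used as a default; the array names and the function signature are illustrative.

```python
import numpy as np

def opposite_reinit(subpop, x_low, x_high, fraction=0.1):
    """Sect. 4.3.4: rebuild a subpopulation marked by the exclusion scheme.

    A fraction of the individuals is mapped to the opposite position (Eq. 10);
    the remaining individuals are re-drawn uniformly at random.
    """
    new_pop = np.random.uniform(x_low, x_high, size=subpop.shape)
    n_opp = max(1, int(fraction * len(subpop)))
    idx = np.random.choice(len(subpop), n_opp, replace=False)
    new_pop[idx] = x_low + x_high - subpop[idx]        # Eq. (10)
    return new_pop
```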
4.3.5 DEPSO operator

An operator combining DE and PSO (termed the DEPSO operator) is devised, in which each individual sequentially performs the DE and PSO operations. The operator is described below.

1. Suppose x_i is the ith individual in Pop_k, its pbest is denoted by pbest_i, and the gbest of Pop_k is gbest.
2. Randomly select two individuals x_r1 and x_r2 from Pop_k, where x_i, x_r1 and x_r2 are different from each other. Produce a candidate p_i by the DE/best/1 scheme:
   p_{ij} = \begin{cases} gbest_j + F (x_{r1,j} - x_{r2,j}) & \text{if } rand(j) \le CR \text{ or } j = j_{rand} \\ x_{ij} & \text{otherwise} \end{cases}    (11)

   where gbest_j, x_{r1,j} and x_{r2,j} are the jth dimensions of gbest, x_r1 and x_r2, respectively. If p_i is better than pbest_i, then pbest_i = p_i; otherwise, an arbitrary individual in the inferior set Inf_k is replaced by p_i with a probability of 50 % (updating Inf_k). If p_i is better than gbest, then gbest = p_i.
3. Produce a candidate q_i by applying the PSO operator to p_i:

   v_i = w \cdot v_i + c_1 \cdot rand_1 \cdot (pbest_i - p_i) + c_2 \cdot rand_2 \cdot (gbest - p_i)    (12)
   q_i = p_i + v_i    (13)

   If q_i is better than pbest_i, then pbest_i = q_i; otherwise, an arbitrary individual in Inf_k is replaced by q_i with a probability of 50 %. If q_i is better than gbest, then gbest = q_i.
4. Select the best one among x_i, p_i and q_i to replace x_i.
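Putting steps 1–4 together, the NumPy sketch below shows one generation of the DEPSO operator for a single subpopulation. It assumes a maximization problem, and the data layout (arrays for positions, velocities, pbests and the inferior set) and default parameter values (from Table 3) are illustrative choices rather than the authors' implementation.

```python
import numpy as np

def depso_step(pop, vel, pbest, pbest_fit, gbest, inferior, fitness,
               F=0.15, CR=0.6, w=0.1, c1=2.0, c2=2.0, vmin=-2.0, vmax=2.0):
    """One generation of the DEPSO operator (Sect. 4.3.5) for one subpopulation."""
    S, n = pop.shape
    gbest_fit = fitness(gbest)
    for i in range(S):
        # --- DE step: DE/best/1 with binomial crossover, Eq. (11) ---
        r1, r2 = np.random.choice([r for r in range(S) if r != i], 2, replace=False)
        p = pop[i].copy()
        mask = np.random.rand(n) <= CR
        mask[np.random.randint(n)] = True
        p[mask] = gbest[mask] + F * (pop[r1, mask] - pop[r2, mask])
        fp = fitness(p)
        if fp > pbest_fit[i]:
            pbest[i], pbest_fit[i] = p, fp
        elif np.random.rand() < 0.5:                 # update the inferior set with probability 50 %
            inferior[np.random.randint(len(inferior))] = p
        if fp > gbest_fit:
            gbest, gbest_fit = p.copy(), fp
        # --- PSO step applied to the DE candidate, Eqs. (12)-(13) ---
        vel[i] = np.clip(w * vel[i]
                         + c1 * np.random.rand() * (pbest[i] - p)
                         + c2 * np.random.rand() * (gbest - p), vmin, vmax)
        q = p + vel[i]
        fq = fitness(q)
        if fq > pbest_fit[i]:
            pbest[i], pbest_fit[i] = q, fq
        elif np.random.rand() < 0.5:
            inferior[np.random.randint(len(inferior))] = q
        if fq > gbest_fit:
            gbest, gbest_fit = q.copy(), fq
        # --- greedy selection among x_i, p_i and q_i ---
        trio = [pop[i].copy(), p, q]
        pop[i] = trio[int(np.argmax([fitness(trio[0]), fp, fq]))]
    return pop, vel, pbest, pbest_fit, gbest, inferior
```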
5 Numerical experiments

Functions F1–F6 introduced in Sect. 3 were used to test our algorithm. There are seven test problems in total, namely F1 with 10 peaks, F1 with 50 peaks, and F2–F6. Each problem involves seven change types, so there are 49 problem instances in total.

5.1 Comparative algorithms

Most existing dynamic evolutionary algorithms have been applied to the moving peaks benchmark problem (MPB) (Mendes and Mohais 2005; Bui et al. 2005; Blackwell and Branke 2006; Moser and Hendtlass 2007; Woldesenbet and Yen 2009; Daneshyari and Yen 2011; Li and Yang 2012; Sharifi et al. 2012). Among the state-of-the-art evolutionary algorithms for the benchmark problems created by GDBG in real space, the following three typical algorithms were reported to achieve good results, so we compare our algorithm with them.

5.1.1 CPSO

Clustering particle swarm optimizer (CPSO) (Li and Yang 2009): it employs a hierarchical clustering method to track multiple peaks based on a nearest neighbor search strategy, and a fast local search is used to find the optimal solution in a local promising region.

5.1.2 jDE

Self-adaptive differential evolution algorithm (Brest et al. 2009): it uses a self-adaptive control mechanism to change the control parameters, and adopts multiple populations and an aging mechanism.

5.1.3 DASA

Differential ant-stigmergy algorithm (Korosec and Silc 2009): an ACO-based algorithm that uses pheromone trail laying as a means of communication between ants.

5.2 Algorithm parameters
Since Multi-DEPSO tries to track each peak by a different subpopulation, in most cases the number of subpopulations is set to the number of peaks contained in the test problem, as for F2–F6. However, for a problem with a large number of peaks (e.g., F1), this setting would result in too many individuals, so the number of evaluations consumed in one generation of evolution would increase greatly. In this case, the number of generations of evolution decreases (note that the number of evaluations between two changes of the environment is fixed), and as such, each subpopulation may suffer from insufficient evolution and fail to converge to a sufficiently good solution. Therefore, the number of subpopulations for F1 is set to 30 instead of 50. Parameters c1 and c2 are usually set to 2 or 2.05 in PSO, and are chosen as 2.0 here. Parameter perc is set
to 0.2 for F1 and 1.0 for F2–F6. Algorithm performance is not very sensitive to perc, and its value was selected after a brief experiment. The other parameters F, CR, w, v_max and v_min were chosen based on experience. The algorithm parameters are summarized in Table 3.

Table 3  Parameters of Multi-DEPSO

  S             10
  N             30 for F1; 10 for F2–F6
  perc          0.2 for F1; 1.0 for F2–F6
  F             0.15
  CR            0.6
  w             0.1
  c1, c2        2.0, 2.0
  vmax, vmin    2.0, −2.0

5.3 Performance metrics

Average best, average mean, average worst and standard deviation (STD) are usually used as performance metrics for dynamic optimization algorithms (Li et al. 2008). We use these metrics to compare the four algorithms:

Average\ best = \frac{\sum_{i=1}^{runs} \min_{j=1,\ldots,num\_change} E^{last}_{i,j}(t)}{runs}    (14)

Average\ mean = \frac{\sum_{i=1}^{runs} \sum_{j=1}^{num\_change} E^{last}_{i,j}(t)}{runs \times num\_change}    (15)

Average\ worst = \frac{\sum_{i=1}^{runs} \max_{j=1,\ldots,num\_change} E^{last}_{i,j}(t)}{runs}    (16)

STD = \sqrt{\frac{\sum_{i=1}^{runs} \sum_{j=1}^{num\_change} (E^{last}_{i,j}(t) - Average\ mean)^2}{runs \times num\_change - 1}}    (17)

where runs is the number of runs of the algorithm, and num_change is the number of changes in a benchmark problem. E^{last}(t) = |f(x_{best}(t)) - f(x^*(t))| is the offline error between the best individual obtained by the algorithm and the global optimum at time t. The overall performance (Li et al. 2008) of an algorithm is evaluated by the sum of the marks of all test instances:

performance = \sum_{k=1}^{number\ of\ test\ cases} mark_k \cdot weight_k    (18)

mark_k = percentage_k \sum_{i=1}^{runs} \sum_{j=1}^{num\_change} \frac{r_{ij}}{num\_change \times runs}    (19)

r_{ij} = \frac{r^{last}_{ij}}{1 + \sum_{s=1}^{S} (1 - r^{s}_{ij})/S}    (20)

S = change\_frequency / s\_f    (21)

where s_f is the sampling frequency and is set to 100; r^{last}_{ij} is the relative value of the best solution to the global optimum after reaching the change frequency; r^{s}_{ij} is the relative value of the best solution to the global optimum at the sth sampling during a change; the percentage and weight of each change type are given in Li et al. (2008).
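As a small illustration of how Eqs. (18)–(21) combine the recorded relative values into a single score, the sketch below computes the marks and the overall performance; the data layout and names are assumptions made for illustration, and the percentage/weight constants must be taken from Li et al. (2008).

```python
import numpy as np

def overall_performance(r_last, r_samples, percentage, weight):
    """Marks and overall performance of Eqs. (18)-(21).

    r_last[k][i, j]    : relative value of the best solution to the global optimum at the end
                         of the j-th change in the i-th run of test case k (runs x num_change)
    r_samples[k][i, j] : array of the S sampled relative values during that change
    percentage, weight : per-test-case constants from Li et al. (2008)
    """
    marks = []
    for k in range(len(r_last)):
        runs, num_change = r_last[k].shape
        total = 0.0
        for i in range(runs):
            for j in range(num_change):
                s = r_samples[k][i, j]
                r_ij = r_last[k][i, j] / (1.0 + np.sum(1.0 - s) / len(s))   # Eq. (20)
                total += r_ij
        marks.append(percentage[k] * total / (num_change * runs))           # Eq. (19)
    return sum(m * w for m, w in zip(marks, weight))                        # Eq. (18)
```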
5.4 Experimental results

For each problem instance, 20 independent runs of Multi-DEPSO are done to obtain the average best, average mean, average worst and STD of the instance. The results of jDE, CPSO and DASA reported in the literature (Brest et al. 2009; Li and Yang 2009; Korosec and Silc 2009), as well as the results of Multi-DEPSO, are given in Tables 4, 5, 6, 7, 8, 9, and 10.
5.4.1 Dynamic rotation peak benchmark problems
For F1 with 10 peaks, the average mean of Multi-DEPSO is much smaller than that of the other algorithms except for the large step (T1) and random with changed dimension (T6) change types. For F1 with 50 peaks, the average mean of Multi-DEPSO is smaller than that of the other algorithms except for the chaotic change type (T3). From Tables 4 and 5, we can conclude that Multi-DEPSO is more effective for the dynamic rotation peak problems. In addition, it can be observed from Tables 4 and 5 that the STD of Multi-DEPSO is smaller than that of the other algorithms for each change type, which means that each run of Multi-DEPSO obtains a more stable result that does not depend much on the particular run.
5.4.2 Dynamic composition benchmark problems

For F2, Multi-DEPSO is able to obtain the best average mean for the change types of small step (T0), chaotic (T3), recurrent with noise (T5) and random with changed dimension (T6). The CPSO can achieve the best average mean for the change types of large step (T1), random (T2) and recurrent (T4).
Table 4  Average best, average worst, average mean and STD for F1 with 10 peaks

             T0        T1        T2        T3        T4        T5        T6
DEPSO
  Avg best   0.00182   0.00159   0.00124   0.00184   0.00245   0.00199   0.00101
  Avg worst  0.01295   18.7584   21.2338   0.028762  7.988303  0.06035   20.4375
  Avg mean   0.00577   1.25581   1.70189   0.01186   0.31398   0.01371   2.37167
  STD        0.00041   1.18719   1.59138   0.00068   0.18673   0.00221   2.60172
jDE
  Avg best   0         0         0         0         0         0         0
  Avg worst  1.29443   7.69346   18.2853   5.78234   22.0022   63.7411   20.8627
  Avg mean   0.02703   0.38268   3.31702   0.819431  1.9981    2.39353   1.65234
  STD        0.17168   1.35758   5.16391   1.9985    4.33637   9.36818   4.15955
CPSO
  Avg best   1.054e−7  5.214e−8  4.306e−8  9.721e−7  2.561e−7  4.325e−6  5.036e−9
  Avg worst  1.224     27.12     28.15     3.239     21.72     26.55     35.52
  Avg mean   0.03514   2.718     4.131     0.09444   1.869     1.056     4.54
  STD        0.4262    6.523     8.994     0.7855    4.491     4.805     9.119
DASA
  Avg best   4.17e−13  3.80e−13  3.80e−13  6.57e−13  5.56e−13  7.90e−13  3.55e−14
  Avg worst  5.51      38.5      39.7      9.17      20.9      47.1      29.1
  Avg mean   0.18      4.18      6.37      0.482     2.54      2.34      4.84
  STD        1.25      9.07      10.7      1.95      4.80      8.66      8.96

Bold values indicate the best results
Table 5  Average best, average worst, average mean and STD for F1 with 50 peaks

             T0        T1        T2        T3        T4        T5        T6
DEPSO
  Avg best   0.01442   0.01467   0.01359   0.013445  0.037031  0.016374  0.008578
  Avg worst  2.442444  14.22157  22.61378  9.192493  4.996475  1.952626  22.55359
  Avg mean   0.152132  1.709486  4.160964  0.685151  0.49879   0.326572  3.13595
  STD        0.073266  0.523884  1.496935  0.238989  0.125507  0.052456  1.10259
jDE
  Avg best   0         0         0         0         0         0         0
  Avg worst  11.6773   23.0028   21.0606   31.9837   8.24729   74.0282   27.8875
  Avg mean   1.01592   4.49469   4.62995   1.36954   1.32133   3.77731   5.76461
  STD        2.44593   5.9918    5.68496   4.637     1.85967   10.72     7.23945
CPSO
  Avg best   2.447e−6  2.061e−7  9.888e−7  4.353e−6  2.121e−6  9.033e−5  4.169e−6
  Avg worst  4.922     22.08     25.65     1.974     9.606     22.08     27.9
  Avg mean   0.2624    3.279     6.319     0.125     0.8481    1.482     6.646
  STD        0.9362    5.303     7.442     0.3859    1.779     4.393     7.94
DASA
  Avg best   5.97e−13  5.03e−13  3.57e−13  7.73e−13  8.02e−13  6.73e−13  7.39e−14
  Avg worst  7.67      29.1      31        5.58      11.6      35.1      32.2
  Avg mean   0.442     4.86      8.42      0.509     1.18      2.07      7.84
  STD        1.39      7.00      9.56      1.09      2.18      5.97      9.05

Bold values indicate the best results
For F3, the average mean of jDE is the best for all change types, and Multi-DEPSO achieves the second best average mean. F3 is a highly multimodal function, in which each local
optimum has a large attraction zone, such that it is very hard to find its global optimum. It seems that F3 is a challenging problem for Multi-DEPSO.
Table 6  Average best, average worst, average mean and STD for F2

             T0        T1        T2        T3        T4        T5        T6
DEPSO
  Avg best   1.06188   0.05866   0.05809   0.0369    0.08422   0.04909   0.45048
  Avg worst  9.09844   418.49    249.726   3.7395    392.668   7.96616   34.2311
  Avg mean   0.094971  32.87704  18.99183  0.216453  46.91993  0.496136  2.431512
  STD        0.038168  25.47611  18.5234   0.129523  22.78225  0.297576  1.37753
jDE
  Avg best   0         0         0         0         0         0         0
  Avg worst  22.5031   505.656   518.463   13.9058   532.667   25.433    40.6599
  Avg mean   2.66117   22.1633   21.6973   1.52726   79.8655   2.21724   6.63454
  STD        5.59541   65.3448   67.5824   3.16119   150.127   5.4268    9.72248
CPSO
  Avg best   9.377e−5  7.423e−5  4.651e−5  1.121e−5  7.792e−5  1.087e−4  2.978e−7
  Avg worst  19.26     144.1     158.3     10.18     320.7     26.08     30.44
  Avg mean   1.247     10.1      10.27     0.5664    25.14     1.987     3.651
  STD        4.178     35.06     33.45     2.137     64.25     5.217     6.927
DASA
  Avg best   1.97e−11  2.34e−11  2.72e−11  1.41e−11  3.59e−11  1.65e−11  1.30e−12
  Avg worst  33.9      403       356       16.5      433       24.9      36.7
  Avg mean   3.30      25.6      18.9      1.45      49.6      2.11      3.87
  STD        8.78      83.2      67.8      3.83      112       5.29      8.12

Bold values indicate the best results
Table 7  Average best, average worst, average mean and STD for F3

             T0        T1        T2        T3        T4        T5        T6
DEPSO
  Avg best   0.06476   3.32704   0.44171   0.08616   2.3566    0.12316   0.09665
  Avg worst  206.334   979.322   936.641   1199.02   909.343   1357.59   937.054
  Avg mean   6.289092  755.8362  666.571   366.7833  670.8808  494.4307  419.7473
  STD        5.758064  42.07012  53.99888  57.61205  54.0287   74.01072  34.84024
jDE
  Avg best   0         1.620e−12 0         0         0         0         0
  Avg worst  20.9429   918.094   928.106   1082.11   850.407   864.455   823.973
  Avg mean   3.15564   662.958   437.374   49.6869   514.524   144.669   135.799
  STD        5.05216   346.827   405.551   175.367   363.07    275.835   268.587
CPSO
  Avg best   0.003947  126.2     42.89     7.909e−5  228.5     4.356     0.9334
  Avg worst  711.2     1008      966.1     1204      974.2     1424      1011
  Avg mean   137.5     855.1     765.9     430.6     859.7     753       653.7
  STD        221.6     161       235.8     432.2     121.5     361.7     334
DASA
  Avg best   3.39e−11  43.4      1.38      4.51e−11  3.08      4.21e−11  0.106
  Avg worst  435       988       937       1170      923       1470      909
  Avg mean   15.7      824       688       435       697       626       433
  STD        67.1      204       298       441       315       460       380

Bold values indicate the best results
This may be because Multi-DEPSO adopts the multi-population strategy, and it is hard to track the global optimum when the number of local peaks is much greater than the number of subpopulations.
For F4, Multi-DEPSO can obtain the best average mean for most of the change types, including small step (T0), random (T2), recurrent with noise (T5) and random with changed dimension (T6).
Table 8  Average best, average worst, average mean and STD for F4

             T0        T1        T2        T3        T4        T5        T6
DEPSO
  Avg best   1.24775   0.0686    0.06578   0.04012   0.09551   0.05373   0.05271
  Avg worst  85.7224   528.998   464.658   5.83151   514.844   16.0707   92.9958
  Avg mean   0.368845  60.98454  33.12147  0.358254  89.97101  0.738746  4.891355
  STD        0.874256  41.36978  21.14063  0.305621  33.00122  0.494278  5.723732
jDE
  Avg best   0         0         0         0         0         0         0
  Avg worst  26.002    548.155   588.663   5.78234   530.775   65.8348   53.9721
  Avg mean   2.41864   18.107    38.9608   0.24119   91.2736   5.34039   8.20154
  STD        5.13129   0.5962    120.385   1.1034    167.711   11.6498   13.7876
CPSO
  Avg best   6.36e−5   0.0001868 0.000103  9.346e−6  0.000407  8.616e−5  3.31e−6
  Avg worst  29.38     459.8     389.4     14.62     481       63.06     93.32
  Avg mean   2.677     37.15     36.67     0.7926    67.17     4.881     7.792
  STD        7.055     99.43     97.18     2.775     130.3     15.39     19.21
DASA
  Avg best   2.01e−11  2.95e−11  2.87e−11  1.85e−11  5.89e−11  2.09e−11  7.10e−12
  Avg worst  57.6      505       540       18.8      528       39.7      451
  Avg mean   5.60      65.6      53.6      1.85      108       2.98      27.4
  STD        26.5      160       140       4.22      178       7.59      90

Bold values indicate the best results
Table 9  Average best, average worst, average mean and STD for F5

             T0         T1         T2         T3        T4         T5         T6
DEPSO
  Avg best   0.11115    0.12499    0.12218    0.09414   0.15637    0.12015    0.10186
  Avg worst  0.48364    2.6479     1.81052    0.68148   5.48976    0.69145    4.18952
  Avg mean   0.246574   0.360252   0.349826   0.251717  0.455315   0.275657   0.460683
  STD        0.021747   0.105976   0.052441   0.030622  0.18452    0.020411   0.112699
jDE
  Avg best   0          0          0          0         0          0          0
  Avg worst  7.59703    16.4304    13.7508    2.9071    10.5656    3.0604     9.33144
  Avg mean   0.214576   0.364186   0.648663   0.048452  0.547516   0.12034    0.27533
  STD        1.01544    2.14683    2.35519    0.375305  2.02188    0.538033   1.25121
CPSO
  Avg best   0.0001584  0.0003224  0.0003337  4.85e−6   0.0001377  0.0002077  2.052e−6
  Avg worst  25.41      31.76      27.77      26.66     63.2       42.54      103.2
  Avg mean   1.855      2.879      3.403      1.095     7.986      4.053      6.527
  STD        5.181      6.787      6.448      4.865     13.81      8.371      22.8
DASA
  Avg best   3.22e−11   3.74e−11   3.86e−11   2.69e−11  5.99e−11   2.85e−11   1.93e−12
  Avg worst  17.1       22.2       16.0       8.10      29.0       8.75       18.7
  Avg mean   0.955      0.990      0.949      0.392     2.30       0.467      1.11
  STD        3.43       4.05       3.31       1.61      6.36       1.73       3.76

Bold values indicate the best results
The jDE and CPSO are able to achieve the best average mean for two change types and one change type, respectively.
For F5, the average mean of Multi-DEPSO is the best for the change types of large step (T1), random (T2) and recurrent (T4), and is very close to the best one for the other change types.
Table 10  Average best, average worst, average mean and STD for F6

             T0         T1        T2         T3        T4        T5         T6
DEPSO
  Avg best   0.07308    0.09631   0.09678    0.06957   0.12938   0.08393    0.07556
  Avg worst  23.6541    123.88    186.093    24.4568   351.965   73.4647    62.3688
  Avg mean   3.19917    12.0208   12.3886    4.39732   21.2452   6.29510    6.19019
  STD        2.50585    7.03171   8.27144    0.96898   14.7919   3.60235    4.26841
jDE
  Avg best   0          0         0          0         0         0          0
  Avg worst  30.9585    75.573    73.8737    39.3129   57.2766   54.6031    50.4207
  Avg mean   4.36615    14.7216   18.6152    5.7253    9.33048   9.29854    11.8022
  STD        7.89419    17.3995   18.4275    8.80205   13.5166   13.7458    13.981
CPSO
  Avg best   0.0001693  0.000126  0.0006566  1.28e−5   0.001835  0.0002852  0.0002053
  Avg worst  37.79      258.5     504.8      131.8     628.8     265.7      424.5
  Avg mean   6.725      21.57     27.13      9.27      71.57     23.67      32.58
  STD        9.974      63.51     83.98      24.23     160.3     51.55      76.9
DASA
  Avg best   2.36e−11   3.58e−11  3.69e−11   2.55e−11  6.37e−11  2.56e−11   6.48e−12
  Avg worst  48.3       554       529        81.6      499       249        137
  Avg mean   8.87       37.0      26.7       9.74      37.9      13.3       11.7
  STD        13.3       122       98.4       22.0      118       57.4       36.7

Bold values indicate the best results
Table 11  Comparison of the overall performance for 4 algorithms on 49 instances

              T0       T1       T2       T3       T4       T5       T6       Marks
Multi-DEPSO
  F1(10)    0.01496  0.01480  0.01470  0.01499  0.01492  0.01495  0.00972  0.09903
  F1(50)    0.01496  0.01474  0.01432  0.01486  0.01492  0.01494  0.00965  0.09838
  F2        0.02376  0.01802  0.01906  0.02354  0.01803  0.02310  0.01481  0.14031
  F3        0.02128  0.00198  0.00331  0.00975  0.00353  0.00621  0.00513  0.05117
  F4        0.02368  0.01682  0.01880  0.02337  0.01599  0.02297  0.01436  0.13599
  F5        0.02360  0.02347  0.02353  0.02341  0.02370  0.02339  0.01549  0.15659
  F6        0.02124  0.01776  0.01818  0.01913  0.01932  0.01855  0.01330  0.12747
jDE
  F1(10)    0.0148   0.0145   0.0137   0.0145   0.0138   0.0140   0.0094   0.0947
  F1(50)    0.0145   0.0135   0.0134   0.0143   0.0142   0.0134   0.0086   0.0919
  F2        0.0203   0.0137   0.0139   0.0200   0.0122   0.0185   0.0099   0.1085
  F3        0.0170   0.0027   0.0042   0.0144   0.0037   0.0084   0.0035   0.0539
  F4        0.0193   0.0138   0.0138   0.0219   0.0112   0.0167   0.0089   0.1056
  F5        0.0218   0.0206   0.0207   0.0221   0.0210   0.0209   0.0127   0.1398
  F6        0.0180   0.0127   0.0111   0.0157   0.0162   0.0137   0.0081   0.0955
CPSO
  F1(10)    0.01413  0.01339  0.01303  0.01465  0.01334  0.01323  0.00857  0.09033
  F1(50)    0.01411  0.01332  0.01256  0.01463  0.01377  0.01310  0.00830  0.08978
  F2        0.01747  0.01380  0.01393  0.02160  0.01365  0.01545  0.01048  0.10637
  F3        0.00631  0.00044  0.00090  0.00665  0.00065  0.00083  0.00118  0.01696
  F4        0.01651  0.01128  0.01175  0.02120  0.01111  0.01365  0.00915  0.09466
  F5        0.01596  0.01468  0.01446  0.02099  0.01461  0.01293  0.00942  0.10304
  F6        0.01335  0.01056  0.01036  0.01514  0.00995  0.00862  0.00662  0.07460
DASA
  F1(10)    0.01471  0.01357  0.01280  0.01416  0.01396  0.01355  0.00885  0.09160
  F1(50)    0.01455  0.01339  0.01241  0.01423  0.01438  0.01346  0.00832  0.09074
  F2        0.01865  0.01446  0.01583  0.01890  0.01420  0.01826  0.01215  0.11240
  F3        0.01413  0.00072  0.00174  0.00742  0.00223  0.00455  0.00282  0.03360
  F4        0.01759  0.01233  0.01327  0.01788  0.01091  0.01699  0.01005  0.09900
  F5        0.02021  0.02012  0.02030  0.02049  0.02019  0.02024  0.01346  0.13500
  F6        0.01478  0.01154  0.01335  0.01337  0.01367  0.01318  0.00970  0.08960

Performance (sum all marks and multiply by 100): Multi-DEPSO: 80.89; jDE: 68.99; DASA: 65.21; CPSO: 57.57
Fig. 2 Average offline error for F1 with 10 peaks
The jDE can obtain the best average mean for the change types of small step (T0), chaotic (T3), recurrent with noise (T5) and random with changed dimension (T6). For F6, Multi-DEPSO is able to obtain the best average mean for all change types except the recurrent type (T4). In summary, Multi-DEPSO obtains the best results for F2, F4 and F6. For F5, the performance of Multi-DEPSO is similar to that of jDE, and both achieve a small average mean for each change type. For F3, the average mean of Multi-DEPSO is the second best, and jDE finds the best one. From Tables 6, 7, 8, 9 and 10, we can also observe that Multi-DEPSO obtains the smallest STD for F2–F6, which means that each run of Multi-DEPSO finds a more stable result.
5.4.3 Comparison of overall performance

The comparative results in terms of the overall performance of each algorithm on the 49 test cases are given in Table 11. A mark is calculated for each algorithm on each instance, and the algorithm performance is evaluated by the sum of the weighted marks over all test instances. In the table, "Marks" indicates the sum of the weighted marks over all change types of a test problem. From Table 11, it can be seen that the overall performance of Multi-DEPSO is 80.89, which is better than the scores of jDE (the winner of the CEC 2009 competition, with a score of 68.99), DASA (65.21) and CPSO (57.57). This indicates that Multi-DEPSO outperforms the three comparative algorithms in terms of overall performance on the 49 test instances.
Fig. 3 Average offline error for F1 with 50 peaks
Fig. 4 Average offline error for F2
5.4.4 Comparison of average offline error

From Table 11, we can see that jDE has the best performance among the three comparative algorithms, so we further compare
Multi-DEPSO with jDE in terms of their average offline error. 20 independent runs of Multi-DEPSO and jDE are done for each change type of F1–F6. Their average offline error is illustrated in Figs. 2, 3, 4, 5, 6, 7, and 8.
Fig. 5 Average offline error for F3
Fig. 6 Average offline error for F4
From these figures, we can see that for F1 (10 peaks and 50 peaks), F2, F4 and F6, Multi-DEPSO maintains a smaller offline error than jDE for most change types. It means that Multi-DEPSO is able
to quickly track the moving optima (peaks) in a dynamic environment. jDE can obtain a smaller offline error for most change types of F3. The average offline error of the two algorithms is similar for F5.
Fig. 7 Average offline error for F5
Fig. 8 Average offline error for F6
We can also see from these figures that the offline error of Multi-DEPSO is very stable, i.e., its offline error does not change sharply with the changes of the dynamic environment.
The fact that Multi-DEPSO obtains a smaller average offline error means that it is capable of finding a better solution within the fixed number of evaluations available in each change of the dynamic environment, so Multi-DEPSO has a higher search efficiency than the other three algorithms.
Table 12  Average mean of F2 for different numbers of subpopulations

           T0        T1       T2       T3       T4       T5       T6
P = 10     0.09497   32.88    18.99    0.2164   46.92    0.4961   2.432
P = 5      4.861     5.270    3.413    4.619    60.01    19.45    17.18
P = 2      19.95     24.83    21.87    64.36    33.13    22.46    50.88

Table 13  Average mean of F2 for different inertia weight values

           T0        T1       T2       T3       T4       T5       T6
w = 0.1    0.09497   32.88    18.99    0.2165   46.92    0.4961   2.432
w = 0.5    0.8689    9.917    63.40    2.962    278.6    97.20    111.6
w = 0.9    29.46     71.68    52.46    1.611    127.2    230.8    195.3
This may be because the multi-population strategy tends to ensure large search diversity, while the DEPSO operator is able to effectively balance convergence speed and global search capability.
5.5 Algorithm parameters analysis

To observe the influence of the number of subpopulations on algorithm performance, the number of subpopulations is set to 10, 5, and 2, respectively, and for each setting Multi-DEPSO is run 20 times on F2. The average mean for each change type is given in Table 12. The average offline error of the 20 runs for the different numbers of subpopulations is shown in Fig. 9. Multi-DEPSO obtains the smallest average mean and average offline error when the number of subpopulations equals 10 (i.e., the number of peaks in the test problem).
As stated in Sect. 5.2, for a problem with a small number of peaks, the most appropriate setting is to let the number of subpopulations equal the number of peaks. To investigate the influence of the inertia weight w on algorithm performance, Multi-DEPSO with different inertia weight values is applied to F2. The average mean for each change type is given in Table 13. The average offline error of 20 runs of Multi-DEPSO with different w values is illustrated in Fig. 10. The performance of Multi-DEPSO becomes worse as the inertia weight increases, and a small inertia weight improves algorithm performance. This is because a small inertia weight means that an individual is mainly attracted by its pbest and the gbest, resulting in quick convergence, while a large inertia weight may make the algorithm converge slowly and leave it unable to reach a high-quality solution within the given number of evaluations in each change of the environment.
Fig. 9 Influence of the number of subpopulations on average offline error
Fig. 10 Average offline error of F2 for different inertia weight values

Table 14  Average mean of F5 for the two exclusion schemes

                                                  T0      T1      T2      T3      T4      T5      T6
Proposed exclusion scheme                         0.2465  0.3602  0.3498  0.2517  0.4553  0.2756  0.4606
Exclusion scheme in Blackwell and Branke (2004)   0.2558  0.4039  0.3531  0.2546  0.6336  0.2781  0.8168
5.6 Exclusion scheme analysis

To show the effect of the proposed exclusion scheme, it is compared with the exclusion scheme of Blackwell and Branke (2004). 20 runs of Multi-DEPSO are done for F5 using each of the two schemes. The comparison results are given in Table 14, and the average offline error over the 20 runs for each change type is shown in Fig. 11. We can see that the proposed exclusion scheme reduces the average mean metric and achieves a smaller average offline error than the scheme of Blackwell and Branke (2004).

6 Conclusion

In this paper, a new hybrid algorithm combining DE and PSO (Multi-DEPSO) is proposed for dynamic optimization problems. In this algorithm, a multi-population strategy is adopted to maintain large search diversity during the
evolution process to avoid local optima. A hybrid DEPSO operator is devised by integrating the operators of DE and PSO to make each subpopulation quickly track the moving peaks. A new exclusion scheme is suggested that borrows ideas from the hill-valley function to effectively handle closely adjacent peaks in the fitness landscape. Multi-DEPSO is tested on the set of GDBG benchmark problems used in the CEC 2009 competition on dynamic optimization. Experiments show that Multi-DEPSO is able to quickly track the changing global optimum and achieves better results than several other state-of-the-art algorithms. Future work could consider the design of a scheme that balances the number and size of subpopulations to make Multi-DEPSO handle dynamic problems with a huge number of peaks more effectively. Multi-DEPSO could also be applied to real-life dynamic optimization problems, such as the resource allocation problem for cloud computing (Zuo et al. 2013) in a dynamic environment.
Fig. 11 Average offline error of F5 for different exclusion schemes
Acknowledgments We would like to thank Dr. Brest for providing us the C code of jDE for comparison with our algorithm. This work is partially supported by National Natural Science Foundation (61374204). Conflict of interest The authors declare that they have no conflict of interest.
References

Blackwell T (2003) Swarms in dynamic environments. In: Genetic and evolutionary computation conference, Chicago, pp 1–12
Blackwell T, Branke J (2004) Multi-swarm optimization in dynamic environments. In: Applications of evolutionary computing, Coimbra, Portugal, pp 489–500
Blackwell T, Branke J (2006) Multiswarms, exclusion, and anticonvergence in dynamic environments. IEEE Trans Evol Comput 10(4):459–472
Branke J (1999) Memory enhanced evolutionary algorithms for changing optimisation problems. In: IEEE congress on evolutionary computation, Washington, DC, USA, pp 1875–1882
Branke J (2001) Evolutionary optimization in dynamic environments. Springer, Berlin
Brest J, Zamuda A, Boskovic B, Maucec MS, Zumer V (2009) Dynamic optimization using self-adaptive differential evolution. In: IEEE congress on evolutionary computation, Trondheim, Norway, pp 415–422
Bui LT, Branke J, Abbass HA (2005) Diversity as a selection pressure in dynamic environments. In: Genetic and evolutionary computation conference, Washington, DC, USA, pp 1557–1558
Cruz C, González JR, Pelta DA (2011) Optimization in dynamic environments: a survey on problems, methods and measures. Soft Comput 15(7):1427–1488
Daneshyari M, Yen GG (2011) Dynamic optimization using cultural based PSO. In: IEEE congress on evolutionary computation, New Orleans, LA, USA, pp 509–516
Das S, Konar A, Chakraborty UK (2005) Improving particle swarm optimization with differentially perturbed velocity. In: Genetic and evolutionary computation conference, Washington, DC, USA, pp 177–184
del Amo IG, Pelta DA, González JR, Masegosa AD (2012) An algorithm comparison for dynamic optimization problems. Appl Soft Comput 12:3176–3192
Hui S, Suganthan PN (2012) Ensemble differential evolution with dynamic subpopulations and adaptive clearing for solving dynamic optimization problems. In: IEEE congress on evolutionary computation, Brisbane, Australia, pp 1–8
Li C, Yang S, Nguyen TT, Yu EL, Yao X, Jin Y, Beyer HG, Suganthan PN (2008) Benchmark generator for CEC 2009 competition on dynamic optimization. University of Leicester, University of Birmingham, Honda Research Institute Europe, Vorarlberg University of Applied Sciences, Nanyang Technological University, Technical Report
Li C, Yang S (2009) A clustering particle swarm optimizer for dynamic optimization. In: IEEE congress on evolutionary computation, Trondheim, Norway, pp 439–446
Li C, Yang S (2012) A general framework of multipopulation methods with clustering in undetectable dynamic environments. IEEE Trans Evol Comput 16(4):556–577
Kennedy J, Eberhart RC (2001) Swarm intelligence. Morgan Kaufmann, San Francisco
Korosec P, Silc J (2009) The differential ant-stigmergy algorithm applied to dynamic optimization problems. In: IEEE congress on evolutionary computation, Trondheim, Norway, pp 407–414
Mendes R, Mohais AS (2005) DynDE: a differential evolution for dynamic optimization problems. In: IEEE congress on evolutionary computation, Edinburgh, UK, pp 2808–2815
Moore PW, Venayagamoorthy GK (2006) Evolving digital circuit using hybrid particle swarm optimization and differential evolution. Int J Neural Syst 16(3):163–177
Morrison RW, De Jong KA (1999) A test problem generator for nonstationary environments. In: IEEE congress on evolutionary computation, Washington, DC, USA, pp 2047–2053
Moser I, Hendtlass T (2007) A simple and efficient multi-component algorithm for solving dynamic function optimisation problems. In: IEEE congress on evolutionary computation, Singapore, pp 252–259
Nguyen TT, Yang SX, Branke J (2012) Evolutionary dynamic optimization: a survey of the state of the art. Swarm Evol Comput 6:1–24
Omran MGH, Engelbrecht AP, Salman A (2009) Bare bones differential evolution. Eur J Oper Res 196(1):128–139
Sharifi A, Noroozi V, Bashiri M, Hashemi AB, Meybodi MR (2012) Two phased cellular PSO: a new collaborative cellular algorithm for optimization in dynamic environments. In: IEEE congress on evolutionary computation, Brisbane, Australia, pp 1–8
Storn R, Price K (1997) Differential evolution—a simple and efficient heuristic for global optimization over continuous spaces. J Global Optim 11:341–359
Ursem RK (1999) Multinational evolutionary algorithms. In: IEEE congress on evolutionary computation, Washington, DC, USA, pp 1633–1640
Woldesenbet YG, Yen GG (2009) Dynamic evolutionary algorithm with variable relocation. IEEE Trans Evol Comput 13(3):500–513
Yu EL, Suganthan PN (2009) Evolutionary programming with ensemble of explicit memories for dynamic optimization. In: IEEE congress on evolutionary computation, Trondheim, Norway, pp 18–21
Zhang WJ, Xie XF (2003) DEPSO: hybrid particle swarm with differential evolution operator. In: IEEE international conference on systems, man and cybernetics, Washington, DC, USA, pp 3816–3821
Zuo XQ, Zhang GX, Tan W (2013) Self-adaptive learning PSO based deadline constrained task scheduling for hybrid IaaS cloud. IEEE Trans Autom Sci Eng. doi:10.1109/TASE.2013.2272758