Memetic Comp. (2011) 3:163–174 DOI 10.1007/s12293-011-0066-7

REGULAR RESEARCH PAPER

Efficient multi-swarm PSO algorithms for dynamic environments

Pavel Novoa-Hernández · Carlos Cruz Corona · David A. Pelta

Received: 30 November 2010 / Accepted: 9 July 2011 / Published online: 31 August 2011 © Springer-Verlag 2011

Abstract Particle swarm optimization has been successfully applied in many research and application areas because of its effectiveness and easy implementation. In this work we extend one of its variants, the multi-swarm PSO (mPSO) proposed by Blackwell and Branke, to address multi-modal dynamic optimization problems. The aim of our proposal is to increase the efficiency of this algorithm. To this end, we propose techniques operating at the swarm level: one divides each swarm into two groups depending on the quality of the particles, in order to face the loss of diversity, and the other controls the number of active swarms during the run using a fuzzy rule. A detailed experimental analysis shows the robustness of our proposal.

Keywords Multi-swarm PSO · Dynamic environments · Fuzzy rule

P. Novoa-Hernández (B)
Department of Mathematics, University of Holguin, 80100 Holguin, Cuba
e-mail: [email protected]

C. C. Corona · D. A. Pelta
Department of Computer Science and Artificial Intelligence, University of Granada, 18071 Granada, Spain
e-mail: [email protected]

D. A. Pelta
e-mail: [email protected]

1 Introduction

Modern society is full of complex processes that require new techniques for their analysis and treatment. In many cases, this complexity is due to the dynamism of the variables, rules or objectives involved. Several examples of dynamic problems can be given: optimal allocation of resources to customers who change over time, network routing problems where links or nodes can be active or inactive, and solving learning tasks from dynamically changing databases, among many others. The mathematical modeling of these processes leads to what is known as dynamic optimization problems (DOP), a field of research whose interest has grown rapidly in the last 20 years (see [13,20] for good surveys).

Most existing works related to DOP use computational methods that have been effective in stationary problems and have had to undergo certain adjustments to ensure a proper behavior in dynamic environments; see for example [1,14,25,28]. One of the most widely used methods is Particle Swarm Optimization (PSO) [2,3,21], because of its effectiveness and easy implementation. However, two important issues must be addressed in the adaptation of PSO for DOPs: outdated memory and diversity loss. Outdated memory appears when the best solutions obtained so far by the algorithm (e.g. the global memory) are no longer true, which implies that particles will typically move around false attractors. Diversity loss occurs when the global optimum is shifted away from a converged swarm. In that case, if the swarm has a significant level of convergence, the slow velocities of its particles prevent it from reaching the new shifted optimum. This behavior is quite similar to being trapped in a local optimum in multi-modal optimization problems. The first of these adaptation issues can be solved relatively easily (e.g. by updating particle and swarm memories), whereas the diversity loss is more difficult to deal with [5].

An effective multi-swarm PSO (mPSO) approach was proposed by Blackwell and Branke [7] to overcome the above issues. Basically, this approach includes several swarms moving independently in the search space with the aim of exploring several optima simultaneously. To solve the diversity loss issue, each swarm is equipped with two types of particles: classical (neutral) particles and charged or quantum particles.


The latter are devoted to maintaining diversity during the run (i.e. in every algorithm iteration). Depending on the use of charged or quantum particles, one obtains two slightly different algorithms: multi-swarm charged PSO (mCPSO) and multi-swarm quantum PSO (mQSO). The mPSO approach was tested on different multi-modal problems, showing a superior performance over state-of-the-art algorithms such as PSO with partial reinitialization [18], hierarchical PSO [19], and self-organizing scouts [10].

Despite the success exhibited in the past, we believe that the mPSO approach can be further improved. Thus, the goal of this paper is to develop a more efficient and effective adaptation of mPSO for dynamic optimization problems. The adaptation is particularly oriented to the outdated memory and diversity loss drawbacks, and is based on a previous work [26] where we proposed two improvement strategies. Now, we have extended these strategies along two directions. Firstly, a division of each swarm into two groups depending on the quality of the particles is proposed for generating diversity, where part of the particles remain fixed while the rest are diversified around the best particle of the swarm. Secondly, an adaptive fuzzy rule is used for controlling the number of active swarms during the run.

In order to better explain our proposals, this paper is structured as follows: Sect. 2 gives an introduction to dynamic optimization problems and the multi-swarm PSO approach. In Sect. 3 we present our proposals and explain how they can be included in mPSO. Section 4 is devoted to computational experiments. Conclusions and future work are given in Sect. 5.

2 Background and related works

The main difference between dynamic optimization and stationary optimization is that in the former, one (or more) elements vary over time (e.g. number of dimensions, search space boundaries, number of restrictions, etc.). In this paper we focus on problems with dynamic objective functions, as shown in the following definition. Denoting by $\Omega$ the search space, a DOP can be defined as the set of objective functions $f^{(t)}: \Omega \to \mathbb{R}$ ($t \in \mathbb{N}_0$), where the goal is to find the set of global optima $X^{(t)}$ at every time $t$:

$$X^{(t)} = \{ \mathbf{x}^* \in \Omega \mid f^{(t)}(\mathbf{x}^*) \succeq f^{(t)}(\mathbf{x}), \ \forall \mathbf{x} \in \Omega \}$$

Here, $\succeq$ is a comparison relation meaning "is better than or equal to", hence $\succeq \in \{\le, \ge\}$. Note that for a given value of $t$ a specific stationary optimization problem is obtained. If changes in the environment only cause small differences between two successive objective functions, then a good strategy is to use the previously found best solutions as starting points for finding the new ones. For that reason, the main objective of an algorithm in this field of research is not only to find the current optima, but also to follow them as closely as possible [1,7,9,17,20].
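As an illustration only (the paper defines a DOP abstractly), a minimal sketch of a dynamic objective whose optimum drifts over time could look as follows; the concrete function and its drift rule are hypothetical, not taken from the paper:

```python
# A hypothetical one-dimensional DOP: each value of t yields a different
# stationary problem, because the optimum position drifts with t.
def f(x, t, severity=1.0):
    optimum = 50.0 + severity * t   # the optimum shifts by `severity` per change
    return -abs(x - optimum)        # maximization: the best value is 0.0, at the optimum

# X(0) = {50.0}, X(1) = {51.0}, ...; a solution that was optimal at t = 0
# degrades as the environment changes:
for t in range(3):
    print(t, f(50.0, t))
```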


Some dynamic test problems have been developed to study and compare the performance of the proposed methods. Examples of these artificial problems are: the moving peaks benchmark (MPB) introduced by Branke [9], the DF1 generator by Morrison and De Jong [24], and, more recently, the generalized dynamic benchmark generator (GDBG) proposed by Li and Yang [22]. The interested reader can find more information about dynamic optimization (related methods, performance measures, test problems and applications) on the website Intelligent Strategies in Uncertain and Dynamic Environments (http://www.dynamic-optimization.org/).

2.1 Multi-swarm PSO in dynamic environments

PSO is a stochastic, population-based technique proposed by Kennedy and Eberhart [15]. Basically, each individual $i$ (called a particle) is a candidate solution whose movement in the search space is governed by four vectors: its position $\mathbf{x}_i$, its velocity $\mathbf{v}_i$, the position of the best solution found individually $\mathbf{p}_i$, and the position of the best solution found in the neighborhood $\mathbf{g}_{best}$. When the neighborhood is the whole swarm, the resulting model is called gbest, otherwise lbest. In this work we consider the gbest model. Specifically, the expressions that govern the particles' motion in the search space are:

$$\mathbf{v}_i = \omega \mathbf{v}_i + c_1 \boldsymbol{\eta}_1 \circ (\mathbf{p}_i - \mathbf{x}_i) + c_2 \boldsymbol{\eta}_2 \circ (\mathbf{g}_{best} - \mathbf{x}_i) \quad (1)$$

$$\mathbf{x}_i = \mathbf{x}_i + \mathbf{v}_i \quad (2)$$

where $\omega$ is an inertia weight that determines how much of the previous velocity is preserved in the current one. Besides, $c_1$ and $c_2$ are acceleration constants, while $\boldsymbol{\eta}_1$ and $\boldsymbol{\eta}_2$ are random vectors with components in the interval [0.0, 1.0]. Note that the $\circ$ operator denotes an entry-wise product.
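As an illustration, a minimal sketch of the gbest update of Eqs. (1) and (2) for a whole swarm at once, using the parameter values adopted later in this paper (ω = 0.729, c1 = c2 = 1.496), could look like this (the paper itself provides no code; Python is our choice):

```python
import numpy as np

def pso_step(x, v, p, gbest, omega=0.729, c1=1.496, c2=1.496, rng=None):
    """One gbest PSO iteration for arrays x, v, p of shape (n_particles, dims)."""
    rng = rng or np.random.default_rng()
    eta1 = rng.uniform(0.0, 1.0, x.shape)  # random vectors in [0.0, 1.0]
    eta2 = rng.uniform(0.0, 1.0, x.shape)
    v = omega * v + c1 * eta1 * (p - x) + c2 * eta2 * (gbest - x)  # Eq. (1)
    x = x + v                                                      # Eq. (2)
    return x, v
```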

As we mentioned before, PSO has been used as a basis for developing more sophisticated, problem-specific algorithms. This is the case of the multi-swarm PSO proposed by Blackwell and Branke [7]. This approach uses several swarms mainly for the following reasons: to achieve an effective exploration of the search space, and to follow the changing optima over time. The main steps of the mPSO approach are shown in Algorithm 1. The exclusion test avoids multiple swarms exploring the same optimum: if two swarms are close enough, the worst of them is randomly reset over the search space. Moreover, the anti-convergence test is intended to explore new areas of the space, through the reset of the worst of all converged swarms. As stated before, two algorithms were developed on this approach: mCPSO and mQSO. Both implement an atomic structure for the swarms: neutral (classic PSO) particles move close to the current best solution (the nucleus), while charged or quantum particles move surrounding the nucleus.

1  Randomly initialize the particles in the search space;
2  while stopping condition is not met do
3    Apply exclusion test;
4    Apply anti-convergence test;
5    Detect changes in the environment;
6    foreach swarm s do
7      Move its particles according to their type (neutral, charged or quantum);
8      Evaluate each particle position;
9      Update pi and gbest;
10   end
11 end

Algorithm 1: The mPSO approach
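For illustration, a sketch of the exclusion test described above could look as follows; the swarm representation (dicts with positions and gbest entries), maximization, and the exclusion radius name r_excl are our assumptions, since the precise parameterization is given in [7], not in this excerpt:

```python
import numpy as np

def exclusion_test(swarms, r_excl, lower, upper, rng=None):
    """If two swarm attractors are closer than r_excl, reset the worse swarm."""
    rng = rng or np.random.default_rng()
    for i, s in enumerate(swarms):
        for other in swarms[i + 1:]:
            if np.linalg.norm(s["gbest_pos"] - other["gbest_pos"]) < r_excl:
                worse = s if s["gbest_fit"] < other["gbest_fit"] else other
                # random reinitialization over the search space bounds
                worse["positions"] = rng.uniform(lower, upper, worse["positions"].shape)
                worse["gbest_fit"] = -np.inf  # memories must be rebuilt after the reset
    return swarms
```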

Neutral particles move according to expressions (1) and (2), while charged and quantum particles do so, respectively, according to:

$$\mathbf{x}_i = \mathbf{x}_i + \mathbf{v}_i + \sum_{j=1,\, j \neq i}^{N^{+}} \frac{Q_i Q_j (\mathbf{x}_i - \mathbf{x}_j)}{|\mathbf{x}_i - \mathbf{x}_j|^3} \quad (3)$$

$$\mathbf{x}_i = B(\mathbf{g}_{best}, r_{cloud}) \quad (4)$$

where $Q_i > 0$ is an extra, fixed parameter that states the charge of the particle, $N^{+}$ is the number of charged particles, and $B(\mathbf{g}_{best}, r_{cloud})$ is a function that generates a random vector within a hypersphere of radius $r_{cloud}$. From the original work [7] one can observe that the mQSO algorithm performs better than mCPSO. Nevertheless, it should be noticed that mCPSO also needs to perform $O(N^2)$ computations ($N$ being the number of particles) to calculate expression (3), and this is not usually considered in performance evaluations. For more details about the mPSO approach and related issues, the interested reader can refer to [6–8].

It is important to remark that the use of multiple "agents" (swarms or populations) in PSO to deal with dynamic environments has also been explored by other authors. See for example the speciation-based PSO (SPSO) [27], the evolutionary swarm cooperative algorithm (ESCA) [23] and the PSO-based memetic algorithm in [30].
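Returning to Eq. (4), a possible sketch of the quantum reposition $B(\mathbf{g}_{best}, r_{cloud})$ is shown below. Uniform-in-ball sampling is assumed here, which matches the requirement of a random vector within the hypersphere; other cloud distributions are possible:

```python
import numpy as np

def quantum_position(gbest, r_cloud, rng=None):
    """Sample a point uniformly inside a hypersphere of radius r_cloud around gbest."""
    rng = rng or np.random.default_rng()
    d = len(gbest)
    direction = rng.normal(size=d)
    direction /= np.linalg.norm(direction)          # uniformly random direction
    radius = r_cloud * rng.uniform() ** (1.0 / d)   # uniform in volume, not in radius
    return gbest + radius * direction
```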

3 Description of our proposals

In this section we describe our strategies for the mPSO approach: an alternative way to generate diversity after a change in the problem, and a control rule that stops converged, low-quality swarms.

3.1 Diversity after problem changes

As explained above, diversity loss is the most difficult problem that arises when PSO is adapted to dynamic environments. In this regard, the literature shows several ways to deal with this issue. Perhaps the simplest one is to randomly restart the swarm particles in the search space [18]. However, it is usually assumed that dynamic problems do not show severe changes (otherwise, restarting would be the only viable alternative), so certain information (e.g. good solutions) that was useful in the past can be conveniently reused in future stages.

The way mCPSO and mQSO generate diversity is through particles with different motions. In the first method, diversity is maintained by a Coulomb repulsion between charged particles, whereas in mQSO quantum particles are randomly positioned in a hypersphere around gbest. Both approaches have in common that this diversity is continuously generated over the run (i.e. in every algorithm iteration). However, there is evidence that this diversity is most effective when a change has been detected in the environment [16]. Following this idea, we propose a simple strategy: after a problem change, each swarm is divided into two parts depending on the quality of its particles. One part remains fixed, while the rest is diversified around the swarm's best particle gbest. This sampling around gbest is carried out with a uniform distribution (UD) in a hypersphere, as explained by Clerc in [11]. This hypersphere is centered on gbest and has a certain radius rexp. Note that this strategy is similar to the mQSO algorithm; however, our strategy is fired after each change and it is applied only to a part of the population (i.e. the worst particles).

// Sort particles, and select the worst
1  Set sortedList ← sortParticles(s);
2  Set worstParticles ← selectWorst(sortedList);
// Generate particles around gbest
3  foreach particle i in worstParticles do
4    Set xi ← UD(gbest, rexp);
5    Evaluate each particle position;
6    Update pi and gbest;
7  end

Algorithm 2: Diversity strategy method

Algorithm 2 summarizes the steps of the diversity strategy. First, a sorted list with the worst particles at the top is created. Then all particles selected from the sorted list replace their position vectors with points generated by a uniform distribution with center gbest and radius rexp. The remaining steps are devoted to evaluating the new positions and updating the particle and swarm memories. In what follows we refer to the mPSO scheme with the diversity strategy described above as mPSOD.
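A sketch of this strategy, assuming a swarm stored as NumPy arrays of positions and fitness values (maximization), and the paper's later setting of diversifying half of the particles, could be:

```python
import numpy as np

def diversify(positions, fitness, gbest, r_exp, evaluate, rng=None):
    """Re-sample the worst half of the swarm uniformly inside UD(gbest, r_exp)."""
    rng = rng or np.random.default_rng()
    n, d = positions.shape
    worst = np.argsort(fitness)[: n // 2]   # maximization: lowest fitness first
    for i in worst:
        direction = rng.normal(size=d)
        direction /= np.linalg.norm(direction)
        radius = r_exp * rng.uniform() ** (1.0 / d)   # uniform inside the ball
        positions[i] = gbest + radius * direction     # x_i <- UD(gbest, r_exp)
        fitness[i] = evaluate(positions[i])           # re-evaluate the new position
    return positions, fitness
```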

3.2 Control rule for swarms

In order to develop this control rule, we have relied on the fact that in multi-modal landscapes not all swarms are in promising areas of the search space: hopefully just a few will be optimizing around the current global optimum, while others are probably on suboptimal solutions, spending significant computational resources (e.g. function evaluations). These function evaluations, or the time spent, could be used to refine the current best solutions before the problem changes. A suggestive idea would be to stop these swarms at some point of the algorithm execution. However, it is necessary to remember that the objective of having multiple swarms is to monitor the search space in order to find new promising solutions. So the question that emerges from this analysis is: how can we stop these swarms without affecting the exploration capability of the whole algorithm?


To manage this question we focus on two important aspects of swarms: first, the quality of the best solution found by the swarm (i.e. its gbest), and second, the degree of diversity among its particles. In this regard, we consider that each swarm can be classified according to one of the combinations of these two characteristics. This classification can be established using two labels for each feature: low and high for diversity, and bad and good for fitness. Note that a low degree of diversity is a necessary condition for swarm convergence. Of course, our control rule aims to stop the swarms with low diversity and bad fitness, since they are the ones consuming resources in a non-profitable way. However, it remains to define what a bad or good solution is, and which measure is the most appropriate to represent the swarm diversity.

For the first case, we rely on a concept previously applied in the context of a multi-thread cooperative strategy for optimization [29]. The basic idea is to define the fuzzy set of "bad swarms" and then, when a swarm needs to be evaluated, measure its degree of membership to that set. The membership function of the fuzzy set that we use to evaluate the quality of a swarm is defined as follows:

$$\mu_{bad}(f_s) = \begin{cases} 0.0 & \text{if } f_s > b \\ \frac{b - f_s}{b - a} & \text{if } a \le f_s \le b \\ 1.0 & \text{if } f_s < a \end{cases} \quad (5)$$

Unlike [29], here $a$ and $b$ are two time-varying parameters. Formally, being $m$ the number of swarms and $\mathbf{g}_i$ the best solution found by swarm $i$, $a$ and $b$ are updated in every algorithm iteration as follows:

$$a = \frac{1}{m} \sum_{i=1}^{m} f^{(t)}(\mathbf{g}_i) \quad (6)$$

$$b = \max_{i=1 \ldots m} f^{(t)}(\mathbf{g}_i) \quad (7)$$

In short, the quality of a swarm depends on the average fitness of the set of swarms and on the fitness of the best of the swarms. So, defining a threshold $\gamma_{cut}$ (acting as an $\alpha$-cut) we can classify a swarm $s$ as bad if the following function is true:

$$isBad(s) = \begin{cases} true & \text{if } \mu_{bad}(f_s) \ge \gamma_{cut} \\ false & \text{otherwise} \end{cases} \quad (8)$$

Fig. 1 Fuzzy membership function μbad

Figure 1 helps us understand how the fuzzy membership function related to the label bad works, with a = 30 and b = 70. Note that the γcut parameter creates two sets of swarms according to their membership degree: bad swarms and active swarms.

On the other hand, to evaluate the degree of diversity of the swarms we have selected the measure proposed by Blackwell in [4]. It is defined as the maximum component separation of the particle positions in the swarm. This measure was also used to improve the performance of the multi-swarm model in previous works; in particular, it serves as a trigger for the anti-convergence operator in [7]. The formula is:

$$\delta_s = \max_{\substack{i,j = 1, \ldots, n_i \\ k = 1, \ldots, m}} \left| x_i^k - x_j^k \right| \quad (9)$$

where $n_i$ is the number of particles in each swarm, and $m$ is the number of dimensions of the search space. As in the original mPSO approach, we assume that a swarm has converged if its diversity is less than some preset value. Thus we can define the following boolean function:

$$hasConverged(s) = \begin{cases} true & \text{if } \delta_s \le r_{conv} \\ false & \text{otherwise} \end{cases} \quad (10)$$

Algorithm 3 shows how one can include both the diversity strategy and the swarm control rule in the mPSO approach. Note that in the control rule we have also included the condition no change, which means the absence of a recent change in the environment. As shown in Table 1, several combinations are possible if we include none, one or both improvements. The table also shows the type of particles used by each algorithm, where n and q stand for neutral and quantum particles, respectively. Note that we have not included the mCPSO algorithm, since it has lower performance than mQSO according to the results reported in [7]. Finally, the combination of our two improvements leads to a more sophisticated algorithm, which we have called mPSODE.
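Putting Eqs. (5)–(10) together, a sketch of the control rule might look as follows; the dict-based swarm representation and maximization are our assumptions:

```python
import numpy as np

def mu_bad(f_s, a, b):
    """Eq. (5): fuzzy membership of a swarm (with gbest fitness f_s) to 'bad swarms'."""
    if f_s > b:
        return 0.0
    if f_s < a:
        return 1.0
    return (b - f_s) / (b - a) if b > a else 0.0  # degenerate case a == b

def control_rule(swarms, gamma_cut, r_conv):
    """Return the swarms that should be stopped (converged and bad)."""
    fits = [s["gbest_fit"] for s in swarms]
    a, b = float(np.mean(fits)), float(max(fits))        # Eqs. (6) and (7)
    stopped = []
    for s in swarms:
        pos = s["positions"]                             # shape: (particles, dims)
        delta = (pos.max(axis=0) - pos.min(axis=0)).max()  # Eq. (9)
        has_converged = delta <= r_conv                  # Eq. (10)
        is_bad = mu_bad(s["gbest_fit"], a, b) >= gamma_cut  # Eq. (8)
        if has_converged and is_bad:
            stopped.append(s)
    return stopped
```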

1  Randomly initialize the particles in the search space;
2  while stopping condition is not met do
3    Apply exclusion test;
4    Apply anti-convergence test;
5    Detect changes in the environment;
6    foreach swarm s do
       // Diversity strategy after a change
7      if change then
8        Apply diversity strategy method;
9      end
       // Swarms control rule
10     if no change and hasConverged(s) and isBad(s) then
11       Stop swarm s;
12       continue;
13     end
14     else
15       Move particles according to their type;
16       Evaluate each particle position;
17       Update pi and gbest;
18     end
19   end
20   Update a and b parameters;
21 end

Algorithm 3: The mPSO approach with our proposals

Table 1 Algorithms using none, one or both of the proposed improvements

| Algorithm | Particles | Diversity strategy | Control rule |
|-----------|-----------|--------------------|--------------|
| mQSO      | n + q     | –                  | –            |
| mQSOE     | n + q     | –                  | ✓            |
| mPSOD     | n         | ✓                  | –            |
| mPSODE    | n         | ✓                  | ✓            |

Table 2 Standard settings of the Moving Peaks Benchmark's Scenario 2

| Parameter                  | Setting |
|----------------------------|---------|
| Number of peaks            | 10–200 |
| Number of dimensions       | 5 |
| Peaks heights              | ∈ [30, 70] |
| Peaks widths               | ∈ [1, 12] |
| Change frequency (Δe)      | 1,000–5,000 |
| Change severity (sev)      | 1.0–3.0 |
| Correlation coefficient    | 0.0 |
| Peak function (pfunction)  | cone: $f(\mathbf{x}) = \sqrt{\sum_{i=1}^{n} x_i^2}$ |

4 Experiments

In this section we evaluate our improvements through computational experiments. We selected the moving peaks benchmark (MPB) [9] as the first test problem, particularly its Scenario 2. As shown in Table 2, from the parameter settings of this scenario it is possible to obtain a family of different problem instances (e.g. by combining a certain number of peaks, shift severity, change frequency, etc.). Furthermore, we assume that each scenario of the problem changes 100 times, once every Δe evaluations; each run then ends when the algorithm has consumed 100 × Δe evaluations. It is important to remark that the severity parameter (sev) is one of the most influential factors: it represents the magnitude of change, i.e. the distance by which the problem's optima are shifted as a result of an environment change.

Choosing an appropriate performance measure for evaluating algorithms in dynamic environments is an open research topic, although some interesting progress has been made in [31]. In this work we select the offline error and the offline performance proposed in [10], which are respectively defined as follows:

$$error_{off}^{(t)} = \frac{1}{t} \sum_{t=1}^{n} \left( f^{(t)}(\mathbf{x}^{*}) - f^{(t)}(\mathbf{x}_{best}) \right) \quad (11)$$

$$perf_{off}^{(t)} = \frac{1}{t} \sum_{t=1}^{n} f^{(t)}(\mathbf{x}_{best}) \quad (12)$$

Here, $t$ denotes a single function evaluation, so with these measures we have both the average error and the average performance of the algorithm throughout the run. Besides, $\mathbf{x}^{*}$ and $\mathbf{x}_{best}$ are the positions of the global optimum of the problem and of the current best solution found by the algorithm, respectively.

The mPSO approach has several parameters that need to be established a priori. Following the guidelines of the original work [7], we use in all our algorithms 10 swarms, each containing 10 particles. The mQSO algorithm works with 5 neutral and 5 quantum particles, while mPSOD is composed of 10 neutral particles, with 5 particles diversified after a problem change. We performed 30 runs with different random seeds for the problem and the algorithm. With respect to the PSO parameters applied at swarm level, we selected ω = 0.729 and c1 = c2 = 1.496, as suggested in [12], in order to achieve a reasonable performance in general.

Taking the above general considerations, we have organized the experiments as follows:

1. Effect of the rexp parameter in the diversity strategy.
   – Algorithm: mPSOD.
   – Factors to study: sev., Δe.
2. Effect of the γcut parameter in the control rule for swarms.
   – Algorithm: mQSOE.
   – Factors to study: sev., Δe.
3. Other problems with different landscapes.
   – Algorithms: mQSO, mQSOE, mPSOD, mPSODE.
   – Factors to study: pfunction, sev., Δe.
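Returning to the measures of Eqs. (11) and (12), they can be computed from the per-evaluation traces of the benchmark optimum and the algorithm's best value; a sketch, assuming the optimum value is known at each evaluation (as it is in benchmarks such as the MPB):

```python
import numpy as np

def offline_measures(f_opt, f_best):
    """Running averages of Eqs. (11)-(12) over all function evaluations so far.

    f_opt:  array of the problem's optimum value at each evaluation.
    f_best: array of the algorithm's current best value at each evaluation.
    """
    f_opt, f_best = np.asarray(f_opt), np.asarray(f_best)
    t = np.arange(1, len(f_best) + 1)
    offline_error = np.cumsum(f_opt - f_best) / t         # Eq. (11)
    offline_performance = np.cumsum(f_best) / t           # Eq. (12)
    return offline_error, offline_performance
```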


Table 3 Mean of the offline error (std. error) for mPSOD with different rexp in MPB's Scenario 2, varying the shift severity (Δe = 5,000)

| rexp | sev. = 0.0 | sev. = 1.0 | sev. = 2.0 | sev. = 3.0 |
|------|------------|------------|------------|------------|
| 0.3  | **0.41 ± 0.05** | **1.45 ± 0.07** | 2.50 ± 0.11 | 3.53 ± 0.13 |
| 1.5  | 0.45 ± 0.07 | 1.58 ± 0.08 | **2.21 ± 0.08** | **3.06 ± 0.14** |
| 3.0  | 0.62 ± 0.09 | 1.89 ± 0.08 | 2.63 ± 0.10 | 3.32 ± 0.13 |
| 4.5  | 0.53 ± 0.06 | 2.20 ± 0.11 | 3.03 ± 0.11 | 3.56 ± 0.13 |
| 6.0  | 0.59 ± 0.09 | 2.28 ± 0.11 | 3.17 ± 0.12 | 3.96 ± 0.14 |
| rand | 0.42 ± 0.06 | 1.85 ± 0.09 | 2.67 ± 0.10 | 3.40 ± 0.13 |

Best values are shown using boldface

Table 4 Mean of the offline error (std. error) for mPSOD with different rexp in MPB's Scenario 2, varying the change frequency (sev = 1.0)

| rexp | Δe = 1,000 | Δe = 2,000 | Δe = 3,000 | Δe = 4,000 |
|------|------------|------------|------------|------------|
| 0.3  | **4.23 ± 0.15** | **2.76 ± 0.12** | **2.12 ± 0.09** | **1.82 ± 0.11** |
| 1.5  | 4.78 ± 0.17 | 3.11 ± 0.13 | 2.48 ± 0.12 | 2.09 ± 0.12 |
| 3.0  | 5.24 ± 0.16 | 3.45 ± 0.14 | 2.69 ± 0.09 | 2.31 ± 0.09 |
| 4.5  | 5.60 ± 0.16 | 3.68 ± 0.13 | 2.95 ± 0.12 | 2.50 ± 0.09 |
| 6.0  | 5.90 ± 0.17 | 3.85 ± 0.14 | 3.08 ± 0.14 | 2.66 ± 0.12 |
| rand | 5.10 ± 0.15 | 3.25 ± 0.12 | 2.52 ± 0.08 | 2.13 ± 0.11 |

Best values are shown using boldface

The first two experiments are devoted to studying the influence of the parameters related to our strategies versus different shift severities (sev.) and change frequencies (Δe). The third experiment adds an extra factor, pfunction, which defines the type of function used as peak (see Table 2); taking the MPB as a template, our goal is to create new problem instances with different landscapes.

4.1 Effect of the rexp parameter in the diversity strategy

As described in Sect. 3.1, the parameter rexp defines the radius of the hypersphere used to uniformly randomize the worst particles in every swarm. The main usefulness of this type of randomization after a change is to overcome the diversity loss. Hypothetically speaking, rexp should be of a magnitude similar to the shift severity of the problem at hand: a small value of rexp should be more effective for low shift severities, and likewise high values of rexp for high shift severities. In this regard, we selected the values rexp ∈ {0.3, 1.5, 3.0, 4.5, 6.0}; compared to the size of the search space bounds [0, 100]^5, these are relatively small values. We have also included a rand alternative, which randomly assigns to rexp one of the above values in every algorithm iteration.

The offline error achieved by the mPSOD algorithm with different configurations of rexp, varying the shift severity (sev.) and the change frequency (Δe), is shown in Tables 3 and 4. The experiments of Table 3 used Δe = 5,000, while in Table 4 sev = 1.0. The text in bold corresponds to the best obtained values. As expected, the best results for the first set of experiments correspond to configurations with a lower exploration radius (rexp ≤ 1.5). It means that after each change, swarms just need some diversity (≈ sev) to find the shifted optima. For example, when sev is 0.0 and 1.0, the version with rexp = 0.3 behaves best, precisely because rexp is the value closest to these severities. Something similar occurs for sev = 2.0 and sev = 3.0, for which the best alternative is rexp = 1.5.

Regarding the experiments related with Δe, the first thing that stands out is that the configuration rexp = 0.3 does not seem to be affected when varying the change frequency. This result correlates with those obtained in the previous set of experiments, for which the best configuration for sev = 1.0 was precisely rexp = 0.3.

Despite the results shown in Tables 3 and 4, it is also important to analyze the algorithm behavior over time. To that end, we selected the best and the worst configurations, rexp = 0.3 and rexp = 6.0, over the problem instance with sev = 1.0 and Δe = 5,000. In Fig. 2a and b we plot the evolution of the offline error and the offline performance in terms of problem changes.

Fig. 2 Evolution of the offline error and the offline performance for mPSOD in the MPB's Scenario 2 (sev = 1.0, Δe = 5,000). The graphics show the best (rexp = 0.3) and worst (rexp = 6.0) configurations


From these graphics it is possible to notice that the main difference between the configurations occurs at an early stage of the search. In particular, rexp = 0.3 is more accurate at the beginning. However, there are time windows in which rexp = 6.0 is temporarily better than rexp = 0.3: observe, for example, what happens near the 30th problem change (Fig. 2b), where the performance of rexp = 6.0 is superior to that of rexp = 0.3. This is an indication that the exploration radius, like most parameters in stochastic algorithms, is optimal only in some parts of the optimization process. For this reason, the rand configuration shows a good performance in general in both sets of experiments; in fact, its results are close (on average) to those of the other configurations (rexp ∈ {3.0, 4.5, 6.0}).

In summary, for the problem instances considered in this section, the best mPSOD configurations are rexp = 0.3 and rexp = 1.5. An interesting aspect is that the rand option shows in general good results for the two factors studied.

Table 5 Offline error (std. error) for mQSOE with different γcut in MPB's Scenario 2, varying the shift severity

| γcut | sev. = 0.0 | sev. = 1.0 | sev. = 2.0 | sev. = 3.0 |
|------|------------|------------|------------|------------|
| 0.10 | 0.29 ± 0.04 | 1.31 ± 0.08 | 2.76 ± 0.13 | 3.89 ± 0.20 |
| 0.25 | **0.26 ± 0.04** | 1.09 ± 0.07 | 2.27 ± 0.16 | 3.30 ± 0.18 |
| 0.50 | 0.33 ± 0.05 | **0.98 ± 0.05** | 1.78 ± 0.10 | 2.63 ± 0.14 |
| 0.75 | 0.28 ± 0.04 | 1.12 ± 0.07 | 1.77 ± 0.09 | 2.49 ± 0.11 |
| 1.00 | 0.40 ± 0.06 | 1.19 ± 0.08 | 1.77 ± 0.09 | 2.39 ± 0.11 |
| rand | 0.27 ± 0.05 | 1.06 ± 0.06 | **1.51 ± 0.15** | **2.27 ± 0.19** |

Best values are shown using boldface

Table 6 Offline error (std. error) for mQSOE with different γcut in MPB's Scenario 2, varying the change frequency

| γcut | Δe = 1,000 | Δe = 2,000 | Δe = 3,000 | Δe = 4,000 |
|------|------------|------------|------------|------------|
| 0.10 | 3.54 ± 0.17 | 2.18 ± 0.10 | 1.73 ± 0.11 | 1.37 ± 0.06 |
| 0.25 | 3.53 ± 0.16 | **2.17 ± 0.12** | 1.54 ± 0.10 | 1.30 ± 0.07 |
| 0.50 | **3.49 ± 0.15** | 2.26 ± 0.12 | 1.71 ± 0.12 | 1.26 ± 0.07 |
| 0.75 | 3.59 ± 0.15 | 2.29 ± 0.13 | 1.81 ± 0.12 | 1.45 ± 0.08 |
| 1.00 | 3.69 ± 0.14 | 2.33 ± 0.12 | 1.71 ± 0.08 | 1.43 ± 0.06 |
| rand | 3.57 ± 0.15 | 2.21 ± 0.11 | **1.52 ± 0.09** | **1.13 ± 0.07** |

Best values are shown using boldface

4.2 Effect of γcut in the control rule for swarms

The value of γcut has a significant impact on the activation of the control rule. For example, a very small value of γcut (≈ 0) would make a considerable number of swarms be classified as bad, increasing the probability of stopping them. In particular, when γcut = 0.0 the rule stops all converged swarms (even the best one), which is an undesirable behavior, since the algorithm cannot exploit the current best solution. Conversely, a higher value of γcut implies that only a small number of swarms will be classified as bad. In particular, when γcut = 1.0 only those converged swarms with fitness lower than or equal to the average will be stopped. From this analysis, we selected the following values for the γcut parameter: {0.1, 0.25, 0.50, 0.75, 1.0}. A rand option is also analyzed, which randomly assigns one of the mentioned values to γcut in every algorithm iteration.

As explained in Sect. 3, the control rule can be included in any algorithm based on the mPSO approach. In the following experiments we decided to use the mQSO algorithm, even though we could have selected mPSOD. The reason for this choice is that we want to study the γcut parameter independently of the rexp parameter included in mPSOD.

Tables 5 and 6 show the results for these γcut settings, again taking severity and change frequency as factors. Note that, unlike the experiments of the previous section, the results now show just minor differences among the alternatives. However, when the problem has a low shift severity (e.g. 0.0, 1.0), it seems more effective to stop swarms than for problems with higher severities (sev > 1.0). That is why the variants with γcut ∈ {0.25, 0.5} obtain good results compared with those with γcut ∈ {0.75, 1.0}. On the other hand, when the severity increases, γcut = 1.0 shows a slight superiority with respect to the other settings. Besides, the configuration γcut = 0.1 presents (except for sev = 0.0) the worst performance; remember that for this γcut value, the control rule is activated more frequently.

In the experiments varying the change frequency, the configurations with γcut ∈ {0.25, 0.5} are more accurate in problems with low Δe. Surprisingly, the variant γcut = 0.1 achieved good results for these problem instances, too. However, its performance is negatively affected as values of Δe get close to 4,000.

We have also studied the performance of the best and worst choices over time. The plots in Fig. 3 show the behavior of the variants γcut = 0.50 and γcut = 0.1 for the problem instance with sev = 1.0 and Δe = 5,000. It can be observed that most of the differences in performance between both configurations occur between changes 10 and 20. Also, during the first half of the run, the performance with respect to reaching the optimum is lower for both algorithms than in the second half, where they achieve much better error values (Fig. 3b). Again, we must highlight that in general the rand option presents a good performance, especially for the most difficult problem instances (e.g. those with high severity or low Δe).

Fig. 3 Evolution of the offline error and the offline performance for mQSOE in the MPB's Scenario 2 (sev = 1.0, Δe = 5,000). The graphics show the best (γcut = 0.5) and worst (γcut = 0.1) configurations

4.3 Other problems with different landscapes

In order to extend the study of our proposals we decided to conduct further experiments on problems with different landscapes.


The dynamics of these problems are essentially the same as in Scenario 2 of the MPB, but with two different features: the boundaries of the search space and the peak function. Table 7 shows the eight functions selected for the peaks. These functions are common in the context of stationary optimization [32]. The functions Cone, Sphere, Schwefel and Quadric are uni-modal, while Rastrigin, Griewank, Ackley and Weierstrass offer multi-modal landscapes. We kept the settings of MPB's Scenario 2, except for the number of peaks in multi-modal problems, which was set to 1.

This time, the algorithms considered are the best ones from the previous experiments. Since there is no clear winner configuration for either the diversity strategy or the control rule for swarms, we decided to employ the rand configuration for rexp and γcut. However, the set of possible values for these rand configurations is composed only of those alternatives for which the best performance was previously obtained: in the case of the diversity strategy, rexp ∈ {0.3, 1.5}, while the control rule uses γcut ∈ {0.25, 0.50, 0.75}. For comparison purposes, we have also included the mQSO algorithm in its original version. Tables 8 and 9 show the results for the new problem instances and the implemented algorithms, varying the severity and the change frequency, respectively. The last column of these tables includes the improvement rate (Imp. rate) between our best algorithm and mQSO. This improvement rate is computed for every problem instance i as follows:

$$ImpRate_i = 100 \left( 1 - \frac{e_{i,Best}}{e_{i,mQSO}} \right) \quad (13)$$

where $e_{i,Best} = \min\{e_{i,mQSOE},\, e_{i,mPSOD},\, e_{i,mPSODE}\}$ is the best (minimum) offline error obtained by our algorithms. An asterisk on a value indicates that the related improvement is statistically significant. If the improvement rate is equal to zero, then mQSO is better than the rest for that specific problem. To determine the differences among algorithms we applied nonparametric tests. First, we used the Friedman test (P < 0.05) for detecting differences at the group level. When such differences exist, a Wilcoxon test (α = 0.05) is applied to compare our best algorithm with mQSO. Note, for example, that the problem instance with Sphere as peak function and sev = 2.0 has an improvement rate of 0.0: this is because mQSO is better than the others, although this superiority is not statistically significant. For the same problem with sev = 3.0 one can see that mQSO is again better, but this time the difference is significant with respect to mQSOE, the best of our methods.
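A sketch of this protocol (Eq. (13) plus the Friedman and Wilcoxon tests, here via scipy.stats over the 30 per-run offline errors; the data layout is our assumption) could be:

```python
import numpy as np
from scipy import stats

def improvement_rate(errors):
    """errors: dict mapping algorithm name -> array of per-run offline errors."""
    e_mqso = errors["mQSO"].mean()
    ours = {name: e.mean() for name, e in errors.items() if name != "mQSO"}
    e_best = min(ours.values())
    imp_rate = 100.0 * (1.0 - e_best / e_mqso)              # Eq. (13)
    # Friedman test over all algorithms, then Wilcoxon of the best one vs. mQSO
    _, p_friedman = stats.friedmanchisquare(*errors.values())
    best_name = min(ours, key=ours.get)
    _, p_wilcoxon = stats.wilcoxon(errors[best_name], errors["mQSO"])
    significant = p_friedman < 0.05 and p_wilcoxon < 0.05
    return imp_rate, significant
```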

Table 7 List of the functions

| Peak function | Formula | Search space |
|---------------|---------|--------------|
| **Uni-modal** | | |
| Cone | $f(\mathbf{x}) = \sqrt{\sum_{i=1}^{n} x_i^2}$ | $[0, 100]^5$ |
| Sphere | $f(\mathbf{x}) = \sum_{i=1}^{n} x_i^2$ | $[0, 100]^5$ |
| Schwefel | $f(\mathbf{x}) = \sum_{i=1}^{n} \lvert x_i \rvert + \prod_{i=1}^{n} \lvert x_i \rvert$ | $[0, 100]^5$ |
| Quadric | $f(\mathbf{x}) = \sum_{i=1}^{n} \left( \sum_{j=1}^{i} x_j \right)^2$ | $[0, 100]^5$ |
| **Multi-modal** | | |
| Rastrigin | $f(\mathbf{x}) = \sum_{i=1}^{n} \left( x_i^2 - 10 \cos 2\pi x_i + 10 \right)$ | $[-5.12, 5.12]^5$ |
| Griewank | $f(\mathbf{x}) = \frac{1}{4000} \sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left( \frac{x_i}{\sqrt{i}} \right) + 1$ | $[-32, 32]^5$ |
| Ackley | $f(\mathbf{x}) = -20 \exp\left( -0.2 \sqrt{\frac{1}{n} \sum_{i=1}^{n} x_i^2} \right) - \exp\left( \frac{1}{n} \sum_{i=1}^{n} \cos 2\pi x_i \right) + 20 + e$ | $[-32, 32]^5$ |
| Weierstrass | $f(\mathbf{x}) = \sum_{i=1}^{n} \sum_{k=0}^{k_{max}} \left[ a^k \cos\left( 2\pi b^k (x_i + 0.5) \right) \right] - n \sum_{k=0}^{k_{max}} a^k \cos(\pi b^k)$, with $a = 0.5$, $b = 3.0$, $k_{max} = 20$ | $[-0.5, 0.5]^5$ |


Table 8 Offline error: mean ± standard deviation values for different problems, varying the shift severity (sev.)

| Problem | sev. | mQSO | mQSOE | mPSOD | mPSODE | Imp. rate |
|---|---|---|---|---|---|---|
| **Uni-modal** | | | | | | |
| Cone | 0.0 | 0.34 ± 0.05 | **0.28 ± 0.04** | 0.49 ± 0.09 | 0.43 ± 0.06 | 19.12* |
| | 1.0 | 1.78 ± 0.06 | **1.06 ± 0.08** | 1.59 ± 0.10 | 1.44 ± 0.12 | 40.24* |
| | 2.0 | 2.36 ± 0.10 | **1.51 ± 0.10** | 2.39 ± 0.12 | 2.25 ± 0.11 | 35.71* |
| | 3.0 | 2.88 ± 0.11 | **2.27 ± 0.14** | 3.24 ± 0.13 | 2.90 ± 0.14 | 21.26* |
| Sphere | 0.0 | 0.23 ± 0.05 | **0.21 ± 0.05** | **0.21 ± 0.03** | 0.35 ± 0.06 | 7.05 |
| | 1.0 | 0.92 ± 0.06 | **0.56 ± 0.04** | 0.90 ± 0.04 | 0.99 ± 0.07 | 39.68* |
| | 2.0 | **1.59 ± 0.06** | 1.70 ± 0.10 | 1.91 ± 0.06 | 2.07 ± 0.07 | 0.00 |
| | 3.0 | **2.60 ± 0.08** | 3.09 ± 0.12 | 3.12 ± 0.09 | 3.93 ± 0.15 | 0.00* |
| Schwefel | 0.0 | **0.39 ± 0.05** | 0.43 ± 0.06 | 0.46 ± 0.06 | 0.44 ± 0.06 | 0.00 |
| | 1.0 | 3.00 ± 0.10 | **1.80 ± 0.11** | 2.72 ± 0.10 | 2.50 ± 0.12 | 39.97* |
| | 2.0 | 3.89 ± 0.15 | **2.91 ± 0.11** | 3.94 ± 0.13 | 3.61 ± 0.14 | 25.31* |
| | 3.0 | 4.77 ± 0.16 | **4.13 ± 0.17** | 5.11 ± 0.18 | 4.69 ± 0.17 | 13.40* |
| Quadric | 0.0 | 0.99 ± 0.17 | **0.76 ± 0.10** | 1.21 ± 0.10 | 1.19 ± 0.19 | 23.44* |
| | 1.0 | **1.47 ± 0.11** | 1.63 ± 0.17 | 1.54 ± 0.10 | 1.85 ± 0.16 | 0.00* |
| | 2.0 | **1.85 ± 0.12** | 2.81 ± 0.11 | 2.17 ± 0.12 | 2.46 ± 0.14 | 0.00* |
| | 3.0 | **2.68 ± 0.15** | 4.32 ± 0.25 | 3.28 ± 0.10 | 4.95 ± 0.23 | 0.00* |
| **Multi-modal** | | | | | | |
| Rastrigin | 0.0 | 4.86 ± 0.33 | 4.68 ± 0.35 | **3.65 ± 0.30** | 4.08 ± 0.25 | 24.92* |
| | 1.0 | 5.50 ± 0.47 | 5.16 ± 0.30 | **3.31 ± 0.26** | 3.53 ± 0.22 | 39.90* |
| | 2.0 | 5.45 ± 0.33 | 5.64 ± 0.34 | **3.69 ± 0.16** | 4.19 ± 0.27 | 32.19* |
| | 3.0 | 6.13 ± 0.33 | 6.57 ± 0.24 | 4.60 ± 0.19 | **4.48 ± 0.18** | 26.90* |
| Griewank | 0.0 | 0.46 ± 0.01 | 0.40 ± 0.01 | 0.42 ± 0.01 | **0.39 ± 0.01** | 16.45* |
| | 1.0 | 0.81 ± 0.01 | 0.81 ± 0.01 | 0.78 ± 0.01 | **0.74 ± 0.01** | 9.23 |
| | 2.0 | 0.88 ± 0.01 | 1.07 ± 0.02 | 0.92 ± 0.01 | **0.86 ± 0.01** | 2.49 |
| | 3.0 | **0.98 ± 0.01** | 1.33 ± 0.02 | 1.05 ± 0.01 | 1.01 ± 0.02 | 0.00 |
| Ackley | 0.0 | 1.35 ± 0.16 | 2.25 ± 0.23 | 0.70 ± 0.10 | **0.67 ± 0.10** | 50.63* |
| | 1.0 | 1.57 ± 0.14 | 1.80 ± 0.19 | 0.89 ± 0.05 | **0.78 ± 0.05** | 49.94* |
| | 2.0 | 2.27 ± 0.20 | 3.14 ± 0.10 | 1.77 ± 0.07 | **1.60 ± 0.09** | 29.46* |
| | 3.0 | 3.20 ± 0.14 | 5.26 ± 0.17 | 2.38 ± 0.05 | **2.21 ± 0.05** | 30.82* |
| Weierstrass | 0.0 | 0.01 ± 0.01 | 4E−3 ± 2E−3 | **1E−3 ± 2E−4** | **1E−3 ± 4E−4** | 92.31* |
| | 1.0 | 1.28 ± 0.01 | **0.73 ± 0.01** | 1.12 ± 0.01 | 0.87 ± 0.01 | 42.77* |
| | 2.0 | 1.79 ± 0.01 | 1.55 ± 0.02 | 1.82 ± 0.01 | **1.34 ± 0.01** | 25.46* |
| | 3.0 | 2.17 ± 0.01 | 1.88 ± 0.02 | 2.30 ± 0.01 | **1.53 ± 0.01** | 29.52* |

Best values are shown using boldface

The results of the first set of experiments (see Table 8) indicate that for less complex landscapes (those based on uni-modal functions), the algorithms based on quantum particles are efficient: the mQSOE variant is the best in 10 out of 16 problem instances. For the instances based on multi-modal functions, the methods incorporating the diversity strategy after a problem change (mPSOD and mPSODE) are the most successful, although when the Griewank function is used as peak this superiority over mQSO is not clear. The best improvement rates of our algorithms on multi-modal functions are achieved on the Weierstrass function, with values ranging from 25 to 92. For the uni-modal functions, the best values correspond to the Cone function, ranging from 21 to 40.

The differences of our algorithms with respect to mQSO are more marked in the second set of experiments, related with Δe. Table 9 shows that mQSO is superior in the Sphere and Quadric problems, although this difference is not significant. It still holds that for relatively easy problems the quantum-based algorithms are more appropriate, while mPSOD and mPSODE are better for the multi-modal functions, except for Griewank. Here, the algorithms including the control rule for swarms reached better results in more problems than in the previous experiment; see, for example, that the mQSOE algorithm clearly outperforms mQSO in uni-modal problems, reaching improvement rates from 14.5 to 41.8.


Table 9 Offline error: mean ± standard deviation values for different problems, varying the change frequency (Δe)

| Problem | Δe | mQSO | mQSOE | mPSOD | mPSODE | Imp. rate |
|---|---|---|---|---|---|---|
| **Uni-modal** | | | | | | |
| Cone | 1,000 | 4.27 ± 0.14 | **3.58 ± 0.14** | 4.57 ± 0.17 | 4.27 ± 0.21 | 16.22* |
| | 2,000 | 3.05 ± 0.12 | **2.21 ± 0.12** | 2.89 ± 0.11 | 2.70 ± 0.12 | 27.47* |
| | 3,000 | 2.40 ± 0.08 | **1.53 ± 0.07** | 2.34 ± 0.12 | 2.19 ± 0.16 | 36.21* |
| | 4,000 | 1.95 ± 0.07 | **1.13 ± 0.06** | 1.88 ± 0.08 | 1.81 ± 0.14 | 41.78* |
| Sphere | 1,000 | **4.02 ± 0.25** | 4.34 ± 0.30 | 6.23 ± 0.37 | 6.31 ± 0.29 | 0.00 |
| | 2,000 | 2.00 ± 0.10 | **1.65 ± 0.14** | 2.53 ± 0.15 | 2.69 ± 0.20 | 17.47* |
| | 3,000 | 1.62 ± 0.10 | **0.97 ± 0.08** | 1.72 ± 0.12 | 1.64 ± 0.15 | 40.07* |
| | 4,000 | 0.95 ± 0.04 | **0.70 ± 0.06** | 1.40 ± 0.11 | 1.30 ± 0.10 | 25.82* |
| Schwefel | 1,000 | 6.73 ± 0.21 | **5.75 ± 0.22** | 7.25 ± 0.22 | 6.77 ± 0.27 | 14.54* |
| | 2,000 | 4.79 ± 0.14 | **3.51 ± 0.16** | 4.85 ± 0.18 | 4.67 ± 0.21 | 26.69* |
| | 3,000 | 4.01 ± 0.15 | **2.71 ± 0.16** | 3.82 ± 0.14 | 3.48 ± 0.14 | 32.54* |
| | 4,000 | 3.31 ± 0.11 | **2.20 ± 0.14** | 3.21 ± 0.13 | 2.88 ± 0.16 | 33.49* |
| Quadric | 1,000 | **3.26 ± 0.12** | 3.51 ± 0.18 | 4.40 ± 0.30 | 4.03 ± 0.24 | 0.00 |
| | 2,000 | **2.03 ± 0.11** | 2.10 ± 0.13 | 2.28 ± 0.13 | 2.41 ± 0.18 | 0.00 |
| | 3,000 | **1.70 ± 0.11** | 1.74 ± 0.20 | 2.01 ± 0.11 | 1.92 ± 0.13 | 0.00 |
| | 4,000 | **1.46 ± 0.12** | 1.50 ± 0.14 | 1.62 ± 0.11 | 1.63 ± 0.13 | 0.00 |
| **Multi-modal** | | | | | | |
| Rastrigin | 1,000 | 6.79 ± 0.37 | **5.85 ± 0.34** | 5.87 ± 0.30 | 6.39 ± 0.32 | 13.82* |
| | 2,000 | 6.77 ± 0.44 | 5.53 ± 0.36 | 5.08 ± 0.32 | **4.82 ± 0.27** | 28.92* |
| | 3,000 | 6.49 ± 0.45 | 5.19 ± 0.36 | 4.23 ± 0.20 | **4.20 ± 0.21** | 35.30* |
| | 4,000 | 5.69 ± 0.44 | 5.35 ± 0.36 | **3.59 ± 0.13** | 3.91 ± 0.29 | 36.89* |
| Griewank | 1,000 | **2.50 ± 0.05** | 2.55 ± 0.05 | 2.62 ± 0.04 | 2.65 ± 0.06 | 0.00 |
| | 2,000 | **1.37 ± 0.02** | 1.46 ± 0.03 | 1.51 ± 0.02 | 1.41 ± 0.05 | 2.41 |
| | 3,000 | 1.15 ± 0.02 | 1.18 ± 0.02 | 1.12 ± 0.02 | **1.04 ± 0.01** | 9.99 |
| | 4,000 | **0.78 ± 0.01** | 0.89 ± 0.01 | 0.90 ± 0.01 | 0.87 ± 0.01 | 0.00 |
| Ackley | 1,000 | 3.00 ± 0.15 | 2.49 ± 0.09 | 2.79 ± 0.06 | **2.29 ± 0.12** | 23.78* |
| | 2,000 | 2.16 ± 0.12 | 1.73 ± 0.12 | 1.78 ± 0.04 | **1.49 ± 0.09** | 31.00* |
| | 3,000 | 1.95 ± 0.15 | 1.49 ± 0.12 | 1.39 ± 0.10 | **0.98 ± 0.04** | 49.49* |
| | 4,000 | 1.56 ± 0.10 | 1.34 ± 0.10 | 1.18 ± 0.07 | **0.88 ± 0.05** | 44.05* |
| Weierstrass | 1,000 | 2.64 ± 0.01 | **2.20 ± 0.01** | 2.55 ± 0.01 | 2.38 ± 0.01 | 16.65* |
| | 2,000 | 2.03 ± 0.01 | **1.49 ± 0.01** | 1.91 ± 0.01 | 1.67 ± 0.01 | 26.55* |
| | 3,000 | 1.70 ± 0.01 | **1.12 ± 0.01** | 1.58 ± 0.01 | 1.30 ± 0.01 | 34.10* |
| | 4,000 | 1.45 ± 0.01 | **0.89 ± 0.00** | 1.35 ± 0.01 | 1.05 ± 0.01 | 38.73* |

Best values are shown using boldface

In order to finish the comparison of our methods versus mQSO, we have plotted the results of the experiments in the graphs shown in Figs. 4 and 5, where the different trends exhibited by the methods can be seen. Note, for example, that the curves corresponding to the mQSO algorithm are almost always above some other method's curve. This is a further argument confirming the superiority of the proposed strategies.

Fig. 4 Comparison among the investigated methods over uni-modal problems

Fig. 5 Comparison among the investigated methods over multi-modal problems

5 Conclusions and future works

In this work we have proposed two strategies for improving the adaptation of a well-known approach for optimization in dynamic environments: the multi-swarm PSO (mPSO). These strategies were oriented to managing the outdated memory and the diversity loss. The diversification strategy was developed based on an exploration radius around the best solution in the swarm. It was found through experiments that in problems with a moderate shift severity the best variants are those with an exploration radius close to the magnitude of this severity. Moreover, on multi-modal problems this strategy seemed to be more effective than the constant generation of diversity shown by algorithms like mQSO.

The second strategy was designed to improve the efficiency of mPSO: swarms with bad behavior and a certain level of convergence are stopped by means of an adaptive fuzzy rule. The experimental results confirmed a remarkable improvement in the cases that used this idea, for a simple reason: resources (objective function evaluations) are not wasted in unprofitable areas of the search space. However, it has been observed that the use of this rule was more effective for problems with uni-modal functions.

As future work, we consider that self-adaptation is a promising research area to explore, where centralized or decentralized information-sharing mechanisms could be used to promote those parameter configurations that are well suited at every stage of the search.

Acknowledgments This work has been partially funded by projects TIN2008-01948 and TIN2008-06872-C04-04 from the Spanish Ministry of Science and Technology, and P07-TIC02970 from the Andalusian Government.


References

1. Angeline P (1997) Tracking extrema in dynamic environments. In: Angeline P, Reynolds R, McDonnell J, Eberhart R (eds) Evolutionary programming VI. Lecture notes in computer science, vol 1213. Springer, Berlin, pp 335–345
2. Banks A, Vincent J, Anyakoha C (2007) A review of particle swarm optimization. Part I: background and development. Nat Comput Int J 6:467–848
3. Banks A, Vincent J, Anyakoha C (2008) A review of particle swarm optimization. Part II: hybridisation, combinatorial, multicriteria and constrained optimization, and indicative applications. Nat Comput Int J 7:109–124
4. Blackwell T (2005) Particle swarms and population diversity. Soft Comput Fusion Found Methodol Appl 9:793–802
5. Blackwell T, Bentley P (2002) Don't push me! Collision-avoiding swarms. In: CEC '02: proceedings of the IEEE congress on evolutionary computation. IEEE Computer Society, Washington, DC, USA, pp 1691–1696
6. Blackwell T, Branke J (2004) Multi-swarm optimization in dynamic environments. In: Lecture notes in computer science, vol 3005. Springer, Heidelberg, pp 489–500
7. Blackwell T, Branke J (2006) Multiswarms, exclusion, and anti-convergence in dynamic environments. IEEE Trans Evol Comput 10(4):459–472
8. Blackwell T, Branke J, Li X (2008) Particle swarms for dynamic optimization problems. In: Rozenberg G et al (eds) Swarm intelligence, natural computing series. Springer, Berlin, pp 193–217
9. Branke J (1999) Memory enhanced evolutionary algorithms for changing optimization problems. In: CEC 99: proceedings of the congress on evolutionary computation, vol 3. IEEE Press, pp 1875–1882
10. Branke J, Schmeck H (2002) Designing evolutionary algorithms for dynamic optimization problems. In: Tsutsui S, Ghosh A (eds) Theory and application of evolutionary computation: recent trends. Springer, Berlin, pp 239–262
11. Clerc M (2006) Particle swarm optimization. Wiley-ISTE, New York
12. Clerc M, Kennedy J (2002) The particle swarm—explosion, stability, and convergence in a multidimensional complex space. IEEE Trans Evol Comput 6(1):58–73
13. Cruz C, González J, Pelta D (2010) Optimization in dynamic environments: a survey on problems, methods and measures. In: Soft computing—a fusion of foundations, methodologies and applications, pp 1–22
14. Dasgupta D, McGregor D (1992) Nonstationary function optimization using the structured genetic algorithm. In: Parallel problem solving from nature. Elsevier, Amsterdam, pp 145–154
15. Eberhart R, Kennedy J (1995) A new optimizer using particle swarm theory. In: Proceedings of the sixth international symposium on micro machine and human science (MHS95). IEEE Press, pp 39–43
16. García Del Amo I, Pelta D, González J, Novoa P (2010) An analysis of particle properties on a multi-swarm PSO for dynamic optimization problems. In: Current topics in artificial intelligence. Lecture notes in computer science, vol 5988, pp 32–41
17. Gonzalez J, Masegosa A, Garcia I (2011) A cooperative strategy for solving dynamic optimization problems. Memetic Comput 3:3–14
18. Hu X, Eberhart R (2002) Adaptive particle swarm optimization: detection and response to dynamic systems. In: CEC '02: proceedings of the IEEE congress on evolutionary computation. IEEE Press, pp 1666–1670
19. Janson S, Middendorf M (2004) A hierarchical particle swarm optimizer for dynamic optimization problems. In: Lecture notes in computer science, vol 3005. Springer, Berlin, pp 513–524
20. Jin Y, Branke J (2005) Evolutionary optimization in uncertain environments—a survey. IEEE Trans Evol Comput 9(3):303–317
21. Kennedy J, Eberhart R (1995) Particle swarm optimization. In: IEEE international conference on neural networks, vol 4, pp 1942–1948
22. Li C, Yang S (2008) A generalized approach to construct benchmark problems for dynamic optimization. In: Simulated evolution and learning. Lecture notes in computer science, vol 5361. Springer, Berlin, pp 391–400
23. Lung R, Dumitrescu D (2010) Evolutionary swarm cooperative optimization in dynamic environments. Nat Comput Int J 9(1):83–94
24. Morrison R, De Jong K (1999) A test problem generator for non-stationary environments. In: Proceedings of the 1999 congress on evolutionary computation, vol 3, pp 2047–2053
25. Moser I, Chiong R (2010) Dynamic function optimisation with hybridised extremal dynamics. Memetic Comput 2:137–148
26. Novoa-Hernández P, Pelta D, Corona C (2010) Improvement strategies for multi-swarm PSO in dynamic environments. In: González J, Pelta D, Cruz C, Terrazas G, Krasnogor N (eds) Nature inspired cooperative strategies for optimization (NICSO 2010). Studies in computational intelligence, vol 284. Springer, Berlin, pp 371–383
27. Parrott D, Li X (2004) A particle swarm model for tracking multiple peaks in a dynamic environment using speciation. In: CEC '04: proceedings of the IEEE congress on evolutionary computation, vol 1, pp 98–103
28. Pelta D, Cruz C, Verdegay J (2009) Simple control rules in a cooperative system for dynamic optimization problems. Int J Gen Syst 38(7):701–717
29. Pelta D, Sancho-Royo A, Cruz C, Verdegay JL (2006) Memory and fuzzy rules in a co-operative multi-thread strategy for optimization. Inform Sci 176(13):1849–1868
30. Wang H, Yang S, Ip W, Wang D (2010) A particle swarm optimization based memetic algorithm for dynamic optimization problems. Nat Comput Int J 9(3):703–725
31. Weicker K (2002) Performance measures for dynamic environments. In: Guervós JJM, Adamidis P, Beyer HG, Fernández-Villacañas JL, Schwefel HP (eds) Parallel problem solving from nature, vol VII. Springer, Berlin, pp 64–73
32. Yao X, Liu Y, Lin G (1999) Evolutionary programming made faster. IEEE Trans Evol Comput 3:82–102
