Using Scout Particles to Improve a Predator-Prey Optimizer

Arlindo Silva1, Ana Neves1, and Teresa Gonçalves2

1 Escola Superior de Tecnologia do Instituto Politécnico de Castelo Branco
{arlindo,dorian}@ipcb.pt
2 Universidade de Évora
[email protected]
Abstract. We discuss the use of scout particles, or scouts, to improve the performance of a new heterogeneous particle swarm optimization algorithm, called the scouting predator-prey optimizer. Scout particles are proposed as a straightforward way of introducing new exploratory behaviors into the swarm, expending minimal extra resources and requiring no global modifications to the algorithm. Scouts are used both as general mechanisms to globally improve the algorithm and as a simple way to tailor it to a problem by embodying specific knowledge. The role of each particle and the performance of the overall algorithm are tested over a set of 10 benchmark functions and against two state-of-the-art evolutionary optimizers. The experimental results suggest that, with the addition of scout particles, the new optimizer can be competitive with and even superior to the other algorithms, both in terms of performance and robustness.

Keywords: swarm intelligence, particle swarm optimization, heterogeneous particle swarms
1 Introduction
The particle swarm optimization (PSO) algorithm is a stochastic optimization algorithm that uses a population of individuals, represented as vectors of real numbers, to search for the global optimum in a multidimensional space [7]. Individuals are usually called particles, and the particle set is called a swarm. In addition to its current position, each individual keeps a memory of the best position it has found so far, as well as a velocity vector. Each particle is also aware of the best position found by its best neighbor in the swarm. The population mimics the social interactions in real swarms by computing each particle's velocity in terms of how strongly the individual is attracted to its own notion of where the best solution should be and to the belief of the group, represented by the best neighbor. The velocity vector is added to the particle's position at each iteration, thus defining its trajectory in the search space. Qualities like conceptual simplicity, quick implementation, low computational cost and easy adaptability to new domains have made the PSO hugely
popular amongst practitioners, with successful applications in many areas [11]. Despite its popularity, the basic PSO also presents some challenges that must be tackled for a successful application [1, 14]. These include controlling the balance between exploration and exploitation; maintaining some level of diversity in the swarm after it has converged (there is no equivalent to the mutation operator present in evolutionary algorithms); fine-tuning a reasonable (but not optimal) solution after the swarm has converged to it; and avoiding performance degradation when optimizing non-separable functions. Since the introduction of the original PSO, a very strong body of research has been dedicated to studying and overcoming the algorithm's perceived weaknesses [12]. Amongst the many variants in use today, we can identify three main groups of approaches that seem the most promising: hybrid approaches, which use one or more mutation operators mainly inspired by other evolutionary algorithms [6]; memetic variants, which ally the advantages of local search algorithms with the global exploratory capabilities of the PSO [10]; and, more recently, heterogeneous particle swarm optimizers, in which different particles within the same swarm can have different behaviors and/or properties, allowing different parts of the swarm to be tuned for different aspects of the problem being optimized or for different phases of the exploration process [4, 8].

In previous work we introduced a new heterogeneous particle swarm algorithm, called the scouting predator-prey optimizer (SPPO), which uses an extra particle, called a predator, to dynamically control diversity and the balance between exploitation and exploration [16]. The same article introduced the concept of scout particles, a swarm subset that can be updated with different rules, leading to alternative exploratory behaviors.
In this paper, we discuss the effects of different scout particles, and show how they can perform different roles, from globally improving the SPPO performance, to using problem specific knowledge to better adapt the algorithm to a specific task.
2 The Scouting Predator-Prey Optimizer
In particle swarm optimization each swarm member is represented by three m-sized vectors, assuming an optimization problem f(x) in R^m. For each particle i we have a vector x_i representing its current position in the search space, a vector p_i storing the best position found so far and a third vector v_i corresponding to the particle's velocity. At each iteration t of the algorithm, the current position x_i of every particle i is evaluated by computing f(x_i). Assuming a minimization problem, x_i is saved in p_i if f(x_i) < f(p_i), i.e. if x_i is the best solution found by the particle so far. The velocity vector v_i is then computed with equation 1 and used to update the particle's position (equation 2).

v_i^{t+1} = w v_i^t + u(0, φ1) ⊗ (p_i − x_i^t) + u(0, φ2) ⊗ (p_g − x_i^t)   (1)

x_i^{t+1} = x_i^t + v_i^{t+1}   (2)
In equation 1, (p_i − x_i^t) represents the distance between a particle and the best position it has found in previous iterations, and (p_g − x_i^t) represents the distance between the particle and the best position found in its neighborhood, stored in p_g. u(0, φ1) and u(0, φ2) are random vectors with components uniformly distributed in the intervals [0, φ1] and [0, φ2], respectively, w is an inertia weight that decreases linearly with t, and ⊗ denotes component-wise vector multiplication.
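The update rules in equations 1 and 2 can be written in a few lines of NumPy. The following is a minimal sketch, not the authors' implementation; the function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(42)

def pso_step(x, v, p, p_g, w, phi1, phi2):
    """One synchronous PSO update for a swarm of shape (n, m).

    x: current positions, v: velocities, p: per-particle best positions,
    p_g: neighborhood best position (here a single global best vector).
    """
    n, m = x.shape
    u1 = rng.uniform(0.0, phi1, size=(n, m))        # u(0, phi1)
    u2 = rng.uniform(0.0, phi2, size=(n, m))        # u(0, phi2)
    v_new = w * v + u1 * (p - x) + u2 * (p_g - x)   # equation 1
    x_new = x + v_new                               # equation 2
    return x_new, v_new

# Tiny usage example on a 3-particle, 2-dimensional swarm.
x = rng.uniform(-1.0, 1.0, size=(3, 2))
v = np.zeros((3, 2))
p = x.copy()          # best positions start at the initial positions
p_g = p[0].copy()     # pretend particle 0 holds the neighborhood best
x, v = pso_step(x, v, p, p_g, w=0.4, phi1=1.6, phi2=1.6)
```

After each such step, p and p_g would be refreshed from the new evaluations of f before the next call.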
2.1 The Predator Effect
One of the limitations of the standard particle swarm algorithm is its inability to reintroduce diversity into the swarm after it has converged to a local optimum. Since there is no mechanism similar to a mutation operator, and changes in x_i depend on differences between the particles' positions, as the swarm clusters around a promising area of the search space, velocities decrease and the particles converge. This is the desirable behavior if the optimum is global but, if it is local, there is no way to increase velocities again and allow the swarm to escape to a new optimum, i.e. there is no way to return to a global exploration phase after the algorithm has entered a local search (exploitation) phase. We introduced the predator-prey idea to alleviate this problem [15, 16]. The predator particle's velocity v_p is updated using equation 3, making it oscillate between the best particle's best and current positions. This update rule makes the predator effectively chase the best particle through the search space.

v_p^{t+1} = w v_p^t + u(0, φ1) ⊗ (x_g^t − x_p^t) + u(0, φ2) ⊗ (p_g − x_p^t)   (3)
The role of the predator particle in the SPPO algorithm is to introduce a perturbation factor in the swarm and to guarantee that this disturbance increases as the swarm converges to a single point. To achieve this, we add a perturbation to a particle's velocity in dimension j, as defined by equation 4, where u(−1, 1) and u(0, 1) are random numbers uniformly distributed between the arguments, x_max and x_min are, respectively, the upper and lower limits of the search space, and r is the user-defined perturbation probability.

v_ij^t = v_ij^t + u(−1, 1) |x_max − x_min|,  if u(0, 1) < r exp(−|x_ij − x_pj|)   (4)
From equation 4 it follows that a random perturbation is added to the velocity in dimension j with a probability that depends on the particle's distance to the predator in that dimension. This probability is at its maximum (r) when the distance is 0 and rapidly decreases as the particle escapes the predator. Since the predator chases the best particle, a perturbation is most likely when all the particles are very near each other, i.e. during the exploitation phase, and becomes almost nonexistent when the particles are far apart. This mechanism allows particles to escape and find new optima far from the current attractor even in the last phases of exploitation.
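The perturbation rule of equation 4 can be sketched as follows. This is an illustrative NumPy sketch under the stated assumptions (per-dimension test against the predator's position); the names and loop structure are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def perturb_velocity(v, x, x_pred, x_min, x_max, r):
    """Apply equation 4: add a random jump to each velocity component with a
    probability that decays exponentially with the distance to the predator
    in that dimension (maximum probability r at distance 0)."""
    v = v.copy()
    n, m = v.shape
    for i in range(n):
        for j in range(m):
            prob = r * np.exp(-abs(x[i, j] - x_pred[j]))
            if rng.uniform() < prob:
                v[i, j] += rng.uniform(-1.0, 1.0) * abs(x_max - x_min)
    return v

# Particles sitting exactly on the predator: perturbation probability is r.
v = np.zeros((4, 3))
x = np.full((4, 3), 0.5)
x_pred = np.full(3, 0.5)
v2 = perturb_velocity(v, x, x_pred, x_min=-100.0, x_max=100.0, r=0.0008)
```

Because the jump magnitude is scaled by the full search-range width, even a single triggered perturbation is enough to throw a particle far from the current attractor.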
2.2 Scout Particles
The predator-prey optimizer is a heterogeneous particle swarm algorithm, since the predator particle is updated using a different equation. We propose a new level of heterogeneity by including scout particles in the swarm. Scout particles, or scouts, are a subset of the swarm that implements different exploration strategies. They are inspired by the common presence in real insect swarms, e.g. honeybees [2], of specific individuals in charge of scouting for new food sources. When successful in this task, they communicate the finding to other individuals, called recruits, which may then follow to the new food source. In our algorithm, scouts are specific particles to which we assign update rules different from those used by the rest of the swarm. Each new rule implements a different exploratory behavior with specific objectives in terms of function optimization. As in real swarms, resource management is essential, so we must ensure that the positive effect of the scouts is not offset by many extra function evaluations or memory requirements. It is also important to ensure that the addition of scouts does not disrupt the swarm's usual behavior. To achieve this, recruitment is done only by updating the scout's best found position, as well as its neighborhood best when applicable. From the viewpoint of the rest of the swarm the use of scouts is, consequently, transparent, constituting a very flexible way to introduce new capabilities into the optimization algorithm, or even to hybridize it with a different optimizer altogether. Scouts can be used to introduce improvements to the global algorithm, e.g. a local search sub-algorithm, or to implement problem dependent mechanisms that better adapt the algorithm to a specific problem. In the latter case, the mechanism can be of a general nature or based on specific problem knowledge that may be used to speed up the optimization process.
To illustrate these ideas, we will use two scout particles to improve the performance of the predator-prey algorithm on continuous optimization problems and a third particle to tailor the algorithm to a purposefully designed problem set. For the first scout we choose the best particle in the swarm at a given iteration and perform a random mutation on one of its dimensions j using equation 5, where n(0, σ) is a random number drawn from a normal distribution with mean 0 and standard deviation σ. p_g is updated to the new p'_g only if f(p'_g) < f(p_g). σ starts at x_max/10 and is updated during the run using the 1/5 rule borrowed from evolution strategies [3]: every 100 generations, σ is doubled if the mutation success rate is over 1/5 and halved otherwise. This mutation mechanism implements a local search around p_g over time.

p'_gj = p_gj + n(0, σ)   (5)
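The local-search scout described above, including the 1/5 success rule for σ, can be sketched as follows. The helper names and the sphere test function are ours, used only to make the sketch self-contained.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):
    return float(np.sum(x * x))

def local_search_scout(p_g, f, sigma):
    """Equation 5: mutate one random dimension of p_g with Gaussian noise.
    The change is kept only if it improves f; returns a success flag."""
    candidate = p_g.copy()
    j = rng.integers(len(p_g))
    candidate[j] += rng.normal(0.0, sigma)
    if f(candidate) < f(p_g):
        return candidate, True
    return p_g, False

# Adapt sigma every 100 generations with the 1/5 rule from evolution strategies:
# double it if more than 1/5 of the mutations succeeded, halve it otherwise.
p_g = rng.uniform(-10.0, 10.0, size=5)
f0 = sphere(p_g)
sigma, successes = 10.0, 0
for t in range(1, 301):
    p_g, ok = local_search_scout(p_g, sphere, sigma)
    successes += int(ok)
    if t % 100 == 0:
        sigma = sigma * 2.0 if successes / 100 > 0.2 else sigma / 2.0
        successes = 0
```

Since the mutation is only accepted on improvement, the scout performs a greedy local search around the global best without ever degrading it.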
The second scout particle uses an update rule inspired by opposition-based learning (OBL) [18]. The basic idea behind OBL is that it is sometimes useful to search in the direction opposite to the current position. Opposition-based extensions of particle swarm optimization [19] and differential evolution [13] have improved on the results of the corresponding base algorithms. For this second scout, we use the particle with the worst evaluation, p_w, based on the heuristic that the region opposite the worst particle might be a more promising one. We compute the opposite position p'_w using equation 6 for each dimension j, with a_j^t and b_j^t being, respectively, the maximum and minimum values of p_ij at generation t. Again, p_w is only updated to the new p'_w if f(p'_w) < f(p_w).

p'_wj = a_j^t + b_j^t − p_wj   (6)
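A minimal sketch of the opposition-based scout of equation 6 follows, again with illustrative names of our own and a sphere function standing in for f.

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):
    return float(np.sum(x * x))

def opposition_scout(p, f_values, f):
    """Equation 6: reflect the worst particle's best position around the
    per-dimension bounds [b_j, a_j] of the current best positions.
    The worst entry is replaced only if the opposite position improves it."""
    w = int(np.argmax(f_values))     # worst particle (minimization)
    a = p.max(axis=0)                # per-dimension maximum a_j
    b = p.min(axis=0)                # per-dimension minimum b_j
    opposite = a + b - p[w]
    if f(opposite) < f_values[w]:
        p[w] = opposite
        f_values[w] = f(opposite)
    return p, f_values

p = rng.uniform(-5.0, 5.0, size=(6, 3))
f_values = np.array([sphere(row) for row in p])
f_max0 = float(f_values.max())
p, f_values = opposition_scout(p, f_values, sphere)
```

Because only the worst stored position can change, and only on improvement, the rest of the swarm never sees a worse memory as a result of this scout.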
Scout particles are updated prior to the main update cycle of the swarm, in which they can additionally be updated using equations 1 and 4. To save objective function evaluations, we update the best particle but not the worst one. The third scout particle is problem dependent and will be described in the experimental results section, together with the relevant experiments.
3 Experimental Results
For the experiments described in this paper we used 10 common benchmark functions, selected from the optimization bibliography. The first three are unimodal functions, but f2 is non-separable and in f3 the optimum lies in a very flat zone that offers little information to guide the algorithms. f4-f10 are multimodal functions with many local optima, selected to pose different obstacles to the optimization algorithms. Table 1 lists the names and parameters of the benchmark functions, including the optimum value f(x*) and its position x*_i.

Table 1. Benchmark functions' names and parameters.

Function  Name               Range            f(x*)  x*_i  Displacement
f1        Sphere             [−100 : 100]     0      0     25
f2        Rotated Ellipsoid  [−100 : 100]     0      0     25
f3        Zhakarov's         [−5 : 10]        0      0     1.25
f4        Rastrigin's        [−5.12 : 5.12]   0      0     1.28
f5        Ackley's           [−32 : 32]       0      0     8
f6        Griewangk's        [−600 : 600]     0      0     150
f7        Salomon's          [−100 : 100]     0      0     25
f8        Kursawe's          [−1000 : 1000]   0      0     250
f9        Shaffer's          [−100 : 100]     0      0     25
f10       Levy-Montalvo's    [−500 : 500]     0      1     125
From Table 1 we can observe that all functions used (with the exception of f10) have their optimum at the origin of the search space. They also all present some level of symmetry around that point. While these characteristics will later be useful to set up some specific experiments, they could also give an advantage to algorithms that are biased towards the origin of the search space. To avoid this, the global optimum of each function was displaced by the amount shown in Table 1. All functions were optimized in 40 dimensions in the reported experiments, and their equations can be found in [16] as well as in the field bibliography.

The first set of experimental results compares the final SPPO algorithm, using both the local search and the opposition based search scouts, against other PSO
and differential evolution (DE) based approaches [17]. We included DE algorithms since differential evolution shares many of the PSO's advantages, namely in terms of simplicity, flexibility and performance. We used a recent state-of-the-art hybrid PSO variant (HMRPSO) [5], which reported very good results in the optimization of a large set of benchmark functions when compared with several other PSO variants and evolutionary algorithms. The DE approach used for comparison, which also showed promising experimental results, is called free search differential evolution (FSDE) [9]. Our implementations were first tested on the benchmarks reported in the original papers to minimize implementation discrepancies. Standard versions of the PSO and DE algorithms were also included in our experiments. The parameters for the algorithms are the ones proposed by the respective authors. For the scouting predator-prey optimizer we used φ1 = φ2 = 1.6, w decreasing from 0.4 to 0.2, and r = 0.0008. Population size was set to 20 and the algorithms were run for 1e4 iterations or until 2e5 objective function evaluations were performed. In Table 2 we present averages and standard deviations of the best values found for each test function, taken over 50 runs of each algorithm/function pair. The random number generator was initialized so that all algorithms started with the same population in corresponding runs.

Table 2. Experimental results for the final algorithm: averages and standard deviations of the best values found over 50 runs of each algorithm for each function.

      PSO                        DE                         HMRPSO                     FSDE                     SPPO
f1    1.65169e-26 (5.54796e-26)  6.25047e-23 (4.4143e-22)   3.38079e-23 (1.4217e-23)   0.242096 (0.174053)      7.57306e-31 (3.02794e-30)
f2    316.126 (254.974)          56.2572 (397.747)          0.00226426 (0.00212582)    61.5534 (33.1706)        2.52261e-09 (2.40497e-09)
f3    15.8846 (9.53778)          0.000692142 (0.0016271)    5.19511e-09 (4.86953e-09)  1.13648 (0.807662)       6.72887e-10 (5.23577e-10)
f4    55.8172 (17.0791)          148.852 (84.0428)          17.8104 (5.6274)           7.72839 (11.3974)        0.0198992 (0.140708)
f5    0.476326 (2.91565)         5.47737 (6.9252)           1.27917e-12 (2.26036e-13)  0.150284 (0.0844456)     3.96305e-14 (8.27882e-15)
f6    0.0134364 (0.0169437)      0.0861482 (0.282405)       0.00777161 (0.0112705)     0.290238 (0.191083)      0.0524945 (0.0498488)
f7    0.639873 (0.137024)        0.389874 (0.194044)        0.553874 (0.0973316)       0.375873 (0.169706)      1.19387 (0.219842)
f8    -41.2553 (50.7622)         -76.2049 (67.5841)         -24.9186 (25.9715)         171.819 (215.74)         -150.326 (16.6553)
f9    5.15659 (2.34525)          15.976 (0.391461)          6.77497 (1.32032)          12.0213 (1.81459)        0.860774 (0.424196)
f10   0.0181309 (0.126894)       0.0621908 (0.216245)       5.56247e-22 (3.83363e-22)  55.1298 (78.7154)        5.86288e-28 (3.38157e-28)
As we can see in Table 2, the SPPO algorithm obtained substantially better average results in 8 of the 10 benchmark problems. For the remaining two functions, the best result was achieved by the HMRPSO for f6 and by the FSDE for f7. All the hybrid algorithms performed consistently better than the basic versions, as can be observed in the columns marked PSO and DE in Table 2. These results confirm the more extensive results presented in previous work [16], where, using a significantly larger problem set, we also observed that the scouting predator-prey optimizer was generally competitive with, and frequently superior to, other state-of-the-art PSO and DE based algorithms. In this work we aim to additionally examine the different roles performed by each scout particle and to illustrate how scout particles can be used to improve performance by exploiting problem specific knowledge.

The second set of experiments compares, under the same experimental conditions used previously, the standard PSO, the PSO with just the predator effect added (PPO), the PPO with an opposition learning based scout (OPPO), the PPO with a local search based scout (LPPO) and the final algorithm including all the described behaviors (SPPO). The results are presented in Table 3.

Table 3. Experimental results for the general scout particles: averages and standard deviations of the best values found over 50 runs of each algorithm for each function.

      PSO                        PPO                        OPPO                       LPPO                       SPPO
f1    1.65169e-26 (5.54796e-26)  1.71656e-28 (3.5187e-28)   1.63629e-27 (5.77022e-27)  0 (0)                      7.57306e-31 (3.02794e-30)
f2    316.126 (254.974)          1260.18 (2219.77)          1712.43 (2802.44)          3.9881e-10 (4.35113e-10)   2.52261e-09 (2.40497e-09)
f3    15.8846 (9.53778)          31.3115 (46.2811)          24.5038 (40.7141)          4.62151e-10 (5.1526e-10)   6.72887e-10 (5.23577e-10)
f4    55.8172 (17.0791)          0.139295 (0.853063)        0.0397984 (0.196951)       0.23879 (1.24822)          0.0198992 (0.140708)
f5    0.476326 (2.91565)         2.29725 (6.23978)          7.24576e-14 (2.50212e-14)  2.01927 (5.62937)          3.96305e-14 (8.27882e-15)
f6    0.0134364 (0.0169437)      0.139295 (0.853063)        0.0624817 (0.0569936)      0.0535327 (0.0535873)      0.0524945 (0.0498488)
f7    0.639873 (0.137024)        1.21987 (0.192725)         1.22987 (0.208248)         1.15387 (0.183181)         1.19387 (0.219842)
f8    -41.2553 (50.7622)         -153.179 (5.3920)          -153.365 (2.16092)         -150.644 (1.26064)         -150.326 (16.6553)
f9    5.15659 (2.34525)          0.855093 (0.420195)        0.920654 (0.434841)        0.797715 (0.413331)        0.860774 (0.424196)
f10   0.0181309 (0.126894)       5.41382e-27 (2.00007e-26)  2.0807e-26 (5.06554e-26)   4.46798e-28 (3.52975e-28)  5.86288e-28 (3.38157e-28)
These results help us understand the contribution of each component to the overall performance of the algorithm. The predator-prey optimizer improves substantially on the results of the simple PSO for f1, f4 and f8-f10, with slightly worse results for the remaining functions. Overall, the predator effect makes a positive contribution since, for some functions, the improvement spans several orders of magnitude while, when a performance decrease occurs, it is far less marked.
When compared with the predator-prey optimizer, the introduction of an opposition learning based scout substantially improves the optimization results for f4, f6 and, especially, f5. For the remaining functions the variation is mostly negligible, with the exception of f10. This illustrates the use of a scout particle implementing a general exploration behavior that is mostly beneficial for a specific problem. It also underlines that its inclusion does not substantially impact the overall performance of the algorithm. The local search scout, again in comparison with the PPO, has a more uniform effect. There is improvement in almost all functions, particularly the unimodal ones. When there is no improvement the difference is negligible, strengthening the idea that the addition of scout particles does not disturb the general behavior of the algorithm. The local search scout performs a different role since, while its scouting is also general, i.e. not tailored to a particular problem, it seems to improve the algorithm's performance over many problems.
Table 4. Experimental results for the problem specific scout particle: averages and standard deviations of the best values found over 50 runs of each algorithm for each function; when the average is 0, the value in brackets is the average number of function evaluations needed to find the optimum.

      HMRPSO                     FSDE                   SPPO                       SPPO+
f1    7.3948e-26 (2.60705e-25)   1.04446 (7.38415)      0 [9200]                   0 [7250]
f2    3372.4 (10336.1)           13183.7 (9154.16)      6.95768e-29 (3.01847e-28)  3.75725e-29 (2.04331e-28)
f3    31.8958 (225.537)          158.062 (102.318)      1.07958e-28 (5.73612e-28)  4.05921e-29 (1.33317e-28)
f4    0 [117000]                 40.1661 (37.248)       0.0198992 (0.140708)       0 [151600]
f5    2.25064e-14 (2.2529e-14)   15.0721 (4.13056)      0 [16900]                  0 [15150]
f6    0.0322881 (0.0425845)      0.648994 (1.87741)     0.0450732 (0.062597)       0.0471287 (0.0583248)
f7    0.405873 (0.130008)        1.51993 (2.52228)      0.335876 (0.158765)        0.293873 (0.142012)
f8    -39.5094 (11.0244)         751.95 (395.148)       -55.8945 (10.2041)         -53.3964 (9.4122)
f9    2.70569 (1.11255)          7.32397 (2.4941)       0.0223451 (0.0842491)      0.000777273 (0.0038465)
f10   5.19578e-06 (1.98551e-05)  173886 (239596)        3.77984e-27 (4.94146e-27)  3.87103e-28 (1.38232e-27)
For the third and final set of experiments, we changed the problems' characteristics by moving the optimum to 0 in 80% of the dimensions. Of the remaining dimensions, 80% were set to x_max and the rest to a random value in the function's domain, which was also changed to [0, x_max]. A new problem is randomly generated for each run of an algorithm. On the one hand, these changes make the problems harder for evolutionary algorithms, by placing the solution on the border of the search space in most of the dimensions. This allows us to better evaluate the robustness of the algorithms being compared. On the other hand, this knowledge about the optimum's characteristics can be used to test a scout built around specific problem knowledge. In these experiments we used a simple scout that, at each iteration, moved the particle, in a random dimension, to 0 (with 80% probability) or to x_max (with 20% probability). The results for these experiments are presented in Table 4, where SPPO+ represents the scouting predator-prey algorithm with the extra scout. In this table, when an algorithm found the global optimum in all runs, we present in brackets the average number of function evaluations used, instead of the standard deviation.

Looking at the results in Table 4, our first conclusion is that the problems' new formulation was particularly challenging for the FSDE algorithm, which struggled considerably to optimize many of the benchmark functions. The HMRPSO algorithm's performance decayed more gracefully, even achieving the best results for f4, but it also had great difficulty optimizing some functions, especially f2, f3, f9 and f10. The SPPO algorithm performed at least as well as for the previous formulations, which suggests that it is more robust than both the FSDE and the HMRPSO. The results for SPPO+, using the third scout particle, were superior to those of the SPPO in 8 of the 10 benchmark functions. These results illustrate the fact that even a single scout particle, with a behavior based on problem specific knowledge, can have a positive impact on the algorithm's performance. Additionally, with scout particles, this impact can be achieved without performing global changes to the algorithm or using a large number of extra function evaluations.
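The problem-specific scout used in this third experiment is simple enough to sketch in full. The helper below is our illustrative rendering of the rule described above, not the authors' code.

```python
import numpy as np

rng = np.random.default_rng(3)

def specific_scout_step(x, x_max):
    """One update of the problem-specific scout: pick a random dimension and
    set it to 0 with 80% probability, or to x_max with 20% probability,
    matching where the displaced optima are known to lie."""
    x = x.copy()
    j = rng.integers(len(x))
    x[j] = 0.0 if rng.uniform() < 0.8 else x_max
    return x

# One step on a particle in the modified domain [0, x_max].
x = rng.uniform(0.0, 100.0, size=10)
x = specific_scout_step(x, x_max=100.0)
```

As with the other scouts, the candidate position would only be kept (recruited) if it improved on the scout's stored best evaluation, so the rest of the swarm is unaffected when the guess is wrong.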
4 Conclusions
In this paper we discussed the use of scout particles to improve the performance of a heterogeneous particle swarm optimization algorithm. We added three different scout particles to the swarm, each performing a different role and thus contributing in a different way to the final behavior of the swarm. We used a local search based scout to improve the overall optimization performance of the algorithm and observed that an opposition learning based scout has a more limited effect, which can nevertheless have a large impact for specific functions. Finally, we changed the experimental conditions and built a scout based on knowledge of those changes to adapt the algorithm to the new conditions, with additional gains in performance. Overall, we can conclude that scout particles can easily be used to add new exploratory behaviors to the algorithm without perturbing the general dynamics of the swarm, either by introducing global modifications or by using extra resources, e.g. objective function evaluations. From the extensive experiments performed on the benchmark functions, it is apparent that the resulting scouting predator-prey optimizer is not only competitive with but even superior to other state-of-the-art evolutionary algorithms, both in optimization performance and in robustness.
References

1. P. J. Angeline. Evolutionary optimization versus particle swarm optimization: philosophy and performance differences. In Proc. of the 7th Int. Conf. on Evolutionary Programming, EP '98, pages 601–610, London, UK, 1998. Springer-Verlag.
2. M. Beekman, A. Gilchrist, M. Duncan, and D. Sumpter. What makes a honeybee scout? Behavioral Ecology and Sociobiology, 61:985–995, 2007.
3. H.-G. Beyer and H.-P. Schwefel. Evolution strategies – a comprehensive introduction. Natural Computing, 1:3–52, 2002.
4. A. Engelbrecht. Heterogeneous particle swarm optimization. In Swarm Intelligence, volume 6234 of Lecture Notes in Computer Science, pages 191–202. Springer, 2010.
5. H. Gao and W. Xu. A new particle swarm algorithm and its globally convergent modifications. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 41(5):1334–1351, Oct. 2011.
6. H. Gao and W. Xu. Particle swarm algorithm with hybrid mutation strategy. Applied Soft Computing, 11(8):5129–5142, 2011.
7. J. Kennedy and R. Eberhart. Particle swarm optimization. In Proc. of the IEEE International Conference on Neural Networks, volume 4, pages 1942–1948, 1995.
8. M. Montes de Oca, J. Pena, T. Stutzle, C. Pinciroli, and M. Dorigo. Heterogeneous particle swarm optimizers. In Proc. of the IEEE Congress on Evolutionary Computation, CEC '09, pages 698–705, May 2009.
9. M. G. H. Omran and A. P. Engelbrecht. Free search differential evolution. In Proc. of the 11th Congress on Evolutionary Computation, CEC '09, pages 110–117, Piscataway, NJ, USA, 2009. IEEE Press.
10. Y. Petalas, K. Parsopoulos, and M. Vrahatis. Memetic particle swarm optimization. Annals of Operations Research, 156:99–127, 2007.
11. R. Poli. Analysis of the publications on the applications of particle swarm optimisation. Journal of Artificial Evolution and Applications, 2008:4:1–4:10, January 2008.
12. R. Poli, J. Kennedy, and T. Blackwell. Particle swarm optimization. Swarm Intelligence, 1:33–57, 2007.
13. S. Rahnamayan, H. Tizhoosh, and M. Salama. Opposition-based differential evolution. IEEE Transactions on Evolutionary Computation, 12(1):64–79, Feb. 2008.
14. Y. Shi and R. Eberhart. Empirical study of particle swarm optimization. In Proc. of the 1999 Congress on Evolutionary Computation, CEC '99, volume 3, 1999.
15. A. Silva, A. Neves, and E. Costa. An empirical comparison of particle swarm and predator prey optimisation. In Proc. of the 13th Irish International Conference on Artificial Intelligence and Cognitive Science, AICS '02, pages 103–110, London, UK, 2002. Springer-Verlag.
16. A. Silva, A. Neves, and T. Gonçalves. An heterogeneous particle swarm optimizer with predator and scout particles. In Autonomous and Intelligent Systems, volume 7326 of Lecture Notes in Computer Science, pages 200–208. Springer, 2012.
17. R. Storn and K. Price. Differential evolution – a simple and efficient heuristic for global optimization over continuous spaces. Journal of Global Optimization, 11:341–359, 1997.
18. H. R. Tizhoosh. Opposition-based learning: a new scheme for machine intelligence. In Proc. of the Int. Conf. on Computational Intelligence for Modelling, Control and Automation, CIMCA '05, volume 1, pages 695–701, Washington, DC, USA, 2005.
19. H. Wang, H. Li, Y. Liu, C. Li, and S. Zeng. Opposition-based particle swarm algorithm with Cauchy mutation. In Proc. of the IEEE Congress on Evolutionary Computation, CEC 2007, pages 4750–4756, Sept. 2007.