Simulating Swarm Behaviours for Optimisation by Learning from Neighbours

Ran Cheng and Yaochu Jin
Department of Computing, University of Surrey, Guildford, Surrey, GU2 7XH, United Kingdom
Email: [email protected], [email protected]
Abstract—The competitive particle swarm optimizer (ComPSO) is a novel swarm intelligence algorithm that does not need any memory. Different from the canonical particle swarm optimizer (PSO), neither gbest nor pbest needs to be stored in ComPSO, and the algorithm is extremely simple to implement. ComPSO has been shown to be highly scalable to the search dimension. In the original ComPSO, two particles are randomly chosen to compete. This work investigates the influence of the competition rule on the search performance of ComPSO and proposes a new competition rule operating on a sorted swarm with neighborhood control. Empirical studies have been performed on a set of widely used test functions to compare the new competition rule with the random strategy. Results show that the new competition rule speeds up convergence with a big neighborhood size, while a small neighborhood size slows convergence down.

I. INTRODUCTION

Particle swarm optimization (PSO) is a popular nature-inspired algorithm proposed by Kennedy and Eberhart in 1995 [1] to solve optimization problems. Although PSO has witnessed rapid development during the past decades, it still suffers from premature convergence [2], especially when the optimization problem is high-dimensional or has a large number of local optima [3], [4]. To alleviate premature convergence, researchers have proposed a number of PSO variants, which can be categorized into the following three groups [5]: adaptation of the parameters [6], [7], [8], [9], [10], [11], hybridization with other search techniques [12], [13], [14], [15], [16], and introduction of various topological structures [17], [18], [19], [20], [21], [22].

Most recently, a conceptually new PSO has been proposed [23], the main components of which are summarized in Fig. 1. Different from most existing PSO variants, ComPSO does not use any memory, neither pbest nor gbest. Instead, a competition mechanism has been introduced, in which two particles are randomly chosen for a competition. The winner of the competition is directly passed to the next iteration, while the loser is updated by learning from the winner and then also passed to the next generation. Such competitions repeat until every particle in the swarm has participated in a competition once. It has been demonstrated that ComPSO exhibits high scalability to the search dimension and outperforms many state-of-the-art algorithms for large-scale optimization.

Fig. 1. A diagram showing the main components of ComPSO. In each iteration t, two particles are randomly chosen from the current swarm P(t) to compete. After a competition, the loser, whose fitness value is worse, is updated using a specific update mechanism, whilst the winner is passed directly to the population P(t+1) of the next iteration.
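To make the pairing explicit, the following is a minimal Python sketch (our own illustration, not the authors' code) of one round of random competitor selection in the original ComPSO:

    import numpy as np

    def random_pairs(m, rng):
        # Shuffle the particle indices and pair them off, so that every
        # particle takes part in exactly one competition per iteration.
        idx = rng.permutation(m)
        return idx.reshape(m // 2, 2)  # each row is one competition (m assumed even)

    rng = np.random.default_rng(0)
    print(random_pairs(6, rng))  # e.g. three index pairs drawn from {0, ..., 5}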
From Fig. 1, it can be seen that there are two main operators in ComPSO: competition and updating. Competition distinguishes the loser from the winner according to their fitness values, and in updating, the loser learns from the winner using the following strategy:

V_{l,k}(t+1) = R_1(k,t) V_{l,k}(t) + R_2(k,t) (X_{w,k}(t) - X_{l,k}(t)) + \phi R_3(k,t) (\bar{X}(t) - X_{l,k}(t)),   (1)

X_{l,k}(t+1) = X_{l,k}(t) + V_{l,k}(t+1),   (2)
where X_{w,k}(t), X_{l,k}(t) and V_{w,k}(t), V_{l,k}(t) are the positions and velocities of the winner and loser at the k-th round of competition in iteration t, respectively, R_1(k,t), R_2(k,t), R_3(k,t) \in [0,1]^n are three randomly generated vectors, \bar{X}(t) is the mean position of all particles in P(t), which is the only global information shared by the whole swarm, and φ is a parameter that controls the influence of \bar{X}(t).

One important factor that influences the outcome of a competition is the rule for determining which two particles should compete. In the original ComPSO [23], the two competitors are randomly chosen. Since the loser of a competition will learn from the winner, the rule for choosing competitors will no doubt influence the search dynamics of ComPSO. For example, if the winner happens to be the best particle in the current swarm, the loser will learn more from the competition and will have a much larger probability of winning a competition in the next generation. However, if the two competitors happen to be similar, the loser will gain little from learning from the winner. In ComPSO, the two competitors are selected entirely at random, and after each competition, both particles are removed from the current swarm, i.e., each particle can participate in a competition only once. With this competition rule, M = m/2 competitions are carried out for a swarm of size m, and only half of the particles in the current swarm are updated.

In this paper, we propose a new competition rule for ComPSO. In the new competition rule, the particle territory (neighborhood) is taken into account to control the learning speed. In the experimental studies, the test functions proposed in the CEC05 special session [24] are adopted. Experimental results show that the new competition rule can speed up the convergence with a big neighborhood size, while with a small neighborhood size, the convergence speed can be slowed down.

II. A SORTED COMPETITION RULE

The random strategy for competition adopted in ComPSO offers a large degree of flexibility for particles to choose their competitors. Although such a strategy is effective in promoting diversity of the swarm [23], it may also slow down the convergence speed considerably for the following reasons: 1) the two competitors are randomly chosen, so there is a large degree of stochasticity in the search; 2) in each iteration, a particle can participate in a competition only once, thus only half of the particles (the losers) are updated and the other half (the winners) remain unchanged.

To tackle the above drawbacks of the random competition strategy in ComPSO, we propose in this work a new competition strategy with the following properties:

Property 1. The selection is operated on an ordered sequence.
Property 2. All particles except for the best one in the swarm may be updated during each iteration.

To satisfy the first property, an intuitive idea is to operate the competitor selection on a sorted swarm, satisfying

f(X_i(t)) \geq f(X_{i+1}(t)),   (3)

where f(X) is the objective function and i = 1, 2, ..., m-1. With the sorted swarm, for any particle l, it is easy to find another particle w whose fitness value is better:

f(X_l(t)) \geq f(X_w(t)) \Leftrightarrow l \leq w.   (4)
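As a concrete illustration of the update in (1) and (2), here is a minimal NumPy sketch; the function name and array shapes are our own assumptions, not the original ComPSO code:

    import numpy as np

    def update_loser(x_l, v_l, x_w, x_mean, phi=0.0, rng=np.random.default_rng()):
        # Eq. (1): the loser's new velocity mixes its old velocity, a pull
        # towards the winner, and a pull towards the swarm mean position,
        # each weighted by a fresh random vector in [0, 1]^n.
        n = x_l.shape[0]
        r1, r2, r3 = rng.random(n), rng.random(n), rng.random(n)
        v_new = r1 * v_l + r2 * (x_w - x_l) + phi * r3 * (x_mean - x_l)
        # Eq. (2): the loser's position is shifted by the new velocity.
        return x_l + v_new, v_new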
Fig. 2. With the sorted swarm, particle l may compete with, and learn from, any of the particles (winners) from l+1 to m.
In this way, the selection of competitors is operated on a partially ordered sequence instead of a random one, so that Property 1 is satisfied. To fulfil Property 2, we only need to make sure that each particle (except the m-th one) in the sorted swarm is chosen as a loser once during an iteration. In the sorted swarm, particle l can compete with particle w (the winner), where w \geq l+1; refer to Fig. 2. In the real world, since animals have their own territories [?], competitions may only happen within a neighborhood. Taking this into account, a neighborhood control strategy is adopted in selecting the winner, specified as follows:

w \in \{l+1, l+2, ..., \min\{l+N, m\}\},   (5)
where N \in \{1, 2, ..., m-1\} can be considered as the neighborhood size in the fitness space. Particles can only learn from the particles within their territories defined by N. Note that in this competitor selection strategy, since the swarm is sorted, for any territory of size N+1, there is only one loser and the other N particles are all winners, and the loser can choose only one winner to learn from in each iteration.

To obtain a more quantitative understanding of the influence of the neighborhood size N on the competition mechanism, the expected learning increment of particle l can be defined as

\Delta L_l(N, t) = \frac{1}{\min\{N, m-l\}} \sum_{w=l+1}^{\min\{l+N, m\}} |X_l(t) - X_w(t)|,   (6)

where X_l(t) and X_w(t) are the positions of particles l and w in iteration t. If we assume that two similar fitness values imply two similar positions in the decision space, i.e.,

w_1 \leq w_2 \Rightarrow |X_l(t) - X_{w_1}(t)| \leq |X_l(t) - X_{w_2}(t)|,   (7)

the following deduction can be derived from (6) and (7):

N_1 \leq N_2 \Rightarrow \Delta L_l(N_1, t) \leq \Delta L_l(N_2, t).   (8)
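This monotonicity can be checked numerically. The following small sketch (our own illustration) builds a sorted one-dimensional swarm for which assumption (7) holds and evaluates (6) for growing N:

    import numpy as np

    m = 100
    # A sorted 1-D swarm: |X_l - X_w| grows with w, so assumption (7) holds.
    X = np.sort(np.random.default_rng(1).random(m))

    def expected_increment(l, N):
        # Eq. (6), with particles 0-indexed, so l ranges over 0 .. m-2.
        winners = X[l + 1 : min(l + N, m - 1) + 1]
        return np.abs(X[l] - winners).mean()

    print([round(float(expected_increment(10, N)), 4) for N in (1, 5, 20, 80)])
    # The printed increments are non-decreasing in N, as stated in (8).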
Equation (8) indicates that the bigger N is, the faster particles can learn from each other, thus leading to faster convergence, and vice versa. In the next section, we will verify this statement empirically. With such a competition strategy, the new variant of ComPSO can be described as follows:

    t = 0; randomly initialize P(0);
    while the termination condition is not satisfied do
        evaluate the fitness of the particles in P(t);
        sort P(t) according to fitness values;
        calculate \bar{X}(t);
        for i = 1 to m-1 do
            l = i;
            w = rand(l+1, min{l+N, m});
            update X_l(t) using (1) and (2);
        end for
        t = t + 1;
    end while

Note: rand(a, b) is a function that randomly returns an integer between a and b.

III. EXPERIMENTAL STUDIES

To examine the influence of the new competition strategy, ComPSO with the new competitor selection strategy (denoted ComPSO-s) is compared with ComPSO using the random competition rule on the first 14 test functions proposed in the CEC05 special session [24]. Among the 14 functions, the first five (f1 to f5) are unimodal and the rest (f6 to f14) are multi-modal. Since ComPSO has already been shown to perform well on high-dimensional (as high as 5000-D) problems in [23], only 30-D and 50-D functions are studied in the experiments in this paper. Similar to [23], the population size is set to 100 and φ is set to 0 in all tests. For the neighborhood size N introduced in ComPSO-s, three cases, N = 1, 50, 99, are tested. All results are averaged over 25 independent runs. For each independent run, the maximum number of fitness evaluations (FEs) is set to 5000 × n, where n is the dimension of the test function. In the comparisons between different results, two-tailed t-tests were conducted at a significance level of α = 0.05.

It can be observed from the statistical results summarized in Table I that, within a limited number of FEs, ComPSO-s with N = 99 tends to converge the fastest on the 30-D functions, especially on the unimodal ones. By contrast, ComPSO-s with N = 1 converges the most slowly. This observation confirms the theoretical analysis in (8). The convergence profiles are provided in Fig. 3.

Similar experimental results are obtained on the 50-D functions, as summarized in Table II. However, it is not difficult to notice that, as the search dimension increases, the performance of ComPSO-s and ComPSO becomes similar, which can also be observed from the convergence profiles in Fig. 4. Intuitively, as the search dimension increases, the sorted competition rule may become more likely to reduce the swarm diversity, which results in premature convergence. On the other hand, since the maximum number of FEs is set to 5000 × n in the tests, where n is the dimension, a higher dimension n allows more FEs and therefore more iterations, which may alleviate the problem of slow convergence of ComPSO.
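For completeness, the procedure listed in Section II can be written as a compact, self-contained Python sketch; the parameter defaults mirror the experimental setup above, the sphere function is used as a stand-in objective, and all names are our own, not the authors' code:

    import numpy as np

    def compso_s(f, n, m=100, N=99, phi=0.0, max_fes=5000 * 30, seed=0):
        # A sketch of ComPSO-s: sorted swarm, neighborhood-controlled competition.
        rng = np.random.default_rng(seed)
        X = rng.uniform(-100.0, 100.0, (m, n))  # positions
        V = np.zeros((m, n))                    # velocities
        fes = 0
        while fes < max_fes:
            fit = np.array([f(x) for x in X])
            fes += m
            order = np.argsort(-fit)            # worst first, best last (minimization)
            X, V = X[order], V[order]
            x_mean = X.mean(axis=0)
            for l in range(m - 1):              # every particle but the best loses once
                w = rng.integers(l + 1, min(l + N, m - 1) + 1)  # winner in the territory
                r1, r2, r3 = rng.random(n), rng.random(n), rng.random(n)
                V[l] = r1 * V[l] + r2 * (X[w] - X[l]) + phi * r3 * (x_mean - X[l])
                X[l] = X[l] + V[l]              # eqs. (1) and (2)
        return X[-1]                            # best particle of the last sorted swarm

    best = compso_s(lambda x: float(np.sum(x * x)), n=30)  # 30-D sphere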
IV. CONCLUSION
In this paper, we have investigated the influence of the competition mechanism on the performance of a recently proposed competitive particle swarm optimizer (ComPSO). A new competition strategy has been proposed by introducing a sorted competition rule and a neighborhood control strategy. ComPSO has been shown to exhibit different convergence speeds for different neighborhood sizes: with a big neighborhood size, convergence is fast, and with a small neighborhood size, convergence is slow.

In future work, we will investigate further competition mechanisms and their impact on the convergence speed of ComPSO. For example, although the neighborhood control in this paper is defined in the fitness space, in some optimization problems (e.g., multi-modal problems) similar fitness values may not imply similar solutions; therefore, neighborhood control in the decision space may also be of interest.

ACKNOWLEDGEMENT
This work was supported by Honda Research Institute Europe, EU FP7 (Grant No. 601062), the Project 111 (No. B08015) and the State Key Laboratory of Synthetical Automation for Process Industries.

Fig. 3. The convergence profiles of ComPSO-s (N = 1, 50, 99) and ComPSO on the 30-D functions f1 to f14, plotting fitness value against FEs.

Fig. 4. The convergence profiles of ComPSO-s (N = 1, 50, 99) and ComPSO on the 50-D functions f1 to f14, plotting fitness value against FEs.

TABLE I
THE STATISTICAL RESULTS (MEAN AND STANDARD DEVIATION) ON 30-D FUNCTIONS. IF ONE RESULT IS STATISTICALLY SIGNIFICANTLY BETTER THAN ALL OTHER RESULTS, IT IS HIGHLIGHTED.

Function | ComPSO-s (N = 99)    | ComPSO-s (N = 50)    | ComPSO-s (N = 1)     | ComPSO
f1       | 0.00E+000(0.00E+000) | 5.05E-031(2.47E-030) | 1.39E-011(1.29E-011) | 1.48E-025(7.31E-026)
f2       | 3.22E-006(7.26E-006) | 1.73E-005(1.50E-005) | 2.71E+003(1.31E+003) | 5.92E+000(3.35E+000)
f3       | 1.62E+006(5.71E+005) | 1.72E+006(7.11E+005) | 6.55E+006(3.52E+006) | 2.06E+006(8.08E+005)
f4       | 6.81E-003(1.53E-002) | 5.12E-002(7.60E-002) | 2.50E+004(5.76E+003) | 5.21E+001(5.96E+001)
f5       | 2.71E+003(7.67E+002) | 2.40E+003(7.40E+002) | 4.08E+003(7.94E+002) | 2.30E+003(3.48E+002)
f6       | 9.81E+001(2.01E+002) | 1.15E+002(1.43E+002) | 1.27E+003(2.69E+003) | 2.23E+002(7.71E+002)
f7       | 1.46E-002(1.07E-002) | 1.45E-002(8.86E-003) | 6.16E-001(2.12E-001) | 1.19E-002(4.79E-003)
f8       | 2.10E+001(6.43E-002) | 2.10E+001(5.68E-002) | 2.10E+001(6.11E-002) | 2.10E+001(4.27E-002)
f9       | 2.54E+001(8.91E+000) | 2.44E+001(8.80E+000) | 1.60E+002(5.02E+001) | 8.95E+000(2.83E+000)
f10      | 1.06E+002(6.71E+001) | 1.60E+002(3.88E+001) | 2.17E+002(9.92E+000) | 1.62E+002(9.69E+000)
f11      | 7.72E+000(2.68E+000) | 1.00E+001(2.10E+000) | 4.00E+001(1.44E+000) | 5.15E+000(2.28E+000)
f12      | 9.80E+003(6.02E+003) | 7.43E+003(6.17E+003) | 1.95E+004(1.72E+004) | 9.70E+003(7.90E+003)
f13      | 3.40E+000(4.69E-001) | 8.07E+000(4.81E+000) | 8.47E+000(3.64E+000) | 5.23E+000(2.02E+000)
f14      | 1.31E+001(2.83E-001) | 1.31E+001(2.03E-001) | 1.35E+001(1.24E-001) | 1.27E+001(3.50E-001)

TABLE II
THE STATISTICAL RESULTS (MEAN AND STANDARD DEVIATION) ON 50-D FUNCTIONS. IF ONE RESULT IS STATISTICALLY SIGNIFICANTLY BETTER THAN THE OTHER, IT IS HIGHLIGHTED.

Function | ComPSO-s (N = 99)    | ComPSO-s (N = 50)    | ComPSO-s (N = 1)     | ComPSO
f1       | 5.05E-031(2.47E-030) | 1.46E-029(2.76E-029) | 4.56E-009(8.70E-009) | 2.66E-028(1.64E-028)
f2       | 1.43E+001(5.14E+001) | 4.51E+001(1.34E+002) | 1.32E+004(7.30E+003) | 1.51E+002(7.13E+001)
f3       | 3.10E+006(1.15E+006) | 2.50E+006(7.41E+005) | 1.12E+007(4.13E+006) | 3.51E+006(1.03E+006)
f4       | 5.28E+002(4.71E+002) | 5.60E+003(1.94E+003) | 7.75E+004(8.11E+003) | 6.26E+002(4.22E+002)
f5       | 4.53E+003(8.28E+002) | 4.76E+003(1.04E+003) | 1.00E+004(1.38E+003) | 4.21E+003(5.81E+002)
f6       | 1.13E+002(1.93E+002) | 1.17E+002(2.05E+002) | 2.93E+003(3.73E+003) | 7.13E+002(1.77E+003)
f7       | 8.55E-003(1.46E-002) | 6.29E-003(1.02E-002) | 8.72E-001(1.64E-001) | 2.52E-003(4.43E-003)
f8       | 2.12E+001(5.40E-002) | 2.12E+001(3.16E-002) | 2.11E+001(3.57E-002) | 2.12E+001(4.40E-002)
f9       | 4.99E+001(1.23E+001) | 5.84E+001(1.53E+001) | 2.25E+002(1.19E+002) | 2.02E+001(3.86E+000)
f10      | 2.81E+002(1.17E+002) | 3.50E+002(1.27E+001) | 4.29E+002(1.65E+001) | 3.32E+002(1.13E+001)
f11      | 1.68E+001(2.69E+000) | 1.87E+001(2.51E+000) | 4.29E+002(1.65E+001) | 1.13E+001(2.79E+000)
f12      | 4.22E+004(2.74E+004) | 6.69E+004(8.06E+004) | 7.53E+004(4.65E+004) | 3.25E+004(2.19E+004)
f13      | 5.66E+000(7.82E-001) | 1.67E+001(1.18E+001) | 1.22E+001(4.34E+000) | 8.03E+000(2.80E+000)
f14      | 2.28E+001(1.87E-001) | 2.31E+001(1.93E-001) | 2.32E+001(1.83E-001) | 2.26E+001(1.89E-001)

REFERENCES

[1] J. Kennedy and R. Eberhart, "Particle swarm optimization," in Proceedings of the IEEE International Conference on Neural Networks, vol. 4. IEEE, 1995, pp. 1942–1948.
[2] W.-N. Chen, J. Zhang, Y. Lin, N. Chen, Z.-H. Zhan, H. Chung, Y. Li, and Y.-H. Shi, "Particle swarm optimization with an aging leader and challengers," IEEE Transactions on Evolutionary Computation, vol. 17, no. 2, pp. 241–258, 2013.
[3] J. J. Liang, A. Qin, P. N. Suganthan, and S. Baskar, "Comprehensive learning particle swarm optimizer for global optimization of multimodal functions," IEEE Transactions on Evolutionary Computation, vol. 10, no. 3, pp. 281–295, 2006.
[4] Y. Yang and J. O. Pedersen, "A comparative study on feature selection in text categorization," in Proceedings of the International Conference on Machine Learning. Morgan Kaufmann Publishers, 1997, pp. 412–420.
[5] Z.-H. Zhan, J. Zhang, Y. Li, and H.-H. Chung, "Adaptive particle swarm optimization," IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, vol. 39, no. 6, pp. 1362–1381, 2009.
[6] Y. Shi and R. Eberhart, "Parameter selection in particle swarm optimization," in Evolutionary Programming VII. Springer, 1998, pp. 591–600.
[7] Y. Shi and R. C. Eberhart, "Fuzzy adaptive particle swarm optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 1. IEEE, 2001, pp. 101–106.
[8] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, "Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients," IEEE Transactions on Evolutionary Computation, vol. 8, no. 3, pp. 240–255, 2004.
[9] I. C. Trelea, "The particle swarm optimization algorithm: convergence analysis and parameter selection," Information Processing Letters, vol. 85, no. 6, pp. 317–325, 2003.
[10] M. Clerc, "The swarm and the queen: towards a deterministic and adaptive particle swarm optimization," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 3. IEEE, 1999, pp. 1951–1957.
[11] M. Hu, T. Wu, and J. D. Weir, "An adaptive particle swarm optimization with multiple adaptive methods," accepted by IEEE Transactions on Evolutionary Computation, 2012.
[12] W.-J. Zhang and X.-F. Xie, "DEPSO: hybrid particle swarm with differential evolution operator," in Proceedings of the IEEE International Conference on Systems, Man and Cybernetics, vol. 4. IEEE, 2003, pp. 3816–3821.
[13] N. Higashi and H. Iba, "Particle swarm optimization with Gaussian mutation," in Proceedings of the IEEE Swarm Intelligence Symposium. IEEE, 2003, pp. 72–79.
[14] J. Robinson, S. Sinton, and Y. Rahmat-Samii, "Particle swarm, genetic algorithm, and their hybrids: optimization of a profiled corrugated horn antenna," in Proceedings of the IEEE Antennas and Propagation Society International Symposium, vol. 1. IEEE, 2002, pp. 314–317.
[15] B. Liu, L. Wang, Y.-H. Jin, F. Tang, and D.-X. Huang, "Improved particle swarm optimization combined with chaos," Chaos, Solitons & Fractals, vol. 25, no. 5, pp. 1261–1271, 2005.
[16] Y. V. Pehlivanoglu, "A new particle swarm optimization method enhanced with a periodic mutation strategy and neural networks," IEEE Transactions on Evolutionary Computation, vol. 17, no. 3, pp. 436–452, 2013.
[17] P. N. Suganthan, "Particle swarm optimiser with neighbourhood operator," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 3. IEEE, 1999, pp. 1958–1962.
[18] J. Kennedy, "Small worlds and mega-minds: effects of neighborhood topology on particle swarm performance," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 3. IEEE, 1999, pp. 1931–1938.
[19] J. Kennedy and R. Mendes, "Population structure and particle swarm performance," in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 2. IEEE, 2002, pp. 1671–1676.
[20] R. Brits, A. P. Engelbrecht, and F. Van den Bergh, "A niching particle swarm optimizer," in Proceedings of the Asia-Pacific Conference on Simulated Evolution and Learning, vol. 2. Singapore: Orchid Country Club, 2002, pp. 692–696.
[21] Z.-H. Zhan, J. Zhang, Y. Li, and Y.-H. Shi, "Orthogonal learning particle swarm optimization," IEEE Transactions on Evolutionary Computation, vol. 15, no. 6, pp. 832–847, 2011.
[22] B. Qu, P. Suganthan, and S. Das, "A distance-based locally informed particle swarm model for multimodal optimization," IEEE Transactions on Evolutionary Computation, vol. 17, no. 3, pp. 387–402, 2013.
[23] R. Cheng and Y. Jin, "The competitive swarm optimizer for large scale optimization," submitted to IEEE Transactions on Cybernetics, 2013.
[24] P. Suganthan, N. Hansen, J. Liang, K. Deb, Y. Chen, A. Auger, and S. Tiwari, "Problem definitions and evaluation criteria for the CEC 2005 special session on real-parameter optimization," Nanyang Technological University, Singapore, Tech. Rep. 2005005, 2005.