Particle Swarm Optimization with Cognitive Avoidance Component

Anupam Biswas, Anoj Kumar and K. K. Mishra
Department of Computer Science & Engineering, Motilal Nehru National Institute of Technology Allahabad, Allahabad, India
[email protected], [email protected], [email protected]

Abstract—This paper introduces a cognitive avoidance scheme for the Particle Swarm Optimization algorithm. Random movements of a particle, influenced by its personal best solution and the global best solution, may lead it to take an unfruitful move, which can delay convergence towards the optimal solution. By analogy with the way a particle's own best known position attracts it (cognitive attraction), a particle may also avoid taking moves around its own worst known position (cognitive avoidance). This concept is added to the standard Particle Swarm Optimization algorithm as a cognitive avoidance component with an additional coefficient. Experimental results on well-known benchmark functions show considerable improvement for the proposed approach.
I. INTRODUCTION

Particle swarm optimization (PSO) is a population-based optimization technique introduced by Kennedy and Eberhart [1]–[3] in 1995. The algorithm simulates animal social behaviors such as fish schooling and bird flocking. Like other population-based evolutionary algorithms such as genetic algorithms (GA) [4]–[7], it is initialized with a population of random solutions, called particles. However, PSO does not use a direct recombination operation to update its population: it works on the behavior of particles in the swarm rather than implementing Darwin's principle of survival of the fittest as GA does. Each particle is associated with a position (a candidate solution in the search domain) and a velocity. The position and velocity of each particle are adjusted depending on the experience of the particle itself and of its neighbors. Each particle tracks the best solution it has attained so far, called the personal best (pbest), and the best solution attained so far by its neighbors, called the global best (gbest). The influence of both pbest and gbest on the movement of particles helps the swarm converge to the global optimal solution.

The position vector and the velocity vector of the ith particle in the d-dimensional search space at time t can be represented as Xi(t) = (xi1, xi2, xi3, ..., xid) and Vi(t) = (vi1, vi2, vi3, ..., vid) respectively. The current pbest and gbest vectors of the ith particle can be represented as Pi(t) = (pi1, pi2, pi3, ..., pid) and Gi(t) = (gi1, gi2, gi3, ..., gid) respectively. The velocity with which a particle moves to its new position, and the new position itself, are evaluated with the following two equations:

Vi(t+1) = Vi(t) + C1 * rand1() * (Pi(t) - Xi(t)) + C2 * rand2() * (Gi(t) - Xi(t))    (1)

Xi(t+1) = Xi(t) + Vi(t+1)    (2)

where C1 and C2 are positive constants called acceleration coefficients, and rand1() and rand2() are two different uniformly distributed random numbers in the range [0, 1]. The first component of Equation 1 is the current velocity of the particle, which acts as inertia, carrying the particle around the search space. The second component is the cognitive acceleration caused by the particle's own best position, which represents the particle's self-awareness. The third component is the social acceleration caused by the best position found by the swarm. This social attraction pulls each particle towards the best solution in the swarm and helps it attain the global optimal solution.

In order to improve the efficiency of PSO, a number of proposals have been put forward since the introduction of the original version in 1995. To balance local and global search during the optimization process, Shi and Eberhart introduced the concept of inertia weight [8], applied to the first component, i.e. the inertia or current velocity of the particle:

Vi(t+1) = ω * Vi(t) + C1 * rand1() * (Pi(t) - Xi(t)) + C2 * rand2() * (Gi(t) - Xi(t))    (3)

Here ω is the inertia weight, which keeps a fixed value for all generations. This version of PSO is generally considered the standard version. Other variants include Discrete PSO [9], where the position and velocity vectors are composed of discrete values, and Binary PSO [10], where particles take binary decisions. Parameter tuning approaches have drawn the attention of researchers in recent years. In these approaches the parameters of the standard PSO algorithm (inertia weight, cognitive acceleration coefficient and social acceleration coefficient) are tuned to improve the optimal solution found in the search space. Initially, the values of these parameters were set as fixed.

978-1-4673-6217-7/13/$31.00 © 2013 IEEE
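As a concrete sketch of Equations 2 and 3, one update step of standard PSO might look as follows in Python with NumPy. The parameter defaults here (w, c1, c2) are illustrative only, not values prescribed by the paper:

```python
import numpy as np

def pso_step(X, V, P, G, w=0.7, c1=1.5, c2=1.5, rng=None):
    """One velocity/position update of standard PSO (Equations 2 and 3).

    X : (n_particles, d) current positions
    V : (n_particles, d) current velocities
    P : (n_particles, d) personal best positions (pbest)
    G : (d,)             global best position (gbest)
    """
    rng = np.random.default_rng() if rng is None else rng
    r1 = rng.random(X.shape)  # rand1(), uniform in [0, 1], drawn per dimension
    r2 = rng.random(X.shape)  # rand2(), uniform in [0, 1]
    # Equation 3: inertia + cognitive acceleration + social acceleration
    V_new = w * V + c1 * r1 * (P - X) + c2 * r2 * (G - X)
    # Equation 2: move the particle with the new velocity
    X_new = X + V_new
    return X_new, V_new
```

Note that if a particle already sits at both its pbest and the gbest with zero velocity, every term vanishes and the particle stays put, which matches the equations.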
However, experimental results show that large values help particles to explore the search space and converge towards the global optimum, while smaller values result in finely tuned solutions around the current search area. Suitable parameter values reasonably balance the global and local exploration of the search space, resulting in good solutions. With this notion, Shi and Eberhart proposed a Time Varying Inertia Weight [16] and a Random Inertia Weight [11] for the standard PSO. Ratnaweera et al. proposed Time Varying Acceleration Coefficients [12] in
addition to the time-varying inertia weight factor for the standard PSO.

In this paper we propose a cognitive avoidance approach for the particles, in addition to the cognitive acceleration and social acceleration of the standard PSO. This new addition helps particles to move in a proper direction by avoiding probable mishaps, or negates the effect of improper movements. Just as cognitive acceleration serves as a particle's awareness of its personal best, cognitive avoidance serves as a particle's awareness of its own worst position, or personal worst (pworst). We term this new extension of the standard PSO Particle Swarm Optimization with Cognitive Avoidance (PSOCA). Henceforth we use PSOCA for the proposed approach and PSO for the standard version throughout the paper.

The rest of the paper is organized as follows. Section II describes the key motivational aspects of the proposal. Section III describes the proposed approach with a suitable example. Section IV provides details of the experimental setup and benchmark functions, and summarizes the simulation results.

II. MOTIVATION

Particles in PSO track pbest and gbest in every iteration. The movements of a particle are decided based on these two known values, experienced by the individual particle and the swarm: particles are accelerated towards, or attracted by, both pbest and gbest. Though pbest and gbest influence a particle's movement, the actual movement is affected by the two random values rand1() and rand2(). This interference is necessary for the algorithm to improve its overall exploration of the solution domain. No doubt pbest and gbest motivate particles to move towards the optimal value, but the exploration defined by the random values may cause particles to move to some unfruitful positions. These unfruitful movements may degrade the algorithm's overall performance.
Though these unfruitful movements may be overcome in further iterations through the influence of pbest and gbest, the effect of the unnecessary movement remains, costing extra iterations to regain the same position. Awareness of these pitfalls may improve the overall performance of PSO by avoiding them. It is impossible to be sure whether the next position will be good or bad; this can only be predicted probabilistically from previous and present conditions. Tracking pbest and gbest is the predictive notion already incorporated in PSO, motivating each particle to move near them on the assumption that a probably good solution lies around pbest or gbest. With this slightly greedy approach PSO works very well, but, as mentioned above, not every solution around pbest or gbest is good, which can have undesirable consequences. An approach to avoid such situations is introduced here to improve the overall performance of PSO. A mechanism very similar to the one by which pbest and gbest attract a particle is introduced in PSOCA, where pworst (the particle's own worst known solution so far) pushes the particle away so that it is never trapped there again. This avoidance mechanism also reduces movement of particles towards other bad solutions around
Fig. 1. Effect of cognitive avoidance in PSO
the pworst. This is a kind of inverse greedy approach in which particles are distracted instead of attracted, as pbest and gbest do in the case of PSO.

III. PROPOSED APPROACH

As the nature of the solution space is unknown to the particles, the only possibility is to predict good next movements, ensuring that particles are moving in the right direction. In PSO, convergence towards the optimal solution is guided by two acceleration components (the cognitive component and the social component). However, it cannot be guaranteed that the next position will be better; it is only an assumption that solutions near pbest or gbest may be better. There is always a possibility of attaining a bad solution due to the interference of the random parameters, as described in the previous section. Any misguidance may slow down the overall convergence of the algorithm towards the optimal solution. Therefore, it is crucial to handle this kind of discrepancy properly, guiding particles in an appropriate direction to enhance efficiency and accuracy. Considering this issue, we propose an approach to avoid such situations. In our proposed approach, each particle maintains the worst value it has attained so far, along with pbest and gbest. With this known worst value, a particle tries to avoid further movement towards it, on the understanding that solutions near the worst one may not be suitable. To define this avoidance scheme we add a new component (cognitive avoidance) to the existing velocity equation of PSO. The current pworst vector of the ith particle can be represented as Wi(t) = (wi1, wi2, wi3, ..., wid), where d is the dimension of the particle. The velocity equation is redefined as follows:

Vi(t+1) = ω * Vi(t) + C1 * rand1() * (Pi(t) - Xi(t)) + C2 * rand2() * (Gi(t) - Xi(t)) - C3 * rand3() * (Wi(t) - Xi(t))    (4)

The fourth component in Equation 4 represents cognitive avoidance and is taken as negative, since it represents distraction, which is opposite to the cognitive acceleration and the social
2013 International Conference on Advances in Computing, Communications and Informatics (ICACCI)
Fig. 2. Convergence of PSOCA compared with standard PSO, comparing the mean fitness value through each generation on Ackley's, Rastrigin's, Rosenbrock's and Schwefel's functions
TABLE I
BENCHMARK FUNCTIONS

Function name   Definition
Rastrigin       f(x) = Σ_{i=1}^{n} [ x_i^2 - 10 cos(2π x_i) + 10 ]
Ackley          f(x) = (20 + e) - 20 exp( -0.2 √( (1/n) Σ_{i=1}^{n} x_i^2 ) ) - exp( (1/n) Σ_{i=1}^{n} cos(2π x_i) )
Rosenbrock      f(x) = Σ_{i=1}^{n-1} [ 100 (x_{i+1} - x_i^2)^2 + (x_i - 1)^2 ]
Schwefel        f(x) = -Σ_{i=1}^{n} x_i sin( √|x_i| )
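For reference, the four benchmarks of Table I can be written directly in Python. These are straightforward vectorized sketches; for Schwefel's function the sign convention chosen is the one whose optimum matches the value f(x*) = -n × 418.9829 quoted in Table II:

```python
import numpy as np

def rastrigin(x):
    x = np.asarray(x)
    return float(np.sum(x**2 - 10 * np.cos(2 * np.pi * x) + 10))

def ackley(x):
    x = np.asarray(x)
    return float((20 + np.e)
                 - 20 * np.exp(-0.2 * np.sqrt(np.mean(x**2)))
                 - np.exp(np.mean(np.cos(2 * np.pi * x))))

def rosenbrock(x):
    x = np.asarray(x)
    return float(np.sum(100 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1)**2))

def schwefel(x):
    # Minimum ≈ -n * 418.9829, attained near x_i = 420.9687
    x = np.asarray(x)
    return float(-np.sum(x * np.sin(np.sqrt(np.abs(x)))))
```

All four are minimization problems: Rastrigin, Ackley and Rosenbrock reach 0 at their optima, while Schwefel's optimum lies near the boundary of its [-500, 500] domain rather than at the origin.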
TABLE II
INITIAL RANGE AND OPTIMA

Function name   Range              Optimal solution
Griewank        [-600, 600]        f(x*) = 0
Rosenbrock      [-2.048, 2.048]    f(x*) = 0
Sphere          [-5.12, 5.12]      f(x*) = 0
Schwefel        [-500, 500]        f(x*) = -n × 418.9829
acceleration. To control the effect of this avoidance on a particle, a cognitive avoidance coefficient C3 is used, along with a random number rand3() in the range [0, 1]. The position equation remains unaltered, as in Equation 2. The effect of the newly added component is shown with an example in Figure 1. The black circle represents the current position of a particle. The black diamond represents the next position of the particle guided by pbest and gbest only. The white diamond represents the next position as influenced by the cognitive avoidance component. The proposed approach avoids movement towards the worst solution and pushes the final solution towards either pbest or gbest; in this case it moves towards pbest.

A population of particles is initialized with randomly generated positions and velocities. The fitness of each particle is evaluated with a user-defined objective function. At each generation the velocity of a particle is updated with Equation 4 and its next position is evaluated with Equation 2. At each generation each particle notes its best and worst positions and updates its current pbest, pworst and gbest. Generally, the velocities of particles are controlled with predefined limits; if any particle gains a larger velocity than the predefined limit, the modulus of the velocity is used when updating the position.
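The iteration just described can be sketched as follows. This is a minimal illustration, not the authors' implementation: the acceleration coefficients default to the values used later in the experiments (C1 = 0.6, C2 = 1.5, C3 = 0.4), while the inertia weight w and the fitness callback are assumptions for the sketch:

```python
import numpy as np

def psoca_step(X, V, P, W, G, fitness, w=0.7, c1=0.6, c2=1.5, c3=0.4, rng=None):
    """One PSOCA iteration: the Equation 4 velocity update plus
    pbest/pworst/gbest bookkeeping (minimization assumed).

    X, V : (n, d) positions and velocities
    P, W : (n, d) personal best (pbest) and personal worst (pworst)
    G    : (d,)   global best (gbest)
    """
    rng = np.random.default_rng() if rng is None else rng
    r1, r2, r3 = (rng.random(X.shape) for _ in range(3))
    # Equation 4: the avoidance term is SUBTRACTED -- it distracts the
    # particle from its own worst known position instead of attracting it.
    V = (w * V
         + c1 * r1 * (P - X)
         + c2 * r2 * (G - X)
         - c3 * r3 * (W - X))
    X = X + V                                 # Equation 2, unchanged

    f = np.apply_along_axis(fitness, 1, X)    # fitness of new positions
    fp = np.apply_along_axis(fitness, 1, P)
    fw = np.apply_along_axis(fitness, 1, W)
    P = np.where((f < fp)[:, None], X, P)     # update pbest where improved
    W = np.where((f > fw)[:, None], X, W)     # update pworst where worsened
    if f.min() < fitness(G):
        G = X[f.argmin()].copy()              # update gbest
    return X, V, P, W, G
```

Because gbest is only replaced on improvement, the best fitness found is non-increasing over generations, which is what the convergence curves in Figure 2 track.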
TABLE III
COMPARISON OF PSO AND PSOCA

Objective function      Dim  Measure         PSO             PSOCA
Rastrigin's function    10   Mean            6.268238        5.890155
                             Median          4.974795        4.974795
                             Minimum         0.994959        1.989918
                             Maximum         13.929412       11.939504
                             Std. Deviation  3.205177        2.725471
                        20   Mean            35.679164       30.087520
                             Median          33.828545       29.351257
                             Minimum         14.924376       16.914289
                             Maximum         63.677127       53.727612
                             Std. Deviation  11.133938       7.594830
                        30   Mean            83.605148       80.671492
                             Median          81.586379       76.611665
                             Minimum         42.783599       32.834629
                             Maximum         152.227761      151.233051
                             Std. Deviation  19.746506       26.351290
Ackley's function       10   Mean            0.000000        0.000000
                             Median          0.000000        0.000000
                             Minimum         0.000000        0.000000
                             Maximum         0.000000        0.000000
                             Std. Deviation  0.000000        0.000000
                        20   Mean            0.701055        0.511282
                             Median          0.000000        0.000000
                             Minimum         0.000000        0.000000
                             Maximum         3.125399        2.452552
                             Std. Deviation  0.821616        0.790906
                        30   Mean            2.379818        2.202456
                             Median          2.268669        2.011870
                             Minimum         0.931307        0.003490
                             Maximum         5.089657        4.919257
                             Std. Deviation  0.878856        0.045834
Rosenbrock's function   10   Mean            0.495389        0.587538
                             Median          0.015612        0.020135
                             Minimum         0.001164        0.000562
                             Maximum         4.009907        4.053867
                             Std. Deviation  1.304553        1.397056
                        20   Mean            12.151854       9.016740
                             Median          10.225332       9.617160
                             Minimum         0.437501        0.146889
                             Maximum         66.253883       15.589512
                             Std. Deviation  9.862681        3.645567
                        30   Mean            35.199659       33.586877
                             Median          24.475635       24.057692
                             Minimum         8.137773        10.343212
                             Maximum         85.072248       105.536037
                             Std. Deviation  23.205804       23.058436
Schwefel's function     10   Mean            -3726.731438    -3733.443933
                             Median          -3716.075534    -3725.944034
                             Minimum         -4071.390538    -4071.390538
                             Maximum         -3242.300441    -3321.267486
                             Std. Deviation  186.940914      175.685358
                        20   Mean            -6312.808570    -6388.208710
                             Median          -6306.932234    -6405.636278
                             Minimum         -7234.748399    -7372.923733
                             Maximum         -5556.836625    -5596.169278
                             Std. Deviation  393.573309      386.027770
                        30   Mean            -8259.610620    -8280.060558
                             Median          -8236.354402    -8334.988735
                             Minimum         -10023.035026   -9371.577700
                             Maximum         -7042.053798    -6923.615322
                             Std. Deviation  584.277113      507.231340
In particular, we have not considered any predefined velocity limits for the particles: particles are allowed to move with any finite velocity, and if a particle moves outside the search space its movement is controlled with the defined limits of each dimension.

IV. SIMULATION RESULTS

Four well-known benchmark functions are used for performance evaluation, as shown in Table I, with their initial ranges shown in Table II. These benchmarks are widely used in evaluating the performance of PSO methods [13]–[17]. Among them, Rosenbrock's function is unimodal, whereas Ackley's, Rastrigin's and Schwefel's functions are multimodal. All functions have their global optimal solution at or near the origin except Schwefel's function, whose global optimal solution lies at the edge of the solution domain. Since the cognitive avoidance approach is added to PSO to improve its overall performance, the proposed PSOCA is compared with the standard PSO only. We consider five performance metrics to compare the quality of the optimal solutions of PSOCA: mean, median, minimum value, maximum value and standard deviation. The asymmetric initialization scheme [19] is not considered, as the benchmark functions cover the entire solution space; symmetric initialization is used, where the initial population is uniformly distributed over the entire solution space. All benchmark functions are tested with dimensions 10, 20 and 30. For each function and each considered dimension, 50 trials are carried out to obtain the five performance metrics. The effect of population size on the performance of PSO is of low significance, as shown by Eberhart and Shi [15]. The population size of PSO is generally set in the range 20 to 60; however, as shown in [18], increasing the population yields a slight improvement in the optimal solution. Hence we have used a population size of 40 for all the experiments. We have also set a maximum generation limit of 1000 for each run, to make comparisons more precise.
The acceleration coefficients C1, C2 and the newly added C3 are kept at the constant values 0.6, 1.5 and 0.4 respectively. All the performance metrics of the optimal solutions over the 50 trials are presented in Table III; bold figures in Table III mark the comparatively larger values. For Rastrigin's function, in all dimensions PSOCA performs better than PSO in terms of both mean and median. Although PSO attains the least optimal value for dimensions 10 and 20, its larger mean and median indicate that PSOCA outperforms PSO. In dimension 30 all the performance metrics are better for PSOCA, although a higher deviation is observed. For Ackley's function, dimensions 10 and 20 show almost similar results, but in dimension 30 PSOCA performs better than PSO. For Rosenbrock's function in dimension 10 PSOCA's performance is poor, although it attains a smaller optimal value than PSO; in dimension 20 PSOCA performs very well, and dimension 30 shows a small improvement over PSO. For Schwefel's function, in all dimensions PSOCA's performance is better than PSO's. The convergence of PSOCA and PSO, presented in Figure 2, shows that the convergence rate of PSOCA is comparatively better on the benchmark functions.
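The five metrics reported over the 50 trials are straightforward to reproduce. The sketch below assumes `trial_results` holds the best fitness value returned by each independent run; the paper does not state whether the standard deviation is the population or sample form, so the population form (ddof = 0) is assumed here:

```python
import numpy as np

def summarize(trial_results):
    """Mean, median, minimum, maximum and standard deviation of the best
    fitness values collected over independent trials (as in Table III)."""
    r = np.asarray(trial_results, dtype=float)
    return {
        "Mean": float(r.mean()),
        "Median": float(np.median(r)),
        "Minimum": float(r.min()),
        "Maximum": float(r.max()),
        "Std. Deviation": float(r.std(ddof=0)),  # population std assumed
    }
```

For example, `summarize([1, 2, 3, 4])` gives a mean and median of 2.5 with a standard deviation of about 1.118.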
V. CONCLUSION

In this paper we have proposed PSOCA, a mechanism to improve the performance of PSO by avoiding particles' unfruitful movements. An additional component, called cognitive avoidance, is introduced into the velocity equation, and a cognitive avoidance coefficient controls its effect. The new addition to a particle's movement acts as an extra boost in its convergence towards the optimal solution: it pushes particles towards either pbest or gbest in each generation, resulting in fast convergence. The performance of the proposed PSOCA, tested on four well-known benchmark functions at various dimensions, shows significant improvement. A study of the convergence rate on the benchmark functions shows that PSOCA converges faster than PSO and also reaches better solutions.

REFERENCES

[1] J. Kennedy and R. Eberhart, Particle swarm optimization, in Proceedings of the IEEE International Conference on Neural Networks, 1995, pp. 1942-1948.
[2] R. C. Eberhart and J. Kennedy, A new optimizer using particle swarm theory, in Proceedings of the Sixth International Symposium on Micro Machine and Human Science (Nagoya, Japan), IEEE Service Center, Piscataway, NJ, 1995, pp. 39-43.
[3] R. C. Eberhart, R. W. Dobbins, and P. Simpson, Computational Intelligence PC Tools, Boston: Academic Press, 1996.
[4] J. Holland, Adaptation in Natural and Artificial Systems, University of Michigan Press, Ann Arbor, 1975.
[5] D. Goldberg, A note on Boltzmann tournament selection for genetic algorithms and population-oriented simulated annealing, TCGA 90003, Engineering Mechanics, University of Alabama, 1990.
[6] M. Srinivas and L. Patnaik, Adaptive probabilities of crossover and mutation in genetic algorithms, IEEE Transactions on Systems, Man and Cybernetics, vol. 24, no. 4, pp. 656-667, 1994.
[7] D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, Reading, MA, 1989.
[8] Y. Shi and R. C. Eberhart, A modified particle swarm optimizer, in Proceedings of the IEEE International Conference on Evolutionary Computation, 1998, pp. 69-73.
[9] E. Laskari, K. Parsopoulos, and M. Vrahatis, Particle swarm optimization for integer programming, in Proceedings of the IEEE Congress on Evolutionary Computation, May 2002, vol. 2, pp. 1582-1587.
[10] J. Kennedy and R. C. Eberhart, A discrete binary version of the particle swarm algorithm, in Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, 1997.
[11] R. C. Eberhart and Y. Shi, Tracking and optimizing dynamic systems with particle swarms, in Proceedings of the 2001 IEEE Congress on Evolutionary Computation, pp. 94-100.
[12] A. Ratnaweera, S. K. Halgamuge, and H. C. Watson, Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients, IEEE Transactions on Evolutionary Computation, vol. 8, pp. 240-255, 2004.
[13] J. Kennedy, Stereotyping: Improving particle swarm performance with cluster analysis, in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 2, 2000, pp. 303-308.
[14] P. J. Angeline, Using selection to improve particle swarm optimization, in Proceedings of the IEEE International Conference on Computational Intelligence, 1998, pp. 84-89.
[15] R. C. Eberhart and Y. Shi, Comparing inertia weights and constriction factors in particle swarm optimization, in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 1, 2000, pp. 84-88.
[16] Y. Shi and R. C. Eberhart, Empirical study of particle swarm optimization, in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 3, 1999, pp. 101-106.
[17] P. N. Suganthan, Particle swarm optimizer with neighborhood operator, in Proceedings of the IEEE Congress on Evolutionary Computation, vol. 3, 1999, pp. 1958-1962.
[18] F. van den Bergh and A. P. Engelbrecht, Effect of swarm size on cooperative particle swarm optimizers, in Proceedings of the Genetic and Evolutionary Computation Conference (GECCO-2001), San Francisco, CA, July 2001, pp. 892-899.
[19] P. J. Angeline, Evolutionary optimization versus particle swarm optimization: Philosophy and performance differences, in Lecture Notes in Computer Science, vol. 1447, Proceedings of the 7th International Conference on Evolutionary Programming VII, Mar. 1998, pp. 600-610.