Testing of a Modified Particle Swarm Optimization Algorithm Using Different Benchmark Functions

Shawkat Hamdan and Amr El Zawawi
Beirut Arab University, Electrical Engineering Department, Debbiah, Lebanon
[email protected] and [email protected]
Abstract— This paper presents a new optimization algorithm called the modified particle swarm optimization (MPSO) technique. To demonstrate the advantages of the MPSO technique, it is tested on different benchmark functions, and the results are compared with those obtained using the classical particle swarm optimization (PSO) algorithm.

Keywords: Particle swarm optimization (PSO), modified particle swarm optimization (MPSO), benchmark functions.
I. INTRODUCTION
Recently, as an alternative to conventional mathematical approaches, heuristic optimization techniques such as genetic algorithms, Tabu search, simulated annealing, and the recently introduced particle swarm optimization (PSO) have been considered effective, realistic, and powerful schemes for obtaining global or quasi-global optima in power system optimization problems [1]. These algorithms do not require the objective functions and constraints to be differentiable or continuous [2]. Unlike other heuristic techniques such as the genetic algorithm (GA), PSO has a flexible and well-balanced mechanism for adapting its global and local exploration and exploitation abilities within a short calculation time. Although PSO seems sensitive to the tuning of its parameters, much research is still in progress on regulating them [3]-[5].

PSO is a population-based stochastic optimization technique developed by Eberhart and Kennedy in 1995, inspired by the social behavior of bird flocking and fish schooling [6]. PSO mimics the behavior of individuals in a swarm to maximize the survival of the species. In PSO, each individual makes decisions using its own experience together with the experiences of other individuals [7]. The algorithm, which is based on a metaphor of social interaction, searches a space by adjusting the trajectories of moving points in a multidimensional space. The individual particles are drawn stochastically toward the positions of their own previous best performance and the best previous performance of their neighbors [8]. The main advantages of the PSO algorithm are its simple concept, easy implementation, robustness to control parameters, and computational efficiency when compared with other mathematical algorithms and heuristic optimization techniques.

Since the introduction of the first PSO algorithm, many modifications of the classical (original) algorithm have been proposed [9]. In this paper, a modified PSO (MPSO) approach is proposed that focuses on the use of the individual best position pbest and the global best position gbest, found in each iteration, in order to reach the final global best value faster than the classical PSO technique. To show its advantage, the MPSO technique is tested on some benchmark functions and on other applications, such as sphere equations, polynomial equations, transcendental equations, and economic dispatch equations. The MPSO performance is compared with that of the classical PSO.

II. OVERVIEW OF THE PARTICLE SWARM OPTIMIZATION ALGORITHM

In 1995, Kennedy and Eberhart introduced the PSO method, driven by the social behavior of organisms such as fish (schooling) and birds (flocking) [10]. Consider the following scenario: a group of birds is randomly searching for food in an area that contains only one piece of food. None of the birds knows where the food is, but after each iteration each bird knows how far away it is. An effective strategy for finding the food is to follow the bird nearest to it. PSO adopts this scenario to solve optimization problems. In PSO, each single solution is a "bird" in the search space, called a "particle". All particles have fitness values, evaluated by the fitness function to be optimized, and velocities that direct their flight. The particles fly through the problem space by following the current optimum particles.

PSO is initialized with a group of random particles (solutions) and then searches for optima by updating generations. After every iteration, each particle is updated by following two "best" values. The first is the best solution (fitness) the particle has achieved so far; this value is called pbest, and its fitness is also stored. The other "best" value tracked by the particle swarm optimizer is the best value obtained so far by any particle in the population; this global best is called gbest. When a particle takes only part of the population as its topological neighbors, the best value is a local best, called lbest.
After finding the two best values, each particle updates its velocity and position [11]. Let x and v denote a particle position and its corresponding flight velocity in a search space, respectively. The ith particle is represented as xi = (xi1, xi2, ..., xid) in the d-dimensional search space. The best previous position of the ith particle is recorded and represented as pbesti = (pbesti1, pbesti2, ..., pbestid). The best particle among all the particles in the group is represented by gbest = (gbest1, gbest2, ..., gbestd). The flight velocity of particle i is represented as vi = (vi1, vi2, ..., vid). The modified velocity and position of each particle are calculated from the current velocity and the distances to pbesti and gbest as follows:

Vid(k+1) = w Vid(k) + c1 r1 (pbestid - Xid(k)) + c2 r2 (gbestid - Xid(k))   (1)

Xid(k+1) = Xid(k) + Vid(k+1),   i = 1, 2, ..., n,   d = 1, 2, ..., m   (2)

where w, c1, c2 >= 0; n is the number of particles in a group; m is the number of members in a particle; w is the inertia weight factor; c1 and c2 are acceleration constants, also called learning factors (usually c1 = c2 = 2); r1 and r2 are two random numbers within the range (0, 1); and Vid(k) and Xid(k) are the velocity and the current position of particle i in the dth dimension of the search space at iteration k, respectively [10]. The pseudocode of the procedure is as follows:

    For each particle
        Initialize particle
    End
    Do
        For each particle
            Calculate fitness value
            If the fitness value is better than the best fitness value (pbest) in history
                Set current value as the new pbest
            End
        End
        Choose the particle with the best fitness value of all the particles as the gbest
        For each particle
            Calculate particle velocity according to equation (1)
            Update particle position according to equation (2)
        End
    While maximum iterations or minimum error criterion is not attained

Particle velocities on each dimension are clamped to a maximum velocity Vmax, a parameter specified by the user. If the sum of accelerations would cause the velocity on a dimension to exceed Vmax, the velocity on that dimension is limited to Vmax [11].
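To make the procedure concrete, the following Python sketch implements the classical PSO loop described above, with per-dimension velocity clamping. It is an illustrative re-implementation, not the authors' MATLAB code; the bounds, swarm size, and parameter values are assumptions chosen for demonstration.

    import random

    def pso(fitness, dim, n_particles=30, max_iter=150,
            w=0.7, c1=2.0, c2=2.0, v_max=1.0, lo=-5.0, hi=5.0):
        # Random initial positions, zero initial velocities
        x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
        v = [[0.0] * dim for _ in range(n_particles)]
        pbest = [p[:] for p in x]                      # each particle's best position so far
        pbest_val = [fitness(p) for p in x]
        g = min(range(n_particles), key=lambda i: pbest_val[i])
        gbest, gbest_val = pbest[g][:], pbest_val[g]   # best position found by the swarm

        for _ in range(max_iter):
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    # Equation (1): inertia + cognitive + social terms
                    v[i][d] = (w * v[i][d]
                               + c1 * r1 * (pbest[i][d] - x[i][d])
                               + c2 * r2 * (gbest[d] - x[i][d]))
                    # Clamp the velocity component to [-v_max, v_max]
                    v[i][d] = max(-v_max, min(v_max, v[i][d]))
                    # Equation (2): position update
                    x[i][d] = x[i][d] + v[i][d]
                f = fitness(x[i])
                if f < pbest_val[i]:                   # new personal best
                    pbest[i], pbest_val[i] = x[i][:], f
                    if f < gbest_val:                  # new global best
                        gbest, gbest_val = x[i][:], f
        return gbest, gbest_val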
III. MODIFIED PARTICLE SWARM OPTIMIZATION

In this paper, we improve the classical PSO method into a new approach. The key idea of the modified particle swarm optimization (MPSO) is to use the best previous position of the ith particle, recorded and represented as pbesti = (pbesti1, pbesti2, ..., pbestid), in the social term of the modified velocity equation (3), while the cognitive term remains unchanged, and to use the best particle among all the particles in the group, represented by gbest = (gbest1, gbest2, ..., gbestd), in the modified position equation (4), as follows:

Vid(k+1) = w Vid(k) + c1 r1 (pbestid - Xid(k)) + c2 r2 (gbestid - pbestid)   (3)

Xid(k+1) = gbestid + Vid(k+1),   i = 1, 2, ..., n,   d = 1, 2, ..., m   (4)
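A minimal sketch of the corresponding MPSO step, under the same assumptions as the PSO sketch above: relative to equations (1) and (2), only the social term of the velocity update and the position update change.

    import random

    def mpso_update(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0, v_max=1.0):
        # Apply equations (3) and (4) to one particle; x, v, pbest, gbest
        # are equal-length lists, modified in place.
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            # Equation (3): the social term now measures (gbest - pbest)
            v[d] = (w * v[d]
                    + c1 * r1 * (pbest[d] - x[d])
                    + c2 * r2 * (gbest[d] - pbest[d]))
            v[d] = max(-v_max, min(v_max, v[d]))
            # Equation (4): the new position is taken relative to gbest
            x[d] = gbest[d] + v[d]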
IV. NUMERICAL EXAMPLES

The MPSO algorithm is numerically tested on examples of benchmark functions and compared with the classical PSO search method. Results of these tests are given below. The algorithms were implemented in MATLAB R2007a, and the software was executed on a Pentium IV system.

A. Benchmark Functions

We use the two nonlinear benchmark functions given in Table I. The selected functions are very often encountered in optimization algorithm benchmarks [12]. These functions are good test cases because of their nonlinearity and oscillation around the optimal solutions, so there is a high probability for each optimization technique to be trapped in local optima [13]. Functions F1 and F2 are origin-centered functions [13]. De Jong's sphere function F1 is the most basic problem: it contains no local optima and provides a smooth gradient toward a broad global optimum. The Griewank function F2 introduces interdependency between the variables, which is why this function disrupts optimization techniques that work on one variable at a time [12].

B. Simulation Results

The benchmark functions tested in this paper provide a balance of unimodal and multimodal, as well as easy and difficult, functions to solve. The classical PSO and the MPSO are tested on each function.

TABLE I. STANDARD MULTI-MODAL OBJECTIVE FUNCTIONS

No. | Function                | Equation f                                                               | f*
F1  | Sphere (De Jong's F1)   | f1(x) = sum_{i=1}^{d} xi^2                                               | f* = 0
F2  | Griewank                | f2(x) = 1 + (1/4000) sum_{i=1}^{d} xi^2 - prod_{i=1}^{d} cos(xi/sqrt(i)) | f* = 0

f* are the minima.
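For reference, the two functions of Table I can be written in plain Python in their standard forms; this sketch is illustrative and independent of the authors' MATLAB implementation.

    import math

    def sphere(x):
        # De Jong's F1: f1(x) = sum of xi^2; global minimum f* = 0 at the origin
        return sum(xi * xi for xi in x)

    def griewank(x):
        # Griewank F2: f2(x) = 1 + sum(xi^2)/4000 - prod(cos(xi/sqrt(i)));
        # the product term couples the variables; f* = 0 at the origin
        s = sum(xi * xi for xi in x) / 4000.0
        p = math.prod(math.cos(xi / math.sqrt(i)) for i, xi in enumerate(x, start=1))
        return 1.0 + s - p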
Testing the MPSO on the different benchmark functions shows that the MPSO algorithm requires less elapsed time and a smaller number of iterations than the classical PSO in searching for the same solutions (Tables II and III).
TABLE II. COMPARISON BETWEEN THE PERFORMANCE OF PSO AND MPSO ALGORITHMS IN SOLVING DE JONG'S F1

Function     | Type of Algorithm | Elapsed time (s) | No. of Iterations | Solutions (x, y, z)      | Function Value
De Jong's F1 | PSO               | 46.759394        | 107/150           | (0.0001, 0.0002, 0.0002) | 0
             | MPSO              | 7.982789         | 18/150            | (0.0000, 0.0000, 0.0000) | 0
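A hypothetical driver of the kind used to produce Table II, assuming the pso and sphere sketches given earlier; actual elapsed times and iteration counts depend on hardware and random seeds, so the figures in the table should not be expected to reproduce exactly.

    import time

    t0 = time.time()
    best_x, best_f = pso(sphere, dim=3, max_iter=150)   # classical PSO on De Jong's F1
    print("PSO: f = %.6f at %s in %.3f s" % (best_f, best_x, time.time() - t0))
    # An MPSO run is analogous: replace the two update rules inside pso()
    # with mpso_update() from Section III and time it the same way.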
[Figure 1 here: 3-D scatter plot of the x, y, and z solutions]
Figure 1. De Jong's F1 dispersed solutions using classical PSO.

[Figure 2 here: 3-D scatter plot of the x, y, and z solutions]
Figure 2. De Jong's F1 collected solutions using MPSO.

Figures 1 and 2 show that the classical PSO yields dispersed solutions while the MPSO gives concentrated solutions.
TABLE III. COMPARISON BETWEEN THE PERFORMANCE OF PSO AND MPSO ALGORITHMS IN SOLVING THE GRIEWANK FUNCTION

Function | Type of Algorithm | Elapsed time (s) | No. of Iterations | Solutions (x1, x2) | Function Value
Griewank | PSO               | 25.052029        | 56/150            | (0.0000, 0.0001)   | 0
         | MPSO              | 3.655455         | 16/150            | (0.0000, 0.0000)   | 0
[Figure 3 here: scatter plot of the x1 and x2 solutions]
Figure 3. Griewank dispersed solutions using classical PSO.

[Figure 4 here: scatter plot of the x1 and x2 solutions]
Figure 4. Griewank collected solutions using MPSO.

Figures 3 and 4 show the superiority of the MPSO algorithm over the classical PSO algorithm in searching for the solutions of the Griewank function.
V. CONCLUSION

The modification represented by the MPSO proved its efficiency in finding accurate solutions for the tested De Jong and Griewank functions, using a lower number of iterations and a notably lower computation time than the classical PSO.
REFERENCES

[1] K. Y. Lee and M. A. El-Sharkawi, "Modern Heuristic Optimization Techniques with Applications to Power Systems," IEEE Power Engineering Society (02TP160), 2002.
[2] J. G. Vlachogiannis and K. Y. Lee, "Reactive Power Control Based on Particle Swarm Multi-Objective Optimization," Proceedings of the 13th International Conference on Intelligent Systems Application to Power Systems, Nov. 2005.
[3] K. E. Parsopoulos and M. N. Vrahatis, "Recent Approaches to Global Optimization Problems Through Particle Swarm Optimization," Natural Computing, vol. 1, pp. 235-306, 2002.
[4] R. Storn, "System Design by Constraint Adaptation and Differential Evolution," IEEE Transactions on Evolutionary Computation, vol. 3, no. 1, pp. 22-34, 1999.
[5] R. Storn and K. Price, "Differential Evolution - A Simple and Efficient Heuristic for Global Optimization over Continuous Spaces," Journal of Global Optimization, vol. 11, pp. 341-359, 1997.
[6] www.Swarmintelligence.org/Tutorials.Php
[7] H. Yoshida, K. Kawata, Y. Fukuyama, S. Takayama, and Y. Nakanishi, "A Particle Swarm Optimization for Reactive Power and Voltage Control Considering Voltage Security Assessment," IEEE Trans. Power Syst., vol. 15, pp. 1232-1239, Nov. 2000.
[8] M. Clerc and J. Kennedy, "The Particle Swarm - Explosion, Stability and Convergence in a Multidimensional Complex Space," IEEE Trans. Evol. Comput., vol. 6, no. 1, pp. 58-73, Feb. 2002.
[9] M. A. Montes de Oca, T. Stützle, M. Birattari, and M. Dorigo, "A Comparison of Particle Swarm Optimization Algorithms Based on Run-Length Distributions," Proceedings of the Fifth International Workshop on Ant Colony Optimization and Swarm Intelligence, Brussels, Belgium, Sept. 2006, pp. 1-12.
[10] B. Zhao and Y. Cao, "Multiple Objective Particle Swarm Optimization Technique for Economic Load Dispatch," Journal of Zhejiang University SCIENCE, vol. 6A, no. 5, pp. 420-427, 2005.
[11] www.Swarmintelligence.org/tutorials.Php
[12] Y. Jin, J. Knowles, H. Lu, Y. Liang, and D. B. Kell, "The Landscape Adaptive Particle Swarm Optimizer," Jan. 2007.
[13] http://Pasargard.Cse.Shirazu.ac.ir/~mhaji/ec2/EcOpt/Project1.Html