2009 Ninth International Conference on Hybrid Intelligent Systems
Particle Swarm Optimization with Group Decision Making

Liang Wang, Zhihua Cui, Jianchao Zeng
Complex System and Computational Intelligence Laboratory, Taiyuan University of Science and Technology, Taiyuan, Shanxi, P.R. China, 030024
e-mail: [email protected], [email protected], [email protected]
Abstract—Particle swarm optimization (PSO) is a stochastic optimization algorithm that imitates animal behavior; it performs poorly when optimizing multimodal and high-dimensional functions. Because each particle makes its decision only from its own experience and one other particle's, the swarm is easily trapped in premature convergence. Group decision making, in which all individuals take part in the decision, draws on diverse experiences and viewpoints to obtain a better plan and avoid conformity. This paper proposes a new form of particle swarm optimization based on group decision making (GDPSO): it treats each particle as an individual decision maker and uses the basic information of the particles, such as the individual historical positions and fitness values, to decide a new position, which then replaces the global best position p_g. The search space is thereby expanded and population diversity is increased; the improved algorithm can raise the convergence speed and the capacity for global search, and premature convergence is avoided to some degree.

Keywords—particle swarm optimization; premature convergence; group decision making

I. INTRODUCTION

Particle swarm optimization (PSO) is a stochastic optimization algorithm [1][2] inspired by studies of various animal groups. It is an important swarm intelligence algorithm and a simple, easy-to-implement iterative optimization tool. Current research on PSO mainly concerns algorithmic improvements and applications, and PSO has been applied widely, for example to pattern recognition [3], neural network training [4], evolutionary optimization of transition probability matrices for credit decision making [5], a PSO-aided neuro-fuzzy classifier employing linguistic hedge concepts [6], a PSO-based weighting method for the linear combination of neural networks [7], and fuzzy adaptive turbulent particle swarm optimization [8].

PSO is a swarm-based simulation and evolutionary algorithm. Each particle is a potential solution to a problem in a D-dimensional space and flies through the search space at a varying velocity, adjusted dynamically according to individual experience and swarm experience. The i-th particle is represented as x_i = (x_{i1}, x_{i2}, \ldots, x_{in}). Each particle also maintains a memory of its previous best position p_i = (p_{i1}, p_{i2}, \ldots, p_{in}) and a velocity along each dimension v_i = (v_{i1}, v_{i2}, \ldots, v_{in}). For a minimization problem, the smaller the objective function value, the better the fitness and the position. The p_i vector of the particle with the best fitness in the local neighborhood is designated p_g, the best particle location in the entire swarm:

v_{ij}(t+1) = w\,v_{ij}(t) + c_1 r_1 (p_{ij}(t) - x_{ij}(t)) + c_2 r_2 (p_{gj}(t) - x_{ij}(t))    (1)
x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1)    (2)

where j denotes the j-th dimension, i the i-th particle, and t the generation. c_1 and c_2 are user-definable confidence parameters that determine the relative influence of the cognitive and social components; typically both are set to 2.0. r_1 and r_2 are random numbers in [0, 1], generated separately for the p_i and p_g terms. w is the "inertia" weight, used to control the impact of a particle's previous velocity on the calculation of the current velocity vector; v_max is often used to limit the velocities of the particles and improve the resolution of the search space. The velocity formula has three components: the inertia term, which keeps the particle moving to its next position; the cognitive component, which represents individual learning; and the social component, which represents learning from other particles and guides the particle toward the global best.

To relieve premature convergence, some researchers have proposed controlling population diversity to enhance performance, for example by resolving the conflict between diversity and convergence [9], analyzing the distance between a particle and the historical global best [10], or detecting and responding to environment changes by randomly re-initializing the population every several generations [11]; all of these increase population diversity so that the swarm does not plunge into premature convergence. This paper proposes a significant modification to the particle dynamics of PSO: a group decision determines a position which then replaces the swarm's historical best position, so that at the early stage the particles explore a larger domain and population diversity increases; at the later stage, a proportion parameter can be set for the particles with better fitness, which causes the algorithm to converge to the global best position.
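As a concrete illustration of equations (1) and (2) — a minimal sketch, not code from the paper — one update step of standard PSO can be written as follows; the array shapes and the optional `v_max` clipping are our assumptions.

```python
import numpy as np

def pso_step(x, v, p_best, g_best, w=0.9, c1=2.0, c2=2.0, v_max=None):
    """One iteration of the standard PSO update, equations (1)-(2).

    x, v   : (n_particles, dim) current positions and velocities
    p_best : (n_particles, dim) individual historical best positions p_i
    g_best : (dim,) swarm historical best position p_g
    """
    n, dim = x.shape
    r1 = np.random.rand(n, dim)  # random numbers for the cognitive term
    r2 = np.random.rand(n, dim)  # random numbers for the social term
    v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)  # eq. (1)
    if v_max is not None:
        v = np.clip(v, -v_max, v_max)  # optional velocity limit
    x = x + v                          # eq. (2)
    return x, v
```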
II. A BRIEF INTRODUCTION TO GROUP DECISION MAKING

Research on group decision making [12] can be traced back to the French mathematician Borda's 1784 paper on selection and elections, and to Condorcet's 1785 jury theorem [13]; but group decision making was first given a formal definition by Duncan Black, after which the concept became widely used. Group decision making is a process of making and carrying out plans to achieve a certain goal. It gathers the experience and information of different people, using everyone's individual talent and intelligence to make up for any single person's lack of experience. Group decision making has four basic elements: the decision makers, the plans, the basis, and the goal. The process mainly consists of the decision makers identifying problems through inquiry, fixing a goal, and making plans to implement it.
III. THE GDPSO ALGORITHM

PSO is an intelligent optimization algorithm that simulates how birds flock and fly as a swarm. Group decision making likewise uses the individual character of a swarm: every particle's information is considered when the group makes a decision, just as human beings use the information of others. We introduce this idea into the algorithm so that particles act with higher, human-like intelligence. Standard PSO relies only on the individual historical best and the swarm historical best to make its decision, whereas our group decision is based on every particle. We therefore use group decision making at the early stage and the original decision method at the later stage.

The algorithm needs two basic requisites: a decision-making goal and a decision-making basis. The basis consists of the particle positions, the individual historical best positions, the swarm historical best position, and all their fitness values; the goal is to make the algorithm converge to the global best position as far as possible. Given these two factors, the decision information can be used as follows. For a minimization problem a better fitness value indicates better performance, so a linear weighting method assigns each particle a probability according to the contribution of its fitness value. All particles take part in making the decision and search for good solutions near the best fitness value, so every particle participates equally.

Suppose p_max(t) is the best swarm position in generation t and p_j(t) is the individual historical best position of particle j in generation t. The probability assigned to individual position i is \pi_i, with \sum_{i=1}^{n} \pi_i = 1, and the position determined by the group decision is p_{GD}, which replaces the swarm's historical best position p_g. The key step is how to determine the probabilities.

Let f_{worst}(P(t)) = \max\{ f(p_j(t)) \mid j = 1, 2, \ldots, n \} be the fitness value of the worst individual historical position in generation t, and f_{best}(P(t)) = \min\{ f(p_j(t)) \mid j = 1, 2, \ldots, n \} the fitness value of the best. The performance evaluation score score_j(t) of particle j in generation t is defined as

score_j(t) = \begin{cases} 1, & \text{if } f_{worst}(P(t)) = f_{best}(P(t)), \\ \dfrac{f_{worst}(P(t)) - f(p_j(t))}{f_{worst}(P(t)) - f_{best}(P(t))}, & \text{otherwise}. \end{cases}    (3)

From formula (3), the better a particle's historical fitness value in generation t, the greater its performance evaluation score score_j(t); this is equivalent to sorting the particles by their scores.
Each particle's selection probability is then computed from the scores:

\pi_i = \frac{e^{score_i}}{e^{score_1} + e^{score_2} + \cdots + e^{score_n}}    (4)

so that \sum_{i=1}^{n} \pi_i = 1. The position obtained by the group decision is the probability-weighted combination of the individual historical best positions:

p_{GD} = \sum_{i=1}^{n} \pi_i \, p_i    (5)
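Equations (3)-(5) can be read as: score the historical best positions, turn the scores into probabilities, and take the probability-weighted average. A minimal sketch of this computation (our reading, not the authors' code; `p_best` and `fitness` are assumed array names) is:

```python
import numpy as np

def group_decision_position(p_best, fitness):
    """Compute p_GD from equations (3)-(5).

    p_best  : (n, dim) individual historical best positions p_i
    fitness : (n,) objective values f(p_i(t)), minimization
    """
    f_worst, f_best = fitness.max(), fitness.min()
    if f_worst == f_best:
        scores = np.ones_like(fitness)                     # eq. (3), degenerate case
    else:
        scores = (f_worst - fitness) / (f_worst - f_best)  # eq. (3)
    pi = np.exp(scores) / np.exp(scores).sum()             # eq. (4), sums to 1
    return pi @ p_best                                     # eq. (5): p_GD = sum_i pi_i * p_i
```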
The update equations of the improved PSO then become

v_{ij}(t+1) = w\,v_{ij}(t) + c_1 r_1 (p_{ij}(t) - x_{ij}(t)) + c_2 r_2 (p_{GD,j}(t) - x_{ij}(t))    (6)
x_{ij}(t+1) = x_{ij}(t) + v_{ij}(t+1)    (7)
The steps of GDPSO are as follows; a compact code sketch of the whole loop is given after the list.

Step 1. Initialize the position and velocity vector of each particle and set the maximum number of iterations.
Step 2. Calculate the fitness value of each particle.
Step 3. For each particle, compare its fitness value with that of its best previously visited position p_i; if better, replace p_i with the current position.
Step 4. Using equations (3), (4) and (5), calculate the group-decision position p_GD and replace p_g with p_GD.
Step 5. For each particle, compare its fitness value with that of the global best position p_g; if better, replace p_g with the particle's position.
Step 6. Update each particle's velocity and position according to equations (6) and (7).
Step 7. If the stopping condition (usually a sufficiently good fitness value, or the maximum number of iterations) is not met, return to Step 2; otherwise output the best result.
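Combining the steps, a compact sketch of the GDPSO loop might look as follows. It reuses `group_decision_position` from above; the bounds, the initialization, and the use of p_GD in every iteration (rather than switching back to p_g at a later stage, as the paper suggests) are simplifying assumptions.

```python
import numpy as np

def gdpso(f, dim, n_particles=30, max_iter=1500, bounds=(-30.0, 30.0),
          w_start=0.9, w_end=0.4, c1=2.0, c2=2.0):
    """Illustrative sketch of the GDPSO loop (Steps 1-7)."""
    lo, hi = bounds
    x = np.random.uniform(lo, hi, (n_particles, dim))      # Step 1
    v = np.zeros((n_particles, dim))
    p_best = x.copy()
    p_fit = np.apply_along_axis(f, 1, x)                   # Step 2
    g_best = p_best[p_fit.argmin()].copy()

    for t in range(max_iter):
        fit = np.apply_along_axis(f, 1, x)                 # Step 2
        improved = fit < p_fit                             # Step 3
        p_best[improved], p_fit[improved] = x[improved], fit[improved]
        if p_fit.min() < f(g_best):                        # Step 5
            g_best = p_best[p_fit.argmin()].copy()
        p_gd = group_decision_position(p_best, p_fit)      # Step 4: p_GD replaces p_g
        w = w_start - (w_start - w_end) * t / max_iter     # linearly declining inertia
        r1 = np.random.rand(n_particles, dim)
        r2 = np.random.rand(n_particles, dim)
        v = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (p_gd - x)  # eq. (6)
        x = np.clip(x + v, lo, hi)                                 # eq. (7), clamped (assumption)
    return g_best, f(g_best)                               # Step 7: output best result
```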
IV. SIMULATION EXPERIMENT

A. Setting Up the Algorithm Parameters

To analyze GDPSO's convergence speed and global search ability, and the effect of the inertia weight on performance, this paper compares GDPSO with SPSO and another improved algorithm on five benchmark functions. The population size is 30; the dimensions of the selected trial functions are 30, 100, 200 and 300; each simulation is run 30 times, with a maximum of 50 * dimension iterations per run. For the basic PSO, traditional theoretical analysis shows that an inertia weight w declining linearly from 0.9 to 0.4 is favorable for global search; c1 and c2 are both set to 2.0.
B. Benchmark Functions

This paper selects five benchmark functions for the experiments. By their character they fall into two classes, unimodal and multimodal. The Rosenbrock function (f5) is a classic, difficult optimization function: its global best position lies inside a smooth, long and narrow parabolic valley that supplies little information to the optimizer, making it hard to judge the search direction and find the global best position, so the Rosenbrock function is used to evaluate efficiency [14]. The Rastrigin function (f9), Ackley function (f10) and Griewank function (f11) are multimodal functions, with several extrema and a broad search space, and are used to test search performance.

(5) Rosenbrock function
f_5(x) = \sum_{i=1}^{n-1} \left[ 100(x_{i+1} - x_i^2)^2 + (x_i - 1)^2 \right], \quad -30 \le x_i \le 30
\min(f_5) = f_5(1, \ldots, 1) = 0.

(9) Rastrigin function
f_9(x) = \sum_{i=1}^{n} \left[ x_i^2 - 10\cos(2\pi x_i) + 10 \right], \quad -5.12 \le x_i \le 5.12
\min(f_9) = f_9(0, \ldots, 0) = 0.

(10) Ackley function
f_{10}(x) = -20\exp\left(-0.2\sqrt{\tfrac{1}{n}\sum_{i=1}^{n} x_i^2}\right) - \exp\left(\tfrac{1}{n}\sum_{i=1}^{n} \cos(2\pi x_i)\right) + 20 + e, \quad -32 \le x_i \le 32
\min(f_{10}) = f_{10}(0, \ldots, 0) = 0.

(11) Griewank function
f_{11}(x) = \frac{1}{4000}\sum_{i=1}^{n} x_i^2 - \prod_{i=1}^{n} \cos\left(\frac{x_i}{\sqrt{i}}\right) + 1, \quad -600 \le x_i \le 600
\min(f_{11}) = f_{11}(0, \ldots, 0) = 0.
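For reference, the four benchmark functions above translate directly from their formulas; these are the standard definitions, written here as a convenience rather than taken from the paper.

```python
import numpy as np

def rosenbrock(x):   # f5, minimum 0 at (1, ..., 1)
    return np.sum(100.0 * (x[1:] - x[:-1]**2)**2 + (x[:-1] - 1.0)**2)

def rastrigin(x):    # f9, minimum 0 at the origin
    return np.sum(x**2 - 10.0 * np.cos(2 * np.pi * x) + 10.0)

def ackley(x):       # f10, minimum 0 at the origin
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x**2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20.0 + np.e)

def griewank(x):     # f11, minimum 0 at the origin
    i = np.arange(1, x.size + 1)
    return np.sum(x**2) / 4000.0 - np.prod(np.cos(x / np.sqrt(i))) + 1.0
```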
C. Comparative Experiments

This paper implements three PSO algorithms, SPSO, MPSO-TVAC (TVAC) and the proposed GDPSO, with the same parameter settings as above; the results are reported in Tables I-IV.
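Under these settings, one run of the comparison could hypothetically be driven as follows for the 100-dimensional Rastrigin problem, reusing the illustrative `gdpso` and `rastrigin` sketches defined earlier:

```python
import numpy as np

dim = 100
results = []
for run in range(30):                           # 30 independent runs
    best_x, best_f = gdpso(rastrigin, dim,
                           n_particles=30,      # population size 30
                           max_iter=50 * dim,   # 50 * dimension iterations
                           bounds=(-5.12, 5.12))  # Rastrigin search range
    results.append(best_f)
print("Mean:", np.mean(results), "Std:", np.std(results))
```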
TABLE I. ROSENBROCK FUNCTION (F5)

Alg     Dim   Mean          Std
SPSO    100   4.1064e+002   1.0584e+002
TVAC    100   2.8517e+002   9.8129e+001
GDPSO   100   9.4910e+001   2.1830e-001
SPSO    200   2.9071e+003   5.4258e+002
TVAC    200   8.0076e+002   2.0604e+002
GDPSO   200   1.9414e+002   2.5178e-001
SPSO    300   2.3307e+004   1.9726e+004
TVAC    300   1.4921e+003   3.4571e+002
GDPSO   300   2.9337e+002   2.0247e-001
TABLE II. RASTRIGIN FUNCTION (F9)

Alg     Dim   Mean          Std
SPSO    100   9.3679e+001   9.9635e+000
TVAC    100   8.4478e+001   9.4568e+000
GDPSO   100   0             0
SPSO    200   2.2827e+002   1.1196e+001
TVAC    200   1.9920e+002   2.8291e+001
GDPSO   200   0             0
SPSO    300   3.5449e+002   1.9825e+001
TVAC    300   2.7093e+002   2.7093e+002
GDPSO   300   0             0
TABLE III. ACKLEY FUNCTION (F10)

Alg     Dim   Mean          Std
SPSO    100   3.3139e-001   5.0105e-001
TVAC    100   4.6924e-001   1.9178e-001
GDPSO   100   8.8817e-016   0
SPSO    200   2.1693e+000   2.6126e-001
TVAC    200   6.9455e-001   4.0884e-001
GDPSO   200   8.8817e-016   0
SPSO    300   2.8959e+000   3.1470e-001
TVAC    300   7.6695e-001   3.1660e-001
GDPSO   300   8.8817e-016   0
TABLE IV. GRIEWANK FUNCTION (F11)

Alg     Dim   Mean          Std
SPSO    100   2.1204e-003   4.3056e-003
TVAC    100   2.2598e-002   3.1112e-002
GDPSO   100   0             0
SPSO    200   8.8379e-002   1.2785e-001
TVAC    200   8.0535e-002   7.5014e-002
GDPSO   200   0             0
SPSO    300   5.5838e-001   2.1158e-001
TVAC    300   3.9988e-001   2.4034e-001
GDPSO   300   0             0
Figure 1. The comparison results of 100 dimensions on the Rosenbrock problem.
Figure 2. The comparison results of 200 dimensions on the Rosenbrock problem.
Figure 3. The comparison results of 300 dimensions on the Rosenbrock problem.
Figure 4. The comparison results of 100 dimensions on the Rastrigin problem.
Figure 5. The comparison results of 200 dimensions on the Rastrigin problem.
Figure 6. The comparison results of 300 dimensions on the Rastrigin problem.
Figure 7. The comparison results of 100 dimensions on the Ackley problem.
Figure 8. The comparison results of 200 dimensions on the Ackley problem.
Figure 9. The comparison results of 300 dimensions on the Ackley problem.
Figure 10. The comparison results of 100 dimensions on the Griewank problem.
Figure 11. The comparison results of 200 dimensions on the Griewank problem.
Figure 12. The comparison results of 300 dimensions on the Griewank problem.
For the unimodal function f5, Table I and Figures 1-3 show that GDPSO obtains better values than the other two algorithms. The effect is not evident in low dimensions, but as the dimension increases GDPSO's performance exceeds the other two, which indicates a larger search space at the late stage. For the multimodal functions f9, f10 and f11, Tables II-IV show that GDPSO has better mean values than the other two algorithms, and Figures 4-12 show that it converges rapidly to nearly 0, which demonstrates fast convergence speed and good global search performance. f9 and f11, classical functions for testing search performance, converge to exactly zero. This shows that GDPSO makes sufficient use of the other particles' information and enlarges the search space to increase population diversity; it shows no sign of being trapped in local best positions and converges quickly, so GDPSO can be used to solve high-dimensional functions.
V. CONCLUSION

This paper has proposed a new variant of the particle swarm optimization algorithm, GDPSO, whose main idea is to apply group decision making to the update equation: at the early stage, the swarm's best historical position is replaced by the position the group decides, which in some sense expands the search space. From the experiments we conclude that population diversity is a very important factor for convergence.

ACKNOWLEDGMENT

This work is supported by the Natural Science Foundation of Shanxi under Grant 2008011027-2.

REFERENCES

[1] Eberhart R, Kennedy J. A new optimizer using particle swarm theory. In: Proc. of the Sixth International Symposium on Micro Machine and Human Science (MHS'95). Piscataway, NJ: IEEE, 1995: 39-43.
[2] Kennedy J, Eberhart RC. Particle swarm optimization. In: Proc. of the IEEE International Conference on Neural Networks (ICNN'95). Piscataway, NJ: IEEE, 1995: 1942-1948.
[3] Rosenberger C, Chehdi K. Unsupervised clustering method with optimal estimation of the number of clusters: application to image segmentation. In: Proc. of the IEEE International Conference on Pattern Recognition (ICPR), vol. 1, Barcelona, September 2000: 1656-1659.
[4] A local linear wavelet neural network. In: Proc. of the 5th World Congress on Intelligent Control and Automation, Hangzhou, P.R. China, June 15-19, 2004: 1954-1955.
[5] Zhang J, Avasarala V, Subbu R. Evolutionary optimization of transition probability matrices for credit decision-making. European Journal of Operational Research, in press, available online 22 January 2009.
[6] Chatterjee A, Siarry P. A PSO-aided neuro-fuzzy classifier employing linguistic hedge concepts. Expert Systems with Applications, 2007, 33(4): 1097-1109.
[7] Nabavi-Kerizi SH, Abadi M, Kabir E. A PSO-based weighting method for linear combination of neural networks. Computers & Electrical Engineering, in press, available online 11 June 2008.
[8] A fuzzy adaptive turbulent particle swarm optimization. International Journal of Innovative Computing and Applications, 2007, 1(1): 39-47.
[9] Krink T, Vesterstrom JS, Riget J. Particle swarm optimization with spatial particle extension. In: Proc. of the IEEE Int'l Conf. on Evolutionary Computation. Honolulu: IEEE, 2002: 1474-1497.
[10] Kazemi BAL, Mohan CK. Multi-phase generalization of the particle swarm optimization algorithm. In: Proc. of the IEEE Int'l Conf. on Evolutionary Computation. Honolulu: IEEE, 2002: 489-494.
[11] Hu XH, Eberhart RC. Adaptive particle swarm optimization: detection and response to dynamic systems. In: Proc. of the IEEE ...
[13] Paroush J. Stay away from fair coins: a Condorcet jury theorem. Social Choice and Welfare, 1998, 15(1): 15-20.
[14] Ratnaweera A, Halgamuge SK, Watson HC. Self-organizing hierarchical particle swarm optimizer with time-varying acceleration coefficients. IEEE Trans. on Evolutionary Computation, 2004, 8(3).