Fourth International Conference on Natural Computation
Quantum Multi-objective Evolutionary Algorithm with Particle Swarm Optimization Method

Zhiyong Li, Kun Xu, Songbing Liu, Kenli Li
School of Computer and Communication, Hunan University, Changsha, 410082, P.R. China
Email:
[email protected]

NSGA-II [3] uses a non-dominated sorting mechanism to approximate the Pareto-optimal front and a crowding mechanism to guarantee the diversity of the Pareto-optimal solutions, but its Pareto sorting of individuals does not make full use of individual density information, which limits the search performance of the algorithm to a certain extent. Quantum-inspired computation is a new natural-computing method proposed in recent years [4]. Experiments show that introducing quantum theory into evolutionary algorithms yields good spatial search capability; the quantum multi-objective evolutionary algorithm (QMOEA) approximates the Pareto-optimal front more closely and with better diversity. J. Sun and W.B. Xu put forward a quantum-behaved particle swarm algorithm [5, 6] that can ensure global convergence, and experiments show that its performance is superior to that of the traditional genetic algorithm. The main goal of this study is to combine the quantum computing strategy with the particle swarm algorithm more effectively, to adopt a new strategy to control, adjust, and balance the algorithm's global search and local optimization capabilities, and to design a more effective multi-objective optimization method. This paper combines the quantum revolving door strategy with the particle swarm algorithm, uses a dynamically balanced inertia weight to find the balance point between global search and local search, and puts forward a new distance-based method to keep the Pareto-optimal solution set well diversified.
Abstract

This paper proposes a novel algorithm for multi-objective optimization problems based on a quantum particle swarm. To improve on the original particle swarm optimization algorithm and avoid being trapped in local optima, the method constructs a new quantum solution expression for the multi-objective particle swarm. It adopts non-dominated sorting for the solution population and uses a new population-diversity preserving strategy based on the Pareto max-min distance. Experiments on multidimensional 0-1 knapsack optimization problems show that the proposed method can efficiently find Pareto-optimal solutions that are closer to the Pareto front and better distributed. In particular, the proposed method is outstanding on more complex, high-dimensional optimization problems.
1. Introduction

Generally speaking, the traditional methods for solving multi-objective optimization problems (MOPs) are the weighted-mean method, the constraint method, goal programming, and so on. These methods first determine the trade-offs among the objectives according to some strategy, and then convert the MOP into a number of different single-objective problems. The Pareto-optimal solution set of the MOP is then approximated by the optimal solutions of these single-objective problems. Therefore, each run of these methods yields only one compromise solution, and the outcome can be completely different for different approaches. In recent years, the Multi-objective Optimization Evolutionary Algorithm (MOEA) has attracted considerable attention. Evolutionary algorithms include evolution strategies (ES), evolutionary programming (EP), the genetic algorithm (GA), particle swarm optimization (PSO), and other branches. Among the representative algorithms, VEGA [1] treats the objective functions too simply and cannot reach Pareto-optimal solutions in non-convex regions of the search space; the convergence rate of SPEA [2] is not ideal, particularly in larger search spaces, mainly because its crossover of two individuals and its normal mutation generate offspring only in an individual's local area;
978-0-7695-3304-9/08 $25.00 © 2008 IEEE DOI 10.1109/ICNC.2008.785
2. Quantum Particle Swarm Optimization

2.1. Particle Swarm Optimization

Kennedy and Eberhart first proposed Particle Swarm Optimization (PSO) in 1995 [7, 8]. It is a parallel evolutionary computing technique. The inertia weight was later introduced to better control the search, gradually forming the now-popular elementary particle swarm algorithm [9]. The basic idea is to regard each individual as a particle without size or mass in a D-dimensional space. Each particle flies through the search space with a certain velocity and dynamically adjusts this velocity according to its own and the group's flying experience.
Based on the basic idea of the quantum particle swarm algorithm above, this paper proposes an order relation based on Pareto dominance to update the individual best values and local best values of the particles, defines a new max-min distance method and uses it to prune the non-dominated solution set, and uses the quantum revolving door strategy to update the particles' quantum angles. The new multi-objective optimization algorithm is as follows:
Particle i has position Xi = (xi1, xi2, …, xiD). The best position it has experienced is Pi = (pi1, pi2, …, piD), denoted pbest; the best position experienced by all particles in the group is pgbest; the velocity of particle i is Vi = (vi1, vi2, …, viD). In each dimension d (1 ≤ d ≤ D), particle i moves according to the following equations:
vid(t + 1) = w·vid(t) + c1·rand()·(pbest − xid(t)) + c2·rand()·(pgbest − xid(t))    (2.1)

xid(t + 1) = xid(t) + vid(t + 1)    (2.2)

3. Quantum-Bit Particle Swarm Optimization based on the distance method
Where w is the inertia weight, which maintains the particle's movement inertia and its ability to explore new areas; c1 and c2 are constants; and rand() is a random number in [0, 1].
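As a concrete illustration, equations (2.1)-(2.2) can be sketched as a per-particle update; the function name and the call pattern below are illustrative assumptions of this sketch, not code from the paper:

```python
import random

def pso_step(x, v, pbest, gbest, w=0.7, c1=2.0, c2=2.0):
    """One PSO update per equations (2.1)-(2.2) for a single particle.

    x, v, pbest are lists of length D; gbest is the swarm's best position.
    Velocity (2.1) is updated in place, then the position (2.2).
    """
    D = len(x)
    for d in range(D):
        v[d] = (w * v[d]
                + c1 * random.random() * (pbest[d] - x[d])
                + c2 * random.random() * (gbest[d] - x[d]))
        x[d] = x[d] + v[d]
    return x, v
```

In a full PSO run this step is applied to every particle each iteration, after which pbest and gbest are refreshed from the new fitness values.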
Osyczka and Kundu proposed a method that solves multi-objective optimization problems by computing distance values [11]. Its main idea is to assign fitness to each individual according to the distance between the individual and the previous generation's Pareto solutions. An external penalty function converts the constrained multi-objective optimization problem into an unconstrained one. The parameter r controls the extent of punishment, and the parameter p1 is the initial potential value. The distance method is sensitive to the settings of p1 and r. For any infeasible solution, the higher the value of r, the larger the calculated distance, and the fitness finally approaches the limiting value 0; if too many individuals have fitness 0, the search cannot proceed. Additionally, if the initial potential value is too high, the differences between the fitness values of different solutions become insignificant, the selection pressure becomes too small, and convergence becomes very slow. On the other hand, if the initial potential value is too small, the calculated values tend to 0. To solve these problems, this paper uses the definition of the max-min distance to propose a new distance-based quantum-bit particle swarm algorithm that avoids the adverse effects caused by these parameters.
2.2. Quantum Bit

The solution expression based on multi-state quantum-bit coding can be transformed from

[α1 α2 … αm; β1 β2 … βm]

to [θ1 θ2 … θm] in reference [10].
The quantum revolving door adjustment strategy is:

[αi′; βi′] = [cos(Δθ) −sin(Δθ); sin(Δθ) cos(Δθ)] · [αi; βi],

which may be expressed as

θi′ = θi + Δθ,

where θi is the quantum angle of the quantum particle.
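A minimal sketch of the angle representation: a qubit with angle θ has amplitudes (α, β) = (cos θ, sin θ), collapses to the classical bit 1 with probability sin²θ = β², and the revolving door reduces to adding Δθ to the angle. Function names are assumptions of this sketch:

```python
import math
import random

def observe(theta):
    """Collapse a qubit with angle theta to a classical bit: P(1) = sin(theta)**2."""
    return 1 if random.random() < math.sin(theta) ** 2 else 0

def rotate(theta, delta):
    """Quantum revolving door: rotating the amplitude vector (cos t, sin t)
    by delta is equivalent to adding delta to the quantum angle."""
    return theta + delta

# The amplitude form of the rotation agrees with the angle form:
theta, delta = 0.3, 0.1
a, b = math.cos(theta), math.sin(theta)
a2 = math.cos(delta) * a - math.sin(delta) * b   # first row of the rotation matrix
b2 = math.sin(delta) * a + math.cos(delta) * b   # second row of the rotation matrix
assert abs(a2 - math.cos(theta + delta)) < 1e-12
assert abs(b2 - math.sin(theta + delta)) < 1e-12
```

The two assertions at the end check that the matrix form and the angle form of the revolving door coincide.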
2.3. Quantum Particle Swarm Optimization

In references [5, 6], a new PSO algorithm model is proposed from the perspective of quantum mechanics. The model is based on the DELTA potential well, in which the particles have quantum behavior. The resulting PSO algorithm is simple, easy to implement, has few adjustable parameters, and shows good stability and convergence. In reference [10], all particles in the population are treated as an intelligent group: in each iteration, the local optimal quantum angle and the global optimal quantum angle are found, and the quantum angles are adjusted dynamically with the quantum revolving door. Specifically, the velocity, position, individual best position, and global best position of particle i in dimension j are vij(t), θij(t), θij^pbest, and θij^gbest respectively.
3.1. Max-min distance

Definition: let S be a solution set of size n. The Euclidean distance from any individual i in S to another individual j is dij (j = 1, 2, …, n, j ≠ i), and dimin is the smallest of the dij. Definition: over dmin = {d1min, d2min, …, dnmin}, dmax-min is the largest element; we call dmax-min the max-min distance.
The velocity and quantum angle follow the iterative formulas:

vij(t + 1) = w·vij(t) + c1·rand()·(θij^pbest − θij(t)) + c2·rand()·(θij^gbest − θij(t))

θij(t + 1) = θij(t) + vij(t + 1)

where w is the inertia weight, whose value is set as described in Section 3.3; c1, c2 are constants with c1 = c2 = 2; rand() is a random number in [0, 1]; and Vmax = ±0.1π, Vmin = ±0.001π.

The density of max-min distance is defined as

D(i) = Σ_{j=1, j≠i}^{n} sgn(dmax-min − dij),

where sgn(x) is the sign function. If the individual particles are crowded, the value of D(i) is large; if the individual particles are sparse, the value of D(i) is small.
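The max-min distance and the density D(i) can be sketched directly, using Euclidean distance in objective space; the function names are assumptions of this sketch:

```python
import math

def euclid(a, b):
    """Euclidean distance between two objective vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def max_min_distance(S):
    """d_max-min: the largest, over individuals i in S, of the distance
    from i to its nearest neighbour."""
    d_min = [min(euclid(S[i], S[j]) for j in range(len(S)) if j != i)
             for i in range(len(S))]
    return max(d_min)

def density(S, i, d_maxmin):
    """D(i) = sum over j != i of sgn(d_max-min - d_ij):
    large when particle i is crowded, small when it is sparse."""
    sgn = lambda x: (x > 0) - (x < 0)
    return sum(sgn(d_maxmin - euclid(S[i], S[j]))
               for j in range(len(S)) if j != i)
```

For example, in a set where three points cluster together and one is isolated, the clustered points get a larger D(i) than the outlier.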
(3) while (not termination condition) do
Begin
    (3.1) t ← t + 1
    (3.2) Make P(t) by observing the state of Q(t − 1)
    (3.3) Compare the solutions and calculate the number of solutions each one dominates.
    (3.4) Calculate the max-min distances and D(x); select the local optimal solution and the global optimal solution.
    (3.5) Make Q(t) by updating Q(t − 1) with the revolving gates.
End
End
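Pulling the loop together, the main iteration has roughly the following shape. To keep the sketch self-contained, a toy single-objective one-max problem stands in for the multi-objective machinery of steps (3.3)-(3.4), and all names are assumptions of this sketch:

```python
import math
import random

def qbpso_sketch(n_particles=20, n_bits=16, iters=50, dtheta=0.05 * math.pi):
    """Schematic QBPSO loop: observe quantum angles into bitstrings,
    score them, and rotate each angle toward the global best bits.
    A toy one-max objective replaces the Pareto machinery."""
    random.seed(1)
    theta = [[random.uniform(0, math.pi / 2) for _ in range(n_bits)]
             for _ in range(n_particles)]
    best_bits, best_score = None, -1
    for _ in range(iters):
        for i in range(n_particles):
            # step (3.2): observe -- P(bit = 1) = sin^2(theta)
            bits = [1 if random.random() < math.sin(t) ** 2 else 0
                    for t in theta[i]]
            score = sum(bits)              # toy objective (one-max)
            if score > best_score:         # steps (3.3)-(3.4), collapsed to one scalar
                best_bits, best_score = bits, score
            # step (3.5): revolving door, rotate toward the global best
            for d in range(n_bits):
                target = math.pi / 2 if best_bits[d] == 1 else 0.0
                theta[i][d] += dtheta if target > theta[i][d] else -dtheta
    return best_score
```

In the paper's algorithm, the scalar `best_score` comparison is replaced by Pareto dominance plus the max-min-distance density, and the rotation direction is derived from the local and global best quantum angles.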
3.2. The distance method of max-min distance

Suppose t is the iteration counter, F is the potential value of a new solution, and D(i) is the density of max-min distance.

1: t ← 1, p1 = 1
2: while (the number of solutions < the population size)
    2.1 If the new solution is a Pareto solution and it dominates at least one Pareto solution, then
        F = pmax + 1/D(i),  pmax ← F.
        Update the Pareto solution set; the potential value of the new solution is F.
    2.2 If the new solution is a Pareto solution but it does not dominate any Pareto solution, then
        F = pm + 1/D(i).
        Add it to the Pareto solution set with potential value F, where pm is the potential value of the nearest Pareto solution. If F > pmax then pmax ← F.
    2.3 If the new solution is not a Pareto solution, then
        F = pm − 1/D(i),
4. Experimental results and analyses

4.1. Test problem

We discuss the multi-objective knapsack problem (MOKP), which is difficult to solve (NP-hard). The multidimensional 0-1 knapsack problem can be formalized as follows:
where pm is the potential value of the nearest Pareto solution.
3: If t exceeds the maximum number of iterations, the algorithm terminates; otherwise return to step 2.
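The three potential-value rules above can be sketched as a single function. The dominance test, the bookkeeping of pm (the nearest Pareto solution's potential), and the assumption D(i) > 0 are all illustrative simplifications of this sketch:

```python
def dominates(a, b):
    """Pareto dominance for maximization: a dominates b."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def potential(new_obj, archive, p_max, p_m, D_i):
    """Assign the potential value F of a new solution (rules 2.1-2.3).

    archive -- objective vectors of the current Pareto solutions
    p_max   -- current maximum potential value
    p_m     -- potential value of the nearest Pareto solution
    D_i     -- density of max-min distance (assumed > 0 here)
    Returns (F, new_p_max).
    """
    if any(dominates(new_obj, a) for a in archive):      # rule 2.1
        F = p_max + 1.0 / D_i
        return F, F
    if not any(dominates(a, new_obj) for a in archive):  # rule 2.2: non-dominated
        F = p_m + 1.0 / D_i
        return F, max(F, p_max)
    F = p_m - 1.0 / D_i                                  # rule 2.3: dominated
    return F, p_max
```

So a dominating newcomer raises the potential ceiling, a merely non-dominated one is placed near its neighbour, and a dominated one is penalized below its neighbour.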
max F = Σ_{j=1}^{n} cj·xj

s.t.  Σ_{j=1}^{n} aij·xj ≤ bi,  i ∈ M = {1, 2, ···, m},

      xj ∈ {0, 1},  j ∈ N = {1, 2, ···, n}.
Here, n is the number of items that can be chosen, m is the number of knapsack constraints, and aij, bi, cj are non-negative numbers.
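For concreteness, the feasibility check and objective evaluation for the multi-dimensional 0-1 knapsack look like this; the data layout (one weight row per knapsack, one profit vector per objective) is an assumption of this sketch:

```python
def feasible(x, a, b):
    """x: 0/1 list of length n; a: m x n weight matrix; b: m capacities.
    Checks sum_j a[i][j] * x[j] <= b[i] for every knapsack i."""
    return all(sum(aij * xj for aij, xj in zip(row, x)) <= bi
               for row, bi in zip(a, b))

def profit(x, c):
    """Objective sum_j c[j] * x[j]; in the MOKP this is evaluated once
    per objective, each with its own profit vector c."""
    return sum(cj * xj for cj, xj in zip(c, x))
```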
4.2. Experiments parameters
3.3. Improvement of inertia weight

In reference [12], the inertia weight is given as w(t) = 0.9 − (0.5t)/MaxNumber. A large inertia weight gives good global search capacity, while a small inertia weight gives good local search capacity. So in this paper a dynamic method is used to set the inertia weight:

wi = N / (N + m + D(i))    (3.1)
The experiments were carried out in Matlab 7.0. We optimize MOKP instances with 250, 500, and 750 items and 2, 3, and 4 knapsacks. The definitions of the MOKP and the benchmark data are the same as in reference [14]. We compare with the SPEA2, NSGA2, and QMOEA algorithms. The parameter settings of the algorithms are listed in Table 4.1.
4.3. Experimental results
Where wi is the inertia weight, N is the number of particles in the population, m is the number of particles that particle i dominates, and D(i) is the density of max-min distance.
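Equation (3.1) is then a one-liner; a sketch, assuming N + m + D(i) > 0:

```python
def inertia_weight(N, m, D_i):
    """w_i = N / (N + m + D_i): a particle that dominates many others
    (large m) or sits in a crowded region (large D_i) gets a smaller
    inertia weight, shifting it from global exploration toward local
    refinement. Assumes N + m + D_i > 0."""
    return N / (N + m + D_i)
```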
For the 0-1 knapsack problems with 100 items and 250 items, the results shown below were obtained by running the NSGA2, QMOEA, and QBPSO algorithms independently. We use the S measurement [14] and the D measurement [14] to judge the quality of the non-dominated solutions: the S measure shows how large a region the non-dominated solutions dominate, and the D measure shows the diversity of the non-dominated solutions.
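As an indication of what the S measure computes, here is a sketch of the two-objective case: the area dominated by a non-dominated maximization set, with the origin as reference point. This is an illustrative reimplementation, not the measurement code used in the paper:

```python
def s_measure_2d(points):
    """S measure for two maximization objectives: the area of the union
    of rectangles [0, f1] x [0, f2] over the non-dominated points.
    Sorted ascending in f1, a non-dominated set has descending f2,
    so the area is a sum of vertical strips."""
    pts = sorted(points)        # ascending in f1
    area, prev_f1 = 0.0, 0.0
    for f1, f2 in pts:
        area += (f1 - prev_f1) * f2
        prev_f1 = f1
    return area
```

A larger S value means the front dominates a larger region, which is how Tables 4.2 and 4.4 compare the algorithms.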
3.4. Algorithm process

Begin
(1) t ← 0, initialize Q(0)
(2) A(t) = { }, C(t) = { }
With the increase of item numbers and knapsack numbers, QBPSO gets closer to the Pareto-optimal front than the other two algorithms.
Table 4.1 The parameter settings of the algorithms

Parameters                           Value
Population size                      100
No. of generations                   100
Crossover prob. (NSGA2, SPEA2)       0.9
Mutation prob. (NSGA2, SPEA2)        1/No. of items
No. of observations (QMOEA)          1
No. of best solutions (QMOEA)        10
Δθ (QMOEA)                           0.01π
Table 4.2 S value averaged over 30 runs

MOKP (item-knapsack)   NSGA2       SPEA2       QBPSO
250-2                  8.53e+007   8.57e+007   8.87e+007
250-3                  6.82e+011   6.76e+011   7.35e+011
250-4                  5.08e+015   5.08e+015   5.80e+015
500-2                  3.35e+008   3.36e+008   3.54e+008
500-3                  5.17e+012   5.19e+012   5.75e+012
500-4                  7.23e+016   7.22e+016   8.46e+016
750-2                  6.94e+008   6.95e+008   7.51e+008
750-3                  1.71e+013   1.71e+013   1.93e+013
750-4                  3.56e+017   3.58e+017   4.24e+017

Table 4.3 D value averaged over 30 runs

MOKP (item-knapsack)   QMOEA           QBPSO
250-2                  2.487874e+001   2.854181e+001
250-3                  5.193403e+001   1.158548e+002
250-4                  6.180064e+001   1.327643e+002
500-2                  3.866045e+001   4.120644e+001
500-3                  5.745094e+001   1.203854e+002
500-4                  6.639704e+001   1.420873e+002
750-2                  4.090615e+001   4.648026e+001
750-3                  5.745094e+001   1.291407e+001
750-4                  6.696171e+001   1.348642e+002

Table 4.4 Growth rate of S value averaged over 30 runs

MOKP (item-knapsack)   Comparing with NSGA2   Comparing with SPEA2
250-2                  3.986%                 3.500%
250-3                  7.770%                 8.728%
250-4                  14.17%                 14.17%
500-2                  5.671%                 5.36%
500-3                  11.22%                 10.79%
500-4                  17.01%                 17.17%
750-2                  8.213%                 8.57%
750-3                  12.86%                 12.87%
750-4                  19.10%                 18.44%
Figure 4.1 2 knapsacks, 100 items
In Table 4.2, the S value of QBPSO is obviously superior to those of NSGA2 and SPEA2; in other words, the space dominated by QBPSO's Pareto solutions is larger than that of the other two algorithms, and the advantage grows as the numbers of items and knapsacks increase. In Table 4.3, the D value of QBPSO is obviously superior to that of QMOEA; in other words, the diversity of its Pareto non-inferior solution set is better, and again the effect improves as the numbers of items and knapsacks increase.
Figure 4.2 2 knapsacks, 250 items
4.4. Discussion and analysis

We can see clearly from Figure 4.1 and Figure 4.2 that QBPSO is better than the other two algorithms not only in the number of Pareto solutions found but also in the distribution and quality of those solutions.
As the tables show, the QBPSO algorithm solves multi-objective optimization problems well, particularly multi-dimensional, complex optimization problems.
[4] Hey T. "Quantum computing: An introduction". Computing & Control Engineering Journal, 1996, 10(3): 105-112.
[5] J. Sun, W.B. Xu. "A Global Search Strategy of Quantum-behaved Particle Swarm Optimization". Proceedings of the IEEE Conference on Cybernetics and Intelligent Systems, 2004, 111-116.
[6] J. Sun, B. Feng, W.B. Xu. "Particle Swarm Optimization with Particles Having Quantum Behavior". Proceedings of the 2004 Congress on Evolutionary Computation, 2004, 325-331.
[7] Kennedy J, Eberhart R C. "Particle swarm optimization". Proceedings of the 1995 IEEE International Conference on Neural Networks. New York, NY, USA: IEEE, 1995, 1942-1948.
[8] Eberhart R C, Kennedy J. "A new optimizer using particle swarm theory". Proceedings of the Sixth International Symposium on Micro Machine and Human Science. New York, NY, USA: IEEE, 1995, 39-43.
[9] E. Ozcan, C. Mohan. "Particle swarm optimization: surfing the waves". Proceedings of the Congress on Evolutionary Computation. Washington D.C., USA: IEEE Press, 1999, 1939-1944.
[10] Wang Yan, Chun-yi Lu. "New quantum swarm evolutionary algorithm". Journal of Chinese Computer Systems, 2006, 27(8): 1479-1482.
[11] Osyczka A, Kundu S. "A new method to solve generalized multicriterion optimization problems using genetic algorithm". Structural Optimization, 1995, 10(2): 94-99.
[12] Shi Y, Eberhart R C. "Empirical Study of Particle Swarm Optimization". Proceedings of the 1999 Congress on Evolutionary Computation. Piscataway, NJ: IEEE Service Center, 1999, 1945-1950.
[13] Lorie J, Savage L. "Three problems in capital rationing". Journal of Business, 1955, 28: 229-239.
[14] E. Zitzler, L. Thiele. "Multiobjective evolutionary algorithms: A comparative case study and the strength Pareto approach". IEEE Transactions on Evolutionary Computation, 1999, 3(4): 257-271.
5. Conclusions

This paper proposes an improved PSO multi-objective optimization algorithm. The algorithm uses a new distance method to maintain the distribution quality of the solution set, and at the same time sets the inertia weight of each particle dynamically so as to effectively maintain the balance between global search and local search. From the experimental results on the multidimensional 0/1 knapsack problem we draw the following conclusions. Compared with the traditional methods, the algorithm has better convergence and a better-distributed solution set: specifically, its S value is larger than those of the NSGA2 and SPEA2 algorithms by about 10 percent on average, and its D value is also superior to that of the QMOEA algorithm. The results show that the algorithm can effectively solve multi-objective optimization problems and can find the Pareto front quickly, with especially good search results on multi-dimensional or complex MOPs. Hence, our method is a good choice for complex, high-dimensional MOPs. The combination of quantum-inspired computation theory and traditional evolutionary algorithms provides an effective approach to complicated optimization problems. Designing more effective strategies to further enhance the performance of our algorithm is future work, and theoretical analysis of the algorithm is also a worthy research direction.
6. Acknowledgements

This research is supported by the National Natural Science Foundation of China (Grant No. 90715029) and the key project of the Science Foundation of Hunan University (Grant No. 521101796).
References

[1] Schaffer J D. "Multiple objective optimization with vector evaluated genetic algorithms". Proceedings of the First International Conference on Genetic Algorithms. Lawrence Erlbaum, 1985, 93-100.
[2] Zitzler E. "Evolutionary Algorithms for Multiobjective Optimization: Methods and Applications". PhD thesis, Swiss Federal Institute of Technology (ETH), Zurich, Switzerland, Nov. 1999.
[3] Kalyanmoy Deb, Amrit Pratap, Sameer Agarwal, T. Meyarivan. "A fast and elitist multiobjective genetic algorithm: NSGA-II". IEEE Transactions on Evolutionary Computation, 2002, 6(2): 182-197.