PSOTS: A Particle Swarm Optimization Toolbox in Scilab

Rui Qi
Laboratory of Applied Mathematics, Ecole Centrale Paris, 92295 Chatenay-Malabry, France
LIAMA/NLPR, Institute of Automation, Chinese Academy of Sciences, 100190 Beijing, China
Email: [email protected]

Baogang Hu
LIAMA/NLPR, Institute of Automation, Chinese Academy of Sciences, 100190 Beijing, China
Email: [email protected]

Paul-Henry Cournède
Laboratory of Applied Mathematics, Ecole Centrale Paris, 92295 Chatenay-Malabry, France
EPI Digiplante, INRIA Saclay Île-de-France, Parc Orsay Université, 91893 Orsay cedex, France
Email: [email protected]
Abstract—This paper introduces PSOTS, a generic Particle Swarm Optimization toolbox developed on the Scilab platform, with a user-friendly interface designed in TCL/TK. It is developed for a variety of complex problems, including single-objective, multi-objective, continuous, discrete, and mixed-integer optimization problems. PSOTS is suitable for non-convex optimization and does not rely on the computation of derivatives of the objective functions. It can also be linked with other simulation programs. Because it is developed in Scilab, PSOTS is platform independent; it can be distributed and modified freely and extended easily. PSOTS provides many technical configuration options; in addition, real-time display of the convergence curve and of the optimal solutions is available on request, and the particle distribution in the search space can be visualized. Compared with the existing heuristic optimization algorithms in the latest version of Scilab, PSOTS is outstanding in convergence accuracy, convergence rate, and computation time.
I. INTRODUCTION

Optimization problems can be classified into several classes: constrained and unconstrained, linear and nonlinear, continuous and discrete (or integer), single-objective and multi-objective. Generally speaking, each class of problems requires specific optimization algorithms to search for solutions. According to the principles used to search for extrema, optimization algorithms can be classified as classical algorithms based on the gradient of the objective functions [1] (e.g. steepest descent, conjugate gradient, Newton's method) versus heuristic algorithms (e.g. Genetic Algorithms [2], Particle Swarm Optimization [3], Simulated Annealing [4]), as local versus global algorithms, and as direct versus iterative (numerical) algorithms. With the fast development of industrial applications, optimization algorithms encounter more and more challenges. For some optimization problems, there is no explicit analytical formula and gradient information cannot be obtained. The dimension of problems gets higher and higher, and more and more multimodal problems need to be optimized. For these
978-1-4244-4453-3/09/$25.00 ©2009 IEEE
problems, classical optimization algorithms are sometimes not satisfactory. Especially for multi-objective optimization problems, the optimal solutions are not unique, whereas classical optimization algorithms return only one solution at each iteration. Hence, population-based heuristic optimization algorithms, which return a set of solutions at each iteration and do not require derivative information about the objective functions, are more convenient for solving these kinds of problems. In order to apply heuristic optimization algorithms conveniently and widely, some researchers have integrated them into different kinds of toolboxes [5][6][7]. However, most of these are implemented in VB, C/C++, Java, Fortran, or MATLAB. Compared to these previous achievements, we found it interesting to develop a toolbox in Scilab [8], which is unique in combining the two following advantages:
• it is an open source platform for numerical computation and is free of charge;
• it is a high-level language, which makes it simple for learning and teaching.
Consequently, we believe that this toolbox can be widely used, not simply as a 'black box', but also as a basis to understand the principles of optimization algorithms, by making it possible for students and researchers to easily read, change, or tune algorithms and their parameters. Particle Swarm Optimization (PSO), proposed by Kennedy and Eberhart [3], is an iterative, population-based, heuristic optimization algorithm belonging to the category of global optimization algorithms. Each individual (particle) in the population (swarm) represents one point in the parameter space. During the search process, individuals converge to the optimal solutions according to the social knowledge of the population and their own cognitive knowledge. PSO has a faster convergence rate than other heuristic algorithms for a wide range of optimization problems [9], and it has few parameters to adjust. It has been successfully applied to a variety of complex problems, e.g. design problems [10],
combinatorial optimization [11], and a variety of tasks, e.g. training of artificial neural networks [12], control [13], data mining [14], power systems [15][16], signal processing [17], and plant growth systems [18]. For a review, we refer to [19] and [20]. In the latest version of Scilab (Scilab 5.1.1), Genetic Algorithms (GA) and Simulated Annealing (SA) are already integrated, while PSO is not. To our knowledge, the existing PSO toolboxes are each specific to a given kind of problem, e.g. single-objective problems [21][22] or multi-objective problems [5], continuous problems [22][21][23] or discrete problems ([11], with a specific application to the Traveling Salesman Problem (TSP)). The GenOpt toolbox developed by Wetter [24] is more generic: it can be applied to continuous, discrete, or mixed-integer problems (i.e. problems whose variables include both continuous and integer variables). However, GenOpt handles only single-objective optimization problems, and it is developed in Java. Given the wide demand for optimization algorithms in academia and industry, and the advantages of PSO mentioned in the previous paragraph, we developed a generic PSO toolbox in Scilab (PSOTS) for a variety of complex problems, including single-objective, multi-objective, continuous, discrete, and mixed-integer optimization problems. PSOTS, which can be downloaded from http://psots.sourceforge.net/, can be investigated conveniently, distributed and modified freely, and extended easily according to users' requirements. The rest of the paper is organized as follows. The basic principles of PSO are described briefly in Section II. In Section III, the features of PSOTS are introduced. Application examples of PSOTS are demonstrated in Section IV.
The performance comparison between PSOTS and the two heuristic optimization algorithms integrated in the latest version of Scilab is presented in Section V. Finally, the conclusion is given in Section VI.

II. BASIC PRINCIPLES OF STANDARD PSO

PSO is a heuristic optimization algorithm. It is motivated by simulating certain simplified animal social behaviors such as bird flocking, and was first proposed by Kennedy and Eberhart in 1995 [3]. Similar to GA, it is an iterative, population-based method. The particles are described by two intrinsic properties: position and velocity. The position of each particle represents a point in the parameter space, which is a possible solution of the optimization problem, and the velocity is used to change the position. The particle properties are time-variant; they are updated by Eqn. 1 and Eqn. 2:

v_ij^(k+1) = ω · v_ij^k + c1·r1·(P_ij − x_ij^k) + c2·r2·(P_gj − x_ij^k)    (1)
x_ij^(k+1) = x_ij^k + v_ij^(k+1)    (2)
i = 1, 2, ..., Np;  j = 1, 2, ..., Nd

where Np is the number of particles in the population; Nd is the number of variables of the problem (i.e. the dimension of a particle); v_ij^k is the j-th coordinate component of the velocity of the i-th particle at iteration k; P_ij is the j-th coordinate component of the best position recorded by the i-th particle during the previous iterations; P_gj is the j-th coordinate component of the best position of the global best particle in the swarm, which is marked by g; x_ij^k is the j-th coordinate component of the current position of particle i at the k-th iteration; ω is the inertia weight; c1, c2 are the acceleration coefficients; and r1, r2 are uniformly distributed random values between 0 and 1. The last two terms on the right side of Eqn. 1 represent the cognitive knowledge and the social knowledge of a particle. The effect of the parameters and their recommended values are discussed in [25]. Over decades of development, many variants of PSO have been proposed, in order to enhance the convergence accuracy or to adapt the algorithm to specific problems; a review of PSO is given in [26]. According to the strategy for choosing the global best particle g, standard PSO can be divided into global PSO, where the global best particle is chosen from the whole swarm, and local PSO, where it is chosen among neighborhood particles. The effect of the neighborhood structure is analyzed in [27]. The pseudocode of standard PSO is as follows.
• Step 1. Initialize a population of particles and velocities, uniformly distributed within the feasible space, and set the stop criteria.
• Step 2. Evaluate the objective functions for each particle.
• Step 3. Update the best position for each particle and the global best position in the population.
• Step 4. Update velocities and positions using Eqn. 1 and Eqn. 2.
• Step 5. Verify the stop criteria.
If not all stop criteria are satisfied, go to Step 2; otherwise, the program terminates and the optimal solutions are output.

III. FEATURES OF PSOTS

PSOTS can solve maximization or minimization problems without transforming the formulation of the optimization problem. It is able to deal with both single- and multi-objective optimization problems, and it can show the convergence curve and the particle distribution in the search space in real time. PSOTS offers different strategies for choosing the best particle g, including global PSO and local PSO. The neighborhood structure adopted for local PSO in PSOTS takes, for each particle, an equal number of neighbors on its left and on its right, the number being determined by the user. PSOTS also includes two variants of PSO for single-objective optimization problems. The first variant, PSO with Passive Congregation (PSOPC), proposed by He et al. [28], aims to improve the convergence rate and accuracy of standard PSO. Its formula for updating particle velocities is given by Eqn. 3:
v_ij^(k+1) = ω · v_ij^k + c1·r1·(P_ij − x_ij^k) + c2·r2·(P_gj − x_ij^k) + c3·r3·(P_rj − x_ij^k)    (3)
The difference between standard PSO and PSOPC is the fourth term on the right side of Eqn. 3: P_rj is the j-th coordinate component of the best position of the r-th particle, chosen randomly in the swarm. It is used to avoid converging to a local optimum. The second variant of PSO in PSOTS was proposed by Clerc in 2000 to solve TSPs [11], denoted by PSO_TSP. To adapt standard PSO to TSPs, the elements of each particle are the sequence indices of the visited cities (i.e. the visiting order), which are integer values; each particle thus represents a complete path. The velocity is a list of transpositions, and the addition and subtraction operators in the velocity update formula (Eqn. 1) are interpreted as transpositions between cities. The acceleration coefficients and the inertia weight control the length of the velocity list, whose maximal length is determined by the user. The specific algorithm included in PSOTS for multi-objective optimization problems is a mixture of the algorithms proposed by Mostaghim and Teich [29] and by Tripathi et al. [30]. The optimal solutions of a multi-objective optimization problem are defined such that performance on one objective cannot be improved without sacrificing performance on at least one other; the solutions satisfying this property form the Pareto front [29]. To adapt the original PSO to multi-objective problems and to find the optimal solutions forming the Pareto front, the velocity of each particle is updated by Eqn. 4, which differs slightly from the standard PSO update of Eqn. 1:

v_ij^(k+1) = ω^k · v_ij^k + c1·r1·(P_ij − x_ij^k) + c2·r2·(Pl_ij − x_ij^k)    (4)

To obtain various solutions at a given iteration, the unique global best position is replaced by a local guide best position for each particle, denoted by Pl_ij for the j-th coordinate component of the guide of particle i in Eqn. 4.
An archive of limited size is maintained by the algorithm to record all the optimal solutions; the total number of solutions on the Pareto front is thus controlled by the archive size. If the number of optimal solutions does not reach the archive size, all of them are accepted and added to the archive; otherwise, the most similar optimal solutions inside the archive, identified by the criterion of the nearest distance between each pair of solutions, are eliminated. The local guide best position of each particle is determined by the Sigma method [29], according to the criterion of the nearest distance between the solutions in the archive and the particle. For more details, we refer to [29]. The pseudocode of the PSO algorithm for multi-objective optimization problems is as follows.
• Step 1. Initialize a population of particles and velocities, uniformly distributed within the feasible space, and set the stop criteria.
• Step 2. Evaluate the objective functions for each particle.
• Step 3. Update the best position of each particle along its own search trajectory.
• Step 4. Update the archive: if the archive size is not reached, the optimal solutions are added; otherwise, the most similar solutions inside the archive, according to the distance criterion, are eliminated.
• Step 5. Update the local guide best position of each particle, by comparing the distances between the particle and the solutions in the archive.
• Step 6. Update velocities and positions using Eqn. 4 and Eqn. 2.
• Step 7. Verify the stop criteria. If not all stop criteria are satisfied, go to Step 2; otherwise, the program terminates and the optimal solutions are output.
The variants of PSO integrated into PSOTS are listed in Table I.

TABLE I
VARIANTS OF PSO INTEGRATED INTO PSOTS

Algorithm                         | Reference
Global PSO                        | [3]
Local PSO                         | [27]
PSOPC                             | [28]
PSO_TSP                           | [11]
PSO for multi-objective problems  | [29][30]
Besides the variants of PSO listed in Table I, PSOTS supports two other types of PSO [31]: (1) if the inertia weight ω in Eqn. 1 is set to 0.6 and the acceleration coefficients c1 and c2 are set to 1.7, we obtain the PSO of Trelea type 1; (2) if ω is set to 0.729 and c1 and c2 are set to 1.494, we obtain the PSO of Trelea type 2. These two types are not listed among the PSO variants in the PSOTS interface, as they share the same optimization procedure as standard PSO except for the values of the inertia weight and the acceleration coefficients, which users can easily tune in the interface.

IV. EXAMPLES

In this section, several kinds of optimization problems are chosen to demonstrate the performance of PSOTS. PSOTS is run on the Windows XP platform with Scilab 4.0 and Scilab 5.1.1.

A. DeJong function

The DeJong function is widely chosen as a benchmark to test and compare the performance of optimization algorithms, as its formula is simple and it has a unique, analytical optimal solution. In addition, it is used as the example function to demonstrate the performance of the optimization algorithms in the latest version of Scilab (Scilab 5.1.1). Hence, the DeJong function, as expressed by Eqn. 5, is first chosen to test the performance of PSOTS:

f(x1, x2, ..., xn) = Σ_{i=1}^{n} x_i^2    (5)
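Eqn. 5 in code form (Python here for illustration; PSOTS itself calls Scilab objective functions), with its unique minimum of 0 at the origin:

```python
def dejong(xs):
    """DeJong (sphere) function of Eqn. 5: the sum of squared
    coordinates. Its unique minimum is 0, reached at the origin."""
    return sum(x * x for x in xs)
```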
In order to demonstrate the features of PSOTS, especially the swarm distribution, the dimension of the variables (n) is set to 2. The DeJong function is also used to compare the performance of PSOTS against the two other heuristic optimization algorithms integrated in the latest version of Scilab (simulated annealing and genetic algorithms); the comparison results are given in Section V. The PSOPC approach is chosen to minimize the DeJong function. The population size is set to 100; the acceleration coefficients (c1, c2 and c3) are 0.5, 0.5 and 0.6, respectively; and the inertia weight decreases linearly from 0.9 to 0.7 as expressed by Eqn. 6. The parameter values of PSOPC are chosen as recommended in [28]:

ω^k = (MAXITER − k) / MAXITER · (ω_start − ω_end) + ω_end    (6)

where ω^k is the inertia weight at iteration k; ω_start and ω_end are the initial and final values of the inertia weight, respectively; and MAXITER is the maximal number of iterations. The numerical test was performed within the search range from −100 to 100 for each variable, with the maximal number of iterations, 100, as the termination criterion. The average optimal value of the DeJong function found by PSOTS after 100 iterations over 10 independent runs is the theoretical one, that is, 0 at the origin. PSOTS found the global optimal solution at iteration 49 in the best run, and at iteration 57.3 on average over the 10 independent runs. The convergence curve of PSOTS and the swarm distribution after 100 iterations are shown in Fig. 1 and Fig. 2.
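The experiment above can be reproduced in outline. The sketch below (Python for illustration; this is not PSOTS's actual Scilab code) implements the PSOPC velocity update of Eqn. 3 with the linearly decreasing inertia weight of Eqn. 6, using the paper's parameter values:

```python
import random

def psopc_minimize(f, dim=2, n_particles=100, max_iter=100,
                   lo=-100.0, hi=100.0,
                   c1=0.5, c2=0.5, c3=0.6, w_start=0.9, w_end=0.7, seed=0):
    """Minimal PSOPC sketch (Eqn. 3 update, Eqn. 6 inertia schedule)."""
    rng = random.Random(seed)
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                    # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])   # global best index
    for k in range(max_iter):
        w = (max_iter - k) / max_iter * (w_start - w_end) + w_end  # Eqn. 6
        for i in range(n_particles):
            r = rng.randrange(n_particles)   # random particle (passive congregation)
            for j in range(dim):
                V[i][j] = (w * V[i][j]
                           + c1 * rng.random() * (P[i][j] - X[i][j])
                           + c2 * rng.random() * (P[g][j] - X[i][j])
                           + c3 * rng.random() * (P[r][j] - X[i][j]))
                X[i][j] = min(hi, max(lo, X[i][j] + V[i][j]))  # clamp to bounds
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
        g = min(range(n_particles), key=lambda i: pbest[i])
    return P[g], pbest[g]
```

Running it on the 2-D DeJong function drives the best objective value close to 0, in line with the convergence behavior reported above.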
Fig. 1. Convergence curve by PSO for minimization of the DeJong function.

Fig. 2. Swarm distribution at the last iteration by PSO for minimization of the DeJong function, swarm size being 100.

B. Traveling Salesman Problem (TSP)

In this section, we demonstrate the performance of PSOTS for solving TSPs, another kind of classical optimization problem. The objective of a TSP is to find a shortest path among cities such that each city is visited once and only once; it belongs to the class of discrete optimization problems. PSOTS solves this kind of problem using the PSO_TSP approach. The number of cities can be set to any value, and their locations can be generated randomly by PSOTS or read from a user-defined file. To demonstrate the performance of PSOTS on TSPs, a known instance, Burma14, is chosen from the TSP database TSPLIB95 [32]. It has 14 cities, and the known shortest path length is 3323, based on the geographical distance. The maximal number of iterations is the termination criterion. With a population size of 100, PSOTS found the optimal solution at iteration 190 in the best of 10 independent runs; 6 out of the 10 runs reached the theoretical optimal solution, and the average optimal path length found by PSOTS is 3340.6. The optimal paths at the 10th iteration and at the last iteration are shown in Fig. 3. The parameters of PSO_TSP are set as follows: the inertia weight decreases linearly from 0.6 to 0.4 as expressed by Eqn. 6; the acceleration coefficients are uniformly distributed random numbers between 0 and 1; and the number of city transpositions (i.e. the maximal length of the velocity list) is set to 9.

C. Yield optimization of plants
Due to the economic importance of crops, agronomists, geneticists, and physiologists are interested in improving the yield of the economically valuable organs of plants (e.g. cobs of maize, leaves of tea plants, roots of sugar beet, wood of trees). As underlined by Letort et al. [33] and Qi et al. [34], plant growth models provide new insight for improving breeding strategy: species parameters in the model should have a strong genetic determinism, and a first step towards ideotype design is the determination of the best set of parameters to maximize yield under given environmental conditions. In this section, we aim at finding the best set of parameters of a functional-structural plant growth model, namely GreenLab [35], to maximize fruit yield. A single-stem plant without branches is chosen as the object of investigation, with a total number of 20 phytomers (elementary botanical units), and yield evaluated at plant age 26. GreenLab is implemented in the software
GreenScilab, developed on the Scilab platform [36]. GreenScilab can output topological and physiological information during plant growth, especially biomass quantities. As GreenScilab is open source software implemented in the Scilab language, it is easy to write the formula of the objective function to be called by PSOTS. Fruit yield is the result of complex interactions of biophysical processes; the plant growth mechanisms and the principles of GreenLab are beyond the scope of this paper, and we refer to [35] for more information. Here, the fruit sink strength (a physiological parameter corresponding to the ability of fruit organs to attract biomass during allocation), the number of fruits, and their positions are chosen as the optimized variables. To illustrate this optimization problem, we first consider optimization of the fruit sink strength only, with the other parameters of GreenLab held constant. The simulation result of fruit yield with respect to the fruit sink strength is shown in Fig. 4, with the fruit number being 16 and the first fruit on the 5th phytomer. The simulation shows that there is an optimal fruit sink strength (about 5). After 300 iterations, the optimal fruit sink strength within the range from 0 to 100 found by PSOTS using PSOPC is 4.57, and the corresponding fruit yield of the plant is 130.08 g. The optimization procedure was run independently 3 times, and the optimal solutions found by each run are identical. The parameter values of PSOTS are the same as those in Section IV-A.

Fig. 3. Optimal path for the known TSP instance Burma14 from the database TSPLIB95, (a) at the 10th iteration and (b) at the last iteration.

Fig. 4. Simulation result of fruit yield with respect to the fruit sink strength, fruit number being 16 and the first fruit being on the 5th internode.
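Among the optimized variables above, the fruit sink strength is continuous while the fruit number and positions are integers. One common way to let a continuous-space PSO handle such mixed-integer variables is to round the integer coordinates before each objective evaluation; whether PSOTS uses exactly this encoding internally is an assumption, and the sketch below (Python for illustration) only shows the idea:

```python
def decode(position, integer_dims):
    """Map a continuous particle position to a mixed-integer candidate by
    rounding the coordinates declared integer (hypothetical scheme, for
    illustration; not necessarily PSOTS's internal encoding)."""
    return [round(v) if j in integer_dims else v
            for j, v in enumerate(position)]
```

For the yield problem, coordinate 0 would hold the sink strength and coordinates 1 and 2 the integer fruit number and first-fruit rank.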
We now turn to the more complex problem of fruit yield maximization, with the optimized variables consisting of one continuous real variable, the fruit sink strength, and two integer variables, the number of fruits and their position. The optimization problem is thus a mixed-integer problem, which PSOTS is able to handle. After 300 iterations, the optimal fruit sink strength is 4.11, the number of fruits is 17, and the first fruit position is on the 4th phytomer; the corresponding optimal fruit yield is 131.36 g. The optimization procedure was again run independently 3 times, and the optimal solutions found by the 3 independent runs are identical.

D. Multi-objective optimization problem

A benchmark function widely used by researchers [37][29] to test the performance of optimization algorithms on multi-objective problems is adopted here to demonstrate the performance of PSOTS. The test function is expressed by Eqn. 7:

f1(x1) = x1
g(x2, ..., xn) = 1 + 9·(Σ_{i=2}^{n} x_i)/(n − 1)
h(f1, g) = 1 − sqrt(f1/g)
f2(x1, ..., xn) = g(x2, ..., xn)·h(f1, g)    (7)

where n is the number of variables, set to 10 in this section; f1 and f2 are the optimization objectives. The variables vary between 0 and 1. After 200 iterations, the optimal solutions of the multi-objective problem (the Pareto front) are shown in Fig. 5. The parameters of PSOTS are as follows: the size of the archive and of the swarm is 500 in both cases; the inertia weight decreases linearly
from 0.7 to 0.4, and the acceleration coefficients (c1, c2) are both set to 2.
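Eqn. 7 is the ZDT1 benchmark of the Zitzler test family [37], reconstructed here with the square root in h as in its standard definition. In code (Python for illustration), with the property that on the Pareto front (x2 = ... = xn = 0) the objectives satisfy f2 = 1 − sqrt(f1):

```python
import math

def zdt1(x):
    """ZDT1 benchmark (Eqn. 7): returns the two objectives (f1, f2)."""
    n = len(x)
    f1 = x[0]
    g = 1.0 + 9.0 * sum(x[1:]) / (n - 1)
    h = 1.0 - math.sqrt(f1 / g)
    return f1, g * h
```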
Fig. 5. Pareto front of the function expressed by Eqn. 7, found by PSOTS.

V. PERFORMANCE COMPARISON

In the latest version of Scilab, two heuristic optimization algorithms are integrated as built-in functions for single-objective problems: optim_ga for GA and optim_sa for SA. In this section, we compare the performance of GA, SA, and PSOTS using two widely accepted criteria. The first criterion is the error between the optimal function value f_obj found by the optimization algorithm and the known analytical optimal value f_anal, which must satisfy |f_obj − f_anal| < 0.001 in our experiments; the second is the maximal number of objective function evaluations, set to 10000 and 200000, respectively. When the first criterion is satisfied, we consider the algorithm successful in that experiment. When the second criterion is reached, the consumed time and the current optimal solution are recorded and returned right away, whether or not the algorithm has converged to the global optimal value. The DeJong function is used to compare the performance of GA, SA, and PSOTS with an increasing number of dimensions. The search range for each dimension is between −100 and 100, and the population size is set to 100 for GA and PSOTS. To compare the time consumed by each algorithm, the number of objective function evaluations is used as the budget. For GA and PSO, the number of objective function evaluations is the population size times the number of iterations, while for SA it equals the number of temperature decreases times the number of iterations at each temperature stage. Hence, when the maximal number of iterations for GA and PSOTS is set to 100, the number of temperature decreases is set to 100 and the number of iterations at each temperature stage to 100; when the maximal number of iterations for GA and PSOTS is set to 2000, the number of temperature decreases and the number of iterations at each temperature stage are set to 200 and 1000, respectively. The other parameter values of GA and SA are the default values in Scilab, and the other parameter values of PSOTS are the same as used in Section IV-A. Each algorithm is run 10 times independently; the average consumed time, success rate, and optimal solutions over the 10 independent runs are listed in Table II and Table III.

A. Time cost

We adopt CPU time to compare the time consumed by each algorithm. From Table II, we find that the time consumed by PSOTS is less than that of GA and SA. However, the time consumed by PSOTS is not constant: it increases as the problem dimension (i.e. the number of variables) increases, because the PSOTS code contains a loop over the problem dimension.

B. Convergence accuracy

Whatever the complexity of the optimization problem, the optimal solutions found by PSOTS are all better than those found by GA and SA, as listed in Tables II and III. In addition, comparing the success rates, we find that PSOTS successfully solves more complex problems that GA and SA handle with a low success rate or even fail to solve completely. PSOTS is thus more suitable for computationally expensive optimization problems.

C. Convergence rate

Even though GA and SA find the optimal solution of the problem when the number of objective function evaluations is large (200000, for instance, as shown in Table III), the solutions returned by GA after 10000 function evaluations are much worse, as listed in Table II. And although the solutions returned by SA are acceptable, all the optimal solutions found by PSOTS are better. PSOTS has a faster convergence rate than the GA and SA integrated in the latest version of Scilab.

VI. CONCLUSION

A generic PSO toolbox based on Scilab (PSOTS) has been developed, with a friendly interface designed in TCL/TK. It is able to deal with continuous, discrete, and even mixed-integer problems. Besides single-objective problems, PSOTS provides a specific particle swarm optimization algorithm for multi-objective optimization problems. It is easy to link with other simulation programs, for example GreenScilab, a software package simulating virtual plant growth. PSOTS can be distributed and modified freely and extended easily according to users' requirements. It is suitable and useful for research and teaching, as the platform used for its development is open source and free of charge. In the present version of PSOTS, only unconstrained optimization problems can be solved; PSO algorithms for constrained optimization problems will be integrated into PSOTS soon.

ACKNOWLEDGMENT

This work is supported by the National Natural Science Foundation of China (NSFC) (No. 60703043) and the National 863 High-Tech Research Plan of China (No. 2008AA10Z218).
TABLE II C OMPARISON OF OPTIMAL RESULTS OF D E J ONG FUNCTION OVER 10 INDEPENDENT RUNNINGS , FUNCTION EVALUATION BEING 10000. Dimension 2 4 6 8 10
Time consuming (s) PSOPC GA SA 0.35 10.90 2.74 0.60 10.96 2.77 0.86 10.99 2.80 1.11 11.07 2.78 1.35 10.95 2.78
Success rate PSOPC GA SA 100% 100% 100% 100% 100% 100% 100% 30% 30% 100% 0% 0% 0% 0% 0%
2 4 6 8 10
Time PSOPC 6.97 12.12 17.33 22.09 26.85
consuming GA 216.65 216.90 220.84 219.47 220.72
(s) SA 54.99 55.09 55.33 55.19 55.34
Success rate PSOPC GA SA 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 60% 0% 100% 10% 0%
R EFERENCES [1] J. A. Snyman, Practical Mathematical Optimization: An Introduction to Basic Optimization Theory and Classical and New Gradient-Based Algorithms. Springer Berlin, 2005. [2] C. Houck, J. Joines, and M. Kay, “A genetic algorithm for function optimization: A matlab implementation,” North Carolina State University, Technical Report, 1996. [3] J. Kennedy and R. Eberhart, “Particle swarm optimization,” in Proc. IEEE Conf. on Neural Networks, vol. 4, Piscataway, NJ, 1995, pp. 1942– 1948. [4] E. Triki, Y. Collette, and P. Siarry, “A theoretical study on the behavior of simulated annealing leading to a new cooling schedule,” European Journal of Operational Research, vol. 166, pp. 77–92, 2005. [5] Pudn programmers united develop net. http://en.pudn.com. [6] The Genetic Algorithm Optimization Toolbox (GAOT) for Matlab 5. http://www.ise.ncsu.edu/mirage/GAToolBox/gaot/. [7] Drafts and source codes of particle swarm optimization. http://clerc.maurice.free.fr/pso. [8] Scilab The open source platform for numerical computation. http://www.scilab.org/. [9] J. Kennedy and R. Eberhart, Swarm Intelligence. Morgan Kaufmann Publishers, 2001. [10] R. Perez and K. Behdinan, “Particle swarm approach for structural design optimization,” Computers & Structures, vol. 85, no. 19-20, pp. 1579–1588, 2007. [11] M. Clerc, “Discrete particle swarm optimization - illustrated by the traveling salesman problem,” 2002, http://www.mauriceclerc.net. [12] M. Meissner, M. Schmuker, and G. Schneider, “Optimized Particle Swarm Optimization (OPSO) and its application to artificial neural network training,” BMC Bioinformatics, vol. 7, no. 125, 2006. [13] S. Ghoshal, “Optimizations of PID gains by particle swarm optimizations in fuzzy based automatic generation control,” Electric Power Systems Research, vol. 72, no. 3, pp. 203–212, 2004. [14] T. Sousa, A. Silva, and A. Neves, “Particle swarm based data mining algorithms for classification tasks,” Parallel Computing, vol. 30, no. 5-6, pp. 
767–783, 2004. [15] J. Medeiros and R. Schirru, “Identification of nuclear power plant transients using the particle swarm optimization algorithm,” Annual of Nuclear Energy, vol. 35, no. 4, pp. 576–582, 2008. [16] Y. d. Valle, S. Mohagheghi, and R. G. Harley, “Particle swarm optimization: Basic concepts, variants and applications in power systems,” IEEE Transactions on Evolutionary Computation, vol. 12, no. 2, pp. 171–195, 2008. [17] J.-H. Wen, H.-L. Hung, and Y.-F. Huang, “A modified particle swarm optimization for multiuser detection in ds-cdma communication systems,” WSEAS Transactions on Communications, vol. 6, no. 8, 2007.
TABLE II
COMPARISON OF OPTIMAL RESULTS BY GA, SA AND PSO [full caption not recoverable from the extraction].

              Optimal solution (average / minimum)
  PSOPC                 GA                    SA
  0 / 0                 0 / 0                 2.40e-6 / 0
  4.32e-5 / 2.30e-5     1.67e-4 / 2.00e-5     1.44e-4 / 3.20e-5
  2.92e-4 / 6.90e-5     6.04 / 2.39e-4        1.28e-3 / 6.58e-4
  9.42e-4 / 7.13e-4     73.94 / 2.07          3.42e-3 / 2.01e-3
  3.31e-3 / 1.08e-3     176.76 / 26.07        5.96e-3 / 2.75e-3
TABLE III
COMPARISON OF OPTIMAL RESULTS OF DE JONG FUNCTION BY GA, SA AND PSO OVER 10 INDEPENDENT RUNNINGS, NUMBER OF OBJECTIVE FUNCTION EVALUATIONS BEING 200000.

                        Optimal solution (average / minimum)
  Dimension   PSOPC     GA                    SA
      —       0 / 0     0 / 0                 0 / 0
      —       0 / 0     4.60e-5 / 1.00e-6     4.08e-5 / 6.00e-6
      —       0 / 0     3.69e-4 / 8.10e-5     3.74e-4 / 1.75e-4
      —       0 / 0     1.05e-3 / 6.70e-4     1.52e-3 / 1.13e-3
      —       0 / 0     2.41e-3 / 6.21e-4     2.99e-3 / 2.34e-3

[Dimension values not recoverable from the extraction.]
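The De Jong benchmark reported in Table III is the classic sphere function, whose global minimum is 0 at the origin, matching the 0 / 0 entries for PSOPC. A minimal sketch of the objective function (given here in Python rather than Scilab, purely for illustration):

```python
# De Jong's first function (sphere): f(x) = sum(x_i^2),
# global minimum f = 0 at x = (0, ..., 0).
def de_jong(x):
    return sum(xi * xi for xi in x)

# The best possible value, 0, corresponds to the 0 / 0 entries in Table III.
print(de_jong([0.0] * 10))   # 0.0
print(de_jong([1.0, -2.0]))  # 5.0
```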
[18] R. Qi, V. Letort, M. Kang, P. de Reffye, P.-H. Cournède, and T. Fourcaud, “Application of the GreenLab model to simulate and optimize wood production and tree stability: a theoretical study,” Silva Fennica, vol. 43, no. 3, pp. 465–487, 2009.
[19] R. Eberhart and Y. Shi, “Particle swarm optimization: developments, applications and resources,” in IEEE Congress on Evolutionary Computation (CEC 2001), Seoul, Korea, 2001.
[20] R. Poli, “An analysis of publications on particle swarm optimisation applications,” Department of Computer Science, University of Essex, Technical Report CSM-469, 2007.
[21] PSO TOOLBOX. http://psotoolbox.sourceforge.net/.
[22] B. Birge, “PSOt - a particle swarm optimization toolbox for use with Matlab,” in 2003 IEEE Swarm Intelligence Symposium (SIS’03), 2003.
[23] L. d. S. Coelho and C. A. Sierakowski, “A software tool for teaching of particle swarm optimization fundamentals,” Advances in Engineering Software, vol. 39, pp. 877–887, 2008.
[24] M. Wetter, “GenOpt generic optimization program, user manual version 3.0.0,” Simulation Research Group, Building Technologies Department, Environmental Energy Technologies Division, Lawrence Berkeley National Laboratory, Berkeley, Technical Report LBNL-2077E, 2009.
[25] Y. Shi and R. Eberhart, “A modified particle swarm optimizer,” in IEEE Congress on Evolutionary Computation, Anchorage, Alaska, 1998, pp. 69–73.
[26] M. Song and G. Gu, “Research on particle swarm optimization: A review,” in Third International Conference on Machine Learning and Cybernetics, Shanghai, 2004, pp. 2236–2241.
[27] J. Kennedy and R. Mendes, “Population structure and particle swarm performance,” in IEEE Congress on Evolutionary Computation, Honolulu, Hawaii, USA, 2002, pp. 1671–1676.
[28] S. He, Q. Wu, J. Wen, J. Saunders, and R. Paton, “A particle swarm optimizer with passive congregation,” BioSystems, vol. 78, pp. 135–147, 2004.
[29] S. Mostaghim and J. Teich, “Strategies for finding good local guides in multi-objective particle swarm optimization (MOPSO),” in IEEE Swarm Intelligence Symposium, 2003, pp. 26–33.
[30] P. Tripathi, S. Bandyopadhyay, and S. Pal, “Multi-objective particle swarm optimization with time variant inertia and acceleration coefficients,” Information Sciences, vol. 177, pp. 5033–5049, 2007.
[31] I. C. Trelea, “The particle swarm optimization algorithm: convergence analysis and parameter selection,” Information Processing Letters, vol. 85, pp. 317–325, 2003.
[32] TSPLIB. http://www.iwr.uni-heidelberg.de/groups/comopt/software/TSPLIB95/.
[33] V. Letort, P. Mahe, P. Cournède, P. de Reffye, and B. Courtois, “Quantitative genetics and functional-structural plant growth models: Simulation of quantitative trait loci detection for model parameters and application to potential yield optimization,” Annals of Botany, vol. 101, pp. 1243–1254, 2008.
[34] R. Qi, Y. Ma, B. Hu, P. de Reffye, and P.-H. Cournède, “Optimization of source-sink dynamics in plant growth for ideotype breeding: a case study on maize,” Computers and Electronics in Agriculture, 2009, accepted.
[35] P. de Reffye, E. Heuvelink, D. Barthélemy, and P.-H. Cournède, “Plant growth models,” in Ecological Models, Vol. 4 of Encyclopedia of Ecology (5 volumes), S. Jorgensen and B. Fath, Eds. Oxford: Elsevier, 2008, pp. 2824–2837.
[36] M. Kang, R. Qi, P. de Reffye, and B. Hu, “GreenScilab: A toolbox simulating virtual plants in the Scilab environment,” in MESM’2006, 8th International Middle Eastern Simulation Multiconference, M. Al-Akaidi, Ed., European Technology Institute. Alexandria, Egypt: EUROSIS, 2006, pp. 174–178.
[37] J. Fieldsend and S. Singh, “A multi-objective algorithm based upon particle swarm optimization, an efficient data structure and turbulence,” in 2002 U.K. Workshop on Computational Intelligence.