Expert Systems with Applications 37 (2010) 1676–1683
Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems

Leandro dos Santos Coelho

Pontifical Catholic University of Paraná, Graduate Program in Industrial and Systems Engineering, Automation and Systems Laboratory, PUCPR/PPGEPS, Imaculada Conceição, 1155, 80215-901 Curitiba, PR, Brazil
Keywords: Particle swarm optimization; Quantum computation; Mechanical design; Gaussian distribution; Continuous optimization; Engineering design; Swarm intelligence
Abstract

Particle swarm optimization (PSO) is a population-based swarm intelligence algorithm that shares many similarities with evolutionary computation techniques. However, PSO is driven by the simulation of a social psychological metaphor motivated by the collective behavior of birds and other social organisms, instead of the survival of the fittest individual. Inspired by the classical PSO method and quantum mechanics theories, this work presents novel quantum-behaved PSO (QPSO) approaches using a mutation operator with Gaussian probability distribution. The application of a Gaussian mutation operator, instead of random sequences, in QPSO is a powerful strategy to improve QPSO performance by preventing premature convergence to local optima. In this paper, new combinations of QPSO and the Gaussian probability distribution are employed in well-studied continuous optimization problems of engineering design. Two case studies are described and evaluated in this work. Our results indicate that the Gaussian QPSO approaches handle such problems efficiently in terms of precision and convergence and, in most cases, outperform the results presented in the literature.

© 2009 Elsevier Ltd. All rights reserved.
1. Introduction

Recently, a new class of metaheuristics, called swarm intelligence, was proposed (Bonabeau, Dorigo, & Theraulaz, 1999; Dorigo & Stützle, 2004; Kennedy, Eberhart, & Shi, 2001). Swarm intelligence is an emerging research field built on principles of self-organization and cooperation among group members, bio-inspired by social insect societies. It is based on the observation that the individuals of a group contribute their individual experiences to the group, rendering it stronger when facing other groups. The most familiar representatives of swarm intelligence in optimization problems are the food-searching behavior of ants (Dorigo & Di Caro, 1999), particle swarm optimization (Shi & Eberhart, 2000), bacterial colonies (Sierakowski & Coelho, 2005), and spider colonies (Bourjot, Chevier, & Thomas, 2003). In this context, the development of bio-inspired methodologies based on particle swarm optimization (PSO) is a relevant research area with applications in fields such as control systems (Coelho, Oliveira, & Cunha, 2005), data mining (Sousa, Silva, & Neves, 2004), manufacturing (Andrés & Lozano, 2006), robotics (Coelho & Sierakowski, 2008), structural reliability (Elegbede, 2005), power systems (Chuanwen & Bompard, 2005), electromagnetics (Adly & Abd-El-Hafiz, 2004), clustering (Chen & Zhao, 2009), dynamic question generation (Cheng, Lin, & Huang, 2009), classification (Marinakis, Marinaki, & Dounias, 2008), support vector machines (Lin, Ying, Chen, & Lee, 2008), communication networks (Huang, Chuang, & Yang, 2008), reliability-redundancy optimization (Coelho, 2009), inventory planning (Tsou, 2008), image segmentation (Maitra & Chatterjee, 2008), and others.

Particle swarm optimization (PSO), originally developed by Kennedy and Eberhart in 1995 (Eberhart & Kennedy, 1995; Kennedy & Eberhart, 1995), is a population-based swarm algorithm. Similarly to the genetic algorithm (Goldberg, 1989), an evolutionary algorithm approach, PSO is a population-based optimization tool, where each member is seen as a particle and each particle is a potential solution to the problem under analysis. Each particle in PSO has a randomized velocity associated with it, which moves the particle through the space of the problem. However, unlike the genetic algorithm, PSO does not have operators such as crossover and mutation. PSO does not implement the survival of the fittest individuals; rather, it implements the simulation of social behavior.

At the end of the 19th century, classical mechanics encountered major difficulties in describing the motions of microscopic particles with extremely light masses and extremely high velocities, and the physical phenomena related to such motions. This forced scientists to rethink the applicability of classical mechanics and led to fundamental changes in their traditional understanding of the nature of motions of microscopic objects (Pang, 2005). The studies of Bohr, de Broglie, Schrödinger, Heisenberg and Born in the 1920s
inspired the conception of a new area, quantum mechanics (Schweizer, 2001). Quantum mechanical computers were proposed in the early 1980s (Benioff, 1980), and the description of quantum mechanical computers was formalized in the late 1980s (Deutsch, 1985). Efforts on quantum computers have progressed actively since the early 1990s (Vedral & Plenio, 1998) because these computers were shown to be more powerful than classical computers on various specialized problems (Han & Kim, 2002). There are well-known quantum algorithms, such as Shor's quantum factoring algorithm (Shor, 1994) and Grover's database search algorithm (Grover, 1996). Recently, the concepts of quantum mechanics, physics and computing have motivated the development of optimization methods; see Spector, Barnum, Bernstein, and Swamy (1999), Narayanan and Moore (1996), Hogg and Portnov (2000), Protopescu and Barhen (2002), Han and Kim (2002), Hirvensalo (2002), Bulger, Baritopa, and Wood (2003), and Wang, Tang, and Wu (2005).

Inspired by PSO and quantum mechanics theories, this work contributes by presenting quantum-behaved PSO (QPSO) approaches based on the Gaussian distribution for continuous optimization. The application of Gaussian sequences, instead of random sequences with uniform distribution, in QPSO is a powerful strategy to improve QPSO's performance by preventing premature convergence to local optima. In order to study the performance of the QPSO approaches, two well-studied engineering design examples were solved, and the best results obtained by these approaches over 50 trials were compared with those reported in the literature. The engineering designs evaluated here are: (i) a tension/compression spring design, and (ii) the design of a pressure vessel.

The rest of the paper is organized as follows: Section 2 describes the features of PSO for continuous optimization, while Section 3 explains the QPSO approaches. Section 4 describes the procedure for constraint handling based on penalties. Section 5 then details the case studies of engineering design optimization. Section 6 presents the optimization results and compares methods for solving the engineering case studies. Lastly, Section 7 outlines the conclusions of the paper.
2. Classical particle swarm optimization

The PSO algorithm was put forward by several scientists who developed computational simulations of the movement of organisms such as flocks of birds and schools of fish. Such simulations were heavily based on manipulating the distances between individuals, i.e., the synchrony of the behavior of the swarm was seen as an effort to keep an optimal distance between them. In theory, at least, individuals of a swarm may benefit from the prior discoveries and experiences of all the members of the swarm.

The fundamental point of developing PSO is the hypothesis that the exchange of information among creatures of the same species offers some sort of evolutionary advantage. Similarly to other population-based algorithms, PSO exploits a population of search points to probe the search space. Each individual in the particle swarm, referred to as a 'particle', represents a potential solution. Each particle utilizes two important kinds of information in its decision process. The first is its own experience: it has tried the choices and knows which state has been better so far, and how good it was. The second is other particles' experiences: it has knowledge of how the other agents around it have performed.

Each particle in PSO keeps track of its coordinates in the problem space, which are associated with the best solution (fitness) it has achieved so far. This value is called the personal best (pbest). Another
"best" value tracked by the global version of the particle swarm optimizer is the overall best value, and its location, obtained so far by any particle in the population. This location is called the global best (gbest). Each particle moves its position in the search domain and updates its velocity according to its own flying experience and its neighbors' flying experience, toward its pbest and gbest locations (global version of PSO). Acceleration is weighted by random terms, with separate random numbers being generated for acceleration toward the pbest and gbest locations, respectively.

The basic elements of the standard PSO are briefly stated and defined as follows:

- Particle x_i(t), i = 1, ..., N: a potential solution represented by an n-dimensional vector, where n is the number of optimization variables.
- Swarm: an apparently disorganized population of moving particles that tend to cluster together, while each particle seems to be moving in a random direction.
- Individual best position p_i(t): as a particle moves through the search space, it compares the fitness value at its current position to the best fitness value it has attained at any time up to the current time.
- Global best position p_g(t): the best position among all individual best positions achieved so far.
- Particle velocity v_i(t): the velocity of the moving particle, represented by an n-dimensional vector. According to the individual best and global best positions, the particle velocity is updated; after the velocity update, each particle position is changed for the next generation.

The procedure for implementing the global version of PSO is given by the following steps:

Step 1. Initialization of swarm positions and velocities: Initialize a population (array) of particles with random positions and velocities in the n-dimensional problem space using a uniform probability distribution function.
Step 2. Evaluation of particle's fitness: Evaluate each particle's fitness value. We are minimizing, rather than maximizing, the fitness function in this paper.
Step 3. Comparison to pbest (personal best): Compare each particle's fitness with the particle's pbest. If the current value is better than pbest, then set the pbest value equal to the current value and the pbest location equal to the current location in the n-dimensional space.
Step 4. Comparison to gbest (global best): Compare the fitness with the population's overall previous best. If the current value is better than gbest, then reset gbest to the current particle's array index and value.
Step 5. Updating of each particle's velocity and position: Change the velocity, v_i, and position, x_i, of the particle according to Eqs. (1) and (2):
$v_i(t+1) = w\,v_i(t) + c_1 u_d [p_i(t) - x_i(t)] + c_2 U_d [p_g(t) - x_i(t)]$    (1)
$x_i(t+1) = x_i(t) + \Delta t \, v_i(t+1)$    (2)
where w is the inertia weight; i = 1, 2, ..., N indexes the particles of the population (swarm); t = 1, 2, ..., t_max indicates the iterations; $v_i = [v_{i1}, v_{i2}, \ldots, v_{in}]^T$ stands for the velocity of the ith particle; $x_i = [x_{i1}, x_{i2}, \ldots, x_{in}]^T$ stands for the position of the ith particle of the population; and $p_i = [p_{i1}, p_{i2}, \ldots, p_{in}]^T$ represents the best previous position of the ith particle. The positive constants c_1 and c_2 are the cognitive and social components, respectively; they are the acceleration constants responsible for varying the particle velocity towards pbest and gbest. Index g represents the index of the best particle among all the particles in the swarm. Variables u_d and U_d are two random numbers in the range [0, 1]. Eq. (2) updates the particle's position according to its previous position and its velocity, considering $\Delta t = 1$.

Step 6. Repeating the evolutionary cycle: Return to Step 2 until a stop criterion is met, usually a sufficiently good fitness or a maximum number of iterations (generations). A minimal code sketch of this loop is given below.
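To make Steps 1-6 concrete, the sketch below implements the global version of PSO in Python/NumPy. It is an illustration only, not the author's original Matlab 6.5 implementation; the values c1 = c2 = 2.0, the symmetric velocity initialization, and the clipping of positions to the box bounds are assumptions of this sketch (the paper instead repairs bound violations with the rule of Section 4), while the linear inertia decay from 0.7 to 0.4 follows the setting reported in Section 5.

```python
import numpy as np

def pso(cost, lb, ub, n_particles=20, t_max=400, c1=2.0, c2=2.0,
        w_start=0.7, w_end=0.4, seed=None):
    """Global-best PSO following Eqs. (1) and (2) with Delta_t = 1."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    n = lb.size
    x = rng.uniform(lb, ub, size=(n_particles, n))               # Step 1: positions
    v = rng.uniform(-(ub - lb), ub - lb, size=(n_particles, n))  # assumed velocity init
    pbest = x.copy()
    pbest_f = np.array([cost(p) for p in x])                     # Step 2: fitness
    g = pbest_f.argmin()                                         # minimization
    gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    for t in range(t_max):
        w = w_start - (w_start - w_end) * t / t_max              # linear inertia decay
        u = rng.random((n_particles, n))                         # u_d in Eq. (1)
        U = rng.random((n_particles, n))                         # U_d in Eq. (1)
        v = w * v + c1 * u * (pbest - x) + c2 * U * (gbest - x)  # Eq. (1)
        x = np.clip(x + v, lb, ub)                               # Eq. (2), kept in bounds
        f = np.array([cost(p) for p in x])                       # Step 2
        better = f < pbest_f                                     # Step 3: update pbest
        pbest[better], pbest_f[better] = x[better], f[better]
        g = pbest_f.argmin()                                     # Step 4: update gbest
        if pbest_f[g] < gbest_f:
            gbest, gbest_f = pbest[g].copy(), pbest_f[g]
    return gbest, gbest_f                                        # Step 6: loop until t_max
```

Hooking this loop to the penalized evaluation of Section 4 and the models of Section 5 outlines, but does not reproduce exactly, the experimental protocol of this paper.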
3. Quantum-behaved particle swarm optimization (QPSO)

In terms of classical mechanics, a particle is depicted by its position vector x_i and velocity vector v_i, which determine the trajectory of the particle. The particle moves along a determined trajectory in Newtonian mechanics, but this is not the case in quantum mechanics. In the quantum world, the term trajectory is meaningless, because x_i and v_i of a particle cannot be determined simultaneously, according to the uncertainty principle. Therefore, if individual particles in a PSO system have quantum behavior, the PSO algorithm is bound to work in a different fashion (Sun, Feng, & Xu, 2004; Sun, Xu, & Feng, 2005).

In the quantum model of PSO, called here QPSO, the state of a particle is depicted by the wavefunction $\psi(x, t)$ of the Schrödinger equation (Schweizer, 2001; Levin, 2002), instead of by position and velocity. The dynamic behavior of the particle is widely divergent from that of the particle in classical PSO systems, in that the exact values of x_i and v_i cannot be determined simultaneously. In this context, the probability of the particle appearing at position x_i is obtained from the probability density function $|\psi(x, t)|^2$ (Liu, Xu, & Sun, 2005). Employing the Monte Carlo method, the particles move according to the following iterative equation (Cai et al., 2008; Liu et al., 2005; Sun et al., 2004; Sun et al., 2005; Xi, Sun, & Xu, 2008):
$x_i(t+1) = p + \beta \cdot |Mbest - x_i(t)| \cdot \ln(1/u)$, if $k \geq 0.5$
$x_i(t+1) = p - \beta \cdot |Mbest - x_i(t)| \cdot \ln(1/u)$, if $k < 0.5$    (3)
where $\beta$ is a design parameter called the contraction-expansion coefficient (Clerc & Kennedy, 2002), and u and k are values generated from the uniform probability distribution in the range [0, 1]. The global point, called Mainstream Thought or Mean Best (Mbest) of the population, is defined as the mean of the pbest positions of all particles, and it is given by
$Mbest = \frac{1}{N} \sum_{d=1}^{N} p_{g,d}(t)$    (4)
where g represents the index of the best particle among all the particles in the swarm. In this case, the local attractor (Clerc & Kennedy, 2002) to guarantee convergence of the PSO presents the following coordinates:
$p = \frac{c_1 p_{i,d} + c_2 p_{g,d}}{c_1 + c_2}$    (5)
The procedure for implementing the QPSO is given by the following steps (Coelho, 2008; Coelho & Mariani, 2008):

Step 1. Initialization of swarm positions: Initialize a population (array) of particles with random positions in the n-dimensional problem space using a uniform probability distribution function.
Step 2. Evaluation of particle's fitness: Evaluate the fitness value of each particle. We are minimizing, rather than maximizing, the fitness function in this paper.
Step 3. Updating of the global point: Calculate Mbest using Eq. (4).
Step 4. Comparison to pbest (personal best): Compare each particle's fitness with the particle's pbest. If the current value is better than pbest, then set the pbest value equal to the current value and the pbest location equal to the current location in the n-dimensional space.
Step 5. Comparison to gbest (global best): Compare the fitness with the population's overall previous best. If the current value is better than gbest, then reset gbest to the current particle's array index and value.
Step 6. Updating of particles' positions: Change the position of the particles according to Eq. (3), with the local attractor p given by Eq. (5), where c1 and c2 are two random numbers generated using a uniform probability distribution in the range [0, 1].
Step 7. Repeating the evolutionary cycle: Loop to Step 2 until a stop criterion is met, usually a sufficiently good fitness or a maximum number of iterations (generations). A code sketch of the position update (Steps 3-6) follows.
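A minimal sketch of the QPSO position update (Steps 3 and 6) is given below in the same Python/NumPy style; the vectorized per-dimension sampling and the function boundary are conveniences of this illustration, not the author's original code.

```python
import numpy as np

def qpso_step(x, pbest, gbest, beta, rng):
    """One QPSO position update following Eqs. (3)-(5).

    x and pbest are (N, n) arrays; gbest is an (n,) array; beta is the
    contraction-expansion coefficient.
    """
    N, n = x.shape
    mbest = pbest.mean(axis=0)                    # Eq. (4): mean of pbest positions
    c1 = rng.random((N, n))                       # uniform random numbers in [0, 1]
    c2 = rng.random((N, n))
    p = (c1 * pbest + c2 * gbest) / (c1 + c2)     # Eq. (5): local attractor
    u = 1.0 - rng.random((N, n))                  # uniform in (0, 1], keeps ln finite
    k = rng.random((N, n))
    step = beta * np.abs(mbest - x) * np.log(1.0 / u)
    return np.where(k >= 0.5, p + step, p - step)  # Eq. (3)
```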
3.1. Quantum-behaved particle swarm optimization using Gaussian mutation

Various new versions of PSO have been presented in the last few years. Most PSO algorithms use the uniform probability distribution to generate random numbers (see Gaing, 2003; Kennedy & Eberhart, 1995; Moustafa, Mekhamer, Moustafa, El-Sherif, & Mansour, 2004). However, new approaches using Gaussian, Cauchy and exponential probability distributions to generate the random numbers that update the velocity equation of PSO have been proposed (Coelho & Alotto, 2008; Coelho & Herrera, 2007; Coelho & Lee, 2008; Higashi & Iba, 2003; Kennedy, 2003; Krohling, 2004; Krohling & Coelho, 2006a, 2006b; Secrest & Lamont, 2003). In this paper, following the same line of study, we present new results for a mutation operator in QPSO based on the Gaussian probability distribution.

Generating random numbers from a Gaussian distribution with zero mean and unit variance for the stochastic coefficients of PSO may provide a good compromise between a high probability of small amplitudes around the current points (fine tuning) and a small probability of higher amplitudes, which may allow particles to move away from the current point and escape from local minima. In this work, random numbers are generated using the absolute value of the Gaussian probability distribution with zero mean and unit variance, i.e., abs(N(0, 1)). The new QPSO approaches combined with this mutation operator are described as follows:

Approach 1 - G-QPSO(1): Parameter u of Eq. (3) is replaced by a Gaussian random number, giving:
$x_i(t+1) = p + \beta \cdot |Mbest - x_i(t)| \cdot \ln(1/G)$, if $k \geq 0.5$
$x_i(t+1) = p - \beta \cdot |Mbest - x_i(t)| \cdot \ln(1/G)$, if $k < 0.5$    (6)
where G = abs(N(0, 1)).

Approach 2 - G-QPSO(2): Parameters c1 and c2 of Eq. (5) are replaced by Gaussian random numbers, giving:
$p = \frac{G \, p_{i,d} + g \, p_{g,d}}{G + g}$    (7)
where g = abs(N(0, 1)).

Approach 3 - G-QPSO(3): This approach uses both Eqs. (6) and (7). A sketch contrasting the three variants is given below.
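The sketch below shows how the three variants change the sampling in the QPSO update; the `variant` switch is a convenience of this illustration, assuming the same array layout as the qpso_step sketch above.

```python
import numpy as np

def gqpso_step(x, pbest, gbest, beta, rng, variant=3):
    """G-QPSO position update: Gaussian mutation in Eq. (6) and/or Eq. (7)."""
    N, n = x.shape
    mbest = pbest.mean(axis=0)
    if variant in (2, 3):                        # G-QPSO(2)/(3): Eq. (7)
        G = np.abs(rng.normal(size=(N, n)))      # G = abs(N(0, 1))
        g = np.abs(rng.normal(size=(N, n)))      # g = abs(N(0, 1))
        p = (G * pbest + g * gbest) / (G + g)
    else:                                        # G-QPSO(1): keep Eq. (5)
        c1, c2 = rng.random((N, n)), rng.random((N, n))
        p = (c1 * pbest + c2 * gbest) / (c1 + c2)
    if variant in (1, 3):                        # G-QPSO(1)/(3): Eq. (6)
        u = np.abs(rng.normal(size=(N, n)))      # u replaced by abs(N(0, 1))
    else:
        u = 1.0 - rng.random((N, n))             # plain QPSO sampling, Eq. (3)
    k = rng.random((N, n))
    step = beta * np.abs(mbest - x) * np.log(1.0 / u)
    return np.where(k >= 0.5, p + step, p - step)
```

Note that with u = abs(N(0, 1)), values of u near zero are more likely than under the uniform distribution, so the factor ln(1/u) produces occasional large jumps that help particles escape local optima, which matches the motivation stated above.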
4. Constraint handling

When applying the PSO, QPSO, and G-QPSO approaches to engineering design optimization, a key issue is how the algorithm handles the constraints of the problem. The literature proposes several methods for constraint handling in evolutionary algorithms and swarm intelligence approaches (Coello, 2002; Deb, 2000). These methods can be grouped into categories such as methods that preserve solution feasibility, penalty-based methods, methods that clearly distinguish between feasible and unfeasible solutions, and hybrid methods (Michalewicz & Schoenauer, 1996). It is usual to handle constraints in optimization methods through penalty functions, which penalize unfeasible solutions (Michalewicz & Schoenauer, 1996). That is, one attempts to solve an unconstrained minimization problem in the search space S using a modified fitness function, such as:
$eval(x) = \begin{cases} f(x), & \text{if } x \in F \\ f(x) + penalty(x), & \text{otherwise} \end{cases}$    (8)
where penalty(x) is zero if no constraint is violated, and is positive otherwise. Usually, the penalty function is based on a measure of distance to the nearest solution in the feasible region F or on the effort needed to repair the solution.

The methodology adopted here for constraint handling is divided into two steps. The first step aims at keeping the decision variables within the user-defined lower (lb) and upper (ub) bounds, that is, x ∈ [lb, ub]. Whenever a lower bound or an upper bound constraint is not satisfied, a repair rule is applied, according to Eqs. (9) and (10), respectively:
$x_i = x_i + rand[0,1] \cdot \{ub(x_i) - lb(x_i)\}$    (9)
$x_i = x_i - rand[0,1] \cdot \{ub(x_i) - lb(x_i)\}$    (10)
where rand[0, 1] is a uniformly distributed random value between 0 and 1. In the second step, the inequality constraints $g_i(x) \leq 0$ are considered. In this work, we minimize the objective function in examples 1 and 2 (see details in the next section), and thus the objective function is rewritten as:
$eval(x) = \begin{cases} f(x), & \text{if } g_i(x) \leq 0 \\ f(x) + r \cdot q \cdot \sum_{i=1}^{n} g_i(x), & \text{if } g_i(x) > 0 \end{cases}$    (11)
where q is a positive constant (arbitrarily set to 5000) and r is the number of constraints $g_i(x)$ that were not satisfied. A minimal sketch of this two-step scheme is given below.
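The sketch below illustrates the two-step handling, under the assumptions that the repair of Eqs. (9) and (10) is applied in a single pass and that the sum in Eq. (11) runs over the violated constraints; the function name and the list-of-callables interface are conveniences of this illustration.

```python
import numpy as np

def penalized_eval(f, constraints, x, lb, ub, q=5000.0, rng=None):
    """Bound repair (Eqs. (9)-(10)) followed by the penalty of Eq. (11)."""
    rng = rng or np.random.default_rng()
    # Step 1: repair decision variables that left the box [lb, ub]
    x = np.where(x < lb, x + rng.random(x.shape) * (ub - lb), x)  # Eq. (9)
    x = np.where(x > ub, x - rng.random(x.shape) * (ub - lb), x)  # Eq. (10)
    # Step 2: penalize violated inequality constraints g_i(x) <= 0
    g = np.array([g_i(x) for g_i in constraints])
    violated = g > 0.0
    r = int(violated.sum())                 # r: number of violated constraints
    penalty = r * q * g[violated].sum()     # Eq. (11); zero when r == 0
    return x, f(x) + penalty
```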
5. Numerical examples

Several case studies taken from the optimization literature have been used to demonstrate the performance of evolutionary algorithms and swarm intelligence methodologies, such as those presented in Arora (1989) and Rao (1996). To study the performance of the PSO, QPSO, and G-QPSO approaches, two well-studied engineering design examples were solved, and the best results obtained by these optimization approaches in 50 trials were compared with those reported in the literature.

The PSO, QPSO, and G-QPSO methods were implemented in Matlab 6.5 and run on a Pentium IV-compatible PC with a 3.2 GHz processor and 2 GB of RAM. In all the experiments, the PSO, QPSO and G-QPSO approaches start with a population of 20 particles, and the maximum number of generations (set as the stopping condition) is 400 for the first example and 100 for the second example. A total of 8000 and 2000 cost function evaluations were thus made by each optimization approach in each run for the first and second examples, respectively.
For the PSO, QPSO, and G-QPSO approaches, the operational parameters control the balance between exploitation (using the existing material in the swarm to best effect) and exploration (searching for better particles). The setup of the parameters c1, c2, w (classical PSO) and $\beta$ (QPSO and G-QPSO) affects the convergence rate and robustness of the search procedure. Their optimal values depend on the characteristics of the objective function, on the maximum number of generations (stopping criterion), and on the population size. Usually, suitable values for these parameters can be found by experimentation after a few tests using different values. The proper choice of w and $\beta$ may lead to good performance, while a wrong choice may result in performance deterioration under any learning strategy of PSO. A suitable selection of the inertia weight w provides a balance between global and local exploration and exploitation and, on average, results in fewer iterations being required to find a sufficiently good solution. In this work, the classical PSO uses an inertia weight reduced linearly from an initial value of 0.7 to a final value of 0.4. QPSO and G-QPSO use $\beta$ decreasing linearly from 0.7 to 0.01.

5.1. Example 1: Design of a pressure vessel

The first case study deals with a pressure vessel design. A cylindrical pressure vessel with two hemispherical heads is designed for minimum fabrication cost. Four variables are identified: the thickness of the pressure vessel, T_s, the thickness of the head, T_h, the inner radius of the vessel, R, and the length of the vessel without the heads, L (see Fig. 1). In this case, the variable vector is given (in inches) by
$(T_s, T_h, R, L) = (x_1, x_2, x_3, x_4) = X$    (12)
The variables R and L are treated as continuous variables, while T_s and T_h are discrete variables with a constant increment of 0.0625 in., according to the nonlinear programming problem outlined in an earlier study (Kannan & Kramer, 1994). The objective function is the combined cost of the materials, forming and welding of the pressure vessel. The mathematical model of this mixed-integer optimization problem is expressed as (Kannan & Kramer, 1994):
$\min f(X) = 0.6224 x_1 x_3 x_4 + 1.7781 x_2 x_3^2 + 3.1661 x_1^2 x_4 + 19.84 x_1^2 x_3$    (13)

subject to the following constraints:

$g_1(X) = -x_1 + 0.0193 x_3 \leq 0$    (14)
$g_2(X) = -x_2 + 0.0954 x_3 \leq 0$    (15)
$g_3(X) = -\pi x_3^2 x_4 - \frac{4}{3}\pi x_3^3 + 1{,}296{,}000 \leq 0$    (16)
$g_4(X) = x_4 - 240 \leq 0$    (17)

Fig. 1. Diagram of the pressure vessel used as the first example.
The following ranges of the design variables were used (Coello, 2000): 1 ≤ x_1 ≤ 99, 1 ≤ x_2 ≤ 99, 10.0 ≤ x_3 ≤ 200.0, and 10.0 ≤ x_4 ≤ 200.0, where the values of x_1 and x_2 are considered as integer multiples of 0.0625 in. In the PSO, QPSO, and G-QPSO approaches, each parameter is coded independently; when dealing with the discrete values, the particle entries in dimensions x_1 and x_2 are truncated to integer multiples of 0.0625.
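The model of Eqs. (13)-(17) translates directly into code. The sketch below follows the $g_i(X) \leq 0$ convention used above; the rounding rule for the discrete thicknesses is a simple assumption of this illustration, standing in for the truncation described in the text.

```python
import numpy as np

def pressure_vessel_cost(x):
    """Eq. (13): combined cost of materials, forming and welding."""
    x1, x2, x3, x4 = x
    return (0.6224 * x1 * x3 * x4 + 1.7781 * x2 * x3**2
            + 3.1661 * x1**2 * x4 + 19.84 * x1**2 * x3)

def pressure_vessel_constraints(x):
    """Eqs. (14)-(17), each expressed as g_i(X) <= 0."""
    x1, x2, x3, x4 = x
    return np.array([
        -x1 + 0.0193 * x3,                                              # Eq. (14)
        -x2 + 0.0954 * x3,                                              # Eq. (15)
        -np.pi * x3**2 * x4 - (4.0 / 3.0) * np.pi * x3**3 + 1296000.0,  # Eq. (16)
        x4 - 240.0,                                                     # Eq. (17)
    ])

def snap_thickness(value):
    """Map x1 or x2 to the nearest integer multiple of 0.0625 in (assumed rule)."""
    return np.round(value / 0.0625) * 0.0625
```

Evaluating these functions at the best solution reported later, X = (0.8125, 0.4375, 42.0984, 176.6372), reproduces the constraint values listed in Table 2 up to rounding.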
5.2. Example 2: Minimization of the weight of a tension/compression spring

Next, we consider the design of a tension/compression spring for minimum weight, subject to constraints on the minimum deflection, shear stress, surge frequency, limits on the outside diameter, and on the design variables. The design variables are the wire diameter (x_1), the mean coil diameter (x_2), as shown in Fig. 2, and the number of active coils (x_3). The design optimization problem involves three continuous variables and four nonlinear inequality constraints, such that

$\min f(X) = (x_3 + 2) x_2 x_1^2$    (18)

subject to the following constraints:

$g_1(X) = 1 - \frac{x_2^3 x_3}{71{,}785 x_1^4} \leq 0$    (19)
$g_2(X) = \frac{4 x_2^2 - x_1 x_2}{12{,}566 (x_2 x_1^3 - x_1^4)} + \frac{1}{5108 x_1^2} - 1 \leq 0$    (20)
$g_3(X) = 1 - \frac{140.45 x_1}{x_2^2 x_3} \leq 0$    (21)
$g_4(X) = \frac{x_1 + x_2}{1.5} - 1 \leq 0$    (22)

The following ranges of the design variables were used: 0.05 ≤ x_1 ≤ 2.0, 0.25 ≤ x_2 ≤ 1.3, and 2 ≤ x_3 ≤ 15 (Ray & Liew, 2003). A code sketch of this model is given below.

Fig. 2. Tension/compression spring used as the second example.
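Following the formulation of Eqs. (18)-(22), the spring model can be sketched as follows; as before, this is an illustration under the $g_i(X) \leq 0$ convention, not the author's original code.

```python
import numpy as np

def spring_weight(x):
    """Eq. (18): weight of the tension/compression spring."""
    x1, x2, x3 = x   # x1: wire diameter d, x2: mean coil diameter D, x3: coils N
    return (x3 + 2.0) * x2 * x1**2

def spring_constraints(x):
    """Eqs. (19)-(22), each expressed as g_i(X) <= 0."""
    x1, x2, x3 = x
    return np.array([
        1.0 - x2**3 * x3 / (71785.0 * x1**4),                       # Eq. (19)
        (4.0 * x2**2 - x1 * x2) / (12566.0 * (x2 * x1**3 - x1**4))
        + 1.0 / (5108.0 * x1**2) - 1.0,                             # Eq. (20)
        1.0 - 140.45 * x1 / (x2**2 * x3),                           # Eq. (21)
        (x1 + x2) / 1.5 - 1.0,                                      # Eq. (22)
    ])
```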
6. Comparison of simulation results

The performance of the PSO, QPSO, and G-QPSO approaches developed here was tested on the two case studies described in the previous section. The convergence of the PSO, QPSO, and G-QPSO approaches was analyzed based on a comparison with results reported in the literature.

6.1. Example 1

This problem has already been solved by several researchers, including Sandgren (1988), who used a branch-and-bound approach; Kannan and Kramer (1994), who used an augmented Lagrange multiplier approach; Deb (1997), using GeneAS (Genetic Adaptive Search); Cao and Wu (1997), who employed an improved evolutionary programming approach; Cao and Wu (1998), who used cellular automata based on a genetic algorithm; Lin, Wang, and Hwang (1999), who proposed a hybrid differential evolution method; Hu, Eberhart, and Shi (2003), using particle swarm optimization; and Schmidt and Thierauf (2005), who applied a threshold accepting algorithm with differential evolution.

Simulations were conducted to study the behavior of the PSO, QPSO, and G-QPSO approaches and to collect statistics of the simulation results. Performance is assessed on the basis of the best fitness values, and the statistics of the PSO, QPSO, and G-QPSO approaches are reported in Table 1, with the best results in bold font. The simulation results show that, for the first example, the QPSO and the proposed G-QPSO strategies perform satisfactorily, and that the classical PSO was outperformed by the QPSO and G-QPSO approaches.

From Table 1, it can be observed that the G-QPSO(1) and G-QPSO(3) approaches are robust and find solutions that are better than the best solution found using PSO and QPSO. The solutions found by the G-QPSO approaches present smaller standard deviation and mean values than the results obtained by PSO and QPSO. The best mean from the 50 runs performed was f(X) = 6440.3786, using G-QPSO(1); the best solution obtained, f(X) = 6059.7208, was also found by G-QPSO(1). This best result is detailed in Table 2. The best result obtained by G-QPSO(1) for Example 1 was also compared with the five best results reported in the literature, presented in Table 3. The best solution from the 50 runs obtained by G-QPSO(1) is about 3.6% better than the best solution previously reported in the literature.
Table 1. Results of the PSO, QPSO, and G-QPSO approaches for the first example (objective function over 50 runs; best results in bold in the original).

Optimization method | Mean time (s) | Worst (maximum) | Best (minimum) | Mean | Median | Standard deviation
PSO | 10.31 | 14,076.3240 | 6693.7212 | 8756.6803 | 8424.4837 | 1492.5670
QPSO | 10.51 | 8017.2816 | 6059.7209 | 6839.9326 | 6818.2384 | 479.2671
G-QPSO(1) | 10.66 | 7544.4925 | 6059.7208 | 6440.3786 | 6257.5943 | 448.4711
G-QPSO(2) | 10.66 | 9934.0961 | 6059.7110 | 6912.9997 | 6771.6813 | 761.4074
G-QPSO(3) | 10.66 | 7544.4925 | 6059.7110 | 6464.8166 | 6370.8227 | 465.1386

Table 2. Best result using the G-QPSO(1) approach for the first example.

Design variable (parameter) | G-QPSO(1)
x1 (Ts) | 0.8125
x2 (Th) | 0.4375
x3 (R) | 42.0984
x4 (L) | 176.6372
g1, Eq. (14) | -8.7999 × 10^-7
g2, Eq. (15) | -3.5881 × 10^-2
g3, Eq. (16) | -0.2179
g4, Eq. (17) | -63.3628
f(X), Eq. (13) | 6059.7208
Table 3. Comparison of results for the design of a pressure vessel.

Design variables | Sandgren (1988) | Zhang and Wang (1993) | Cao and Wu (1997) | Deb (1997) | Coello (2000) | This paper, G-QPSO(1)
x1 (Ts) | 1.1250 | 1.1250 | 1.0000 | 0.9375 | 0.8125 | 0.8125
x2 (Th) | 0.6250 | 0.6250 | 0.6250 | 0.5000 | 0.4375 | 0.4375
x3 (R) | 48.3807 | 58.2900 | 51.1958 | 48.3290 | 40.3239 | 42.0984
x4 (L) | 11.7449 | 43.6930 | 90.7821 | 112.6790 | 200.0000 | 176.6372
g1 | -0.1913 | -0.0250 | -0.0119 | -0.004750 | -0.0034324 | -8.7999 × 10^-7
g2 | -0.1634 | -0.0689 | -0.1366 | -0.038941 | -0.052847 | -3.5881 × 10^-2
g3 | -75.8750 | -6.5496 | -13,584.5631 | -3652.876838 | -27.105845 | -0.2179
g4 | -228.2551 | -196.3070 | -149.2179 | -127.321000 | -40.0000 | -63.3628
f(X) | 8048.6190 | 7197.7000 | 7108.6160 | 6370.7035 | 6288.7445 | 6059.7208
6.2. Example 2

This problem was solved previously by Belengundu (1982), using eight optimization approaches. Arora (1989), Coello (2000), Ray and Saini (2001), and Ray and Liew (2003) also solved this problem using other optimization methods. The results of the PSO, QPSO, and G-QPSO approaches are presented in Table 4, with the best results in bold font. As can be seen, for this example the QPSO reached solutions close to those of all the G-QPSO approaches. The best mean and median values from the 50 runs performed were obtained using G-QPSO(1). However, the best solution was obtained using G-QPSO(2), with f(X) = 0.012665 (see Table 5); in this context, the best solution of G-QPSO(2) is just slightly better than the solution found by G-QPSO(1). The best result obtained by G-QPSO(2) for Example 2 was compared with results reported in the literature, as presented in Table 6. The best solution from the 50 runs obtained by G-QPSO(2) is superior to the best solution previously reported in the literature.

Table 4. Results of the PSO, QPSO, and G-QPSO approaches for the second example (objective function over 50 runs; best results in bold in the original).

Optimization method | Mean time (s) | Worst (maximum) | Best (minimum) | Mean | Median | Standard deviation
PSO | 9.54 | 0.071802 | 0.012857 | 0.019555 | 0.013373 | 0.011662
QPSO | 9.79 | 0.018127 | 0.012669 | 0.013854 | 0.013854 | 0.001341
G-QPSO(1) | 9.89 | 0.015869 | 0.012666 | 0.012996 | 0.012719 | 0.000628
G-QPSO(2) | 9.91 | 0.017759 | 0.012665 | 0.013524 | 0.012957 | 0.001268
G-QPSO(3) | 9.95 | 0.016182 | 0.012670 | 0.013236 | 0.012746 | 0.000833
Table 5. Best result using G-QPSO(2) for the second example.

Design variable (parameter) | G-QPSO(2)
x1 (d) | 0.051515
x2 (D) | 0.352529
x3 (N) | 11.538862
g1, Eq. (19) | -4.8341 × 10^-5
g2, Eq. (20) | -3.5774 × 10^-5
g3, Eq. (21) | -4.0455
g4, Eq. (22) | -0.73064
f(X), Eq. (18) | 0.012665
Table 6. Comparison of results for the design of a tension/compression spring.

Design variables | Arora (1989) | Belengundu (1982) | Coello (2000) | Ray and Saini (2001) | Ray and Liew (2003) | This paper, G-QPSO(2)
x1 (d) | 0.053396 | 0.0500000 | 0.051480 | 0.050417 | 0.0521602 | 0.051515
x2 (D) | 0.399180 | 0.3159000 | 0.351661 | 0.321532 | 0.3681587 | 0.352529
x3 (N) | 9.185400 | 14.250000 | 11.632201 | 13.979915 | 10.648442 | 11.538862
g1 | 0.000019 | -1.2672 × 10^-3 | -3.3366 × 10^-3 | -1.925 × 10^-3 | -7.4527 × 10^-9 | -4.8341 × 10^-5
g2 | -0.000018 | -0.1490 | -0.1357 | -0.1556 | -0.1314 | -3.5774 × 10^-5
g3 | -4.123842 | -3.9383 | -4.0263 | -3.8994 | -4.0758 | -4.0455
g4 | -0.698283 | -0.7561 | -0.7312 | -0.7520 | -0.7198 | -0.73064
f(X) | 0.0127302737 | 0.01273027 | 0.0127047834 | 0.0130602 | 0.012669249 | 0.012665

7. Conclusion

In this paper, new G-QPSO approaches were proposed and applied to solve engineering design problems. The possibility of exploiting the efficiency of QPSO combined with Gaussian distribution sequences was presented successfully, as illustrated by two case studies. The simulation results from 50 runs presented in this paper demonstrate that the G-QPSO(1) and G-QPSO(2) approaches tested are efficient methods for improving QPSO's ability to prevent premature convergence to local minima. The proposed G-QPSO approaches performed consistently well on the two case studies, with better results than previously published solutions for these problems. In terms of convergence, the simulation results show that G-QPSO(1) converges to solutions close to the best known solution and presents a small standard deviation.

The aim of future work is to investigate the use of G-QPSO for optimization in control systems, reliability engineering, and electric power systems. Furthermore, other relevant studies can be carried out, such as: (i) a comparative analysis of several QPSO approaches based on simulated annealing local search, and (ii) the design and test of adaptive penalty functions for constrained problems.
Acknowledgement

This work was supported by the National Council of Scientific and Technologic Development of Brazil - CNPq - under Grant 309646/2006-5/PQ.

References

Adly, A. A., & Abd-El-Hafiz, S. K. (2004). Field computation in non-linear magnetic media using particle swarm optimization. Journal of Magnetism and Magnetic Materials, 272–276(1), 690–692. Andrés, C., & Lozano, S. (2006). A particle swarm optimization algorithm for part-machine grouping. Robotics and Computer-Integrated Manufacturing, 22(5–6), 468–474. Arora, J. S. (1989). Introduction to optimum design. New York, NY, USA: McGraw-Hill. Belengundu, A. D. (1982). A study of mathematical programming methods for structural optimization. Department of Civil and Environmental Engineering, University of Iowa, Iowa, USA. Benioff, P. (1980). The computer as a physical system: A microscopic quantum mechanical Hamiltonian model of computers as represented by Turing machines. Journal of Statistical Physics, 22(5), 563–591. Bonabeau, E., Dorigo, M., & Theraulaz, G. (1999). Swarm intelligence: From natural to artificial systems. New York, USA: Oxford University Press. Bourjot, C., Chevier, V., & Thomas, V. (2003). A new swarm mechanism based on social spiders colonies: From web weaving to region detection. Web Intelligence and Agent Systems: An International Journal, 1(1), 47–64. Bulger, D., Baritopa, W. P., & Wood, G. R. (2003). Implementing pure adaptive search with Grover's quantum algorithm. Journal of Optimization Theory and Applications, 116(3), 517–529. Cai, Y., Sun, J., Wang, J., Ding, Y., Tian, N., Liao, X., et al. (2008). Optimizing the codon usage of synthetic gene with QPSO algorithm. Journal of Theoretical Biology, 254(1), 123–127. Cao, Y. J., & Wu, Q. H. (1997). Mechanical design optimization. In IEEE conference on evolutionary computation (pp. 443–446), Indianapolis, USA. Cao, Y. J., & Wu, Q. H. (1998). A cellular automata based genetic algorithm and its application in mechanical design optimization. In UKACC international conference on control (pp. 1593–1598), Swansea, Wales. Cheng, S. C., Lin, Y. T., & Huang, Y. M. (2009). Dynamic question generation system for web-based testing using particle swarm optimization. Expert Systems with Applications, 36(1), 616–624. Chen, D., & Zhao, C. (2009). Data-driven fuzzy clustering based on maximum entropy principle and PSO. Expert Systems with Applications, 36(1), 625–633. Chuanwen, J., & Bompard, E. (2005). A self-adaptive chaotic particle swarm algorithm for short term hydroelectric system scheduling in deregulated environment. Energy Conversion and Management, 46(17), 2689–2696. Clerc, M., & Kennedy, J. F. (2002). The particle swarm: Explosion, stability and convergence in a multi-dimensional complex space. IEEE Transactions on Evolutionary Computation, 6(1), 58–73. Coelho, L. S. (2008). A quantum particle swarm optimizer with chaotic mutation operator. Chaos, Solitons and Fractals, 37(5), 1409–1418. Coelho, L. S. (2009). An efficient particle swarm approach for mixed-integer programming in reliability-redundancy optimization applications. Reliability Engineering and System Safety, 94(4), 830–837. Coelho, L. S., & Alotto, P. (2008). Global optimization of electromagnetic devices using an exponential quantum-behaved particle swarm optimizer. IEEE Transactions on Magnetics, 44(6), 1074–1077. Coelho, L. S., & Herrera, B. M. (2007). Fuzzy identification based on a chaotic particle swarm optimization approach applied to a nonlinear yo-yo motion system.
IEEE Transactions on Industrial Electronics, 54(6), 3234–3245. Coelho, L. S., & Lee, C.-S. (2008). Solving economic load dispatch problems in power systems using chaotic and Gaussian particle swarm optimization approaches. International Journal of Electrical Power & Energy Systems, 30(5), 297–307. Coelho, L. S., & Mariani, V. C. (2008). Particle swarm approach based on quantum mechanics and harmonic oscillator potential well for economic load dispatch with valve-point effects. Energy Conversion and Management, 49(11), 3080–3085. Coelho, J. P., Oliveira, P. N. M., & Cunha, J. B. (2005). Greenhouse air temperature predictive control using the particle swarm optimization algorithm. Computers and Electronics in Agriculture, 49(3), 330–344. Coelho, L. S., & Sierakowski, C. A. (2008). A software tool for teaching of particle swarm optimization fundamentals. Advances in Engineering Software, 39(11), 877–887. Coello, C. A. C. (2000). Use of a self-adaptive penalty approach for engineering optimization problems. Computers in Industry, 41(2), 113–127. Coello, C. A. C. (2002). Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state of the art. Computer Methods in Applied Mechanics and Engineering, 191(11–12), 1245–1287. Deb, K. (1997). GeneAS: A robust optimal design technique for mechanical component design. In D. Dasgupta & Z. Michalewicz (Eds.), Evolutionary algorithms in engineering applications (pp. 497–514). Berlin: Springer-Verlag. Deb, K. (2000). An efficient constraint handling method for genetic algorithms. Computer Methods in Applied Mechanics and Engineering, 186(2–4), 311–338. Deutsch, D. (1985). Quantum theory, the Church–Turing principle and the universal quantum computer. Proceedings of the Royal Society of London, A400, 97–117.
Dorigo, M., & Di Caro, G. (1999). The ant colony optimization meta-heuristic. In D. Corne, M. Dorigo, & F. Glover (Eds.), New ideas in optimization (pp. 11–32). NY, USA: McGraw-Hill. Dorigo, M., & Stützle, T. (2004). Ant colony optimization. A Bradford book. Cambridge, MA: The MIT Press. Eberhart, R. C., & Kennedy, J. F. (1995). A new optimizer using particle swarm theory. In Proceedings of 7th international symposium on micro machine and human science (pp. 39–43), Japan. Elegbede, C. (2005). Structural reliability assessment based on particles swarm optimization. Structural Safety, 27(2), 171–186. Gaing, Z.-L. (2003). Particle swarm optimization to solving the economic dispatch considering the generator constraints. IEEE Transactions on Power Systems, 18(3), 1187–1195. Goldberg, D. E. (1989). Genetic algorithms in search optimization and machine learning. Reading, MA, USA: Addison Wesley. Grover, L. K. (1996). A fast quantum mechanical algorithm for database search. In Proceedings of 28th ACM symposium on theory of computing (pp. 212–219), Philadelphia, Pennsylvania, USA. Han, K.-H., & Kim, J.-H. (2002). Quantum-inspired evolutionary algorithm for a class of combinatorial optimization. IEEE Transactions on Evolutionary Computation, 6(6), 580–593. Higashi, N., & Iba, H. (2003). Particle swarm optimization with Gaussian mutation. In Proceedings of the IEEE swarm intelligence symposium (pp. 72–79), Indianapolis, IN, USA. Hirvensalo, M. (2002). Quantum computing – Facts and folklore. Natural Computing, 1(1), 135–155. Hogg, T., & Portnov, D. S. (2000). Quantum optimization. Information Sciences, 128(3–4), 181–197. Huang, C. J., Chuang, Y. T., & Yang, D. X. (2008). Implementation of call admission control scheme in next generation mobile communication networks using particle swarm optimization and fuzzy logic systems. Expert Systems with Applications, 35(3), 1246–1251. Hu, X., Eberhart, R. C., & Shi, Y. (2003). Engineering optimization with particle swarm. In IEEE conference on swarm intelligence, Indianapolis, IN, USA. Kannan, B. K., & Kramer, S. N. (1994). An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. Journal of Mechanical Design, Transactions of the ASME, 116, 318–320. Kennedy, J. F. (2003). Bare bones particle swarms. In Proceedings of the IEEE swarm intelligence symposium (pp. 80–87), Indianapolis, IN, USA. Kennedy, J. F., & Eberhart, R. C. (1995). Particle swarm optimization. In Proceedings of IEEE international conference on neural networks (pp. 1942–1948), Perth, Australia. Kennedy, J. F., Eberhart, R. C., & Shi, Y. (2001). Swarm intelligence. San Francisco, CA, USA: Morgan Kaufmann. Krohling, R. A. (2004). Gaussian Swarm: A novel particle swarm optimization algorithm. In Proceedings of the IEEE conference on cybernetics and intelligent systems (CIS) (pp. 372–376), Singapore. Krohling, R. A., & Coelho, L. S. (2006a). PSO-E: Particle swarm with exponential distribution. In IEEE world congress on computational intelligence, proceedings of IEEE congress on evolutionary computation (pp. 5577–5582), Vancouver, Canada. Krohling, R. A., & Coelho, L. S. (2006b). Coevolutionary particle swarm optimization using Gaussian distribution for solving constrained optimization problems. IEEE Transactions on Systems, Man and Cybernetics – Part B: Cybernetics, 36(6), 1407–1416. Levin, F. S. (2002). An introduction to quantum theory. Cambridge University Press. Lin, Y.-C., Wang, F.-S., & Hwang, K.-S. (1999).
A hybrid method of evolutionary algorithms for mixed-integer nonlinear optimization problems. In Proceedings of congress on evolutionary computation (Vol. 3, pp. 2159–2166), Washington, DC, USA. Lin, S. W., Ying, K. C., Chen, S. C., & Lee, Z. J. (2008). Particle swarm optimization for parameter determination and feature selection of support vector machines. Expert Systems with Applications, 35(4), 1817–1824. Liu, J., Xu, W., & Sun, J. (2005). Quantum-behaved particle swarm optimization with mutation operator. In Proceedings of 17th international conference on tools with artificial intelligence, Hong Kong, China. Maitra, M., & Chatterjee, A. (2008). A hybrid cooperative-comprehensive learning based PSO algorithm for image segmentation using multilevel thresholding. Expert Systems with Applications, 34(2), 1341–1350. Marinakis, Y., Marinaki, M., & Dounias, G. (2008). Particle swarm optimization for pap-smear diagnosis. Expert Systems with Applications, 35(4), 1645–1656. Michalewicz, Z., & Schoenauer, M. (1996). Evolutionary algorithms for constrained parameter optimization problems. Evolutionary Computation, 4(1), 1–32. Moustafa, Y. G., Mekhamer, S. F., Moustafa, Y. G., El-Sherif, N., & Mansour, M. M. (2004). A modified particle swarm optimizer applied to the solution of the economic dispatch problem. In International conference on electrical, electronic, and computer engineering, ICEEC (pp. 724–731), Cairo, Egypt. Narayanan, A., & Moore, M. (1996). Quantum-inspired genetic algorithms. In Proceedings of IEEE international conference on evolutionary computation (pp. 61– 66), Nagoya, Japan. Pang, X.-F. (2005). Quantum mechanics in nonlinear systems. River Edge, NJ, USA: World Scientific Publishing Company. Protopescu, V., & Barhen, J. (2002). Solving a class of continuous global optimization problems using quantum algorithms. Physics Letters A, 296, 9–14. Rao, S. S. (1996). Engineering optimization (3rd ed.). John Wiley and Sons.
Ray, T., & Liew, K. M. (2003). Society and civilization: An optimization algorithm based on the simulation of social behavior. IEEE Transactions on Evolutionary Computation, 7(4), 386–396. Ray, T., & Saini, P. (2001). Engineering design optimization using a swarm with an intelligent information sharing among individuals. Engineering Optimization, 33(3), 735–748. Sandgren, E. (1988). Nonlinear integer and discrete programming in mechanical design. In Proceedings of the ASME design technology conference (pp. 95–105), Kissimmee, FL, USA. Schmidt, H., & Thierauf, G. (2005). A combined heuristic optimization technique. Advances in Engineering Software, 36(1), 11–19. Schweizer, W. (2001). Numerical quantum dynamics. Hingham, MA, USA: Kluwer Academic Publishers. Secrest, B. R., & Lamont, G. B. (2003). Visualizing particle swarm optimization – Gaussian particle swarm optimization. In Proceedings of the IEEE swarm intelligence symposium (pp. 198–204), Indianapolis, IN. Shi, Y., & Eberhart, R. C. (2000). Experimental study of particle swarm optimization. In Proceedings of fourth world conference on systems, cybernetics and informatics, Orlando, FL, USA. Shor, P. W. (1994). Algorithms for quantum computation: Discrete logarithms and factoring. In Proceedings of 35th annual symposium on foundations of computer science (pp. 124–134), Santa Fe, NM, USA. Sierakowski, C. A., & Coelho, L. S. (2005). Study of two swarm intelligence techniques for path planning of mobile robots. In Proceedings of 16th triennial world congress of the IFAC, Prague, Czech Republic.
Sousa, T., Silva, A., & Neves, A. (2004). Particle swarm based data mining algorithms for classification tasks. Parallel Computing, 30(5–6), 767–783. Spector, L., Barnum, H., Bernstein, H. J., & Swamy, N. (1999). Finding a better-thanclassical quantum AND/OR algorithm using genetic programming. In Proceedings of congress on evolutionary computation (pp. 2239–2246), Washington, DC, USA. Sun, J., Feng, B., & Xu, W. (2004). Particle swarm optimization with particles having quantum behavior. In Proceedings of congress on evolutionary computation (pp. 325–331), Portland, OR, USA. Sun, J., Xu, W., & Feng, B. (2005). Adaptive parameter control for quantum-behaved particle swarm optimization on individual level. In Proceedings of IEEE international conference on systems, man and cybernetics (pp. 3049–3054), Big Island, HI, USA. Tsou, C. S. (2008). Multi-objective inventory planning using MOPSO and TOPSIS. Expert Systems with Applications, 35(1-2), 136–142. Vedral, V., & Plenio, M. (1998). Basics of quantum computation. Progress in Quantum Electronics, 22(1), 1–39. Wang, L., Tang, F., & Wu, H. (2005). Hybrid genetic algorithm based on quantum computing for numerical optimization and parameter estimation. Applied Mathematics and Computation, 171(2), 1141–1156. Xi, M., Sun, J., & Xu, W. (2008). An improved quantum-behaved particle swarm optimization algorithm with weighted mean best position. Applied Mathematics and Computation, 205(2), 751–759. Zhang, C., & Wang, H. P. (1993). Mixed-discrete nonlinear optimization with simulated annealing. Engineering Optimization, 17(3), 263–280.