Comput Optim Appl DOI 10.1007/s10589-014-9637-0

Advanced particle swarm assisted genetic algorithm for constrained optimization problems

Manoj Kumar Dhadwal · Sung Nam Jung · Chang Joo Kim

Received: 15 July 2013 © Springer Science+Business Media New York 2014

Abstract A novel hybrid evolutionary algorithm is developed based on particle swarm optimization (PSO) and genetic algorithms (GAs). The PSO phase enhances the worst solutions by using the global-local best inertia weight and acceleration coefficients to increase the efficiency. In the genetic algorithm phase, a new rank-based multi-parent crossover is used by modifying the crossover and mutation operators, which favors both local and global exploration simultaneously. In addition, Euclidean distance-based niching is implemented in the replacement phase of the GA to maintain the population diversity. To avoid local optimum solutions, a stagnation check is performed and the solution is randomized when needed. The constraints are handled using an effective feasible population based approach. The parameters are self-adaptive, requiring no tuning for particular problem types. Numerical simulations are performed first to evaluate the current algorithm on a set of 24 benchmark constrained nonlinear optimization problems. The results demonstrate reasonable correlation and high quality optimum solutions with significantly fewer function evaluations compared with other state-of-the-art heuristic-based optimization algorithms. The algorithm is also applied to various nonlinear engineering optimization problems and shown to be excellent in searching for the global optimal solutions.

Keywords Evolutionary computation · Particle swarm · Genetic algorithm · Rank-based multi-parent crossover · Constrained optimization · Feasible population

M. K. Dhadwal · S. N. Jung (B) · C. J. Kim Department of Aerospace Information Engineering, Konkuk University, Seoul, Korea e-mail: [email protected] M. K. Dhadwal e-mail: [email protected] C. J. Kim e-mail: [email protected]


1 Introduction

In recent decades, evolutionary computation has emerged as an efficient and powerful technique for solving a variety of design optimization problems. Real world problems are often highly nonlinear, convex or non-convex, and smooth or non-smooth, involve a large number of design variables and constraints, and frequently exhibit multiple local optima. In a conventional sense, optimization methods can be classified into derivative-based and derivative-free algorithms. Derivative-based algorithms utilize the gradient information of a continuous function to find the optimum solutions. For functions with discontinuities, the derivatives may not exist, and hence derivative-based algorithms can fail to reach the optimum solution. Moreover, these algorithms depend strongly on the choice of the starting point and therefore cannot guarantee convergence to a global optimum. Besides, computing the derivative, either exactly or approximately, increases the computational cost. Derivative-free algorithms do not require derivative or gradient information but rely on function evaluations alone. They are population-based and considered to be effective in finding the global optimum. Particle swarm optimization (PSO) and genetic algorithms belong to this category, known as metaheuristic algorithms [1,2].

The two main features of metaheuristic algorithms are intensification (exploitation) and diversification (exploration) [3]. The latter is intended to explore the search space globally, whereas the former is meant to direct the search in a small region of the search space. Diversification is generally employed to avoid getting trapped near local optima by randomizing the population. Intensification, on the other hand, speeds up the convergence to reach the best possible solutions.

Numerous algorithms have been proposed for global optimization. One of the popular metaheuristic algorithms is the genetic algorithm (GA), originally introduced by Holland [4] in 1975. GAs are inspired by natural evolution, which involves selection, crossover, mutation and inheritance. They are typically based on a binary representation of the solutions, which are modified in each generation or iteration to improve the value of the objective function. GAs have been extended to handle floating-point variables directly using a real number representation, leading to the so-called real-coded GAs (RCGAs) [5]. GAs typically find the global optimum solution but suffer from a slow convergence rate. The efficiency of RCGAs can be improved using multi-parent crossover operators instead of two-parent crossover operators. Some of the well-known multi-parent crossover operators are the unimodal normal distribution crossover (UNDX) [6], the parent centric crossover (PCX) [7], the triangular crossover [8] and the multi-parent crossover (MPC) [9]. The UNDX performs better for problems with epistasis. However, it cannot generate offspring in some areas when the population size is small. The UNDX may therefore fail to reach the optimal point, if the optimal point lies in one of these areas, unless a large population size is used [6]. The PCX converges quickly to the optimum by selecting the parent with the best fitness [7]. However, a GA using PCX as the crossover operator has proven inadequate for separable multimodal problems with higher dimensionality, and its performance depends on the problem type [10].
The triangular crossover [8] performs well for problems whose optimal solution lies
at the feasible region boundary. The MPC was proposed by Elsayed et al. [9], motivated by the heuristic crossover and the mutation operator of differential evolution. The MPC, accompanied by a randomized mutation operator, has been implemented in a GA and tested on a set of constrained optimization problems, showing superior performance compared to adaptive differential evolution.

In RCGAs, the mutation operator helps in maintaining the diversity of the population. The mutation operator depends on two factors: first, the probability or mutation rate; and second, the amount of perturbation. Several mutation operators are available in the literature for RCGAs. Among them, polynomial mutation [11], based on the polynomial distribution function, has been widely and successfully used for solving single and multi-objective optimization problems. Deb [11] and Deb and Srivastava [12] also proposed a self-adaptive mutation probability varying with the number of generations. The polynomial mutation with variable mutation probability has been selected for use in the current study.

In 1995, Kennedy and Eberhart [13,14] developed the PSO algorithm based on swarm intelligence, such as a flock of birds flying in formation and searching for food. The velocity and position of the population are updated based on the personal best (exploration of the search space) and global best positions (intensified search in the direction of the global best). The balance between exploration and exploitation is controlled by the cognitive and social parameters, respectively. The inertia weight is used to preserve the previous velocity. PSO is known to be efficient in obtaining the optimal solution, but it shows a tendency toward premature convergence and may get trapped in a local optimum.

To exploit the best features of both PSO and GA, several hybridization procedures have been attempted by dividing the population between the two algorithms, which are executed either sequentially or simultaneously. Shi et al. [15,16] proposed a PSO-GA based hybrid algorithm where both algorithms were run simultaneously using different sub-populations. The approach was tested on a set of unconstrained optimization problems and showed good performance both in terms of convergence rate and the capability to locate the global optimum. Takahama et al. [17] proposed an ε-constrained hybrid of PSO and GA for constrained optimization problems. The constraints were relaxed using the ε-constrained method in the initial iterations, and the relaxation was gradually reduced to zero after a fixed number of control iterations. The hybrid algorithm exhibited relatively good performance, though it was tested only on a limited set of three problems. Moreover, the efficiency of the crossover and mutation operators was not addressed seriously. Zhang et al. [18] proposed a hybrid of GA and PSO with a modified arithmetic crossover and a mutation operator based on the PSO. An increase in convergence rate was obtained, although the performance of the algorithm on other multimodal problems and on constrained optimization problems was not fully investigated. Wahed et al. [19] integrated PSO and GA to solve nonlinear unconstrained optimization problems. The solutions were updated using the PSO followed by single point crossover and real-valued mutation operations. The majority of the efforts on hybridization of PSO and GA mentioned above utilized existing simple crossover and mutation operators.
The collective effects of the crossover operator, constraint handling schemes, and niching relevant to the
efficiency of the hybrid of PSO and GA have not been investigated thoroughly. There is a strong need to combine suitable crossover and mutation operators with a competent constraint handling technique to develop an efficient and powerful hybrid method. In the present study, an advanced particle swarm assisted genetic algorithm (PSGA) is proposed to address the above issues. To handle the constraints effectively, the feasible population based relaxation scheme proposed by Zhang [20] is implemented. The larger constraint relaxation in the initial generations helps in the exploration of the solutions. Niching is implemented in the GA based on the Euclidean distance between the offspring and parent solutions to avoid previously-visited regions of the search space. The unique features of the present work include: (a) only the worst solutions are enhanced in the particle swarm phase, where a global-local best inertia weight (GBIW) is used to speed up the convergence; (b) to improve the convergence and stability of RCGAs, a new rank-based multi-parent crossover (MPC) operator is introduced based on the parents' ranks, partially adopted from differential evolution; (c) a stagnation check is performed and the solutions are randomized if trapped in a local optimum; (d) the parameters are self-adaptive, requiring no adjustments for particular problem characteristics. The efficiency and efficacy of the proposed algorithm are investigated on a set of benchmark constrained problems as well as complex engineering optimization problems with continuous/discrete and real/integer variables.

2 Optimization problem description

The constrained nonlinear optimization problem is defined as:

$$
\begin{aligned}
\text{Minimize} \quad & f(\mathbf{x}) \\
\text{subject to} \quad & g_j(\mathbf{x}) \le 0, \quad j = 1, \ldots, m \\
& h_k(\mathbf{x}) = 0, \quad k = 1, \ldots, p \\
& x_i^l \le x_i \le x_i^u, \quad i = 1, \ldots, n
\end{aligned}
\tag{2.1}
$$

where f(x) is the objective function to be minimized, g_j(x) are the inequality constraints, h_k(x) are the equality constraints, and x is an n-dimensional vector with x_i^l and x_i^u as the lower and upper bounds, respectively. The constraint violation φ(x) is defined as the absolute sum of all constraint violations, given by

$$
\phi(\mathbf{x}) = \sum_{j=1}^{m} \max\left[0, g_j(\mathbf{x})\right] + \sum_{k=1}^{p} \left| h_k(\mathbf{x}) \right|
\tag{2.2}
$$
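For illustration, the following is a minimal Python sketch of the constraint violation measure of Eq. (2.2); it is not the authors' code, and the example problem (a two-variable quadratic objective with a single inequality constraint) is a hypothetical choice made only for this illustration.

def constraint_violation(x, ineq_constraints=(), eq_constraints=()):
    # phi(x) = sum_j max(0, g_j(x)) + sum_k |h_k(x)|, as in Eq. (2.2)
    phi = sum(max(0.0, g(x)) for g in ineq_constraints)
    phi += sum(abs(h(x)) for h in eq_constraints)
    return phi

# Hypothetical example problem: minimize f(x) = x1^2 + x2^2
# subject to g(x) = 1 - x1 - x2 <= 0 (i.e., x1 + x2 >= 1).
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: 1.0 - x[0] - x[1]

print(constraint_violation([0.3, 0.3], ineq_constraints=[g]))  # 0.4 -> infeasible
print(constraint_violation([0.6, 0.6], ineq_constraints=[g]))  # 0.0 -> feasible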

A value of φ(x) greater than zero implies violation of the constraints, in other words an infeasible solution. A number of constraint handling methods for evolutionary algorithms are available in the literature. Among them, two methods have been developed recently and have been demonstrated to be quite effective in solving constrained optimization problems. Both methods involve a relaxation of the constraints which gradually approaches zero with the increase in the number of
iterations. The first one is proposed by Takahama et al. [17], based on the ε-constrained method, primarily for problems having equality constraints. The second method is proposed by Zhang [20], hereafter referred to as the μ-constrained method, based on the fraction of feasible individuals present during a generation. The ε-constrained method requires additional parameters which may affect the efficiency and stability of the optimization algorithm. On the contrary, the μ-constrained method is parameter-free and only requires the fraction of feasible individuals F_feas, which is updated in each optimization iteration t according to the following equation:

$$
\mu(t + 1) = \mu(t)\left(1 + F_{feas}\right)
\tag{2.3}
$$

As the number of feasible individuals approaches the total population size, the constraint violation gradually reduces to zero, thereby moving the individuals toward the feasible region. In the present work, the μ-constrained relaxation scheme is used along with the feasibility-based approach to rank the population, as described below [11].

• A feasible solution is preferred over an infeasible one.
• When two feasible solutions are compared, the one with the better objective function value is preferred.
• When two infeasible solutions are compared, the one with the smaller constraint violation is preferred.

The above statements can be written in mathematical form as:

$$
\left( f_1(\mathbf{x}), \phi_1(\mathbf{x}) \right) < \left( f_2(\mathbf{x}), \phi_2(\mathbf{x}) \right)
\quad \text{if} \quad
\begin{cases}
\phi_1(\mathbf{x}) \le \mu(t),\; \phi_2(\mathbf{x}) > \mu(t) \\
\phi_1(\mathbf{x}), \phi_2(\mathbf{x}) \le \mu(t),\; f_1(\mathbf{x}) < f_2(\mathbf{x}) \\
\phi_1(\mathbf{x}), \phi_2(\mathbf{x}) > \mu(t),\; \phi_1(\mathbf{x}) < \phi_2(\mathbf{x})
\end{cases}
\tag{2.4}
$$

3 Particle swarm assisted genetic algorithm (PSGA)

The PSGA is a hybrid of an improved PSO and RCGAs. Since the particle swarm phase is used only for the worst solutions, this phase assists the GA phase essentially by improving the efficiency of the overall procedure. In the first step, the population is randomly initialized over the search space with a uniform distribution. The population then moves through two sequential phases to find the best feasible solution. The first phase involves the enhancement of the population with the worst fitness using the PSO. The objective function and constraint violation of the population are evaluated, and the individuals are ranked using a pair-wise comparison, as given in Eq. (2.4). The population then proceeds to the next phase, the GA. The parents are selected using a binary tournament selection scheme. The crossover and mutation operations are then performed, and the population is again ranked according to the values of the objective function and constraint violation. The solutions are directed to the stagnation check phase, where a solution is randomized if there is little or no change in the value of the corresponding objective function. The iteration loop is continued until the termination criteria are satisfied. The termination criteria can be the maximum number of function evaluations or no improvement in the objective function for a successive number of generations.
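To make the pair-wise ranking rule of Eq. (2.4) concrete, here is a minimal Python sketch; it is an illustration under the rules stated above, not the authors' code, and the function name and argument layout are assumptions made for this example.

def is_better(f1, phi1, f2, phi2, mu):
    # Pair-wise comparison of Eq. (2.4): returns True if solution 1 is preferred.
    # f1, f2 : objective values; phi1, phi2 : constraint violations (Eq. (2.2));
    # mu     : current relaxation threshold mu(t) of Eq. (2.3).
    feas1, feas2 = phi1 <= mu, phi2 <= mu
    if feas1 and not feas2:          # feasible (within relaxation) beats infeasible
        return True
    if feas1 and feas2:              # both feasible: compare objective values
        return f1 < f2
    if not feas1 and not feas2:      # both infeasible: compare violations
        return phi1 < phi2
    return False                     # solution 1 infeasible, solution 2 feasible

# Example: with mu = 0.1, a solution with phi = 0.05 and a worse objective
# still beats a solution with phi = 0.5.
print(is_better(f1=3.0, phi1=0.05, f2=1.0, phi2=0.5, mu=0.1))  # True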

Fig. 1 Flowchart of the proposed PSGA optimization algorithm (initialization of population, objective function/constraints evaluation, termination check, particle swarm phase, RMP-GA phase, stagnation check, and population update)

The constraints are handled using the separation of the feasible population. The constraint violation is initially relaxed according to Eq. (2.3). At the initial generation, the individuals with a constraint violation less than the median constraint violation of the population are considered feasible. As the generations increase, the relaxation of the constraint violation is decreased, pushing the individuals towards the feasible region. The unique feature of the constraint handling scheme is that the parameters are self-adaptive and updated automatically with the generations. To maintain diversity in the population and to avoid re-evaluating the same area of the search space, a niching strategy is applied based on the Euclidean distance between the parent and the offspring: if the offspring lies within a critical distance of the parent, the offspring is rejected. The flowchart for PSGA is shown in Fig. 1. The particle swarm phase and the genetic algorithm employed in the current approach are described in detail in the next sections.

3.1 Particle swarm phase

PSO was introduced by Eberhart and Kennedy [14], inspired by the swarm behavior of a flock of birds flying in formation in search of food. The movement of each particle or individual is controlled by its social and cognitive abilities. The velocity and position of the worst individuals are updated based on the current personal best position (cognitive influence) and the global best position (social influence) achieved in the whole swarm. A limit on the maximum velocity is imposed considering the bounds of the design variables. The velocity and the position are updated using Eqs. (3.1) and (3.2), respectively.

$$
v_{i,j}^{t+1} = \omega^{t} v_{i,j}^{t}
+ c_1 r_1 \left( x_{i,j}^{Pbest} - x_{i,j}^{t} \right)
+ c_2 r_2 \left( x_{j}^{Gbest} - x_{i,j}^{t} \right)
\tag{3.1}
$$

$$
x_{i,j}^{t+1} = x_{i,j}^{t} + v_{i,j}^{t+1}
\tag{3.2}
$$

where ω^t is the inertia weight at the t-th iteration, which determines how much of the previous velocity is conserved, r_1 and r_2 are uniformly distributed random numbers in the range [0, 1], and c_1 and c_2 are the cognitive and social acceleration coefficients, respectively. The superscript Pbest indicates the personal best of the i-th solution, whereas Gbest indicates the global best among all solutions up to the t-th iteration.

3.1.1 Inertia weight

The GBIW developed by Arumugam and Rao [21] is used in the current approach. This method depends on the ratio of the global best objective function value to the personal best objective function value, as shown in Eq. (3.3), and is used to improve the efficiency of the algorithm. As the ratio approaches unity, the inertia weight reduces to a very low value, limiting the search to a small area in the vicinity of the optimal solution. The limits on the velocity and the position of the particles are imposed based on the upper and lower bounds of the design variables.

$$
\omega^{t} = 1.1 - \frac{Gbest}{Pbest_{avg}}
\tag{3.3}
$$

where Pbest_avg is the average of the personal best objective function values of all the particles, and Gbest is the best objective function value found in the whole swarm up to the t-th iteration.

3.1.2 Acceleration coefficients

The acceleration coefficients of each particle are varied based on the ratio of the global best to the personal best value of the objective function, as suggested by Arumugam and Rao [21]. Both the cognitive and social acceleration coefficients are set to the same value, as given by Eq. (3.4):

$$
c = c_1 = c_2 = 1.0 + \frac{Gbest}{Pbest_{i}}
\tag{3.4}
$$

As the personal best approaches the global best, the value of the acceleration coefficients is increased from 1.0 to 2.0. This helps in balancing the local and global search in the particle swarm phase. PSO typically has a faster rate of convergence, but it has a tendency to converge to a local optimum solution if multiple optima are present. In the present algorithm, this problem is avoided by using the parameters based on the global-local best objective function values. It must be noted that the best solutions are not updated in this phase.
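A minimal Python sketch of the self-adaptive parameters of Eqs. (3.3) and (3.4) follows; the function names, and the assumption of a minimization problem with positive objective values, are illustrative choices and are not taken from the paper.

def inertia_weight(gbest_value, pbest_values):
    # Global-local best inertia weight, Eq. (3.3): omega = 1.1 - Gbest / Pbest_avg
    pbest_avg = sum(pbest_values) / len(pbest_values)
    return 1.1 - gbest_value / pbest_avg

def acceleration_coefficient(gbest_value, pbest_value_i):
    # Self-adaptive acceleration coefficients, Eq. (3.4): c1 = c2 = 1.0 + Gbest / Pbest_i
    return 1.0 + gbest_value / pbest_value_i

# Illustrative values (assuming minimization with positive objective values):
pbest = [2.0, 4.0, 8.0]          # personal best objective values of the swarm
gbest = 1.5                      # global best objective value found so far
print(inertia_weight(gbest, pbest))          # 1.1 - 1.5/4.667 = approx. 0.779
print(acceleration_coefficient(gbest, 2.0))  # 1.0 + 1.5/2.0 = 1.75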


This phase essentially contributes to the overall efficiency of the hybrid algorithm. The pseudocode for the particle swarm phase is presented in Algorithm 1.

Algorithm 1 Particle swarm phase
1: Initialize the population position and velocity at the beginning of the algorithm
2: Update inertia weight in each iteration, according to Eq. (3.3)
3: for i = 1 to PopulationSize do
4:   Update acceleration coefficients, according to Eq. (3.4)
5:   for j = 1 to No. of Dimensions do
6:     Update the velocity, according to Eq. (3.1)
7:     Update the position, according to Eq. (3.2)
8:   end for
9:   if f(x_i)
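Since this excerpt breaks off mid-way through Algorithm 1, the following is only a minimal Python sketch of the particle swarm phase as described by Eqs. (3.1)–(3.4) above, not the authors' implementation; the personal-best update at the end (the continuation of line 9) and the velocity clamping details are assumptions made for this sketch.

import random

def particle_swarm_phase(positions, velocities, pbest_pos, pbest_val,
                         gbest_pos, gbest_val, objective, lower, upper):
    # One particle swarm iteration over the worst-ranked solutions, per Eqs. (3.1)-(3.4).
    # positions, velocities, pbest_pos : lists of lists (one entry per particle);
    # pbest_val : personal best objective values; gbest_pos/gbest_val : global best;
    # lower, upper : design variable bounds, used to clamp velocities and positions.
    n_dim = len(lower)
    v_max = [u - l for l, u in zip(lower, upper)]                  # assumed velocity limit
    omega = 1.1 - gbest_val / (sum(pbest_val) / len(pbest_val))    # Eq. (3.3)
    for i in range(len(positions)):
        c = 1.0 + gbest_val / pbest_val[i]                         # Eq. (3.4), c1 = c2 = c
        for j in range(n_dim):
            r1, r2 = random.random(), random.random()
            velocities[i][j] = (omega * velocities[i][j]
                                + c * r1 * (pbest_pos[i][j] - positions[i][j])
                                + c * r2 * (gbest_pos[j] - positions[i][j]))  # Eq. (3.1)
            velocities[i][j] = max(-v_max[j], min(v_max[j], velocities[i][j]))
            positions[i][j] = positions[i][j] + velocities[i][j]              # Eq. (3.2)
            positions[i][j] = max(lower[j], min(upper[j], positions[i][j]))
        # Assumed completion of line 9 of Algorithm 1: update the personal best
        # (a full implementation would use the feasibility rule of Eq. (2.4)).
        f_new = objective(positions[i])
        if f_new < pbest_val[i]:
            pbest_val[i] = f_new
            pbest_pos[i] = list(positions[i])

Per the description above, only the worst-ranked solutions are passed through this phase and the best solutions are left untouched, so the argument lists would hold just that subset of the population.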
