Solving Constrained Optimization Problems by the ε Constrained Particle Swarm Optimizer with Adaptive Velocity Limit Control

Tetsuyuki Takahama
Department of Intelligent Systems, Hiroshima City University, Asaminami-ku, Hiroshima, 731-3194 Japan
[email protected]

Setsuko Sakai
Faculty of Commercial Sciences, Hiroshima Shudo University, Asaminami-ku, Hiroshima, 731-3195 Japan
[email protected]
Abstract— The ε constrained method is an algorithm transformation method, which can convert algorithms for unconstrained problems into algorithms for constrained problems using the ε level comparison, which compares search points based on their constraint violation. We proposed the ε constrained particle swarm optimizer εPSO, which is the combination of the ε constrained method and particle swarm optimization. In the εPSO, agents that satisfy the constraints move to optimize the objective function, and agents that do not satisfy the constraints move to satisfy the constraints. Sometimes, however, the velocity of agents becomes too large and they fly away from the feasible region. In this study, to solve this problem, we propose to divide the agents into groups and to control the maximum velocity of the agents adaptively by comparing the movement of the agents in each group. The effectiveness of the improved εPSO is shown by comparing it with various methods on well-known nonlinear constrained problems.

Keywords—constrained optimization, nonlinear optimization, particle swarm optimization, ε constrained method, α constrained method

I. INTRODUCTION

Constrained optimization problems, especially nonlinear optimization problems, where objective functions are minimized under given constraints, are very important and frequently appear in the real world. In this study, the following optimization problem (P) with inequality constraints, equality constraints, upper bound constraints and lower bound constraints will be discussed:

(P)  minimize    f(x)
     subject to  g_j(x) ≤ 0,       j = 1, ..., q
                 h_j(x) = 0,       j = q + 1, ..., m
                 l_i ≤ x_i ≤ u_i,  i = 1, ..., n,          (1)

where x = (x_1, x_2, ..., x_n) is an n-dimensional vector, f(x) is an objective function, g_j(x) ≤ 0 and h_j(x) = 0 are q inequality constraints and m − q equality constraints, respectively. The functions f, g_j and h_j are linear or nonlinear real-valued functions. The values u_i and l_i are the upper bound and the lower bound of x_i, respectively. Also, let the feasible space in which every point satisfies all constraints be denoted by F, and the search space in which every point satisfies the upper and lower bound constraints be denoted by S (⊃ F).

1-4244-0023-6/06/$20.00 © 2006 IEEE          CIS 2006

There exist many studies on solving constrained optimization problems using evolutionary computation[1], [2] and particle swarm optimization[3]. These studies can be classified into several categories according to the way the constraints are treated, as follows:

(1) Constraints are only used to check whether a search point is feasible or not[4]. Approaches in this category are usually called death penalty methods. The search process begins with one or more feasible points and continues to search for new points within the feasible region. When a newly generated search point is not feasible, the point is repaired or discarded. In this category, generating initial feasible points is difficult and computationally demanding when the feasible region is very small. If the feasible region is extremely small, as in problems with equality constraints, it is almost impossible to find initial feasible points.

(2) The constraint violation, which is the sum of the violations of all constraint functions, is combined with the objective function. The penalty function method is in this category[5], [6], [7], [8], [9], [10]. In the penalty function method, an extended objective function is defined by adding the constraint violation to the objective function as a penalty, and the optimization of the objective function and the constraint violation is realized by optimizing the extended objective function. The main difficulty of the penalty function method is selecting an appropriate value for the penalty coefficient, which adjusts the strength of the penalty. If the penalty coefficient is large, feasible solutions can be obtained, but the optimization of the objective function will be insufficient. Conversely, if the penalty coefficient is small, high quality (but infeasible) solutions can be obtained, as it becomes difficult to decrease the constraint violation.

(3) The constraint violation and the objective function are used separately. In this category, both are optimized by a lexicographic order in which the constraint violation precedes the objective function. Deb[11] proposed a method in which an extended objective function that realizes the lexicographic ordering is used. Takahama and Sakai proposed the α constrained method[12] and the ε constrained method[13], which adopt a lexicographic ordering with relaxation of the constraints. Runarsson and Yao[14] proposed the stochastic ranking method, which uses a stochastic lexicographic order that ignores the constraint violation with some probability. These methods have been successfully applied to various problems.

(4) The constraints and the objective function are optimized by multiobjective optimization methods. In this category, constrained optimization problems are solved as multiobjective optimization problems in which the objective function and the constraint functions are the objectives to be optimized[15], [16], [17], [18]. In many cases, however, solving multiobjective optimization problems is a more difficult and expensive task than solving single objective optimization problems.

It has been shown that the methods in the third category perform better than methods in the other categories on many benchmark problems. In particular, the α and ε constrained methods are quite new and unique approaches to constrained optimization. We call these methods algorithm transformation methods, because they do not convert the objective function but instead convert an algorithm for unconstrained optimization into an algorithm for constrained optimization, by replacing the ordinary comparisons in direct search methods with the α level and ε level comparisons. These methods can be applied to various unconstrained direct search methods to obtain constrained optimization algorithms. We showed the advantage of the α constrained methods by applying them to Powell's method[12], the nonlinear simplex method[19], [20], genetic algorithms (GAs)[21] and particle swarm optimization (PSO)[22]. These methods can effectively optimize problems with severe constraints, including equality constraints, through the relaxation of the constraints.

Recently, we proposed the ε constrained method and the ε constrained particle swarm optimizer (εPSO)[13], which is the combination of the ε constrained method and PSO. In the εPSO, an agent (point) that satisfies the constraints naturally moves to optimize the objective function, and an agent that does not satisfy the constraints naturally moves to satisfy the constraints. Sometimes, however, the velocity of agents becomes too large and they fly away from the feasible region. In this study, to solve this problem, we propose to divide the agents into groups and to control the maximum velocity of the agents adaptively by comparing the movement of the agents in each group. The effectiveness of the εPSO with adaptive velocity limit control is shown by comparing it with various methods on some well-known problems on which the performance of many methods in all categories has been compared.

The ε constrained method and the εPSO are described in Sections II and III, respectively. The εPSO with adaptive velocity limit control is defined in Section IV. In Section V, experimental results on some constrained problems are shown and the results of the improved εPSO are compared with those of other methods. Finally, conclusions are given in Section VI.

II. THE ε CONSTRAINED METHOD
A. Constraint violation and ε level comparison

In the ε constrained method, the constraint violation φ(x) is defined. The constraint violation can be given by the maximum of all constraint violations or by the sum of all constraint violations:

    φ(x) = max{ max_j max{0, g_j(x)}, max_j |h_j(x)| }                (2)

    φ(x) = Σ_j ‖max{0, g_j(x)}‖^p + Σ_j ‖h_j(x)‖^p                    (3)

where p is a positive number.

The ε level comparison is defined as an order relation on the set of pairs (f(x), φ(x)). If the constraint violation of a point is greater than 0, the point is not feasible and its worth is low. The ε level comparisons are basically defined as a lexicographic order in which φ(x) precedes f(x), because the feasibility of x is more important than the minimization of f(x). This precedence can be adjusted by the parameter ε (the ε level). Let f_1 (f_2) and φ_1 (φ_2) be the function value and the constraint violation at a point x_1 (x_2), respectively. Then, for any ε satisfying ε ≥ 0, the ε level comparisons
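As a concrete sketch, the two forms of the constraint violation φ(x) in Eqs. (2) and (3) and an ε level comparison can be written as follows. The comparison function here assumes the standard definition from the ε constrained method literature: two points are compared by their objective values when both constraint violations are at most ε (or when the violations are equal), and by their constraint violations otherwise. The two-variable problem at the bottom is hypothetical, for illustration only.

```python
def phi_max(x, ineq, eq):
    """Constraint violation as the maximum of all violations, Eq. (2)."""
    g_viol = max((max(0.0, g(x)) for g in ineq), default=0.0)
    h_viol = max((abs(h(x)) for h in eq), default=0.0)
    return max(g_viol, h_viol)

def phi_sum(x, ineq, eq, p=2):
    """Constraint violation as a sum of p-th powers of violations, Eq. (3)."""
    return (sum(max(0.0, g(x)) ** p for g in ineq)
            + sum(abs(h(x)) ** p for h in eq))

def eps_less(f1, phi1, f2, phi2, eps):
    """True if (f1, phi1) is better than (f2, phi2) under the eps comparison
    (assumed standard form: compare by f when both violations are <= eps
    or equal, otherwise compare by violation)."""
    if (phi1 <= eps and phi2 <= eps) or phi1 == phi2:
        return f1 < f2        # both (nearly) feasible: compare objectives
    return phi1 < phi2        # otherwise: smaller violation wins

# Hypothetical problem: minimize f(x) subject to g(x) <= 0,
# i.e. feasible when x0 + x1 >= 1.
f = lambda x: x[0] ** 2 + x[1] ** 2
g = lambda x: 1.0 - x[0] - x[1]

x_feas, x_inf = [0.6, 0.6], [0.1, 0.1]
print(phi_max(x_feas, [g], []))   # 0.0 -> feasible
print(phi_max(x_inf, [g], []))    # 0.8 -> violates g
# With eps = 0, the feasible point wins even though its f is larger:
print(eps_less(f(x_feas), phi_max(x_feas, [g], []),
               f(x_inf), phi_max(x_inf, [g], []), eps=0.0))
```

Note how the ε level acts as the relaxation knob: with ε = 0 the comparison is the strict lexicographic order (feasibility first), while a large ε makes it approach an ordinary comparison of objective values.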