Constrained Real Parameter Optimization with a Gradient Repair based Differential Evolution Algorithm

Soumen Sardar¹, Sayan Maity¹, Swagatam Das¹, and P. N. Suganthan²

¹ Dept. of Electronics and Telecommunication Engg., Jadavpur University, Kolkata 700 032, India.
² School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798, Singapore.
E-mail: [email protected], [email protected], [email protected], [email protected]

Abstract—To date, several constraint handling techniques have been proposed for use in conjunction with Evolutionary Algorithms (EAs). Optimizing constrained objective functions is computationally difficult, especially if the constraints are non-linear and non-convex. Differential Evolution (DE) is a simple, fast, and robust population-based global optimizer that can efficiently search multi-modal fitness landscapes. We propose a constrained optimizer based on DE that incorporates the idea of gradient-based repair into the DE/rand/1/bin scheme. If the candidate solutions generated by DE are infeasible, we apply gradient-based repair to convert them into feasible solutions. Thus, as the generations progress, the ratio of the feasible search space to the whole search space is enhanced. This in turn accelerates the search process and leads to an efficient search for very high quality solutions within the feasible region. To handle the equality constraints we use a small tolerance parameter. The 18 problems given in the special session and competition on "Single Objective Constrained Real-Parameter Optimization" under the Congress on Evolutionary Computation (CEC 2010) are solved by the gradient repair based DE, and the results are compared with those obtained by two other recent and well-known published constrained optimizers.

Keywords—Differential Evolution; Constrained Optimization; Constraint Handling; Gradient Based Repair.

I. INTRODUCTION

Most real-world optimization problems involve finding a solution that is not only optimal but also satisfies one or more constraints. A general formulation for constrained optimization may be given in the following way:

Find $\vec{X} = [x_1, x_2, \ldots, x_D]^T$, $\vec{X} \in \Re^D$, to minimize $f(\vec{X})$, (1)

subject to:

inequality constraints: $g_i(\vec{X}) \le 0, \quad i = 1, 2, \ldots, p$, (2)

equality constraints: $h_j(\vec{X}) = 0, \quad j = 1, 2, \ldots, q$, (3)

and boundary constraints: $x_{j,\min} \le x_j \le x_{j,\max}$. (4)

Evolutionary Algorithms (EAs) and most other metaheuristics are basically devised to operate on unconstrained search spaces. Many attempts have been reported in the literature [1], [2] to solve constrained optimization problems using EAs.


Such attempts, however, require additional mechanisms to incorporate the effects of the constraints into the objective function. While solving constrained optimization problems, we have to deal with both feasible and infeasible solutions. Candidate solutions satisfying all the equality and inequality constraints are called feasible solutions, and individuals failing to satisfy one or more of the constraints are called infeasible solutions. One of the most important concerns in constrained optimization is how to handle the infeasible solutions. It may be possible to ignore these infeasible solutions entirely, but as EAs are stochastic search methods, completely discarding the infeasible solutions may incur a loss of information about some promising regions of the function landscape. A traditional approach is to impose a penalty [3] on the infeasible solutions. Once the constraint violation is included as a penalty, the penalized candidate solutions can be handled as an unconstrained objective function and optimized with any unconstrained optimization technique. This technique has several limitations and requires careful fine-tuning: if the chosen penalty is very small, the constraints are not emphasized enough, which leads the algorithm to an infeasible solution; if the chosen penalty is too large, the objective function is not given enough weight and the problem acts like a mere constraint satisfaction problem, so the algorithm will locate an arbitrary feasible solution. To overcome this drawback, some researchers have recommended replacing the constant penalty by a variable penalty function, such as the dynamic scheme [4] and the self-adaptive scheme [5]. A promising technique for handling infeasible solutions is the gradient-based repair method [6], which attempts to fix infeasible solutions by taking advantage of the problem's characteristics. This repair method works effectively if the relationship between the decision variables and the constraints can be characterized. Differential Evolution (DE) is arguably one of the most powerful stochastic real-parameter optimization algorithms in current use. DE operates through computational steps similar to those employed by a standard Evolutionary Algorithm (EA). However, unlike traditional EAs, the DE variants perturb the current-generation population members with the scaled differences of randomly selected and distinct population members. Therefore, no separate probability distribution has to be used for generating the offspring. In this article we show that integrating gradient-based repair of infeasible solutions can very effectively improve the performance of the canonical DE (DE/rand/1/bin scheme) on constrained problems of various kinds. Experimental results on 18 benchmark problems taken from CEC 2010 [26] are given in Section VII.

The rest of the paper is organized in the following way. Section II gives a brief description of the constraint handling techniques implemented so far. In Section III an introduction to Differential Evolution is given. In Section IV the gradient repair technique is described in detail. Constrained optimization is discussed in Section V. Our approach to constraint handling using gradient repair based Differential Evolution is discussed in Section VI. Experimental results on the 18 constrained problems of CEC 2010 [26] are given in Section VII.

II. PREVIOUS APPROACHES TO SOLVING CONSTRAINED OPTIMIZATION PROBLEMS

To date, a number of different EAs such as Differential Evolution (DE) [7]-[9], Evolution Strategies (ES) [10], [11], Evolutionary Programming (EP) [12], [13], and Particle Swarm Optimization (PSO) [14], [15] have come into play to solve constrained optimization problems, and many techniques have been implemented to make these EAs handle constraints fruitfully. The static penalty method is the earliest method implemented to penalize the individual solutions that violate the constraints. Though it is very simple, a different parameter must be introduced by the user when adding the penalty for each constraint violation, and these parameters are problem dependent. To overcome this drawback the adaptive penalty scheme [5] has been implemented, where the user does not have to define the parameters. Here the information obtained from the search procedure is used to determine the penalty to be added to an infeasible solution, and no fine-tuning of the algorithm is required. Generally the number of feasible solutions present in the population determines the penalty to be added: if few feasible solutions are present in the population, the amount of penalty added is higher, and if many feasible solutions are present, the penalty added to an infeasible solution is smaller. An adaptive trade-off model (ATMES) was introduced in [16], where a hierarchical non-dominated individual selection scheme is used to select individuals with less constraint violation. The ε-constraint handling method was first incorporated in [17]. Here an ε-level comparison is introduced, which is a lexicographic order of priority where the constraint violation comes before the objective function value. It has been shown that a constrained optimization problem can be transformed into an equivalent unconstrained optimization problem by using this ε-level comparison. In the ε-constraint handling method an optimal solution is reached by converging ε to 0, just as the penalty coefficient is increased to infinity in the penalty function method. In [11], Runarsson and Yao introduced the Stochastic Ranking (SR) method to achieve equilibrium between the objective function and the overall constraint violation. This is a stochastic ranking method based on Evolution Strategies, using a stochastic lexicographic order that ignores constraint violations with some probability, and it has been successfully applied to various constrained optimization problems. A multi-objective approach along with local and global search operators has been introduced in [18] to handle constrained optimization problems, where each constraint is treated as an objective function to be optimized. In this method an infeasible individual is selected if it has a minimum objective value and low constraint violation. An ensemble of constraint handling techniques (ECHT) has been proposed in [19], motivated by the No Free Lunch (NFL) theorem [20], which implies that it is impossible for a single constraint handling technique to outperform all others on every problem. It is used to solve constrained real-parameter optimization problems, where each constraint handling method has its own population and every function call is fruitfully utilized by each of these populations. In [7] and [17] the concept of gradient-based mutation has formerly been applied as an additional mutation scheme along with the classical DE mutation. In this paper we take a different approach by using the gradient-based repair method [6] with DE. In gradient-based repair, the gradient information obtained from the stipulated constraint set is used to systematically repair infeasible solutions and convert them to feasible solutions. We apply this technique first to improve the randomly generated parent solutions and then to the offspring generated by the mutation and crossover schemes. Thus, as the search progresses, the ratio of the feasible search space to the whole search space increases. This in turn accelerates the search process and leads to an efficient search for very high quality solutions within the feasible region.
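To make the penalty idea surveyed above concrete, the following is a minimal sketch of a static-penalty evaluation in Python; the weight R, the helper names, and the toy problem are illustrative assumptions rather than anything specified in this paper.

```python
import numpy as np

def violations(x, g_funcs, h_funcs, delta=1e-4):
    """Sum of inequality and equality violations (equality relaxed by delta)."""
    v = sum(max(0.0, g(x)) for g in g_funcs)        # g(x) <= 0 form
    v += sum(max(abs(h(x)) - delta, 0.0) for h in h_funcs)
    return v

def penalized_fitness(x, f, g_funcs, h_funcs, R=1e6):
    # A large R emphasizes feasibility; too small an R lets infeasible
    # solutions win, too large an R reduces the search to constraint
    # satisfaction -- the tuning difficulty noted in the text.
    return f(x) + R * violations(x, g_funcs, h_funcs)

# Toy usage: minimize f(x) = x0^2 + x1^2 subject to x0 + x1 >= 1.
f = lambda x: x[0]**2 + x[1]**2
g = [lambda x: 1.0 - x[0] - x[1]]                    # rewritten as g(x) <= 0
print(penalized_fitness(np.array([0.2, 0.3]), f, g, []))
```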

III. DIFFERENTIAL EVOLUTION

Differential Evolution (DE) [21]-[23] is arguably one of the most powerful stochastic real-parameter optimization algorithms of current interest. DE has been frequently adopted to tackle multi-objective, constrained, dynamic, large scale, and multimodal optimization problems, and the resulting variants have achieved top ranks in various competitions held under the IEEE CEC (Congress on Evolutionary Computation) conference series. In the DE community, the individual trial solutions (which constitute a population) are called parameter vectors or genomes. DE operates through the same computational steps as employed by a standard EA. However, unlike traditional EAs, DE employs differences of the parameter vectors to explore the objective function landscape. Like other population-based search techniques, DE generates new points (trial solutions) that are perturbations of existing points, but these deviations are not samples from a predefined probability density function, like those in ESs. Instead, DE perturbs current-generation vectors with the scaled difference of two randomly selected population vectors. In its simplest form, DE adds the scaled, random vector difference to a third randomly selected population vector to create a donor vector corresponding to each population vector (also known as the target vector). Among the many available variants of classical DE, we have used the "DE/rand/1" scheme for our gradient repair based Differential Evolution algorithm. Here "DE/rand/1" refers to:

$$\vec{V}_{i,G} = \vec{X}_{r_1^i,G} + F \cdot \left(\vec{X}_{r_2^i,G} - \vec{X}_{r_3^i,G}\right). \qquad (5)$$

The indices $r_1^i$, $r_2^i$ and $r_3^i$ are mutually exclusive integers randomly chosen from the range [1, NP] (here NP denotes the total population size), and all are different from the base index i. These indices are randomly generated once for each donor vector. The scaling factor F is a positive control parameter for scaling the difference vectors.
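A minimal NumPy sketch of the DE/rand/1 mutation of Eq. (5), combined with the binomial crossover used later in Section VI, is given below; the function name and the random-generator setup are our own illustrative choices, with F and CR set to the values reported in Section VII.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_rand_1_bin(pop, i, F=0.7, CR=0.95):
    """One DE/rand/1/bin step for target index i."""
    NP, D = pop.shape
    # r1, r2, r3: mutually distinct indices, all different from i
    candidates = [r for r in range(NP) if r != i]
    r1, r2, r3 = rng.choice(candidates, size=3, replace=False)
    donor = pop[r1] + F * (pop[r2] - pop[r3])        # Eq. (5)
    # Binomial crossover: inherit each component from the donor with
    # probability CR; j_rand guarantees at least one donor component.
    j_rand = rng.integers(D)
    mask = rng.random(D) < CR
    mask[j_rand] = True
    return np.where(mask, donor, pop[i])

pop = rng.uniform(-5, 5, size=(50, 10))              # NP=50 vectors, D=10
trial = de_rand_1_bin(pop, i=0)
```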

IV. GRADIENT BASED REPAIR

The gradient-based repair method was proposed by Chootinan and Chen in [6] to repair infeasible solutions into feasible ones. We have effectively inherited this gradient repair method within Differential Evolution to handle constrained optimization. The gradient information is generally used to direct the infeasible solutions towards the feasible region defined by the stipulated constraints. The repair method utilizes the gradient information derived from the constraint set. Chootinan et al. use this method to repair some of the infeasible solutions while the EA performs the stochastic search for better optimal solutions. Thus a systematic approach is taken to repair the infeasible solutions from the gradient information derived from the stipulated constraint set. In general, by using the inexact method of finite differences [24], the gradient can be derived directly from those constraints for which an explicit expression exists. When the constraints are too complicated to evaluate analytically and are not available in explicit functional form, they must be computed through a computer simulation program or as the solution of another optimization problem. In [6] Chootinan et al. use the forward difference formula [24] given in Eq. (7). Here $C$ consists of the vector of the stipulated constraint set for the given problem, i.e., the inequality constraints ($\vec{g}$) and the equality constraints ($\vec{h}$) defined in (2) and (3):

$$C = \begin{bmatrix} \vec{g}_{p \times 1} \\ \vec{h}_{q \times 1} \end{bmatrix}_{(p+q) \times 1}. \qquad (6)$$

The derivatives of these constraints with respect to the individual solution vectors are approximated by the forward difference formula [24], i.e.,

$$\nabla_x C = \frac{1}{e} \times \begin{bmatrix} \vec{g}(\vec{x} \mid x_j = x_j + e) - \vec{g}(\vec{x}), & \forall j = 1, \ldots, p \\ \vec{h}(\vec{x} \mid x_k = x_k + e) - \vec{h}(\vec{x}), & \forall k = 1, \ldots, q \end{bmatrix}, \qquad (7)$$

where $e$ is a small positive scalar perturbation added to each and every dimension of the solution vector. It can also be written as

$$\nabla_x C = \begin{bmatrix} \nabla_x \vec{g}_{p \times 1} \\ \nabla_x \vec{h}_{q \times 1} \end{bmatrix}_{(p+q) \times 1}. \qquad (8)$$

The change in the solution vector required by the constraint violations is computed from

$$\Delta C = \nabla_x C \times \Delta \vec{x}. \qquad (9)$$

To evaluate $\Delta \vec{x}$ we must invert $\nabla_x C$. As $\nabla_x C$ is in general not a square matrix, it is not invertible, so the approximate inverse of $\nabla_x C$ is generated using the Moore-Penrose inverse, or pseudo-inverse, $\nabla_x C^{+}$ [25]:

$$\Delta \vec{x} = \nabla_x C^{+} \times \Delta C. \qquad (10)$$

Thus after gradient repair the new solution becomes

$$\vec{x}_{new} = \vec{x} + \Delta \vec{x}. \qquad (11)$$

By using this gradient information, the infeasible solutions are thus directed to the feasible zone defined by the stipulated constraints of the given problem.
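The following is a minimal sketch of Eqs. (6)-(11) in NumPy; the function signature, the feasibility tolerance, and the toy constraint are illustrative assumptions. Note the sign convention: the violation vector must be driven to zero, so the update subtracts the pseudo-inverse times the current violation.

```python
import numpy as np

def gradient_repair(x, constraints, e=4e-6, n_g=3):
    """Repair x toward feasibility, a sketch of Eqs. (6)-(11).

    `constraints` returns the violation vector: max(0, g(x)) for
    inequalities and h(x) for equalities, so a zero vector means feasible.
    """
    x = x.copy()
    for _ in range(n_g):                        # at most N_g repair passes
        c = constraints(x)
        if np.all(np.abs(c) < 1e-12):           # already feasible
            break
        # Forward-difference Jacobian of the constraint vector, Eq. (7)
        J = np.empty((len(c), len(x)))
        for j in range(len(x)):
            xp = x.copy()
            xp[j] += e
            J[:, j] = (constraints(xp) - c) / e
        # Moore-Penrose pseudo-inverse since J is generally not square,
        # Eqs. (9)-(11); subtracting drives the violation toward zero.
        x = x - np.linalg.pinv(J) @ c
    return x

# Toy usage: repair a point toward the circle h(x) = x0^2 + x1^2 - 1 = 0.
h = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0])
print(gradient_repair(np.array([2.0, 0.5]), h))
```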

V. CONSTRAINED OPTIMIZATION

A. Constraint Violation

The constraint violation $\varphi(\vec{x})$ is the measure that indicates by how much a candidate solution $\vec{x}$ violates the given constraints. If an individual solution is in the feasible region then the constraint violation is zero; otherwise it is greater than zero:

$$\varphi(\vec{x}) = 0 \text{ if } \vec{x} \in F; \qquad \varphi(\vec{x}) > 0 \text{ if } \vec{x} \notin F, \qquad (12)$$

where $F$ denotes the feasible region.

Evaluating the constraint violation in the constrained optimization problem can generally be done in two ways. We can take the constraint violation as the maximum constraint violation or the summation of all the constraint violations.

$$\varphi(\vec{x}) = \max\left\{ \max_j \left(\max\{0,\, g_j(\vec{x})\}\right)^m,\ \max_k \left|h_k(\vec{x})\right|^m \right\}, \qquad (13)$$

$$\varphi(\vec{x}) = \sum_j \left(\max\{0,\, g_j(\vec{x})\}\right)^m + \sum_k \left|h_k(\vec{x})\right|^m, \qquad (14)$$

where m = 1 when evaluating the constraint violation. In our program we use the approach given in Eq. (14), i.e., we calculate the total constraint violation as the summation of all the individual constraint violations.

B. Constraint Handling

In a constrained optimization problem the constraint handling technique is an important criterion for reaching the optimal solution within the feasible region (if one exists). Its purpose is mainly to exploit the infeasible candidate solutions and extract effective information for the stochastic search process. We choose between individual candidate solutions depending on their constraint violations and objective function values:

1) Between two infeasible solutions, we always choose the solution with the smaller constraint violation, without considering their objective function values.
2) Between a feasible and an infeasible candidate solution, we always choose the feasible solution, irrespective of the objective function values.
3) Between two feasible solutions, the criterion is the objective function value: the individual with the better function value is chosen.

While solving a constrained optimization problem, it is very difficult to handle the situation when some active constraint is present. All equality constraints are active constraints, and the inequality constraints that satisfy $g_j(\vec{x}) = 0$ at the global optimum solution are also called active constraints. So problems with equality constraints should be handled carefully to obtain a high quality solution. The equality constraints can be altered into inequality form and easily combined with the inequality constraints. Many techniques have been used for this particular operation. Here we use a tolerance parameter (δ) for converting the equality constraints into inequality form. From (2) and (3), we can write:

$$C_{ineq}(\vec{x}) = \begin{cases} \max\{g_j(\vec{x}),\, 0\}, & \text{for } j = 1, \ldots, p \\ \max\{|h_k(\vec{x})| - \delta,\, 0\}, & \text{for } k = 1, \ldots, q \end{cases} \qquad (15)$$

Thus the objective is to minimize the fitness function $f(\vec{x})$ such that the obtained optimal solution satisfies all the inequality constraints $C_{ineq}(\vec{x})$.
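As a concrete illustration, a sketch of the summed violation of Eq. (14) with the equality relaxation of Eq. (15), together with the three selection rules above, is shown below; the function names and the tolerance default are assumptions for illustration.

```python
DELTA = 1e-4   # tolerance for relaxing equality constraints (Sec. VII)

def total_violation(g_vals, h_vals, delta=DELTA):
    """Summed violation of Eq. (14) with the relaxation of Eq. (15)."""
    phi = sum(max(0.0, g) for g in g_vals)
    phi += sum(max(abs(h) - delta, 0.0) for h in h_vals)
    return phi

def better(fa, phia, fb, phib):
    """Feasibility rules of Sec. V-B: does candidate a beat candidate b?"""
    if phia == 0.0 and phib == 0.0:
        return fa <= fb          # both feasible: compare objectives
    if phia == 0.0 or phib == 0.0:
        return phia == 0.0       # feasible beats infeasible
    return phia <= phib          # both infeasible: smaller violation wins

# Usage: a feasible point beats an infeasible one regardless of f.
print(better(5.0, 0.0, 1.0, 0.3))   # True
```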

VI. GRADIENT REPAIR BASED DIFFERENTIAL EVOLUTION

In this section we present the gradient repair based Differential Evolution algorithm used in this study for solving constrained optimization problems.

a) Step 0 (Initialization): N initial individuals $x^i$ are generated as the initial search points, so that the parent population $P(0) = \{x^i,\ i = 1, 2, \ldots, N\}$ is randomly generated.

b) Step 1 (Termination criterion): if the number of generations exceeds the maximum number of generations Gen_max, the algorithm is terminated.

c) Step 2 (Gradient-based repair of the parent population): if any of the generated parents are infeasible, we use the gradient-based repair method (explained in Section IV) to make them feasible.

d) Step 3 (Mutation and crossover): for each individual candidate solution in P(t) (the population in a particular generation), three other distinct individuals with indices $r_1^i$, $r_2^i$ and $r_3^i$ are randomly chosen. The new vector $\vec{x}_{new,G}$, generated from the base vector $\vec{x}_{r_1^i,G}$ and the difference vector $(\vec{x}_{r_2^i,G} - \vec{x}_{r_3^i,G})$, can be represented as

$$\vec{x}_{new,G} = \vec{x}_{r_1^i,G} + F \cdot (\vec{x}_{r_2^i,G} - \vec{x}_{r_3^i,G}), \qquad (16)$$

where F is a positive control parameter for scaling the difference vectors. Binomial crossover is then applied, producing the new trial vectors.

e) Step 4 (Gradient-based repair of the trial solutions): if the trial vectors generated by applying mutation and crossover to the feasible parents become infeasible, we use the gradient-based repair method to make them feasible. The gradient repair is applied up to $N_g$ times; if the trial vector still remains infeasible, we ignore it.

f) Step 5 (Selection strategy): to choose between a trial vector and its parent vector, we use the constraint handling technique described in Section V-B.

g) Step 6 (Loop): if the termination criterion is satisfied, stop; otherwise repeat from Step 2.

VII. EXPERIMENTAL RESULTS

We have tested our algorithm on the benchmark set of eighteen scalable problems for the CEC 2010 competition and special session on "Single Objective Constrained Real-Parameter Optimization" [26].

A. Test Problems in the CEC 2010 Benchmark Set

There are in total 18 problems in the CEC 2010 benchmark set, C01 to C18. Among these, 6 problems (C01, C07, C08, C13, C14 and C15) have inequality constraints only, 7 problems (C03, C04, C05, C06, C09, C10 and C11) have equality constraints only, and the other 5 problems (C02, C12, C16, C17 and C18) have both inequality and equality constraints. All these functions are solved for two numbers of decision variables (n): 10 (10D) and 30 (30D) dimensions. In problems with equality constraints, the equality constraints are relaxed and converted into inequality constraints as explained in Section V and given in Eq. (15).

B. Parameter Settings

We performed 25 independent runs for each function of the benchmark set from C01 to C18. The maximum number of function evaluations (Max_FES) is 200,000 for 10D and 600,000 for 30D, respectively. The value of the tolerance parameter (δ) is 0.0001.

a) Parameters to be adjusted: the parameters for DE are the population size (N), the scaling factor (F) and the crossover rate (CR). For the gradient-based repair method, they are the small positive scalar perturbation (e) used in the forward difference formula and the number of repetitions of the gradient repair method (N_g).

b) Actual parameter values used:

N = 5n (n = dimensionality of the problem), F = 0.7, CR = 0.95, e = 0.000004, N_g = 3.
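Putting the pieces together, a condensed sketch of Steps 0-6 with the parameter settings above might look as follows; it reuses the hypothetical helpers sketched in earlier sections (de_rand_1_bin, gradient_repair, and the feasibility-rule comparator better) and assumes an evaluate function returning the objective value and total violation.

```python
import numpy as np

def grde(evaluate, constraints, bounds, n, max_fes=200_000, seed=0):
    """Gradient repair based DE, a sketch of Steps 0-6 of Sec. VI."""
    N, F, CR, N_G = 5 * n, 0.7, 0.95, 3            # settings of Sec. VII-B
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(N, n))          # Step 0: initialization
    pop = np.array([gradient_repair(x, constraints, n_g=N_G) for x in pop])
    fit = [evaluate(x) for x in pop]                # list of (f, phi) pairs
    fes = N
    while fes < max_fes:                            # Step 1: termination
        for i in range(N):                          # Steps 3-4 per target
            trial = de_rand_1_bin(pop, i, F, CR)
            trial = gradient_repair(trial, constraints, n_g=N_G)
            f_t, phi_t = evaluate(trial)
            fes += 1
            # Step 5: feasibility-rule selection between trial and parent
            # (trials still infeasible after N_g repairs simply lose here,
            # approximating the "ignore" rule of Step 4)
            if better(f_t, phi_t, fit[i][0], fit[i][1]):
                pop[i], fit[i] = trial, (f_t, phi_t)
    # return the best individual: feasible first, then by objective value
    j = min(range(N), key=lambda k: (fit[k][1], fit[k][0]))
    return pop[j], fit[j]
```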

TABLE I. FUNCTION VALUES ACHIEVED WHEN FES = 2×10^4, 1×10^5, AND 2×10^5 FOR PROBLEMS C01-C06 (10D).

| FEs | Stat | C01 | C02 | C03 | C04 | C05 | C06 |
|---|---|---|---|---|---|---|---|
| 2×10^4 | Best | -7.47308e-01 | -2.47715e+00 | 5.69000e-06 | -2.38000e+00 | -6.02000e+05 | -3.16150e+06 |
| | Median | -7.47170e-01 | -2.47714e+00 | 8.15000e-06 | -2.38000e+00 | -4.20000e+05 | -1.38000e+06 |
| | Worst | -7.40538e-01 | -2.47705e+00 | 2.27000e-05 | -2.37709e+00 | -1.99000e+05 | -4.49000e+05 |
| | Mean | -7.45005e-01 | -2.47711e+00 | 1.21940e-05 | -2.37715e+00 | -4.07012e+05 | -1.66451e+06 |
| | Std | 3.86900e-03 | 5.58630e-05 | 9.21110e-06 | 6.51820e-05 | 2.01793e+05 | 1.37800e+06 |
| 1×10^5 | Best | -7.47310e-01 | -2.47717e+00 | 1.54000e-23 | -2.38000e+00 | -6.02000e+05 | -8.20000e+06 |
| | Median | -7.47310e-01 | -2.47717e+00 | 2.34000e-23 | -2.38000e+00 | -4.20000e+05 | -5.15000e+06 |
| | Worst | -7.40557e-01 | -2.47717e+00 | 1.03000e-22 | -2.37742e+00 | -2.98000e+05 | -3.16150e+06 |
| | Mean | -7.45059e-01 | -2.47717e+00 | 4.72020e-23 | -2.37742e+00 | -4.40116e+05 | -5.50000e+06 |
| | Std | 3.89800e-03 | 0.00000e+00 | 4.83430e-23 | 0.00000e+00 | 1.52822e+05 | 2.54067e+06 |
| 2×10^5 | Best | -7.47310e-01 | -2.47717e+00 | 0.00000e+00 | -2.38000e+00 | -1.57380e+06 | -1.78000e+07 |
| | Median | -7.47310e-01 | -2.47717e+00 | 0.00000e+00 | -2.38000e+00 | -9.93868e+05 | -9.97626e+06 |
| | Worst | -7.40557e-01 | -2.47717e+00 | 0.00000e+00 | -2.37742e+00 | -4.06898e+05 | -5.23000e+06 |
| | Mean | -7.45059e-01 | -2.47717e+00 | 0.00000e+00 | -2.37742e+00 | -9.91525e+05 | -1.10000e+07 |
| | Std | 3.89800e-03 | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 | 5.83458e+05 | 6.36845e+06 |

TABLE II. FUNCTION VALUES ACHIEVED WHEN FES = 2×10^4, 1×10^5, AND 2×10^5 FOR PROBLEMS C07-C12 (10D).

| FEs | Stat | C07 | C08 | C09 | C10 | C11 | C12 |
|---|---|---|---|---|---|---|---|
| 2×10^4 | Best | 1.68148e+02 | 1.26450e+02 | 1.14095e+03 | 3.71552e+02 | -1.52200e-03 | -1.99000e-01 |
| | Median | 1.60000e+01 | 1.57000e+02 | 1.93000e+03 | 7.80000e+02 | -1.52000e-03 | -1.99000e-01 |
| | Worst | 3.19000e+01 | 2.60000e+02 | 6.94000e+03 | 1.10000e+03 | -1.52000e-03 | -1.99000e-01 |
| | Mean | 7.20170e+01 | 1.81249e+02 | 3.33700e+03 | 7.51754e+02 | -1.52100e-03 | -1.99235e-01 |
| | Std | 8.36298e+01 | 6.99775e+01 | 3.14198e+03 | 3.66686e+02 | 2.08470e-07 | 7.15390e-06 |
| 1×10^5 | Best | 1.56000e-13 | 6.14000e-15 | 1.84000e-12 | 7.23000e-13 | -1.52000e-03 | -1.99000e-01 |
| | Median | 3.62000e-13 | 7.06000e-14 | 3.01000e-12 | 8.85000e-13 | -1.52000e-03 | -1.99000e-01 |
| | Worst | 1.03000e-11 | 2.50000e-13 | 4.40000e-11 | 2.62000e-12 | -1.52000e-03 | -1.99000e-01 |
| | Mean | 3.60000e-12 | 1.09000e-13 | 1.63000e-11 | 1.41000e-12 | -1.52000e-03 | -1.99000e-01 |
| | Std | 5.78224e-12 | 1.26610e-13 | 2.40320e-11 | 1.05280e-12 | 2.07640e-17 | 9.99200e-16 |
| 2×10^5 | Best | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 | -1.52000e-03 | -1.99000e-01 |
| | Median | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 | -1.52000e-03 | -1.99000e-01 |
| | Worst | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 | -1.52000e-03 | -1.99000e-01 |
| | Mean | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 | -1.52000e-03 | -1.99000e-01 |
| | Std | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 | 5.76160e-18 | 0.00000e+00 |

TABLE III. FUNCTION VALUES ACHIEVED WHEN FES = 2×10^4, 1×10^5, AND 2×10^5 FOR PROBLEMS C13-C18 (10D).

| FEs | Stat | C13 | C14 | C15 | C16 | C17 | C18 |
|---|---|---|---|---|---|---|---|
| 2×10^4 | Best | -4.26085e+01 | 5.76711e+04 | 1.01255e+04 | 1.128449e-01 | 1.32338e+01 | 1.51330e-03 |
| | Median | -5.36000e+01 | 3.26000e+04 | 1.03000e+04 | 7.45000e-04 | 2.47000e-03 | 2.82000e-03 |
| | Worst | -4.94000e+01 | 5.08000e+04 | 1.93000e+04 | 1.48000e-01 | 1.09000e+00 | 1.25000e-02 |
| | Mean | -4.85207e+01 | 4.70150e+04 | 1.32550e+04 | 8.71080e-02 | 4.77560e+00 | 5.62400e-03 |
| | Std | 5.52822e+00 | 1.29698e+04 | 5.27476e+03 | 7.67990e-02 | 7.34524e+00 | 6.02000e-03 |
| 1×10^5 | Best | -6.32000e+01 | 1.32000e-10 | 7.65000e-12 | 0.00000e+00 | 2.85000e-16 | 1.46000e-15 |
| | Median | -6.32000e+01 | 2.97000e-10 | 4.30000e-11 | 0.00000e+00 | 1.96000e-02 | 3.91000e-15 |
| | Worst | -6.32000e+01 | 1.59000e-09 | 7.34000e-11 | 0.00000e+00 | 1.09000e+00 | 1.99000e-14 |
| | Mean | -6.32000e+01 | 6.73000e-10 | 4.13000e-11 | 0.00000e+00 | 3.69000e-01 | 8.43000e-15 |
| | Std | 0.00000e+00 | 7.98220e-10 | 3.29010e-11 | 0.00000e+00 | 7.34524e+00 | 1.00320e-14 |
| 2×10^5 | Best | -6.32000e+01 | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 | 4.62000e-32 | 0.00000e+00 |
| | Median | -6.32000e+01 | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 | 5.32000e-15 | 0.00000e+00 |
| | Worst | -6.32000e+01 | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 | 1.09000e+00 | 5.05000e-29 |
| | Mean | -6.32000e+01 | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 | 3.63000e-01 | 1.68000e-29 |
| | Std | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 | 6.28360e-01 | 2.91490e-29 |

TABLE IV. FUNCTION VALUES ACHIEVED WHEN FES = 6×10^4, 3×10^5, AND 6×10^5 FOR PROBLEMS C01-C06 (30D).

| FEs | Stat | C01 | C02 | C03 | C04 | C05 | C06 |
|---|---|---|---|---|---|---|---|
| 6×10^4 | Best | -8.03460e-01 | -2.27176e+00 | 4.22000e-14 | -2.25279e+00 | -5.71435e+02 | -5.65913e+02 |
| | Median | -7.90880e-01 | -2.22365e+00 | 2.40000e-11 | -2.24437e+00 | -5.68871e+02 | -5.65367e+02 |
| | Worst | -7.44820e-01 | -2.17262e+00 | 1.20000e-09 | -2.23828e+00 | -5.54390e+02 | -5.65017e+02 |
| | Mean | -7.88360e-01 | -2.22428e+00 | 4.08000e-10 | -2.24515e+00 | -5.64899e+02 | -5.65432e+02 |
| | Std | 1.90640e-02 | 4.06060e-02 | 6.86000e-10 | 7.28400e-02 | 9.190832e+02 | 4.51960e-01 |
| 3×10^5 | Best | -8.20890e-01 | -2.29344e+00 | 0.00000e+00 | -2.25995e+00 | -5.79930e+02 | -5.69321e+02 |
| | Median | -7.99860e-01 | -2.27796e+00 | 0.00000e+00 | -2.25995e+00 | -5.79845e+02 | -5.69319e+02 |
| | Worst | -7.77440e-01 | -2.17272e+00 | 0.00000e+00 | -2.25995e+00 | -5.79845e+02 | -5.69315e+02 |
| | Mean | -8.00060e-01 | -2.24804e+00 | 0.00000e+00 | -2.25996e+00 | -5.79874e+02 | -5.69318e+02 |
| | Std | 1.58530e-02 | 6.56850e-02 | 0.00000e+00 | 0.00000e+00 | 4.93890e-02 | 2.85522e-02 |
| 6×10^5 | Best | -8.21880e-01 | -2.31929e+00 | 0.00000e+00 | -2.25995e+00 | -5.80111e+02 | -5.69324e+02 |
| | Median | -7.99870e-01 | -2.27815e+00 | 0.00000e+00 | -2.25995e+00 | -5.80109e+02 | -5.69324e+02 |
| | Worst | -7.77440e-01 | -2.17272e+00 | 0.00000e+00 | -2.25995e+00 | -5.80105e+02 | -5.69324e+02 |
| | Mean | -7.93590e-01 | -2.25672e+00 | 0.00000e+00 | -2.25996e+00 | -5.80109e+02 | -5.69324e+02 |
| | Std | 1.40990e-02 | 7.55970e-02 | 0.00000e+00 | 0.00000e+00 | 2.92500e-02 | 1.58800e-06 |

TABLE V. FUNCTION VALUES ACHIEVED WHEN FES = 6×10^4, 3×10^5, AND 6×10^5 FOR PROBLEMS C07-C12 (30D).

| FEs | Stat | C07 | C08 | C09 | C10 | C11 | C12 |
|---|---|---|---|---|---|---|---|
| 6×10^4 | Best | 5.77007e+01 | 2.65371e+02 | 9.40587e+03 | 2.78938e+03 | -3.90000e-04 | -1.99260e-01 |
| | Median | 7.22713e+01 | 8.44253e+02 | 3.03592e+04 | 1.24006e+04 | -3.90000e-04 | -1.99260e-01 |
| | Worst | 1.81890e+02 | 1.59591e+03 | 8.33127e+04 | 1.30349e+04 | -3.90000e-04 | -1.99260e-01 |
| | Mean | 1.03954e+02 | 9.01845e+02 | 2.90000e+04 | 1.44000e+05 | -3.90000e-04 | -1.99260e-01 |
| | Std | 6.78868e+01 | 6.67137e+02 | 3.16000e+04 | 2.16000e+05 | 0.00000e+00 | 0.00000e+00 |
| 3×10^5 | Best | 2.35000e-04 | 8.25240e-02 | 1.03956e+00 | 2.01000e-11 | -3.90000e-04 | -1.99260e-01 |
| | Median | 1.03309e-01 | 1.87958e-01 | 2.12481e+00 | 3.59000e-11 | -3.90000e-04 | -1.99260e-01 |
| | Worst | 1.39699e-01 | 2.35530e-01 | 2.13442e+00 | 1.14336e-10 | -3.90000e-04 | -1.99260e-01 |
| | Mean | 8.10810e-02 | 1.68600e-01 | 9.17000e+01 | 1.52000e-10 | -3.90000e-04 | -1.99260e-01 |
| | Std | 7.23400e-02 | 7.83050e-02 | 1.23000e+02 | 1.66000e+01 | 0.00000e+00 | 0.00000e+00 |
| 6×10^5 | Best | 0.00000e+00 | 0.00000e+00 | 1.40000e-19 | 0.00000e+00 | -3.90000e-04 | -1.99260e-01 |
| | Median | 0.00000e+00 | 0.00000e+00 | 2.54000e-18 | 0.00000e+00 | -3.90000e-04 | -1.99260e-01 |
| | Worst | 0.00000e+00 | 1.15000e-19 | 2.76000e-18 | 0.00000e+00 | -3.90000e-04 | -1.99260e-01 |
| | Mean | 0.00000e+00 | 3.83100e-20 | 1.81000e-18 | 0.00000e+00 | -3.90000e-04 | -1.99260e-01 |
| | Std | 0.00000e+00 | 6.63600e-20 | 1.45000e-18 | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 |

TABLE VI. FUNCTION VALUES ACHIEVED WHEN FES = 6×10^4, 3×10^5, AND 6×10^5 FOR PROBLEMS C13-C18 (30D).

| FEs | Stat | C13 | C14 | C15 | C16 | C17 | C18 |
|---|---|---|---|---|---|---|---|
| 6×10^4 | Best | -5.78644e+01 | 4.16875e+03 | 1.16000e+03 | 1.64000e-07 | 1.12000e+01 | 2.04000e-01 |
| | Median | -5.07668e+01 | 8.18531e+03 | 1.63472e+03 | 1.88000e-07 | 1.61466e+01 | 8.18000e+00 |
| | Worst | -4.64828e+01 | 8.48409e+03 | 7.45897e+03 | 1.00000e-06 | 1.88656e+02 | 1.24000e+01 |
| | Mean | -5.17000e+01 | 1.16000e+04 | 7.87000e+03 | 4.51000e-07 | 1.09000e+01 | 2.86000e+01 |
| | Std | 5.75000e+00 | 1.07000e+04 | 6.58000e+03 | 4.76000e-07 | 6.51000e+00 | 3.55000e+01 |
| 3×10^5 | Best | -6.36458e+01 | 9.31212e-01 | 4.42000e+00 | 0.00000e+00 | 4.00000e-01 | 1.10000e-05 |
| | Median | -6.29208e+01 | 1.44392e+00 | 2.16000e+01 | 0.00000e+00 | 3.51000e-01 | 9.84000e-02 |
| | Worst | -6.26276e+01 | 2.35000e+00 | 2.16032e+01 | 0.00000e+00 | 2.73069e+00 | 1.82000e-01 |
| | Mean | -6.31000e+01 | 9.37000e+02 | 1.73000e+01 | 0.00000e+00 | 1.16000e+00 | 1.95000e-01 |
| | Std | 5.24000e-01 | 1.66000e+03 | 8.59067e+01 | 0.00000e+00 | 1.36000e+00 | 2.17000e-01 |
| 6×10^5 | Best | -6.47083e+01 | 5.78000e-20 | 4.21499e+00 | 0.00000e+00 | 3.51000e-01 | 6.00000e-06 |
| | Median | -6.29208e+01 | 2.13000e-19 | 2.16032e+01 | 0.00000e+00 | 3.51320e-01 | 2.30320e-05 |
| | Worst | -6.26277e+01 | 1.86000e-18 | 2.16032e+01 | 0.00000e+00 | 3.51322e-01 | 7.69200e-04 |
| | Mean | -6.31000e+01 | 7.12000e-19 | 1.72561e+01 | 0.00000e+00 | 3.51000e-01 | 2.55000e-05 |
| | Std | 5.24000e-01 | 1.00000e-18 | 8.69411e+00 | 0.00000e+00 | 1.23000e-06 | 3.58000e-04 |

TABLE VII. COMPARISON OF THE BEST FUNCTION VALUES AND STANDARD DEVIATIONS OBTAINED BY THE PROPOSED ALGORITHM (DE WITH GRADIENT REPAIR, GR-DE) AND TWO EXISTING ALGORITHMS, TAKAHAMA & SAKAI [7] AND BREST [8] (30D).

| No. | [7] Best | [7] Std | [8] Best | [8] Std | GR-DE Best | GR-DE Std |
|---|---|---|---|---|---|---|
| C01 | -8.21825e-01 | 7.103893e-04 | -8.21884e-01 | 1.31870e-02 | -8.23460e-01 | 1.40990e-02 |
| C02 | -2.16924e+00 | 1.197582e-02 | 6.79229e-01 | 8.47050e-01 | -2.31929e+00 | 7.55970e-02 |
| C03 | 2.86734e+01 | 8.047159e-01 | 9.99368e-22 | 5.75770e+01 | 0.00000e+00 | 0.00000e+00 |
| C04 | 4.69811e-03 | 3.067785e-03 | 8.04909e-05 | 2.39480e-04 | -2.25995e+00 | 0.00000e+00 |
| C05 | -4.53130e+02 | 2.899105e+00 | -1.99503e+01 | 1.52030e+02 | -5.80111e+02 | 2.92500e-02 |
| C06 | -5.28575e+02 | 4.748378e-01 | -5.30637e+02 | 1.27430e+02 | -5.69324e+02 | 1.58800e-06 |
| C07 | 1.14711e-15 | 1.233430e-15 | 4.21977e-26 | 3.25290e-23 | 0.00000e+00 | 0.00000e+00 |
| C08 | 2.51869e-14 | 4.855177e-14 | 7.23109e-26 | 2.43950e+02 | 0.00000e+00 | 6.63600e-20 |
| C09 | 2.77066e-16 | 2.821923e+01 | 2.77292e-25 | 8.77820e+00 | 1.40000e-19 | 1.45000e-18 |
| C10 | 3.25200e+01 | 4.545577e-01 | 1.08620e-25 | 7.17860e+00 | 0.00000e+00 | 0.00000e+00 |
| C11 | -3.26846e-04 | 2.707605e-05 | -3.92000e-04 | 5.26900e-03 | -3.92000e-04 | 0.00000e+00 |
| C12 | -1.99145e-01 | 2.889253e+02 | -1.99263e-01 | 2.34530e-05 | -1.99260e-01 | 0.00000e+00 |
| C13 | -6.64247e+01 | 5.733005e-01 | -6.84293e+01 | 5.05530e-01 | -6.47083e+01 | 5.24000e-01 |
| C14 | 5.01586e-14 | 5.608409e-13 | 5.71016e-26 | 7.97320e-01 | 0.00000e+00 | 1.00000e-18 |
| C15 | 2.16034e+01 | 1.104834e-04 | 9.69934e-16 | 1.60450e+09 | 0.00000e+00 | 8.69411e+00 |
| C16 | 1.062297e-20 | 8.12907e-02 | 2.99430e-01 | 0.00000e+00 | 0.00000e+00 | 0.00000e+00 |
| C17 | 4.986691e+00 | 2.90137e+01 | 2.16571e-01 | 4.48320e+02 | 3.51000e-01 | 1.23000e-06 |
| C18 | 1.22605e+00 | 1.664753e+02 | 1.75655e+01 | 3.05380e+02 | 6.00000e-06 | 3.58000e-04 |

To fix the value of the N_g parameter, we must manage the consumption of function evaluations carefully. So, if an infeasible solution has not been repaired after 3 iterations of gradient repair, we consider that the solution cannot be repaired.

C. Tables with Numerical Results

Our gradient repair based DE algorithm has been applied to the 18 problems of the CEC 2010 benchmark. The results for the 10-dimensional problems are compiled in Tables I, II and III. In Tables IV, V and VI the results for the 30-dimensional problems are listed. A comparison of function values for the 30-dimensional problems with existing algorithms is given in Table VII. The gradient repair based DE algorithm beat the other two algorithms on 15 out of the 18 functions, which demonstrates its superiority.


D. Convergence Maps

Figs. 1 to 4 illustrate the convergence graphs for the 10D and 30D versions of problems C09, C10, C14, C15, C17 and C18, where only the feasible solutions of the best run out of the 25 runs are shown. The graphs plot the function values over the number of function evaluations until they reach their best values.

VIII. CONCLUSION

Starting from DE, a simple, efficient and robust search algorithm for unconstrained optimization problems, we have implemented a gradient repair based DE that can solve constrained optimization problems. In this study we proposed a simple idea: repair the infeasible solutions generated by DE, at initialization and during regeneration, into feasible solutions, so that the ratio of the feasible to the total search space increases. Based on the results obtained on the 18 scalable benchmark functions provided for the CEC 2010 special session, in 10 and 30 dimensions, we can conclude that the gradient repair based DE algorithm is an attractive alternative tool for solving constrained optimization problems.


REFERENCES

[1] C. A. C. Coello, "Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: A survey of the state-of-the-art," Comput. Methods Appl. Mech. Eng., vol. 191, nos. 11-12, pp. 1245-1287, Jan. 2002.
[2] Z. Michalewicz, "A survey of constraint handling techniques in evolutionary computation methods," in Proc. 4th Annual Conference on Evolutionary Programming, Cambridge, MA: MIT Press, 1995, pp. 135-155.
[3] S. Koziel and Z. Michalewicz, "Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization," Evol. Comput., vol. 7, no. 1, pp. 19-44, 1999.
[4] J. Joines and C. Houck, "On the use of non-stationary penalty functions to solve nonlinear constrained optimization problems with GAs," in Proc. First IEEE Conference on Evolutionary Computation, 1994, pp. 579-584.
[5] C. A. C. Coello, "Use of a self-adaptive penalty approach for engineering optimization problems," Computers in Industry, vol. 41, no. 2, pp. 113-127, 2000.
[6] P. Chootinan and A. Chen, "Constraint handling in genetic algorithms using a gradient-based repair method," Computers and Operations Research, vol. 33, no. 8, pp. 2263-2281, Aug. 2006.
[7] T. Takahama and S. Sakai, "Constrained optimization by the ε constrained differential evolution with an archive and gradient-based mutation," in Proc. IEEE World Congr. Computational Intelligence (WCCI 2010), Barcelona, Spain, 2010, pp. 1680-1688.
[8] J. Brest, B. Boskovic, and V. Zumer, "An improved self-adaptive differential evolution algorithm in single objective constrained real-parameter optimization," in Proc. IEEE World Congr. Computational Intelligence, Barcelona, Spain, 2010, pp. 1073-1080.
[9] V. L. Huang, A. K. Qin, and P. N. Suganthan, "Self-adaptive differential evolution algorithm for constrained real-parameter optimization," in Proc. IEEE Congr. Evol. Comput., Vancouver, BC, Canada, 2006, pp. 215-222.
[10] E. Mezura-Montes and C. A. C. Coello, "Adding a diversity mechanism to a simple evolution strategy to solve constrained optimization problems," in Proc. Congr. Evol. Comput., 2003, pp. 6-13.
[11] T. P. Runarsson and X. Yao, "Stochastic ranking for constrained evolutionary optimization," IEEE Trans. Evol. Comput., vol. 4, no. 3, pp. 284-294, Sep. 2000.
[12] J. H. Kim and H. Myung, "Evolutionary programming techniques for constrained optimization problems," IEEE Trans. Evol. Comput., vol. 1, no. 2, pp. 129-140, Jul. 1997.
[13] R. Mallipeddi and P. N. Suganthan, "Evaluation of novel adaptive evolutionary programming on four constraint handling techniques," in Proc. IEEE Congr. Evol. Comput., Hong Kong, China, 2008, pp. 4045-4052.
[14] J. J. Liang and P. N. Suganthan, "Dynamic multiswarm particle swarm optimizer with a novel constraint-handling mechanism," in Proc. IEEE Congr. Evol. Comput., Vancouver, BC, Canada, 2006, pp. 9-16.
[15] K. Zielinski and R. Laur, "Constrained single-objective optimization using particle swarm optimization," in Proc. IEEE Congr. Evol. Comput., Vancouver, BC, Canada, 2006, pp. 443-450.
[16] Y. Wang, Z. Cai, Y. Zhou, and W. Zeng, "An adaptive tradeoff model for constrained evolutionary optimization," IEEE Trans. Evol. Comput., vol. 12, no. 1, pp. 80-92, Feb. 2008.
[17] T. Takahama and S. Sakai, "Constrained optimization by the ε constrained differential evolution with gradient-based mutation and feasible elites," in Proc. IEEE Congr. Evol. Comput., Vancouver, BC, Canada, 2006, pp. 1-8.
[18] Y. Wang, Z. Cai, G. Guo, and Y. Zhou, "Multiobjective optimization and hybrid evolutionary algorithm to solve constrained optimization problems," IEEE Trans. Syst., Man, Cybern., vol. 37, no. 3, pp. 560-575, Jun. 2007.
[19] R. Mallipeddi and P. N. Suganthan, "Ensemble of constraint handling techniques," IEEE Trans. Evol. Comput., vol. 14, no. 4, pp. 561-579, 2010.
[20] D. H. Wolpert and W. G. Macready, "No free lunch theorems for optimization," IEEE Trans. Evol. Comput., vol. 1, no. 1, pp. 67-82, Apr. 1997.
[21] R. Storn and K. V. Price, "Differential evolution - a simple and efficient heuristic for global optimization over continuous spaces," Journal of Global Optimization, vol. 11, no. 4, pp. 341-359, 1997.
[22] K. V. Price, R. Storn, and J. Lampinen, Differential Evolution - A Practical Approach to Global Optimization. Berlin: Springer, 2005.
[23] S. Das and P. N. Suganthan, "Differential evolution - a survey of the state-of-the-art," IEEE Transactions on Evolutionary Computation, DOI: 10.1109/TEVC.2010.2059031.
[24] A. Griewank and G. F. Corliss, Automatic Differentiation of Algorithms: Theory, Implementation, and Application. Philadelphia: SIAM, 1991.
[25] S. L. Campbell and C. D. Meyer, Generalized Inverses of Linear Transformations. New York: Dover Publications, 1979.
[26] R. Mallipeddi and P. N. Suganthan, Problem Definitions and Evaluation Criteria for the CEC 2010 Competition and Special Session on Single Objective Constrained Real-Parameter Optimization, Technical Report, Nanyang Technological University, Singapore, Nov. 2009. http://www.ntu.edu.sg/home/EPNSugan.
