A Genetic Algorithm for Solving Multi-Constrained Function Optimization Problems Based on KS Function

Jianhua Xiao, Jin Xu, Zehui Shao, Congfeng Jiang and Linqiang Pan

J. H. Xiao, J. Xu, Z. H. Shao and L. Q. Pan are with the Key Laboratory of Image Processing and Intelligent Control, Department of Control Science and Engineering, Huazhong University of Science and Technology, Wuhan 430074, China (Corresponding author: [email protected]). C. F. Jiang is with the School of Hydropower and Information Engineering, Huazhong University of Science and Technology, Wuhan 430074, China.

Abstract— In this paper, a new genetic algorithm based on the KS function is proposed for solving multi-constrained optimization problems. Firstly, utilizing the agglomeration feature of the KS function, all constraints of the optimization problem are aggregated into a single constraint. Then, a genetic algorithm is used to solve the optimization problem after the compression of constraints. Finally, simulation results on benchmark functions show the efficiency of our algorithm.

I. INTRODUCTION

Most real-world, difficult-to-solve engineering optimization problems are characterized as multi-constrained nonlinear optimization problems. Many constrained optimization problems from industrial engineering are very complex and hard to solve by conventional optimization techniques. The genetic algorithm has been successfully applied to a variety of constrained optimization problems [1], [2], [3]. Compared with traditional optimization approaches, such as the successive quadratic programming algorithm [4] and the gradient projection method [5], the genetic algorithm not only needs less information, such as gradients and continuity, but is also a global search approach.

The central problem in applying the genetic algorithm to constrained optimization is how to handle constraints. The penalty function approach is perhaps the most common technique used to handle constraints in genetic algorithms for constrained optimization. Researchers have proposed various sophisticated penalty function approaches, such as the multilevel penalty function [6], dynamic penalty functions [7], and penalty functions involving temperature-based evolution of penalty parameters with repair operators [8]. The main drawback of all these approaches is the difficulty of selecting the right penalty parameters to obtain feasible solutions, which directly affects the efficiency of the algorithm, especially for multi-constrained nonlinear optimization problems. A method with a self-adaptive penalty function was proposed in [9], but a shortcoming of this method is that it requires the definition of a scaling factor. Michalewicz and Schoenauer [10] concluded that the static penalty function method is more robust than the sophisticated methods.

In this paper, firstly, by introducing the KS function and utilizing its agglomeration feature, all constraints of the optimization problem are aggregated into a single constraint whose precision is controlled by one parameter.

Then, a genetic algorithm is used to solve the optimization problem after the compression of constraints. Finally, simulation results on six benchmark functions show the efficiency of our algorithm.

The paper is organized as follows: Section II describes multi-constrained nonlinear optimization problems and introduces the KS function. Section III describes the genetic operators. In Section IV, the genetic algorithm for constrained optimization based on KS function is described. The test function results and analysis follow in Section V. Section VI concludes the paper.

II. PROBLEM DESCRIPTION

A. Mathematical Model

Constrained optimization deals with the problem of optimizing an objective function in the presence of equality and/or inequality constraints. It is an extremely important tool in almost every area of engineering, operations research and mathematics, because many practical problems cannot be successfully modeled as linear programs. The general constrained optimization problem may be written as follows [11], [12]:

min y = f(X),
s.t. gj(X) ≤ 0, (j = 1, ..., m),    (1)

where f(X), gj(X) are real-valued functions defined on E^n, X ∈ E^n is an n-dimensional real vector with components x1, x2, ..., xn that must satisfy the constraints while minimizing the objective, and the set F ⊂ E^n defines the feasible region. If the problem has equality constraints gj(X) = 0, each of them may be transformed into two inequality constraints gj(X) ≥ 0 and gj(X) ≤ 0.
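To make the later formulas concrete, the following is a minimal Python sketch (not from the paper) of representing problem (1) as an objective plus a list of gj(X) ≤ 0 callables; the helper name and the example are illustrative assumptions, and the equality-to-inequality conversion follows the rule just described.

```python
def as_inequalities(inequalities, equalities=()):
    """Collect all constraints of problem (1) as g_j(x) <= 0 callables.

    Each equality h(x) = 0 is replaced by the pair of inequalities
    h(x) <= 0 and -h(x) <= 0, as described above.
    """
    constraints = list(inequalities)
    for h in equalities:
        constraints.append(h)
        constraints.append(lambda x, h=h: -h(x))  # bind h to avoid late binding
    return constraints


if __name__ == "__main__":
    # Toy example in the spirit of Test P4 below: equality x2 - x1^2 = 0.
    cons = as_inequalities(inequalities=[],
                           equalities=[lambda x: x[1] - x[0] ** 2])
    print([g((0.5, 0.25)) for g in cons])  # both values are ~0 at a feasible point
```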

B. KS Function

The KS function was first proposed by Kreisselmeier and Steinhauser in 1979 [13], and it has been continuously developed over the past 30 years. Let gj(X) (j = 1, 2, ..., m) be a set of real-valued functions on the n-dimensional Euclidean space, where X ∈ E^n. In the exponential space, the maximum quasi-differentiable envelope of the gj(X) is defined as follows:

KS(ρ, X) = (1/ρ) ln[ Σ_{j=1}^{m} e^{ρ·gj(X)} ],    (2)

where ρ > 0 is a control parameter chosen according to the problem at hand.

Let

gmax(X) = max_{j} {gj(X)},    (3)

then

KS(ρ, X) = gmax(X) + (1/ρ) ln{ Σ_{j=1}^{m} e^{ρ·[gj(X) − gmax(X)]} },    (4)

gmax(X) ≤ KS(ρ, X) ≤ gmax(X) + (1/ρ) ln(m).    (5)

Equation (5) indicates that the larger ρ is, the closer the KS function approximates gmax(X) [14]. It is clear that equation (2) is the maximum envelope of the original constraints based on the KS function.
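As an illustration of equations (2)-(5), the following is a minimal Python sketch (not part of the paper) that aggregates a list of constraints using the shifted form (4), which avoids overflow for large ρ, and checks the bound (5); the function name and the sample constraints are assumptions.

```python
import math

def ks_aggregate(constraints, x, rho=100.0):
    """KS aggregation of g_j(x) <= 0 constraints, computed via equation (4)."""
    g = [gj(x) for gj in constraints]
    g_max = max(g)
    shifted_sum = sum(math.exp(rho * (gi - g_max)) for gi in g)
    return g_max + math.log(shifted_sum) / rho


if __name__ == "__main__":
    cons = [lambda x: x[0] + x[1] - 2.0,        # g1(x) <= 0
            lambda x: x[0] + 5.0 * x[1] - 5.0]  # g2(x) <= 0
    x, rho = (1.0, 0.9), 100.0
    ks = ks_aggregate(cons, x, rho)
    g_max = max(g(x) for g in cons)
    # Bound of equation (5): g_max(x) <= KS(rho, x) <= g_max(x) + ln(m)/rho.
    assert g_max <= ks <= g_max + math.log(len(cons)) / rho
    print(ks, g_max)
```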

C. Transform

By making full use of the quasi-differentiable envelope property of the KS function, a multi-constrained optimization problem such as equation (1) can be transformed into an optimization problem with a single constraint. In this way, the number of constraints of the original problem is greatly reduced. Let the maximum envelope space of the KS function be M; then M ⊂ F. Considering equation (1), we define the function sgn(x) as follows:

sgn(x) = { 1, if x > 0,
           0, if x ≤ 0.    (6)

Obviously, if X ∈ M, then sgn[KS(ρ, X)] = 0; otherwise sgn[KS(ρ, X)] = 1. If the optimal solution lies in F − M, the method proposed in this paper obtains a sub-optimal solution, and the algorithm based on KS function can still satisfy general engineering applications. If the optimal solution lies in M, the optimal solution can be achieved. We can construct a simple penalty function [15]:

ϕ(X) = f(X) + M × sgn[KS(ρ, X)],    (7)

where M is a sufficiently large positive number. The original problem (1) can thus be transformed into the unconstrained problem

min ϕ(X).    (8)

It can be proved that the solution X* of min ϕ(X) is the minimal solution or an approximate minimal solution. If we obtain X* ∈ M, it must be the minimal solution of the original multi-constrained problem.
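The transform of equations (6)-(8) can be sketched as follows (an illustrative Python fragment, not the authors' MATLAB code); penalty_constant plays the role of the constant M in equation (7), and the toy problem in the usage example is an assumption.

```python
import math

def ks(constraints, x, rho=100.0):
    # KS aggregation of all g_j(x) <= 0 into one constraint value (equation (4)).
    g = [gj(x) for gj in constraints]
    g_max = max(g)
    return g_max + math.log(sum(math.exp(rho * (gi - g_max)) for gi in g)) / rho

def phi(f, constraints, x, rho=100.0, penalty_constant=20000.0):
    """Penalized objective of equation (7): f(x) + M * sgn(KS(rho, x))."""
    violated = 1.0 if ks(constraints, x, rho) > 0.0 else 0.0  # sgn of equation (6)
    return f(x) + penalty_constant * violated


if __name__ == "__main__":
    # Toy problem: minimize x^2 subject to 1 - x <= 0.
    objective = lambda x: x[0] ** 2
    cons = [lambda x: 1.0 - x[0]]
    print(phi(objective, cons, (0.5,)))  # infeasible point: heavily penalized
    print(phi(objective, cons, (1.5,)))  # feasible point: just f(x) = 2.25
```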

III. GENETIC OPERATORS

In this section, the recombination-insertion generation model is established with binary coding. The crossover operator is one-point crossover, and the mutation operator is the uniform mutation operator, which can move freely in the solution space [16].


(1) Recombination-insertion generation model

The recombination-insertion generation model offers a good balance between search and exploitation. It is described as follows:
a) Generate a new population of size p(t) × pg by selection from the t-th generation p(t) with the gap probability pg.
b) Perform crossover and mutation on the selected population.
c) Insert and recombine the offspring into a new population to generate the next generation p(t).

(2) One-point crossover operator

One-point crossover is also called simple crossover. It randomly selects one cut point and exchanges the right parts of two parents to generate offspring. For example, let two parents be X = [x1, x2, ..., xn] and Y = [y1, y2, ..., yn]. If they are crossed after the k-th position with crossover rate pc, the resulting offspring are
X' = [x1, x2, ..., xk, yk+1, yk+2, ..., yn],
Y' = [y1, y2, ..., yk, xk+1, xk+2, ..., xn].

(3) Uniform mutation operator

Uniform mutation is a basic operator of the genetic algorithm, which simply replaces a gene with a randomly selected value within a specified range. It is described in detail as follows:
a) Take every bit in turn as a mutation point.
b) For every mutation point, substitute a random value for the original one with the given mutation rate pm.
A minimal sketch of these two operators is given below.
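The sketch below is an illustrative Python version of the two operators, not the authors' implementation; the chromosomes are lists of bits, and the default rates match the settings reported in Section V.

```python
import random

def one_point_crossover(parent_x, parent_y, pc=0.7):
    """Exchange the parts of two binary parents to the right of a random cut point."""
    if random.random() >= pc or len(parent_x) < 2:
        return parent_x[:], parent_y[:]
    k = random.randint(1, len(parent_x) - 1)  # cut after the k-th position
    return (parent_x[:k] + parent_y[k:],
            parent_y[:k] + parent_x[k:])

def uniform_mutation(chromosome, pm=0.0017):
    """Visit every bit in turn and replace it with a random bit with probability pm."""
    return [random.randint(0, 1) if random.random() < pm else bit
            for bit in chromosome]


if __name__ == "__main__":
    x, y = [0] * 25, [1] * 25                 # 25-bit individuals, as in Section V
    child_a, child_b = one_point_crossover(x, y)
    print(child_a)
    print(uniform_mutation(child_b, pm=0.1))  # larger pm only to make the effect visible
```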

IV. ALGORITHM DESCRIPTION

In this paper, the KS function is applied to handle the multiple constraints of a constrained optimization problem. Utilizing the agglomeration feature of the KS function, all constraints are aggregated into a single constraint whose precision is controlled by one parameter, which avoids the difficulty of weight selection. The detailed procedure of the algorithm is given as follows:
Step 1. (KS function) The multiple constraints of the original problem are reduced to one constraint by the KS function.
Step 2. (Transform) The constrained optimization problem is translated into an unconstrained optimization problem by the penalty function.
Step 3. (Initialization) Initialize the GA parameters, randomly generate the initial population P(0), and let the generation number t = 0.
Step 4. (Crossover) Execute the crossover operator by the method in (2) of Section III.
Step 5. (Mutation) Execute the mutation operator by the method in (3) of Section III.
Step 6. (Selection and recombination) Select individuals from the t-th generation with the gap probability, insert and recombine them into a new population to generate the next generation P(t + 1), and let t = t + 1.
Step 7. (Termination) If the termination conditions hold, stop and return the optimal or approximate solution obtained as the global optimal solution of the problem. Otherwise, go to Step 4.
A compact illustrative sketch of this procedure is given below.
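The following end-to-end sketch of Steps 1-7 is in Python and is illustrative only; the authors' implementation is in MATLAB. The binary decoding and the selection/recombination details are simplified assumptions: selection keeps the best fraction given by the gap rate, and offspring are merged back into the population by penalized fitness. The default parameters follow Section V.

```python
import math
import random

def ks(cons, x, rho):                          # Step 1: KS aggregation (equation (4))
    g = [gj(x) for gj in cons]
    g_max = max(g)
    return g_max + math.log(sum(math.exp(rho * (gi - g_max)) for gi in g)) / rho

def phi(f, cons, x, rho, big_m):               # Step 2: penalized objective (equation (7))
    return f(x) + (big_m if ks(cons, x, rho) > 0.0 else 0.0)

def decode(bits, bounds, nbits):               # binary chromosome -> real vector
    xs = []
    for i, (lo, hi) in enumerate(bounds):
        value = int("".join(map(str, bits[i * nbits:(i + 1) * nbits])), 2)
        xs.append(lo + (hi - lo) * value / (2 ** nbits - 1))
    return xs

def ks_ga(f, cons, bounds, rho=100.0, big_m=20000.0, nbits=25,
          pop_size=100, generations=500, pc=0.7, pm=0.0017, gap=0.9):
    length = nbits * len(bounds)
    cost = lambda ind: phi(f, cons, decode(ind, bounds, nbits), rho, big_m)
    population = [[random.randint(0, 1) for _ in range(length)]
                  for _ in range(pop_size)]                              # Step 3
    for _ in range(generations):
        population.sort(key=cost)
        parents = population[:int(gap * pop_size)]       # Step 6: gap selection (simplified)
        offspring = []
        for a, b in zip(parents[0::2], parents[1::2]):
            if random.random() < pc:                     # Step 4: one-point crossover
                k = random.randint(1, length - 1)
                a, b = a[:k] + b[k:], b[:k] + a[k:]
            for child in (a, b):                         # Step 5: uniform mutation
                offspring.append([random.randint(0, 1) if random.random() < pm else bit
                                  for bit in child])
        population = sorted(population + offspring, key=cost)[:pop_size]  # insert & recombine
    best = min(population, key=cost)                     # Step 7: return the best found
    return decode(best, bounds, nbits), cost(best)


if __name__ == "__main__":
    # Usage example on Test P6 from Section V (optimum about -6961.8).
    f = lambda x: (x[0] - 10) ** 3 + (x[1] - 20) ** 3
    cons = [lambda x: -(x[0] - 5) ** 2 - (x[1] - 5) ** 2 + 100,
            lambda x: (x[0] - 6) ** 2 + (x[1] - 5) ** 2 - 82.81]
    print(ks_ga(f, cons, [(13, 100), (0, 100)], generations=100))
```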

V. SIMULATION RESULTS

A. Test Functions and Algorithm Parameters

In the above sections, a genetic algorithm for constrained optimization based on KS function has been proposed. In the simulation, six benchmark functions are used to test our algorithm. These test functions are described as follows.

Test P1 [17]:
min f(x) = (x1 − 10)^2 + 5(x2 − 12)^2 + x3^4 + 3(x4 − 11)^2 + 10x5^6 + 7x6^2 + x7^4 − 4x6x7 − 10x6 − 8x7,
s.t. −127 + 2x1^2 + 3x2^4 + x3 + 4x4^2 + 5x5 ≤ 0,
−282 + 7x1 + 3x2 + 10x3^2 + x4 − x5 ≤ 0,
−196 + 23x1 + x2^2 + 6x6^2 − 8x7 ≤ 0,
4x1^2 + x2^2 − 3x1x2 + 2x3^2 + 5x6 − 11x7 ≤ 0,    (9)
where −10 ≤ xi ≤ 10 for i = 1, ..., 7. The optimum solution is x* = (2.330499, 1.951372, −0.4775414, 4.365726, −0.6244870, 1.038131, 1.594227), where f(x*) = 680.6300573.
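As an illustration of how a test problem is encoded for the method above, the following Python fragment (not from the paper) defines P1's objective and constraints and checks the reported optimum; the function names are assumptions.

```python
def f_p1(x):
    x1, x2, x3, x4, x5, x6, x7 = x
    return ((x1 - 10) ** 2 + 5 * (x2 - 12) ** 2 + x3 ** 4 + 3 * (x4 - 11) ** 2
            + 10 * x5 ** 6 + 7 * x6 ** 2 + x7 ** 4 - 4 * x6 * x7 - 10 * x6 - 8 * x7)

def g_p1(x):
    x1, x2, x3, x4, x5, x6, x7 = x
    return [-127 + 2 * x1 ** 2 + 3 * x2 ** 4 + x3 + 4 * x4 ** 2 + 5 * x5,
            -282 + 7 * x1 + 3 * x2 + 10 * x3 ** 2 + x4 - x5,
            -196 + 23 * x1 + x2 ** 2 + 6 * x6 ** 2 - 8 * x7,
            4 * x1 ** 2 + x2 ** 2 - 3 * x1 * x2 + 2 * x3 ** 2 + 5 * x6 - 11 * x7]


if __name__ == "__main__":
    x_star = (2.330499, 1.951372, -0.4775414, 4.365726,
              -0.6244870, 1.038131, 1.594227)
    print(f_p1(x_star))                            # about 680.63
    print(all(g <= 1e-3 for g in g_p1(x_star)))    # all constraints satisfied
```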

Test P2 [18]:
max f(x) = −2x1^2 + 2x1x2 − 2x2^2 + 4x1 + 6x2,
s.t. x1 + x2 ≤ 2,
x1 + 5x2 ≤ 5,    (10)
where xi ≥ 0 for i = 1, 2. The optimum solution is x* = (1.1284, 0.7443), where f(x*) = 7.1612.

Test P3 [18]:
min f(x) = (x1^2 + x2 − 11)^2 + (x1 + x2^2 − 7)^2,
s.t. (x1 − 0.05)^2 + (x2 − 2.5)^2 − 4.84 ≤ 0,
4.84 − x1^2 − (x2 − 2.5)^2 ≤ 0,    (11)
where 0 ≤ x1 ≤ 6 and 0 ≤ x2 ≤ 6. The optimum solution is x* = (2.2468, 2.3815), where f(x*) = 13.5908.

Test P4 [19]:
min f(x) = x1^2 + (x2 − 1)^2,
s.t. x2 − x1^2 = 0,    (12)

where −1 ≤ x1 ≤ 1 and −1 ≤ x2 ≤ 1. The optimum solution is x* = (±√2/2, 0.5), where f(x*) = 0.75.

Test P5 [19]:
max f(x) = [sin^3(2πx1) sin(2πx2)] / [x1^3 (x1 + x2)],
s.t. x1^2 − x2 + 1 ≤ 0,
1 − x1 + (x2 − 4)^2 ≤ 0,    (13)

where 0 ≤ x1, x2 ≤ 10. The optimum solution is located at x* = (1.22797, 4.24537), where f(x*) = 0.09583.

Test P6 [20]:
min f(x) = (x1 − 10)^3 + (x2 − 20)^3,
s.t. −(x1 − 5)^2 − (x2 − 5)^2 + 100 ≤ 0,
(x1 − 6)^2 + (x2 − 5)^2 − 82.81 ≤ 0,    (14)
where 13 ≤ x1 ≤ 100 and 0 ≤ x2 ≤ 100. The optimum solution is x* = (14.095, 0.84296), where f(x*) = −6961.81388.

The genetic algorithm for constrained optimization based on KS function is implemented in MATLAB 7.1. The algorithm parameters used in our experiments are: the number of runs is 20, the population size is 100, the maximum generation number is 500, the coded length of the binary individuals is 25, the crossover rate is 0.7, the mutation rate is 0.0017, ρ is 100, M is 20000, and the gap rate is 0.9.

B. Result and Analysis

We apply the proposed KS-function constraint handling method and the genetic algorithm based on KS function to the six benchmark functions. The experimental results obtained with the proposed algorithm are shown in Table I, and Table II gives the experimental results obtained with the dynamic penalty method [7]. The convergence curves of the algorithm on the six test functions are also shown in Figs. 1-6.

TABLE I
EXPERIMENTAL RESULTS USING THE PROPOSED ALGORITHM

Function   Optimal     Best        Mean        Worst
Min. P1    680.6301    680.9530    681.4650    683.2632
Max. P2    7.1613      7.1612      7.1609      7.1594
Min. P3    13.5908     13.5908     13.5910     13.5912
Min. P4    0.7500      0.7500      0.7502      0.7510
Max. P5    0.0958      0.0958      0.0958      0.0958
Min. P6    -6961.8     -6945.6     -6944.4     -6938.3

TABLE II
EXPERIMENTAL RESULTS USING THE DYNAMIC PENALTY METHOD

Function   Optimal     Best        Mean        Worst
Min. P1    680.6301    680.6320    680.6480    680.7610
Max. P2    7.1613      7.1023      7.0541      6.9547
Min. P3    13.5908     13.7501     13.8032     13.8456
Min. P4    0.7500      0.7500      0.7580      0.8500
Max. P5    0.0958      0.0958      0.0954      0.0878
Min. P6    -6961.8     -6898.0     -6502.5     -5962.8

To validate the proposed algorithm's efficiency and feasibility, we compare it with the dynamic penalty function method. Comparing Table I and Table II, it is clear that the proposed algorithm performs better than the dynamic penalty function method according to all three criteria (best, mean, worst) for all benchmark functions except P1. The two methods perform the same on problems P4 and P5 according to the best criterion, but the dynamic penalty method performs worse than our algorithm according to the mean and worst criteria. Furthermore, Figs. 1-6 also show the good convergence of our algorithm on the six benchmark functions.


Fig. 1. Test P1 convergence curve
Fig. 2. Test P2 convergence curve
Fig. 3. Test P3 convergence curve
Fig. 4. Test P4 convergence curve
Fig. 5. Test P5 convergence curve
Fig. 6. Test P6 convergence curve

VI. CONCLUSIONS

In this paper, a new constraint handling technique is developed and used to solve multi-constrained optimization problems. The simulation results show that our algorithm is effective for constrained optimization problems. Although the genetic algorithm for constrained optimization based on KS function looks simple, it has several advantages: for example, it can escape local optima by exploiting the global search ability of the genetic algorithm, and it does not require weight values to be specified. The genetic algorithm for constrained optimization based on KS function deserves to be further investigated.


ACKNOWLEDGEMENT

The authors would like to thank Prof. Zhao Mingwang for his valuable opinions, which helped to improve this paper greatly. The project was supported by the National Natural Science Foundation of China (Grant Nos. 60373089, 60674106, 30570431, and 60533010), the Program for New Century Excellent Talents in University (NCET-05-0612), the Ph.D. Programs Foundation of Ministry of Education of China (20060487014), and the Chenguang Program of Wuhan (200750731262).

REFERENCES


[1] K. Deb, “An Efficient Constraint Handling Method for Genetic Algorithms”, Computer Methods in Applied Mechanics and Engineering, vol. 186, pp. 311-338, 2000.


[2] S. Venkatraman, G. G. Yen, "A Generic Framework for Constrained Optimization Using Genetic Algorithms", IEEE Transactions on Evolutionary Computation, vol. 9, no. 4, pp. 424-435, 2005.
[3] T. P. Runarsson, X. Yao, "Stochastic Ranking for Constrained Evolutionary Optimization", IEEE Transactions on Evolutionary Computation, vol. 4, pp. 284-294, 2000.
[4] R. Fletcher, "A General Quadratic Programming Algorithm", IMA Journal of Applied Mathematics, vol. 7, no. 1, pp. 76-91, 1971.
[5] J. B. Rosen, "The Gradient Projection Method for Nonlinear Programming: Part II. Nonlinear Constraints", Journal of the Society for Industrial and Applied Mathematics, vol. 9, no. 4, pp. 514-532, 1961.
[6] A. Homaifar, S. H. V. Lai, X. Qi, "Constrained Optimization via Genetic Algorithms", Simulation, vol. 62, no. 4, pp. 242-254, 1994.
[7] J. A. Joines, C. R. Houck, "On the Use of Nonstationary Penalty Functions to Solve Nonlinear Constrained Optimization Problems with GAs", in Proceedings of the International Conference on Evolutionary Computation, Piscataway, pp. 579-584, 1994.
[8] Z. Michalewicz, N. Attia, "Evolutionary Optimization of Constrained Problems", in Proceedings of the Third Annual Conference on Evolutionary Programming, Singapore, pp. 98-108, 1994.
[9] A. Carlos, "Use of a Self-Adaptive Penalty Approach for Engineering Optimization Problems", Computers in Industry, vol. 41, pp. 113-127, 2000.
[10] Z. Michalewicz, M. Schoenauer, "Evolutionary Algorithms for Constrained Parameter Optimization Problems", Evolutionary Computation, vol. 4, no. 1, pp. 1-32, 1996.
[11] S. D. Qian, Y. Q. Hu, Y. A. Gan, Operational Research, Beijing: Tsinghua University Press, pp. 174-193, 1990.
[12] D. M. Himmelblau, Applied Nonlinear Programming, New York: McGraw-Hill, 1972.
[13] G. Kreisselmeier, R. Steinhauser, "Systematic Control Design by Optimizing a Vector Performance Index", in Proceedings of the IFAC Symposium on Computer Aided Design of Control Systems, Zurich, Switzerland, 1979.
[14] J. Barthelemy, M. F. Riley, "An Improved Multilevel Optimization Approach for Design of Complex Engineering Systems", AIAA Paper 86-0950-CP, 1986.
[15] D. A. Wismer, R. Chattergy, Introduction to Nonlinear Optimization: A Problem Solving Approach, North-Holland Publishing Company, 1979.
[16] Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, New York: Springer-Verlag, 1996.
[17] W. Hock, K. Schittkowski, Test Examples for Nonlinear Programming Codes, Berlin, Germany: Springer-Verlag, 1981.
[18] S. J. Mu, H. Y. Su, "A New Genetic Algorithm to Handle the Constrained Optimization Problem", in Proceedings of the 41st IEEE Conference on Decision and Control, Las Vegas, Nevada, USA, pp. 739-740, 2002.
[19] S. Koziel, Z. Michalewicz, "Evolutionary Algorithms, Homomorphous Mappings, and Constrained Parameter Optimization", Evolutionary Computation, vol. 7, no. 1, pp. 19-44, 1999.
[20] C. Floudas, P. Pardalos, A Collection of Test Problems for Constrained Global Optimization, Berlin, Germany: Springer-Verlag, 1987.
