Computers and Structures 110–111 (2012) 151–166


Water cycle algorithm – A novel metaheuristic optimization method for solving constrained engineering optimization problems

Hadi Eskandar (a), Ali Sadollah (b), Ardeshir Bahreininejad (b,*), Mohd Hamdi (b)

(a) Faculty of Engineering, Semnan University, Semnan, Iran
(b) Faculty of Engineering, University of Malaya, 50603 Kuala Lumpur, Malaysia

Article info

Article history: Received 21 June 2012; Accepted 27 July 2012; Available online 26 August 2012

Keywords: Water cycle algorithm; Metaheuristic; Global optimization; Constrained problems; Engineering design; Constraint handling

Abstract

This paper presents a new optimization technique called the water cycle algorithm (WCA), which is applied to a number of constrained optimization and engineering design problems. The fundamental concepts and ideas underlying the proposed method are inspired by nature: the observation of the water cycle and of how rivers and streams flow to the sea in the real world. A comparative study is carried out to show the effectiveness of the WCA over other well-known optimizers in terms of computational effort (measured as the number of function evaluations) and function value (accuracy). © 2012 Elsevier Ltd. All rights reserved.

1. Introduction

Over the last decades, various algorithms have been developed to solve a variety of constrained engineering optimization problems. Most of these algorithms are based on numerical linear and nonlinear programming methods that may require substantial gradient information and usually seek to improve the solution in the neighborhood of a starting point. Such numerical optimization algorithms provide a useful strategy for obtaining the global optimum for simple and ideal models. Many real-world engineering optimization problems, however, are very complex in nature and quite difficult to solve. If the problem has more than one local optimum, the result may depend on the selection of the starting point, and the obtained optimal solution may not necessarily be the global optimum. Furthermore, gradient search may become unstable when the objective function and constraints have multiple or sharp peaks. The drawbacks (in efficiency and accuracy) of existing numerical methods have encouraged researchers to rely on metaheuristic algorithms, based on simulations and inspired by nature, to solve engineering optimization problems. Metaheuristic algorithms commonly operate by combining rules and randomness to imitate natural phenomena [1].

* Corresponding author: A. Bahreininejad ([email protected]). doi:10.1016/j.compstruc.2012.07.010

The phenomena may include biological evolution, as in genetic algorithms (GAs) proposed by Holland [2] and Goldberg [3]; animal behavior, as in particle swarm optimization (PSO) proposed by Kennedy and Eberhart [4]; and physical annealing, generally known as simulated annealing (SA), proposed by Kirkpatrick et al. [5]. Among these methods, evolutionary algorithms (EAs), generally known as general-purpose optimization algorithms, are capable of finding near-optimum solutions to numerical real-valued test problems, and have been applied very successfully to constrained optimization problems [6]. GAs are based on the genetic processes of biological organisms [2,3]. Over many generations, natural populations evolve according to the principle of natural selection (i.e. survival of the fittest). In GAs, a potential solution to a problem is represented as a set of parameters: each independent design variable is represented by a gene, and combining the genes produces a chromosome, which represents an individual (solution). The efficiency of different architectures of evolutionary algorithms, in comparison to other heuristic techniques, has been tested on both generic [7–9] and engineering design [10] problems. Recently, Chootinan and Chen [11] proposed a constraint-handling technique based on a gradient-based repair method, embedded into a GA as a special operator. PSO is a more recently developed metaheuristic inspired by the choreography of a bird flock [4]. The approach can be viewed as a distributed behavioral algorithm that performs a multidimensional search. It makes use of a


velocity vector to update the current position of each particle in the swarm. In Ref. [12], suggestions are given for choosing the parameters used in PSO. He and Wang [13] proposed an effective co-evolutionary PSO for constrained problems, in which PSO is applied to evolve both decision variables and penalty factors; the penalty factors are treated as search variables and evolved by GA or PSO toward their optimal values. Recently, Gomes [14] applied PSO to truss optimization with dynamic constraints. The present paper introduces a novel metaheuristic algorithm for optimizing constrained functions and engineering problems. The main objective is to present a new global optimization algorithm for solving constrained optimization problems: a population-based method named the water cycle algorithm (WCA). The performance of the WCA is tested on several constrained optimization problems, and the obtained results are compared with those of other optimizers in terms of best function value and number of function evaluations. The remainder of this paper is organized as follows: in Section 2, the proposed WCA and the concepts behind it are introduced in detail. In Section 3, the performance of the proposed optimizer is validated on different constrained optimization and engineering design problems. Finally, conclusions are given in Section 4.

2. Water cycle algorithm

2.1. Basic concepts

The idea of the proposed WCA is inspired by nature and based on the observation of the water cycle and of how rivers and streams flow downhill towards the sea in the real world. To understand this further, the basics of how rivers are created and how water travels down to the sea are given as follows. A river, or a stream, is formed whenever water moves downhill from one place to another. This means that most rivers are formed high up in the mountains, where snow or ancient glaciers melt. Rivers always flow downhill. On their downhill journey, eventually ending up in a sea, they collect water from rain and from other streams.

Fig. 1 is a simplified diagram of part of the hydrologic cycle. Water in rivers and lakes evaporates, while plants give off (transpire) water during photosynthesis. The evaporated water is carried into the atmosphere to generate clouds, which then condense in the colder atmosphere, releasing the water back to the earth in the form of rain or precipitation. This process is called the hydrologic cycle (water cycle) [15]. In the real world, as snow melts and rain falls, most of the water enters the aquifer, the vast fields of water reserves underground, sometimes called groundwater (see the percolation arrow in Fig. 1). The water in the aquifer then flows beneath the land the same way water flows on the ground surface (downhill). The underground water may be discharged into a stream, marsh, or lake. Water evaporates from the streams and rivers, in addition to being transpired from trees and other greenery, bringing more clouds and thus more rain as this cycle continues [15].

Fig. 2 is a schematic diagram of how streams flow to the rivers and rivers flow to the sea; it resembles a tree or the roots of a tree. The smallest river branches (the twigs of the tree-shaped figure in Fig. 2, shown in bright green) are the small streams where the rivers begin to form. These tiny streams are called first-order streams (shown in Fig. 2 in green). Wherever two first-order streams join, they make a second-order stream (shown in Fig. 2 in white). Where two second-order streams join, a third-order stream is formed (shown in Fig. 2 in blue), and so on until the rivers finally flow out into the sea (the most downhill place in the assumed world) [16]. (For interpretation of color in Fig. 2, the reader is referred to the web version of this article.) Fig. 3 shows the city of Arkhangelsk on the Dvina River. Arkhangelsk (Archangel in English) is a city in Russia that drapes both banks of the Dvina River, near where it flows into the White Sea. A typical real-life stream/river/sea formation (the Dvina River) is shown in Fig. 3, resembling the shape in Fig. 2.

2.2. The proposed WCA

Similar to other metaheuristic algorithms, the proposed method begins with an initial population, the so-called raindrops. First, we assume that we have rain or precipitation. The best individual (best raindrop) is chosen as the sea. Then, a number of good raindrops are chosen as rivers, and the rest of the raindrops are considered as streams, which flow to the rivers and the sea. Depending on its magnitude of flow, which is described in the following subsections, each river absorbs water from the streams; in fact, the amount of water entering a river and/or the sea varies from stream to stream. In addition, rivers flow to the sea, which is the most downhill location.

2.2.1. Create the initial population

In order to solve an optimization problem using population-based metaheuristic methods, the values of the problem variables must be formed as an array. In GA and PSO terminologies such an array is called a ''Chromosome'' or a ''Particle Position'', respectively. Accordingly, in the proposed method it is called a ''Raindrop'' for a single solution. In an Nvar-dimensional optimization problem, a raindrop is an array of 1 × Nvar. This array is defined as follows:

Raindrop = [x1, x2, x3, ..., xNvar]    (1)

To start the optimization algorithm, a candidate matrix of raindrops of size Npop × Nvar is generated (i.e. a population of raindrops). Hence, the matrix X, which is generated randomly, is given as follows (the rows and columns correspond to the population members and the design variables, respectively):

Population of raindrops = [Raindrop_1; Raindrop_2; Raindrop_3; ...; Raindrop_Npop]

        | x_1^1      x_2^1      ...  x_Nvar^1    |
      = | x_1^2      x_2^2      ...  x_Nvar^2    |
        | ...        ...        ...  ...         |
        | x_1^Npop   x_2^Npop   ...  x_Nvar^Npop |    (2)

Each of the decision variable values (x1, x2, ..., xNvar) can be represented as a floating-point number (real values) for continuous problems, or drawn from a predefined set for discrete problems. The cost of a raindrop is obtained by evaluating the cost function C:

C_i = Cost_i = f(x_1^i, x_2^i, ..., x_Nvar^i),    i = 1, 2, 3, ..., Npop    (3)

where Npop and Nvar are the number of raindrops (initial population) and the number of design variables, respectively.

In the first step, Npop raindrops are created. A number Nsr of the best individuals (minimum values) are selected as the sea and the rivers. The raindrop with the minimum value among them is considered as the sea. In fact, Nsr is the sum of the number of rivers (a user parameter) and a single sea, as given in Eq. (4). The rest of the population (the raindrops that form the streams flowing to the rivers, or directly to the sea) is calculated using Eq. (5).

Fig. 1. Simplified diagram of the hydrologic cycle (water cycle).

Fig. 2. Schematic diagram of how streams flow to the rivers and also rivers flow to the sea.

Nsr = Number of Rivers + 1 (sea)    (4)

NRaindrops = Npop − Nsr    (5)
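As a minimal sketch of Eqs. (1)–(5), assuming a simple sphere cost function and bound values that are not from the paper, the population initialization and the sea/river selection could look like the following (Python used for illustration; the authors' implementation is in MATLAB):

```python
import numpy as np

def init_population(cost_fn, lb, ub, n_pop, n_sr, seed=0):
    """Illustrative reading of Eqs. (1)-(5): build the matrix of raindrops,
    evaluate their costs, and split the sorted population into one sea,
    (n_sr - 1) rivers, and N_Raindrops = n_pop - n_sr streams."""
    rng = np.random.default_rng(seed)
    n_var = len(lb)
    pop = lb + rng.random((n_pop, n_var)) * (ub - lb)   # Eq. (2)
    costs = np.array([cost_fn(x) for x in pop])         # Eq. (3)
    order = np.argsort(costs)                           # minimization: best first
    pop, costs = pop[order], costs[order]
    sea, rivers, streams = pop[0], pop[1:n_sr], pop[n_sr:]
    return sea, rivers, streams, costs

# hypothetical test function (not from the paper): a 4-variable sphere
sphere = lambda x: float(np.sum(x ** 2))
lb, ub = np.full(4, -5.0), np.full(4, 5.0)
sea, rivers, streams, costs = init_population(sphere, lb, ub, n_pop=50, n_sr=8)
```

With Npop = 50 and Nsr = 8 (the parameter values used later in Section 3), this yields one sea, 7 rivers, and NRaindrops = 42 streams, consistent with Eqs. (4) and (5).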

In order to assign raindrops to the rivers and the sea depending on the intensity of the flow, the following equation is used:

NS_n = round( | Cost_n / Σ_{i=1}^{Nsr} Cost_i | × NRaindrops ),    n = 1, 2, ..., Nsr    (6)

where NS_n is the number of streams that flow to the specific river or to the sea.
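One reading of Eq. (6) can be sketched as follows. Since rounding may leave a remainder, this sketch assigns any leftover streams to the sea; that repair step is an assumption of this sketch, not part of the paper's formula:

```python
import numpy as np

def streams_per_guide(costs_sr, n_raindrops):
    """One reading of Eq. (6): share the N_Raindrops streams among the sea
    and the rivers in proportion to the magnitude of their normalized costs."""
    costs_sr = np.asarray(costs_sr, dtype=float)
    ns = np.round(np.abs(costs_sr / costs_sr.sum()) * n_raindrops).astype(int)
    ns[0] += n_raindrops - ns.sum()   # hand any rounding remainder to the sea
    return ns

# hypothetical costs for one sea and two rivers (index 0 is the sea)
ns = streams_per_guide([-10.0, -6.0, -4.0], n_raindrops=40)
```

For these hypothetical costs the shares are 0.5, 0.3, and 0.2, so the 40 streams split as 20, 12, and 8.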

Fig. 3. The city of Arkhangelsk on the Dvina River (adapted from NASA; image source: http://asterweb.jpl.nasa.gov/gallery-detail.asp?name=Arkhangelsk).

2.2.2. How does a stream flow to the rivers or the sea?

As mentioned in subsection 2.1, the streams are created from the raindrops and join each other to form new rivers. Some of the streams may also flow directly to the sea. All rivers and streams end up in the sea (the best optimal point). Fig. 4 shows a schematic view of a stream's flow towards a specific river. As illustrated in Fig. 4, a stream flows to the river along the connecting line between them, over a randomly chosen distance given as follows:

X ∈ (0, C × d),    C > 1    (7)

where C is a value between 1 and 2 (close to 2); the best choice for C may be 2. The current distance between the stream and the river is denoted by d. The value of X in Eq. (7) is a random number (uniformly distributed, or drawn from any appropriate distribution) between 0 and C × d. Choosing C greater than one enables streams to flow in different directions towards the rivers.
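The move just described, a uniformly distributed step of up to C × d along the line toward the river, can be sketched as follows; the `stream` and `river` positions are hypothetical values chosen for illustration:

```python
import numpy as np

def move_stream(stream, river, C=2.0, rng=None):
    """Eq. (7): move the stream toward its river by a uniformly distributed
    distance X in (0, C*d), where d is the current stream-river separation.
    With C > 1 the stream may overshoot the river and explore its far side."""
    rng = rng or np.random.default_rng()
    return stream + rng.random() * C * (river - stream)

rng = np.random.default_rng(1)
stream, river = np.array([0.0, 0.0]), np.array([4.0, 2.0])
new = move_stream(stream, river, rng=rng)
```

The new position always stays on the stream-river line but may land beyond the river, which is exactly what the overshoot allowed by C > 1 provides.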


This concept may also be used for rivers flowing to the sea. Therefore, the new positions for streams and rivers may be given as:

X_Stream^(i+1) = X_Stream^i + rand × C × (X_River^i − X_Stream^i)    (8)

X_River^(i+1) = X_River^i + rand × C × (X_Sea^i − X_River^i)    (9)

where rand is a uniformly distributed random number between 0 and 1. If the solution given by a stream is better than that of its connecting river, the positions of the river and the stream are exchanged (i.e. the stream becomes a river and the river becomes a stream). A similar exchange can happen between a river and the sea. Fig. 5 depicts the exchange between a river and the stream that is the best solution among the streams.

Fig. 4. Schematic view of a stream's flow to a specific river (the star and circle represent the river and a stream, respectively).

Fig. 5. Exchanging the positions of the stream and the river, where the star represents the river and the black circle shows the best stream among the other streams.

2.2.3. Evaporation condition

Evaporation is one of the most important factors that can prevent the algorithm from rapid (premature) convergence. As can be seen in nature, water evaporates from rivers and lakes, while plants give off (transpire) water during photosynthesis. The evaporated water is carried into the atmosphere to form clouds, which then condense in the colder atmosphere, releasing the water back to earth in the form of rain. The rain creates new streams, and the new streams flow to the rivers, which flow to the sea [15]. This cycle, mentioned in subsection 2.1, is called the water cycle. In the proposed method, the evaporation process causes the sea water to evaporate as rivers/streams flow to the sea. This assumption is proposed in order to avoid getting trapped in local optima. The following pseudocode shows how to determine whether a river flows to the sea:

    if |X_Sea^i − X_River^i| < d_max,    i = 1, 2, 3, ..., Nsr − 1
        perform the evaporation and raining process
    end

where d_max is a small number (close to zero). If the distance between a river and the sea is less than d_max, the river has reached/joined the sea. In this situation the evaporation process is applied and, as seen in nature, after sufficient evaporation the raining (precipitation) starts. A large value for d_max reduces the search near the sea, while a small value intensifies it; thus d_max controls the search intensity near the sea (the optimum solution). The value of d_max adaptively decreases as:

d_max^(i+1) = d_max^i − d_max^i / max_iteration    (10)

2.2.4. Raining process

After the evaporation condition is satisfied, the raining process is applied. In the raining process, the new raindrops form streams in different locations (acting similarly to the mutation operator in GA). The new locations of the newly formed streams are specified by:

X_Stream^new = LB + rand × (UB − LB)    (11)

where LB and UB are the lower and upper bounds defined by the given problem, respectively. Again, the best newly formed raindrop is considered as a river flowing to the sea, and the rest of the new raindrops are assumed to form new streams that flow to the rivers or directly to the sea. In order to enhance the convergence rate and computational performance of the algorithm for constrained problems, Eq. (12) is used only for the streams that flow directly to the sea. It encourages the generation of streams that flow directly to the sea, improving the exploration near the sea (the optimum solution) in the feasible region of constrained problems.

X_stream^new = X_sea + sqrt(μ) × randn(1, Nvar)    (12)

where μ is a coefficient defining the range of the search region near the sea, and randn is a normally distributed random number. A larger value of μ increases the possibility of leaving the feasible region, while a smaller value leads the algorithm to search a smaller region near the sea. A suitable value for μ is 0.1. From a mathematical point of view, the term sqrt(μ) in Eq. (12) represents the standard deviation and, accordingly, μ defines the variance: the generated individuals are distributed with variance μ around the best obtained optimum point (the sea).

2.2.5. Constraint handling

In the search space, streams and rivers may violate either the problem-specific constraints or the limits of the design variables. In the current work, a modified feasibility-based mechanism is used to handle the problem-specific constraints, based on the following four rules [17]:

- Rule 1: Any feasible solution is preferred to any infeasible solution.
- Rule 2: Infeasible solutions containing slight violations of the constraints (from 0.01 in the first iteration down to 0.001 in the last iteration) are considered as feasible solutions.


- Rule 3: Between two feasible solutions, the one with the better objective function value is preferred.
- Rule 4: Between two infeasible solutions, the one with the smaller sum of constraint violations is preferred.

Using the first and fourth rules, the search is oriented towards the feasible region rather than the infeasible region, and applying the third rule guides the search towards the feasible region with good solutions [17]. For most structural optimization problems, the global minimum lies on or close to the boundary of the feasible design space. By applying Rule 2, the streams and rivers approach the boundaries and can reach the global minimum with a higher probability [18].

The schematic view of the proposed method is illustrated in Fig. 6, where circles, stars, and the diamond correspond to streams, rivers, and the sea, respectively. In Fig. 6, the white (empty) shapes refer to the new positions found by the streams and rivers; Fig. 6 is an extension of Fig. 4.

Fig. 6. Schematic view of WCA.

The following paragraphs clarify the differences between the proposed WCA and other metaheuristic methods, such as PSO. In the proposed method, the rivers (a number of best selected points, excluding the best one, the sea) act as guidance points, guiding the other individuals in the population towards better positions (as shown in Fig. 6) and preventing search in inappropriate regions near near-optimum solutions (see Eq. (8)). Furthermore, the rivers are not fixed points: they move towards the sea (the best solution). This procedure (moving streams to the rivers, and then moving rivers to the sea) leads to an indirect move towards the best solution. The procedure of the proposed WCA is shown as a flowchart in Fig. 7. In contrast, in PSO the individuals (particles) search for the best solution based only on their personal and global best experiences. The proposed WCA also uses the evaporation and raining conditions, which resemble the mutation operator in GA; they can prevent the WCA from getting trapped in local solutions, whereas PSO does not appear to possess such a mechanism.

2.2.6. Convergence criteria

For the termination criterion, as commonly considered in metaheuristic algorithms, the best result is reported when the termination condition is met; the condition may be a maximum number of iterations, a CPU time limit, or ε, a small non-negative value defined as an allowable tolerance between the last two results. The WCA proceeds until the maximum number of iterations, used as the convergence criterion, is satisfied.

2.2.7. Steps and flowchart of WCA

The steps of the WCA are summarized as follows:

- Step 1: Choose the initial parameters of the WCA: Nsr, d_max, Npop, max_iteration.
- Step 2: Generate a random initial population and form the initial streams (raindrops), rivers, and sea using Eqs. (2), (4), and (5).
- Step 3: Calculate the value (cost) of each raindrop using Eq. (3).
- Step 4: Determine the intensity of flow for the rivers and sea using Eq. (6).
- Step 5: The streams flow to the rivers using Eq. (8).
- Step 6: The rivers flow to the sea, which is the most downhill place, using Eq. (9).
- Step 7: Exchange the position of a river with the stream that gives the best solution, as shown in Fig. 5.
- Step 8: Similarly to Step 7, if a river finds a better solution than the sea, the positions of the river and the sea are exchanged (see Fig. 5).
- Step 9: Check the evaporation condition using the pseudocode in subsection 2.2.3.
- Step 10: If the evaporation condition is satisfied, the raining process occurs using Eqs. (11) and (12).
- Step 11: Reduce the value of the user-defined parameter d_max using Eq. (10).
- Step 12: Check the convergence criterion. If the stopping criterion is satisfied, the algorithm stops; otherwise, return to Step 5.

3. Numerical examples and test problems

In this section, the performance of the proposed WCA is tested by solving several constrained optimization problems. In order to validate the proposed method on constrained problems, four constrained benchmark problems were solved first; the performance of the WCA was then examined on seven engineering design problems widely used in the literature, and the optimization results were compared with those of other optimizers. The benchmark problems include objective functions of various types (quadratic, cubic, polynomial, and nonlinear) with various numbers of design variables and different types and numbers of inequality and equality constraints.

The proposed algorithm was coded in the MATLAB programming environment, and simulations were run on a Pentium 2.53 GHz processor with 4 GB RAM. The task of optimizing each test function was executed in 25 independent runs. Maximization problems were transformed into minimization ones as −f(x). All equality constraints were converted into inequality constraints, |h(x)| − δ ≤ 0, using the degree of violation δ = 2.2E−16, taken from MATLAB (machine epsilon). For all benchmark problems (except the multiple disc clutch brake problem), the initial parameters of the WCA (Ntotal, Nsr, and d_max) were chosen as 50, 8, and 1E−03, respectively. For the multiple disc clutch brake design problem, the predefined WCA user parameters were chosen as 20, 4, and 1E−03. Different iteration numbers were used for each benchmark function: smaller iteration numbers for problems with fewer design variables and moderate functions, and larger iteration numbers for problems with many decision variables and complex functions. The mathematical formulations of the constrained benchmark functions (problems 1–4) are given in Appendix A. Similarly, the mathematical objective functions and their constraints for the mechanical engineering design problems are presented in Appendix B.
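The equality-to-inequality conversion and the feasibility rules of Section 2.2.5 can be sketched together as follows (δ is MATLAB's machine epsilon, as stated above; Rule 2's adaptive tolerance schedule is folded into the violation measure and omitted here for brevity, which is a simplification of this sketch):

```python
import numpy as np

DELTA = 2.2e-16   # degree of violation taken from MATLAB (machine epsilon)

def violation(g_vals, h_vals):
    """Sum of constraint violations: inequalities g(x) <= 0 are used as-is,
    equalities h(x) = 0 are first converted via |h(x)| - DELTA <= 0."""
    g = np.maximum(0.0, np.asarray(g_vals, dtype=float))
    h = np.maximum(0.0, np.abs(np.asarray(h_vals, dtype=float)) - DELTA)
    return float(g.sum() + h.sum())

def better(f_a, v_a, f_b, v_b):
    """Feasibility rules of Section 2.2.5."""
    if v_a == 0 and v_b == 0:
        return f_a <= f_b        # Rule 3: better objective wins
    if (v_a == 0) != (v_b == 0):
        return v_a == 0          # Rule 1: feasible beats infeasible
    return v_a <= v_b            # Rule 4: smaller total violation wins

# hypothetical check: a feasible point beats a better-valued infeasible one
wins = better(1.0, violation([-1.0], [0.0]), 0.5, violation([2.0], [0.0]))
```

Here the feasible point with cost 1.0 is preferred over the infeasible point with the better cost 0.5, which is exactly the behavior Rule 1 prescribes.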

Table 1. Comparison of the best solutions given by previous studies for constrained problem 1.

DV       IGA [21]       HS [1]         WCA            Optimal
X1       2.330499       2.323456       2.334238       2.330499
X2       1.951372       1.951242       1.950249       1.951372
X3       -0.477541      -0.448467      -0.474707      -0.477541
X4       4.365726       4.361919       4.366854       4.365726
X5       -0.624487      -0.630075      -0.619911      -0.624487
X6       1.038131       1.038660       1.030400       1.038131
X7       1.594227       1.605348       1.594891       1.594227
g1(X)    -4.46E-05      -0.208928      -1E-13         -4.46E-05
g2(X)    -252.561723    -252.878859    -252.569346    -252.561723
g3(X)    -144.878190    -145.123347    -144.897817    -144.878190
g4(X)    -7.63E-06      -0.263414      -2.2E-12       -7.63E-06
f(X)     680.630060     680.641357     680.631178     680.630057

Table 2. Comparison of statistical results of various algorithms for constrained problem 1. ''NFEs'', ''SD'', and ''NA'' stand for number of function evaluations, standard deviation, and not available, respectively.

Method          Worst       Mean        Best        SD          NFEs
HM [19]         683.1800    681.1600    680.9100    4.11E-02    1,400,000
ASCHEA [20]     NA          680.6410    680.6300    NA          1,500,000
IGA [21]        680.6304    680.6302    680.6301    1.00E-05    NA
GA [11]         680.6538    680.6381    680.6303    6.61E-03    320,000
GA1 [22]        NA          NA          680.6420    NA          350,070
SAPF [24]       682.0810    681.2460    680.7730    0.322       500,000
SR [26]         680.7630    680.6560    680.6321    0.034       350,000
HS [1]          NA          NA          680.6413    NA          160,000
DE [27]         680.1440    680.5030    680.7710    0.67098     240,000
CULDE [28]      680.6300    680.6300    680.6300    1E-07       100,100
PSO [25]        684.5289    680.9710    680.6345    5.1E-01     140,100
CPSO-GD [29]    681.3710    680.7810    680.6780    0.1484      NA
SMES [23]       680.7190    680.6430    680.6320    1.55E-02    240,000
DEDS [30]       680.6300    680.6300    680.6300    2.9E-13     225,000
HEAA [32]       680.6300    680.6300    680.6300    5.8E-13     200,000
ISR [31]        680.6300    680.6300    680.6300    3.2E-13     271,200
α Simplex [33]  680.6300    680.6300    680.6300    2.9E-10     323,426
PESO [34]       680.6300    680.6300    680.6310    NA          350,000
CDE [35]        685.1440    681.5030    680.7710    NA          248,000
ABC [36]        680.6380    680.6400    680.6340    4E-03       240,000
WCA             680.6738    680.6443    680.6311    1.14E-02    110,050

Fig. 7. Flowchart of the proposed WCA.

3.1. Constrained problem 1

For this minimization problem (see Appendix A.1), the best solutions given by a number of optimizers are compared in Table 1. Table 2 presents the statistical results for constrained problem 1 obtained using the WCA, compared with previous statistical results reported for homomorphous mappings (HM) [19], the adaptive segregational constraint handling evolutionary

algorithm (ASCHEA) [20], an improved genetic algorithm (IGA) [21], GA [11], GA1 [22], a simple multi-membered evolution strategy (SMES) [23], a self-adaptive penalty function (SAPF) [24], PSO [25], stochastic ranking (SR) [26], differential evolution (DE) [27], cultured differential evolution (CULDE) [28], harmony search (HS) [1], co-evolutionary particle swarm optimization using a Gaussian distribution (CPSO-GD) [29], differential evolution with dynamic stochastic selection (DEDS) [30], improved stochastic ranking (ISR) [31], a hybrid evolutionary algorithm with an adaptive constraint-handling technique (HEAA) [32], the α constrained Simplex method (α Simplex) [33], particle evolutionary swarm optimization (PESO) [34], co-evolutionary differential evolution (CDE) [35], and the artificial bee colony (ABC) [36]. As shown in Table 2, in terms of the number of function evaluations the proposed method shows superiority over the other algorithms (except for the CULDE method, which requires 100,100 function evaluations). In terms of the optimum solution, the WCA offered values better than, or close to, the best values obtained by the other algorithms.

3.2. Constrained problem 2

This minimization function (see Appendix A.2) was previously solved using HM, ASCHEA, SR, cultural algorithms with evolutionary programming (CAEP) [37], hybrid particle swarm optimization (HPSO) [38], the changing range genetic algorithm (CRGA) [39], DE,

Table 3. Comparison of the best solutions given by various algorithms for constrained problem 2.

DV      CULDE [28]     HS [1]         GA1 [22]       WCA            Optimal
X1      78.000000      78.000000      78.049500      78.000000      78.00000
X2      33.000000      33.000000      33.007000      33.000000      33.00000
X3      29.995256      29.995000      27.081000      29.995256      29.99526
X4      45.000000      45.000000      45.000000      45.000000      45.00000
X5      36.775813      36.776000      44.940000      36.775812      36.77581
g1(X)   -1.35E-08      -4.34E-05      1.283813       -1.96E-12      -9.71E-04
g2(X)   -92.000000     -92.000043     -93.283813     -91.999999     -92.000000
g3(X)   -11.15945      -11.15949      -9.592143      -11.159499     -1.11E+01
g4(X)   -8.840500      -8.840510      -10.407856     -8.840500      -8.870000
g5(X)   -4.999999      -5.000064      -4.998088      -5.000000      -5.000000
g6(X)   -4.12E-09      -6.49E-05      1.91E-03       0.000000       -9.27E-09
f(X)    -30665.5386    -30665.5000    -31020.8590    -30665.5386    -30665.5390

Table 4. Comparison of statistical results of the reported algorithms for constrained problem 2.

Method          Worst          Mean           Best           SD          NFEs
HM [19]         -30645.9000    -30665.3000    -30664.5000    NA          1,400,000
ASCHEA [20]     NA             -30665.5000    -30665.5000    NA          1,500,000
SR [26]         -30665.5390    -30665.5390    -30665.5390    2E-05       88,200
CAEP [37]       -30662.2000    -30662.5000    -30665.5000    9.300       50,020
PSO [25]        -30252.3258    -30570.9286    -30663.8563    81.000      70,100
HPSO [38]       -30665.5390    -30665.5390    -30665.5390    1.70E-06    81,000
PSO-DE [25]     -30665.5387    -30665.5387    -30665.5387    8.30E-10    70,100
CULDE [28]      -30665.5386    -30665.5386    -30665.5386    1E-07       100,100
DE [27]         -30665.5090    -30665.5360    -30665.5390    5.067E-03   240,000
HS [1]          NA             NA             -30665.5000    NA          65,000
CRGA [39]       -30660.3130    -30664.3980    -30665.5200    1.600       54,400
SAPF [24]       -30656.4710    -30655.9220    -30665.4010    2.043       500,000
SMES [23]       -30665.5390    -30665.5390    -30665.5390    0.000       240,000
DELC [40]       -30665.5390    -30665.5390    -30665.5390    1.0E-11     50,000
DEDS [30]       -30665.5390    -30665.5390    -30665.5390    2.70E-11    225,000
HEAA [32]       -30665.5390    -30665.5390    -30665.5390    7.40E-12    200,000
ISR [31]        -30665.5390    -30665.5390    -30665.5390    1.10E-11    192,000
α Simplex [33]  -30665.5390    -30665.5390    -30665.5390    4.20E-11    305,343
WCA             -30665.4570    -30665.5270    -30665.5386    2.18E-02    18,850

CULDE, particle swarm optimization with differential evolution (PSO-DE) [25], PSO, HS, SMES, SAPF, differential evolution with level comparison (DELC) [40], DEDS, ISR, HEAA, and the α Simplex method. Table 3 compares the reported best solutions for CULDE, HS, GA1, and the WCA. The statistical results of the different algorithms are given in Table 4. From Table 4, the PSO method found its best solution (−30663.8563) in 70,100 function evaluations, while the proposed WCA reached its best solution (−30665.5386) in 18,850 function evaluations. The WCA thus offered competitive solution quality with a considerably smaller number of function evaluations for this problem.

3.3. Constrained problem 3

This minimization problem (see Appendix A.3) has n decision variables and one equality constraint. The optimal solution of the problem is at X* = (1/√n, ..., 1/√n) with a corresponding function value of f(X*) = −1. For this problem, n is taken equal to 10. This function was previously solved using HM, ASCHEA, PSO-DE, PSO, CULDE, SR, DE, SAPF, SMES, GA, CRGA, DELC, DEDS, ISR, HEAA, the α Simplex method, and PESO. The comparison of the best solutions is shown in Table 5. The statistical results of seventeen algorithms, including the WCA, are shown in Table 6. From Table 6, the WCA obtained its best solution in 103,900 function evaluations, which is considerably fewer than the other compared algorithms except for the CRGA; however, the statistical results obtained by the WCA are more accurate than those of the CRGA.

Table 5. Comparison of the best solution for constrained problem 3 given by two algorithms.

DV      CULDE [28]   WCA         Optimal solution
X1      0.304887     0.316011    0.316227
X2      0.329917     0.316409    0.316227
X3      0.319260     0.315392    0.316227
X4      0.328069     0.315872    0.316227
X5      0.326023     0.316570    0.316227
X6      0.302707     0.316209    0.316227
X7      0.305104     0.316137    0.316227
X8      0.315312     0.316723    0.316227
X9      0.322047     0.316924    0.316227
X10     0.309009     0.316022    0.316227
h(X)    9.91E-04     0.000000    0
f(X)    0.995413     0.999981    1

3.4. Constrained problem 4

For this maximization problem (see Appendix A.4), which is converted to a minimization problem, the feasible region of the search space consists of 729 disjoint spheres. A point (x1, x2, x3) is feasible if and only if there exist p, q, and r such that the inequality given in Appendix A.4 holds [41]. For this problem, the optimum solution is X* = (5, 5, 5) with f(X*) = 1. This problem was previously solved using HM, SR, CULDE, CAEP, HPSO, ABC, PESO, CDE, SMES, and teaching–learning-based optimization (TLBO) [42]. The statistical results of eleven optimizers, including the WCA, are shown in Table 7. From Table 7, the WCA


H. Eskandar et al. / Computers and Structures 110–111 (2012) 151–166

Table 6. Comparison of statistical results given by various algorithms for constrained problem 3.

Methods          Worst      Mean       Best       SD         NFEs
HM [19]          0.997800   0.998900   0.999700   NA         1,400,000
ASCHEA [20]      NA         0.999890   1.000000   NA         1,500,000
PSO [25]         1.004269   1.004879   1.004986   1.0E+0     140,100
PSO–DE [25]      1.005010   1.005010   1.005010   3.8E-12    140,100
CULDE [28]       0.639920   0.788635   0.995413   0.115214   100,100
CRGA [39]        0.993100   0.997500   0.999700   1.4E-03    67,600
SAPF [24]        0.887000   0.964000   1.000000   3.01E-01   500,000
SR [26]          1.000000   1.000000   1.000000   1.9E-04    229,000
ISR [31]         1.001000   1.001000   1.001000   8.2E-09    349,200
DE [27]          1.025200   1.025200   1.025200   NA         8,000,000
SMES [23]        1.000000   1.000000   1.000000   2.09E-04   240,000
GA [11]          0.999790   0.999920   0.999980   5.99E-05   320,000
DELC [40]        1.000000   1.000000   1.000000   2.1E-06    200,000
DEDS [30]        1.000500   1.000500   1.000500   1.9E-08    225,000
HEAA [32]        1.000000   1.000000   1.000000   5.2E-15    200,000
α Simplex [33]   1.000500   1.000500   1.000500   8.5E-14    310,968
PESO [34]        0.464000   0.764813   0.993930   NA         350,000
WCA              0.999171   0.999806   0.999981   1.91E-04   103,900

Table 7. Comparison of statistical results given by various algorithms for constrained problem 4.

Methods      Worst      Mean       Best       SD         NFEs
HM [19]      0.991950   0.999135   0.999999   NA         1,400,000
SR [26]      1          1          1          0          350,000
CAEP [37]    0.996375   0.996375   1          9.7E-03    50,020
HPSO [38]    1          1          1          1.6E-15    81,000
CULDE [28]   1          1          1          0          100,100
SMES [23]    1          1          1          0          240,000
PESO [34]    0.994      0.998875   1          NA         350,000
CDE [35]     1          1          1          0          248,000
ABC [36]     1          1          1          0          240,000
TLBO [42]    1          1          1          0          50,000
WCA          0.999998   0.999999   0.999999   2.51E-07   6100

reached its best solution considerably faster than the other reported algorithms, using only 6100 function evaluations.

3.5. Constrained benchmark engineering and mechanical design problems

3.5.1. Three-bar truss design problem

The three-bar truss problem (see Appendix B.1) is one of the standard engineering test problems for constrained algorithms. The comparison of the best solutions among the WCA, DEDS, and PSO–DE is presented in Table 8. The statistical results obtained by the WCA are compared with those of previous studies, including DEDS, PSO–DE, HEAA, and the society and civilization algorithm (SC) [43], in Table 9. The proposed WCA obtained the best solution in 5250 function evaluations, which is superior to the other considered algorithms.

3.5.2. Speed reducer design problem

In this constrained optimization problem (see Appendix B.2), the weight of a speed reducer is to be minimized subject to constraints on the bending stress of the gear teeth, surface stress, transverse deflections of the shafts, and stresses in the shafts [44]. The variables x1 to x7 represent the face width (b), module of teeth (m), number of teeth in the pinion (z), length of the first shaft between bearings (l1), length of the second shaft between bearings (l2), and the diameters of the first (d1) and second (d2) shafts, respectively. This is an example of a mixed-integer programming problem: the third variable x3 (number of teeth) takes integer values, while all remaining variables are continuous.

Table 8. Comparison of the best solution obtained from the previous algorithms for the three-bar truss problem.

DV      DEDS [30]    PSO–DE [25]   WCA
X1      0.788675     0.788675      0.788651
X2      0.408248     0.408248      0.408316
g1(X)   -1.77E-08    -5.29E-11     0.000000
g2(X)   -1.464101    -1.463747     -1.464024
g3(X)   -0.535898    -0.536252     -0.535975
f(X)    263.895843   263.895843    263.895843

Table 9. Comparison of statistical results obtained from various algorithms for the three-bar truss problem.

Methods       Worst        Mean         Best         SD         NFEs
SC [43]       263.969756   263.903356   263.895846   1.3E-02    17,610
PSO–DE [25]   263.895843   263.895843   263.895843   4.5E-10    17,600
DEDS [30]     263.895849   263.895843   263.895843   9.7E-07    15,000
HEAA [32]     263.896099   263.895865   263.895843   4.9E-05    15,000
WCA           263.896201   263.895903   263.895843   8.71E-05   5250
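The three-bar truss formulation in Appendix B.1 is small enough to verify directly. A minimal sketch, evaluated at the DEDS best solution from Table 8:

```python
import math

def three_bar_truss(x, l=100.0, P=2.0, sigma=2.0):
    """Three-bar truss problem (Appendix B.1): weight objective and the
    three stress constraints, all required to be <= 0."""
    x1, x2 = x
    denom = math.sqrt(2.0)*x1**2 + 2.0*x1*x2
    f = (2.0*math.sqrt(2.0)*x1 + x2) * l
    g1 = (math.sqrt(2.0)*x1 + x2) / denom * P - sigma
    g2 = x2 / denom * P - sigma
    g3 = 1.0 / (math.sqrt(2.0)*x2 + x1) * P - sigma
    return f, (g1, g2, g3)

# Best solution reported for DEDS in Table 8; g1 is essentially active.
f, g = three_bar_truss((0.788675, 0.408248))
```

The computed weight matches the tabulated 263.895843 to within the rounding of the printed design variables.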

This engineering problem was previously optimized using SC, PSO–DE, DELC, DEDS, HEAA, (μ + λ)-ES [44], modified differential evolution (MDE) [45,46], and ABC. The comparison of the best solutions of the reported methods is presented in Table 10. The statistical results of eight algorithms were compared with those of the proposed WCA and are given in Table 11. The WCA, DELC, and DEDS outperformed the other considered optimization engines, as shown in Table 11. In terms of the number of function evaluations, the WCA reached the best solution faster than the other reported algorithms, using 15,150 function evaluations.
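The tabulated optimum can be reproduced from the weight function in Appendix B.2. A minimal sketch (the eleven constraints are omitted here for brevity):

```python
def speed_reducer_weight(x):
    """Speed reducer objective (Appendix B.2), the weight to be minimized."""
    x1, x2, x3, x4, x5, x6, x7 = x
    return (0.7854*x1*x2**2*(3.3333*x3**2 + 14.9334*x3 - 43.0934)
            - 1.508*x1*(x6**2 + x7**2)
            + 7.4777*(x6**3 + x7**3)
            + 0.7854*(x4*x6**2 + x5*x7**2))

# Best solution shared by DEDS, DELC and WCA in Table 10.
x_best = (3.5, 0.7, 17.0, 7.3, 7.715319, 3.350214, 5.286654)
w = speed_reducer_weight(x_best)
```

Evaluating the sketch at this point reproduces the tabulated weight of 2994.471066.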

3.5.3. Pressure vessel design problem

In the pressure vessel design problem (see Appendix B.3), proposed by Kannan and Kramer [47], the target is to minimize the total cost, including the cost of material, forming, and welding. A cylindrical vessel is capped at both ends by hemispherical heads, as shown in Fig. 8. There are four design variables in this problem: Ts (x1, thickness of the shell), Th (x2, thickness of the head), R (x3, inner radius), and L (x4, length of the cylindrical section of the vessel). Among the four design variables, Ts and Th are expected to be integer multiples of 0.0625 in., while R and L are continuous. Table 12 shows the comparisons of the best solutions obtained by the proposed WCA and other compared methods. This problem has been solved previously using a GA-based co-evolution model (GA2) [48], a GA with dominance-based tournament selection (GA3) [49], CPSO, HPSO, a hybrid Nelder–Mead simplex search and particle swarm optimization (NM–PSO) [41], Gaussian quantum-behaved particle swarm optimization (G-QPSO), quantum-behaved particle swarm optimization (QPSO) [50], PSO, and CDE, and the results are compared with those of the proposed WCA in Table 13. The WCA was executed to obtain two sets of statistical results for this comparative study: the first set was obtained with 27,500 function evaluations and the second with 8000 function evaluations. As can be seen from Table 13, in terms of the best solution and the number of function evaluations, the proposed method is superior to the other optimizers. Considering the statistical and comparison results in Table 13, it can be concluded that the WCA is more efficient than the other optimization engines for the pressure vessel design problem, in


Table 10. Comparison of the best solution obtained from the previous algorithms for the speed reducer problem.

DV     DEDS [30]     DELC [40]     HEAA [32]     MDE [45,46]   PSO–DE [25]   WCA
X1     3.500000      3.500000      3.500022      3.500010      3.500000      3.500000
X2     0.700000      0.700000      0.700000      0.700000      0.700000      0.700000
X3     17.000000     17.000000     17.000012     17.000000     17.000000     17.000000
X4     7.300000      7.300000      7.300427      7.300156      7.300000      7.300000
X5     7.715319      7.715319      7.715377      7.800027      7.800000      7.715319
X6     3.350214      3.350214      3.350230      3.350221      3.350214      3.350214
X7     5.286654      5.286654      5.286663      5.286685      5.286683      5.286654
f(X)   2994.471066   2994.471066   2994.499107   2996.356689   2996.348167   2994.471066

Table 11. Comparison of statistical results obtained from various algorithms for the speed reducer problem.

Methods         Worst         Mean          Best          SD        NFEs
SC [43]         3009.964736   3001.758264   2994.744241   4.0       54,456
PSO–DE [25]     2996.348204   2996.348174   2996.348167   6.4E-06   54,350
DELC [40]       2994.471066   2994.471066   2994.471066   1.9E-12   30,000
DEDS [30]       2994.471066   2994.471066   2994.471066   3.6E-12   30,000
HEAA [32]       2994.752311   2994.613368   2994.499107   7.0E-02   40,000
MDE [45,46]     NA            2996.367220   2996.356689   8.2E-03   24,000
(μ+λ)-ES [44]   NA            2996.348000   2996.348000   0.0       30,000
ABC [36]        NA            2997.058000   2997.058000   0.0       30,000
WCA             2994.505578   2994.474392   2994.471066   7.4E-03   15,150

Fig. 8. Schematic view of pressure vessel problem.

this paper. Fig. 9 depicts the function values versus the number of iterations for the pressure vessel design problem. One of the advantages of the proposed method is that the function values are reduced to near the optimum point in the early iterations (see Fig. 9). This may be due to the searching criteria and constraint handling approach of the WCA, which initially searches a wide region of the problem domain and then rapidly focuses on the optimum solution.

3.5.4. Tension/compression spring design problem

The tension/compression spring design problem (see Appendix B.4) is described in Arora [51]; the objective is to minimize the weight f(x) of a tension/compression spring (as shown in Fig. 10) subject to constraints on minimum deflection, shear stress, surge frequency, and limits on the outside diameter and on the design variables. The independent variables are the wire diameter d(x1), the mean coil diameter D(x2), and the number of active coils P(x3). The comparisons of the best solutions among several reported algorithms are given in Table 14. This problem has been used as a benchmark for testing the efficiency of numerous optimization methods, such as GA2, GA3, CAEP, CPSO, HPSO, NM–PSO, G-QPSO, QPSO, PSO–DE, PSO, DELC, DEDS, HEAA, SC, DE, ABC, and (μ + λ)-ES. The statistical results obtained using the reported optimizers and the proposed WCA are given in Table 15. For the WCA, two sets of statistical results are presented in Table 15: the first corresponds to 11,750 function evaluations and the second to 2000 function evaluations. The best function value, 0.012630, was obtained by NM–PSO with 80,000 function evaluations. From Table 15, the proposed WCA produced an equally good best result in the same number of function evaluations as the G-QPSO method. In terms of solution quality, only NM–PSO was superior to all other considered algorithms (including the WCA). However, the WCA and G-QPSO offer competitive best solutions in far fewer function evaluations than required by the NM–PSO method. Fig. 11 demonstrates the function values with respect to the number of iterations for the tension/compression spring design problem. From Fig. 11, the initial population of the algorithm

Table 12. Comparison of the best solution obtained from various studies for the pressure vessel problem.

DV      CDE [35]    GA3 [49]    CPSO [29]   HPSO [38]   NM–PSO [41]   G-QPSO [50]   WCA
X1      0.8125      0.8125      0.8125      0.8125      0.8036        0.8125        0.7781
X2      0.4375      0.4375      0.4375      0.4375      0.3972        0.4375        0.3846
X3      42.0984     42.0974     42.0913     42.0984     41.6392       42.0984       40.3196
X4      176.6376    176.6540    176.7465    176.6366    182.4120      176.6372      200.0000
g1(X)   -6.67E-07   -2.01E-03   -1.37E-06   -8.80E-07   3.65E-05      -8.79E-07     -2.95E-11
g2(X)   -3.58E-02   -3.58E-02   -3.59E-04   -3.58E-02   3.79E-05      -3.58E-02     -7.15E-11
g3(X)   -3.705123   -24.7593    -118.7687   -3.1226     -1.5914       -0.2179       -1.35E-06
g4(X)   -63.3623    -63.3460    -63.2535    -63.3634    -57.5879      -63.3628      -40.0000
f(X)    6059.7340   6059.9463   6061.0777   6059.7143   5930.3137     6059.7208     5885.3327
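The pressure vessel objective and constraints (Appendix B.3) can be checked against the WCA column of Table 12. A minimal sketch; because the tabulated design variables are rounded, the two active thickness constraints and the volume constraint show small residuals rather than the exact printed values:

```python
import math

def pressure_vessel(x):
    """Pressure vessel problem (Appendix B.3): total cost and the four
    constraints, all required to be <= 0."""
    x1, x2, x3, x4 = x
    f = (0.6224*x1*x3*x4 + 1.7781*x2*x3**2
         + 3.1661*x1**2*x4 + 19.84*x1**2*x3)
    g = [
        -x1 + 0.0193*x3,                                      # shell thickness
        -x2 + 0.00954*x3,                                     # head thickness
        -math.pi*x3**2*x4 - (4.0/3.0)*math.pi*x3**3 + 1296000.0,  # volume
        x4 - 240.0,                                           # length limit
    ]
    return f, g

# WCA best solution as tabulated in Table 12 (rounded digits).
f, g = pressure_vessel((0.7781, 0.3846, 40.3196, 200.0))
```

The computed cost lands near the tabulated 5885.33, clearly below the roughly 5930 reported for NM–PSO.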


Table 13. Comparison of statistical results given by different optimizers for the pressure vessel problem.

Methods       Worst        Mean        Best        SD          NFEs
GA2 [48]      6308.4970    6293.8432   6288.7445   7.4133      900,000
GA3 [49]      6469.3220    6177.2533   6059.9463   130.9297    80,000
CPSO [29]     6363.8041    6147.1332   6061.0777   86.4500     240,000
HPSO [38]     6288.6770    6099.9323   6059.7143   86.2000     81,000
NM–PSO [41]   5960.0557    5946.7901   5930.3137   9.1610      80,000
G-QPSO [50]   7544.4925    6440.3786   6059.7208   448.4711    8000
QPSO [50]     8017.2816    6440.3786   6059.7209   479.2671    8000
PSO [25]      14076.3240   8756.6803   6693.7212   1492.5670   8000
CDE [35]      6371.0455    6085.2303   6059.7340   43.0130     204,800
WCA           6590.2129    6198.6172   5885.3327   213.0490    27,500
WCA           7319.0197    6230.4247   5885.3711   338.7300    8000

Table 15. Comparison of statistical results obtained from various algorithms for the tension/compression spring problem.

Methods         Worst      Mean       Best       SD         NFEs
GA2 [48]        0.012822   0.012769   0.012704   3.94E-05   900,000
GA3 [49]        0.012973   0.012742   0.012681   5.90E-05   80,000
CAEP [37]       0.015116   0.013568   0.012721   8.42E-04   50,020
CPSO [29]       0.012924   0.012730   0.012674   5.20E-04   240,000
HPSO [38]       0.012719   0.012707   0.012665   1.58E-05   81,000
NM–PSO [41]     0.012633   0.012631   0.012630   8.47E-07   80,000
G-QPSO [50]     0.017759   0.013524   0.012665   0.001268   2000
QPSO [50]       0.018127   0.013854   0.012669   0.001341   2000
PSO [25]        0.071802   0.019555   0.012857   0.011662   2000
DE [27]         0.012790   0.012703   0.012670   2.7E-05    204,800
DELC [40]       0.012665   0.012665   0.012665   1.3E-07    20,000
DEDS [30]       0.012738   0.012669   0.012665   1.3E-05    24,000
HEAA [32]       0.012665   0.012665   0.012665   1.4E-09    24,000
PSO–DE [25]     0.012665   0.012665   0.012665   1.2E-08    24,950
SC [43]         0.016717   0.012922   0.012669   5.9E-04    25,167
(μ+λ)-ES [44]   NA         0.013165   0.012689   3.9E-04    30,000
ABC [36]        NA         0.012709   0.012665   0.012813   30,000
WCA             0.012952   0.012746   0.012665   8.06E-05   11,750
WCA             0.015021   0.013013   0.012665   6.16E-04   2000

Fig. 9. Function values versus number of iterations for the pressure vessel problem.

Fig. 10. Schematic view of tension/compression spring problem.

was in the infeasible region during the early iterations of the WCA. After further iterations, the population moved into the feasible region and the function values were reduced at each iteration. The constraint violation values with respect to the number of iterations for the tension/compression spring problem are shown in Fig. 12. From Fig. 12, the obtained solutions did not satisfy the constraints in the early iterations; as the algorithm continued, the obtained results satisfied the constraints while the constraint violation value decreased.

3.5.5. Welded beam design problem

This design problem (see Appendix B.5), which has often been used as a benchmark problem, was proposed by Coello [48]. In this

Fig. 11. Function values with respect to the number of iterations for tension/ compression spring problem.

problem, a welded beam is designed for minimum cost subject to constraints on shear stress (τ), bending stress (σ) in the beam, buckling load on the bar (Pb), end deflection of the beam (δ), and side constraints. There are four design variables, as shown in Fig. 13: h(x1), l(x2), t(x3), and b(x4). The optimization engines previously applied to this problem include GA2, GA3, CAEP, CPSO, HPSO, NM–PSO, a hybrid genetic algorithm (HGA) [52], MGA [53], SC, and DE. The comparisons of the best solutions given by the different algorithms are presented in Table 16. The comparisons of the statistical results are given in Table 17. Among the previously reported studies, the best solution was obtained using NM–PSO with an objective function value of

Table 14. Comparison of the best solution obtained from various algorithms for the tension/compression spring problem.

DV      DEDS [30]   GA3 [49]    CPSO [29]   HEAA [32]   NM–PSO [41]   DELC [40]   WCA
X1      0.051689    0.051989    0.051728    0.051689    0.051620      0.051689    0.051680
X2      0.356717    0.363965    0.357644    0.356729    0.355498      0.356717    0.356522
X3      11.288965   10.890522   11.244543   11.288293   11.333272     11.288965   11.300410
g1(X)   -1.45E-09   -1.26E-03   -8.25E-04   -3.96E-10   1.01E-03      -3.40E-09   -1.65E-13
g2(X)   -1.19E-09   -2.54E-05   -2.52E-05   -3.59E-10   9.94E-04      -2.44E-09   -7.90E-14
g3(X)   -4.053785   -4.061337   -4.051306   -4.053808   -4.061859     -4.053785   -4.053399
g4(X)   -0.727728   -0.722697   -0.727085   -0.727720   -0.728588     -0.727728   -0.727864
f(X)    0.012665    0.012681    0.012674    0.012665    0.012630      0.012665    0.012665
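The spring formulation in Appendix B.4 can be checked against the WCA column of Table 14. A minimal sketch; the tabulated digits are rounded, so the two near-active constraints evaluate to small residuals:

```python
def spring(x):
    """Tension/compression spring problem (Appendix B.4): weight and the
    four constraints, all required to be <= 0."""
    x1, x2, x3 = x   # wire diameter, mean coil diameter, active coils
    f = (x3 + 2.0) * x2 * x1**2
    g1 = 1.0 - x2**3 * x3 / (71785.0 * x1**4)
    g2 = ((4.0*x2**2 - x1*x2) / (12566.0*(x2*x1**3 - x1**4))
          + 1.0/(5108.0*x1**2) - 1.0)
    g3 = 1.0 - 140.45*x1/(x2**2 * x3)
    g4 = (x1 + x2)/1.5 - 1.0
    return f, (g1, g2, g3, g4)

# WCA best solution from Table 14.
f, g = spring((0.051680, 0.356522, 11.300410))
```

The computed weight reproduces the tabulated 0.012665.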


Fig. 12. Constraint violation values with respect to the number of iterations for tension/compression spring problem.

Fig. 14. Function values versus number of iterations for the welded beam problem.

Fig. 15. Schematic view of rolling element bearing problem.

Fig. 13. Schematic view of welded beam problem.

Table 17. Comparison of the statistical results obtained from different optimization engines for the welded beam problem.

Methods       Worst      Mean       Best       SD         NFEs
GA2 [48]      1.785835   1.771973   1.748309   1.12E-02   900,000
GA3 [49]      1.993408   1.792654   1.728226   7.47E-02   80,000
CAEP [37]     3.179709   1.971809   1.724852   4.43E-01   50,020
CPSO [29]     1.782143   1.748831   1.728024   1.29E-02   240,000
HPSO [38]     1.814295   1.749040   1.724852   4.01E-02   81,000
PSO–DE [25]   1.724852   1.724852   1.724852   6.7E-16    66,600
NM–PSO [41]   1.733393   1.726373   1.724717   3.50E-03   80,000
MGA [53]      1.995000   1.919000   1.824500   5.37E-02   NA
SC [43]       6.399678   3.002588   2.385434   9.6E-01    33,095
DE [27]       1.824105   1.768158   1.733461   2.21E-02   204,800
WCA           1.744697   1.726427   1.724856   4.29E-03   46,450
WCA           1.801127   1.735940   1.724857   1.89E-02   30,000

f(x) = 1.724717 after 80,000 function evaluations. Using the proposed WCA, a best solution of f(x) = 1.724856 was obtained in 46,450 function evaluations; the WCA was also run for 30,000 function evaluations, offering a best solution of f(x) = 1.724857. The statistical results obtained by the WCA outperformed those of the other considered algorithms, except the NM–PSO, PSO–DE, HPSO, and CAEP in terms of cost value. However, the WCA offered a competitive set of statistical results in fewer function evaluations, as shown in Table 17. Fig. 14 illustrates the function values with respect to the number of iterations for the welded beam design problem.

3.5.6. Rolling element bearing design problem

The objective of this problem (see Appendix B.6) is to maximize the dynamic load carrying capacity of a rolling element bearing, as demonstrated in Fig. 15. This problem has 10 decision variables

Table 16. Comparison of the best solution obtained from various algorithms for the welded beam problem.

DV      GA3 [49]     CPSO [29]    CAEP [37]   HGA [52]    NM–PSO [41]   WCA
X1(h)   0.205986     0.202369     0.205700    0.205700    0.205830      0.205728
X2(l)   3.471328     3.544214     3.470500    3.470500    3.468338      3.470522
X3(t)   9.020224     9.048210     9.036600    9.036600    9.036624      9.036620
X4(b)   0.206480     0.205723     0.205700    0.205700    0.205730      0.205729
g1(X)   -0.103049    -13.655547   -1.988676   -1.988676   0.025250      -0.034128
g2(X)   -0.231747    -78.814077   -4.481548   -4.481548   -0.053122     -3.49E-05
g3(X)   -5E-04       -3.35E-03    0.000000    0.000000    0.000100      -1.19E-06
g4(X)   -3.430044    -3.424572    -3.433213   -3.433213   -3.433169     -3.432980
g5(X)   -0.080986    -0.077369    -0.080700   -0.080700   -0.080830     -0.080728
g6(X)   -0.235514    -0.235595    -0.235538   -0.235538   -0.235540     -0.235540
g7(X)   -58.646888   -4.472858    -2.603347   -2.603347   -0.031555     -0.013503
f(X)    1.728226     1.728024     1.724852    1.724852    1.724717      1.724856
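The full welded beam evaluation (Appendix B.5) can be checked against the WCA column of Table 16. A minimal sketch; because the tabulated digits are rounded, the near-active constraints (shear stress, bending stress, and buckling load) evaluate to small residuals rather than exactly the printed values:

```python
import math

def welded_beam(x, P=6000.0, L=14.0, E=30e6, G=12e6,
                tau_max=13600.0, sigma_max=30000.0, delta_max=0.25):
    """Welded beam problem (Appendix B.5): cost and the seven
    constraints, all required to be <= 0."""
    x1, x2, x3, x4 = x
    f = 1.10471*x1**2*x2 + 0.04811*x3*x4*(14.0 + x2)
    tau_p = P / (math.sqrt(2.0)*x1*x2)                        # tau'
    M = P*(L + x2/2.0)
    R = math.sqrt(x2**2/4.0 + ((x1 + x3)/2.0)**2)
    J = 2.0*(math.sqrt(2.0)*x1*x2*(x2**2/12.0 + ((x1 + x3)/2.0)**2))
    tau_pp = M*R/J                                            # tau''
    tau = math.sqrt(tau_p**2 + 2.0*tau_p*tau_pp*x2/(2.0*R) + tau_pp**2)
    sigma = 6.0*P*L/(x4*x3**2)
    delta = 4.0*P*L**3/(E*x3**3*x4)
    Pc = (4.013*E*math.sqrt(x3**2*x4**6/36.0)/L**2
          * (1.0 - x3/(2.0*L)*math.sqrt(E/(4.0*G))))
    g = [tau - tau_max, sigma - sigma_max, x1 - x4,
         0.10471*x1**2 + 0.04811*x3*x4*(14.0 + x2) - 5.0,
         0.125 - x1, delta - delta_max, P - Pc]
    return f, g

# WCA best solution from Table 16.
f, g = welded_beam((0.205728, 3.470522, 9.036620, 0.205729))
```

The cost reproduces the tabulated 1.724856 to within the rounding of the printed design variables.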


Table 18. Comparison of the best solution obtained using three algorithms for the rolling element bearing problem.

DV       GA4 [54]     TLBO [42]    WCA
X1       125.717100   125.719100   125.721167
X2       21.423000    21.425590    21.423300
X3       11.000000    11.000000    11.001030
X4       0.515000     0.515000     0.515000
X5       0.515000     0.515000     0.515000
X6       0.415900     0.424266     0.401514
X7       0.651000     0.633948     0.659047
X8       0.300043     0.300000     0.300032
X9       0.022300     0.068858     0.040045
X10      0.751000     0.799498     0.600000
g1(X)    0.000821     0.000000     0.000040
g2(X)    13.732999    13.152570    14.740597
g3(X)    2.724000     1.525200     3.286749
g4(X)    3.606000     0.719056     3.423300
g5(X)    0.717000     16.495440    0.721167
g6(X)    4.857899     0.000000     9.290112
g7(X)    0.003050     0.000000     0.000087
g8(X)    0.000007     2.559363     0.000000
g9(X)    0.000007     0.000000     0.000000
g10(X)   0.000005     0            0.000000
f(X)     81843.30     81859.74     85538.48
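Several of the bearing constraints are purely geometric and can be checked directly against the WCA column of Table 18. A minimal sketch of four of them (numbering follows Appendix B.6; D, d, and Bw are the fixed values given there; the assembly-angle and load-capacity terms need the full fc and phi_0 expressions and are omitted here):

```python
def bearing_geometry_checks(Dm, Db, KDmin, KDmax, zeta,
                            D=160.0, d=90.0, Bw=30.0):
    """Four purely geometric rolling element bearing constraints
    (Appendix B.6): g2, g3, g5 must be >= 0 and g4 must be <= 0."""
    g2 = 2.0*Db - KDmin*(D - d)     # ball diameter lower limit
    g3 = KDmax*(D - d) - 2.0*Db     # ball diameter upper limit
    g4 = zeta*Bw - Db               # running mobility condition
    g5 = Dm - 0.5*(D + d)           # pitch diameter lower limit
    return g2, g3, g4, g5

# WCA best solution from Table 18 (Dm, Db, KDmin, KDmax, zeta).
g2, g3, g4, g5 = bearing_geometry_checks(
    Dm=125.721167, Db=21.423300, KDmin=0.401514,
    KDmax=0.659047, zeta=0.600000)
```

These reproduce the tabulated constraint values g2 = 14.740597, g3 = 3.286749, |g4| = 3.423300, and g5 = 0.721167.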

Table 19. Comparison of statistical results using four optimizers for the rolling element bearing problem.

Methods     Worst      Mean       Best       SD       NFEs
GA4 [54]    NA         NA         81843.30   NA       225,000
ABC [36]    78897.81   81496.00   81859.74   0.69     10,000
TLBO [42]   80807.85   81438.98   81859.74   0.66     10,000
WCA         83942.71   83847.16   85538.48   488.30   3950

Fig. 17. Schematic view of multiple disc clutch brake design problem.

Table 20. Comparison of the best solution obtained using three algorithms for the multiple disc clutch brake problem.

DV      NSGA-II [56]   TLBO [42]      WCA
X1      70.0000        70.000000      70.000000
X2      90.0000        90.000000      90.000000
X3      1.5000         1.000000       1.000000
X4      1000.0000      810.000000     910.000000
X5      3.0000         3.000000       3.000000
g1(X)   0.0000         0.000000       0.000000
g2(X)   22.0000        24.000000      24.000000
g3(X)   0.9005         0.919427       0.909480
g4(X)   9.7906         9830.371000    9.809429
g5(X)   7.8947         7894.696500    7.894696
g6(X)   3.3527         0.702013       2.231421
g7(X)   60.6250        37706.250000   49.768749
g8(X)   11.6473        14.297986      12.768578
f(X)    0.4704         0.313656       0.313656
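The tabulated mass can be reproduced from the objective in Appendix B.7. A minimal sketch; the material density rho (kg/mm^3) is not listed in the excerpted parameter set, so the steel value 7.8E-6 kg/mm^3, which reproduces the tabulated mass, is assumed here:

```python
import math

def clutch_brake_mass(ri, ro, t, Z, rho=7.8e-6):
    """Multiple disc clutch brake objective (Appendix B.7).  rho is an
    assumed material density (kg/mm^3); it is not given in this excerpt."""
    return math.pi * (ro**2 - ri**2) * t * (Z + 1) * rho

# WCA best solution from Table 20: ri = 70, ro = 90, t = 1, Z = 3
# (the actuating force F does not enter the mass objective).
m = clutch_brake_mass(ri=70.0, ro=90.0, t=1.0, Z=3)
```

Under this density assumption the sketch returns the tabulated mass of 0.313656.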

Fig. 16. Comparison of convergence rate for the rolling element bearing problem using: (a) TLBO and ABC, (b) WCA.

which are the pitch diameter (Dm), ball diameter (Db), number of balls (Z), inner and outer raceway curvature coefficients (fi and fo), KDmin, KDmax, ε, e, and ζ, as shown in Fig. 15. The latter five appear only in the constraints and indirectly affect the internal geometry. Z is the discrete design variable and the remainder are continuous design variables. Constraints are imposed based on kinematic and manufacturing considerations. The rolling element bearing problem was optimized previously using GA4 [54], ABC, and TLBO. Table 18 shows the comparison of the best solutions for the three reported optimizers and the proposed method in terms of design variables, function values, and constraint accuracy. The statistical optimization results for the reported algorithms are given in Table 19. From Table 19, the proposed method detected the best solution (the maximum dynamic load carrying capacity of a rolling element bearing) with considerable improvement over the other optimizers. In terms of statistical results, the WCA offered better results using fewer function evaluations (NFEs) than the other considered algorithms. Fig. 16 compares the convergence rates. From Fig. 16a, the convergence rates for ABC and TLBO are close (with a slightly higher mean searching capability for TLBO), whereas the WCA reached the best solution at 79 iterations (iteration number for the obtained best solution = NFEs/Ns), as shown in Table 19.

3.5.7. Multiple disk clutch brake design problem

This minimization problem (see Appendix B.7), which is categorized as a discrete optimization problem, is taken from [55]. Fig. 17 illustrates a schematic view of the multiple disc clutch brake. The objective is to minimize the mass of the multiple disc clutch brake


using five discrete design variables: inner radius (ri = 60, 61, 62, ..., 80), outer radius (ro = 90, 91, 92, ..., 110), thickness of the disc (t = 1, 1.5, 2, 2.5, 3), actuating force (F = 600, 610, 620, ..., 1000), and number of friction surfaces (Z = 2, 3, ..., 9). The multiple disc clutch brake problem was attempted by Deb and Srinivasan [56] using NSGA-II, and has also been solved using TLBO and ABC. The comparisons of the best solutions and the statistical optimization results are given in Tables 20 and 21, respectively. All the reported optimizers reach essentially the same optimal solution for the multiple disc clutch brake problem, as shown in Table 21. However, in terms of the statistical results, including worst, mean, and standard deviation, the proposed WCA shows superiority over the other optimization engines. Fig. 18 shows the comparison of convergence rates for the three optimizers. From Fig. 18a, the convergence rate of the TLBO method is faster than that of ABC in earlier generations; as the number of iterations increases, the convergence rates of the two algorithms become nearly the same. As shown in Fig. 18b, the proposed method detected the best solution faster than the other considered optimizers, at 25 iterations (500 function evaluations), while the TLBO and ABC methods found the best solution at over 45 iterations (more than 900 function evaluations). These overall optimization results indicate that the WCA is capable of handling various combinatorial optimization problems (COPs) and can offer optimum solutions (near, or better than, the best-known results) at a lower computational effort (measured as the number of function evaluations). Therefore, it can be concluded that the WCA is an attractive alternative optimizer

Table 21. Comparison of statistical results obtained using three optimizers for the multiple disc clutch brake problem.

Methods     Worst      Mean       Best       SD         NFEs
ABC [36]    0.352864   0.324751   0.313657   0.54       >900
TLBO [42]   0.392071   0.327166   0.313657   0.67       >900
WCA         0.313656   0.313656   0.313656   1.69E-16   500


for constrained and engineering optimization, challenging other metaheuristic methods, especially in terms of computational efficiency.

4. Conclusions

This paper presented a new optimization technique called the water cycle algorithm (WCA). The fundamental concepts and ideas which underlie the method are inspired from nature and based on the water cycle process in the real world. The WCA, with an embedded constraint handling approach, was applied to eleven constrained benchmark optimization and engineering design problems. The statistical results, based on comparisons of the efficiency of the proposed WCA against numerous other optimization methods, illustrate the attractiveness of the proposed method for handling numerous types of constraints. The obtained results show that the proposed algorithm generally offers better solutions than the other optimizers considered in this research, in addition to its efficiency in terms of the number of function evaluations (computational cost) for almost every problem. In general, the WCA offers competitive solutions compared with other metaheuristic optimizers based on the reported and experimental results in this research. However, the computational efficiency and quality of the solutions given by the WCA depend on the nature and complexity of the underlying problem; this applies equally to the efficiency and performance of numerous other metaheuristic methods. The proposed method may be used for efficiently solving real-world optimization problems which require significant computational effort, with an acceptable degree of accuracy in the solutions. Further research is required to examine the efficiency of the proposed WCA on large-scale optimization problems.

Acknowledgments

The authors would like to acknowledge the Ministry of Higher Education of Malaysia and the University of Malaya, Kuala Lumpur, Malaysia for the financial support under grant number UM.TNC2/IPPP/UPGP/628/6/ER013/2011A.

Appendix A

A.1. Constrained problem 1

f(x) = (x_1 - 10)^2 + 5(x_2 - 12)^2 + x_3^4 + 3(x_4 - 11)^2 + 10x_5^6 + 7x_6^2 + x_7^4 - 4x_6x_7 - 10x_6 - 8x_7

subject to:

g_1(x) = 127 - 2x_1^2 - 3x_2^4 - x_3 - 4x_4^2 - 5x_5 \ge 0
g_2(x) = 282 - 7x_1 - 3x_2 - 10x_3^2 - x_4 + x_5 \ge 0
g_3(x) = 196 - 23x_1 - x_2^2 - 6x_6^2 + 8x_7 \ge 0
g_4(x) = -4x_1^2 - x_2^2 + 3x_1x_2 - 2x_3^2 - 5x_6 + 11x_7 \ge 0
-10 \le x_i \le 10,  i = 1, ..., 7

A.2. Constrained problem 2

Fig. 18. Comparison of convergence rate for the multiple disc clutch brake problem using: (a) TLBO and ABC, (b) WCA.

f ðxÞ ¼ 5:3578547x33 þ 0:8356891x1 x5 þ 37:293239x1 þ 40729:141

164

H. Eskandar et al. / Computers and Structures 110–111 (2012) 151–166

subject to:

subject to:

g 1 ðxÞ ¼ 85:334407 þ 0:0056858x2 x5 þ 0:0006262x1 x4 0:0022053x3 x5  92 6 0

1 2 3

g 2 ðxÞ ¼ 85:334407  0:0056858x2 x5  0:0006262x1 x4 0:0022053x3 x5 6 0 g 3 ðxÞ ¼ 80:51249 þ 0:0071317x2 x5 þ 0:0029955x1 x2

g 2 ðxÞ ¼ x397:5 160 x2 x2 1 2 3

1:93x34

g 3 ðxÞ ¼ x

4 2 x6 x3

g 4 ðxÞ ¼

þ0:0021813x23  110 6 0 g 4 ðxÞ ¼ 80:51249  0:0071317x2 x5  0:0029955x1 x2 0:0021813x23

g 1 ðxÞ ¼ x 27 160 x2 x

þ 90 6 0

g 5 ðxÞ ¼ 9:300961 þ 0:0047026x3 x5 þ 0:0012547x1 x3 þ0:0019085x3 x4  25 6 0 g 6 ðxÞ ¼ 9:300961  0:0047026x3 x5  0:0012547x1 x3 0:0019085x3 x4 þ 20 6 0

g 5 ðxÞ ¼ g 6 ðxÞ ¼ g 7 ðxÞ ¼ g 8 ðxÞ ¼ g 9 ðxÞ ¼

78 6 x1 6 102 33 6 x2 6 45

1:93x35 x2 x47 x3

160 160

½ð745x4 =x2 x3 Þ2 þ16:9106 

1=2

160

110x36

½ð745x5 =x2 x3 Þ

2

6 1=2

þ157:510



85x37

160

x2 x3 160 40 5x2 160 x1 x1 160 12x2

g 10 ðxÞ ¼ 1:5xx64þ1:9  1 6 0

27 6 xi 6 45 i ¼ 3; 4; 5

g 11 ðxÞ ¼ 1:1xx75þ1:9  1 6 0

A.3. Constrained problem 3

where

n pffiffiffin Y f ðxÞ ¼  n : xi i¼1

2:6 6 x1 6 3:6; 0:7 6 x2 6 0:8; 17 6 x3 6 28; 7:3 6 x4 6 8:3;

subject to:

7:3 6 x5 6 8:3; 2:9 6 x6 6 3:9; 5:0 6 x7 6 5:5

n X hðxÞ ¼ x2i ¼ 1

B.3. Pressure vessel design problem

i¼1

0 6 xi 6 1 i ¼ 1; . . . ; n f ðxÞ ¼ 0:6224x1 x3 x4 þ 1:7781x2 x23 þ 3:1661x21 x4 þ 19:84x21 x3

A.4. Constrained problem 4

subject to: 2

f ðxÞ ¼ 

2

100  ðx1  5Þ  ðx2  5Þ  ðx3  5Þ 100

2

g 1 ðxÞ ¼ x1 þ 0:0193x g 2 ðxÞ ¼ x2 þ 0:00954x3 6 0

subject to: 2

2

2

gðXÞ ¼ ðx1  pÞ þ ðx2  qÞ þ ðx3  rÞ  0:0625 6 0 0 6 xi 6 10 i ¼ 1; 2; 3 p; q; r ¼ 1; 2; 3; . . . ; 9

g 3 ðxÞ ¼ px23 x4  4=3px33 þ 1296; 000 6 0 g 4 ðxÞ ¼ x4  240 6 0 0 6 xi 6 100 i ¼ 1; 2 10 6 xi 6 200 i ¼ 3; 4

Appendix B B.1. Three-bar truss design problem

pffiffiffi f ðxÞ ¼ ð2 2x1 þ x2 Þ  l subject to: pffiffi 1 þx2 g 1 ðxÞ ¼ pffiffi2x22xþ2x Pr60 x 1 2

1

g 2 ðxÞ ¼

pffiffi x2 2x21 þ2x1 x2

Pr60

g 3 ðxÞ ¼ pffiffi2x1þx P  r 6 0 2

1

0 6 xi 6 1 i ¼ 1; 2 l ¼ 100 cm;P ¼ 2 kN=cm2 ; r ¼ 2 kN=cm2 B.2. Speed reducer design problem

  f ðxÞ ¼ 0:7854x1 x22 3:3333x23 þ 14:9334x3  43:0934  2       1:508x1 x6 þ x27 þ 7:4777 x36 þ x37 þ 0:7854 x4 x26 þ x5 x27

B.4. Tension/compression spring design problem

f ðxÞ ¼ ðx3 þ 2Þx2 x21 subject to:

 g 1 ðxÞ ¼ 1  x32 x3 71; 785x41 x41 6 0  g 2 ðxÞ ¼ 4x22  x1 x2 12; 566ðx2 x31  x41 Þ þ 1=5108x21  1 6 0 g 3 ðxÞ ¼ 1  140:45x1 =x22 x3 6 0 g 4 ðxÞ ¼ x2 þ x1 =1:5  1 6 0 0:05 6 x1 6 2:00 0:25 6 x2 6 1:30 2:00 6 x3 6 15:00

165

H. Eskandar et al. / Computers and Structures 110–111 (2012) 151–166

B.5. Welded beam design problem

f(x) = 1.10471 x_1^2 x_2 + 0.04811 x_3 x_4 (14 + x_2)

subject to:

g_1(x) = \tau(x) - \tau_{\max} \le 0

g_2(x) = \sigma(x) - \sigma_{\max} \le 0

g_3(x) = x_1 - x_4 \le 0

g_4(x) = 0.10471 x_1^2 + 0.04811 x_3 x_4 (14 + x_2) - 5 \le 0

g_5(x) = 0.125 - x_1 \le 0

g_6(x) = \delta(x) - \delta_{\max} \le 0

g_7(x) = P - P_c(x) \le 0

0.1 \le x_i \le 2, i = 1, 4

0.1 \le x_i \le 10, i = 2, 3

where,

\tau(x) = \sqrt{(\tau')^2 + 2\tau'\tau''\frac{x_2}{2R} + (\tau'')^2}, \tau' = \frac{P}{\sqrt{2} x_1 x_2}, \tau'' = \frac{MR}{J}

M = P\left(L + \frac{x_2}{2}\right), R = \sqrt{\frac{x_2^2}{4} + \left(\frac{x_1 + x_3}{2}\right)^2}

J = 2\left\{\sqrt{2} x_1 x_2 \left[\frac{x_2^2}{12} + \left(\frac{x_1 + x_3}{2}\right)^2\right]\right\}

\sigma(x) = \frac{6PL}{x_4 x_3^2}, \delta(x) = \frac{4PL^3}{E x_3^3 x_4}, P_c(x) = \frac{4.013 E \sqrt{x_3^2 x_4^6 / 36}}{L^2}\left(1 - \frac{x_3}{2L}\sqrt{\frac{E}{4G}}\right)

P = 6000 lb, L = 14 in, E = 30 \times 10^6 psi, G = 12 \times 10^6 psi

\tau_{\max} = 13,600 psi, \sigma_{\max} = 30,000 psi, \delta_{\max} = 0.25 in

B.6. Rolling element bearing design problem

max C_d = f_c Z^{2/3} D_b^{1.8} if D_b \le 25.4 mm

C_d = 3.647 f_c Z^{2/3} D_b^{1.4} if D_b > 25.4 mm

subject to:

g_1(x) = \frac{\phi_0}{2 \sin^{-1}(D_b / D_m)} - Z + 1 \ge 0

g_2(x) = 2D_b - K_{D\min}(D - d) \ge 0

g_3(x) = K_{D\max}(D - d) - 2D_b \ge 0

g_4(x) = \zeta B_w - D_b \le 0

g_5(x) = D_m - 0.5(D + d) \ge 0

g_6(x) = (0.5 + e)(D + d) - D_m \ge 0

g_7(x) = 0.5(D - D_m - D_b) - \epsilon D_b \ge 0

g_8(x) = f_i \ge 0.515

g_9(x) = f_o \ge 0.515

where,

f_c = 37.91\left[1 + \left\{1.04\left(\frac{1 - \gamma}{1 + \gamma}\right)^{1.72}\left(\frac{f_i(2f_o - 1)}{f_o(2f_i - 1)}\right)^{0.41}\right\}^{10/3}\right]^{-0.3} \frac{\gamma^{0.3}(1 - \gamma)^{1.39}}{(1 + \gamma)^{1/3}}\left(\frac{2f_i}{2f_i - 1}\right)^{0.41}

\phi_0 = 2\pi - 2\cos^{-1}\left[\frac{\{(D - d)/2 - 3(T/4)\}^2 + \{D/2 - T/4 - D_b\}^2 - \{d/2 + T/4\}^2}{2\{(D - d)/2 - 3(T/4)\}\{D/2 - T/4 - D_b\}}\right]

\gamma = \frac{D_b}{D_m}, f_i = \frac{r_i}{D_b}, f_o = \frac{r_o}{D_b}, T = D - d - 2D_b

D = 160, d = 90, B_w = 30, r_i = r_o = 11.033

0.5(D + d) \le D_m \le 0.6(D + d), 0.15(D - d) \le D_b \le 0.45(D - d),

4 \le Z \le 50, 0.515 \le f_i and f_o \le 0.6,

0.4 \le K_{D\min} \le 0.5, 0.6 \le K_{D\max} \le 0.7, 0.3 \le \epsilon \le 0.4,

0.02 \le e \le 0.1, 0.6 \le \zeta \le 0.85.

B.7. Multiple disk clutch brake design problem

f(x) = \pi (r_o^2 - r_i^2) t (Z + 1) \rho

subject to:

g_1(x) = r_o - r_i - \Delta r \ge 0

g_2(x) = l_{\max} - (Z + 1)(t + \delta) \ge 0

g_3(x) = p_{\max} - p_{rz} \ge 0

g_4(x) = p_{\max} v_{sr,\max} - p_{rz} v_{sr} \ge 0

g_5(x) = v_{sr,\max} - v_{sr} \ge 0

g_6(x) = T_{\max} - T \ge 0

g_7(x) = M_h - s M_s \ge 0

g_8(x) = T \ge 0

where,

M_h = \frac{2}{3}\mu F Z \frac{r_o^3 - r_i^3}{r_o^2 - r_i^2}, p_{rz} = \frac{F}{\pi (r_o^2 - r_i^2)}

v_{sr} = \frac{2\pi n (r_o^3 - r_i^3)}{90 (r_o^2 - r_i^2)}, T = \frac{I_z \pi n}{30 (M_h + M_f)}

\Delta r = 20 mm, I_z = 55 kg mm^2, p_max = 1 MPa, F_max = 1000 N, T_max = 15 s, \mu = 0.5, s = 1.5, M_s = 40 Nm, M_f = 3 Nm, n = 250 rpm, v_{sr,max} = 10 m/s, l_max = 30 mm, r_{i,min} = 60, r_{i,max} = 80, r_{o,min} = 90, r_{o,max} = 110, t_min = 1.5, t_max = 3, F_min = 600, Z_min = 2, Z_max = 9.

References
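The welded beam model of Appendix B.5 is fully specified by its constants (P = 6000 lb, L = 14 in, E = 30×10^6 psi, G = 12×10^6 psi), so it can likewise be evaluated directly. The sketch below is an illustrative implementation rather than the authors' code; the test design is an often-quoted near-optimum (approximate values), several of whose constraints sit essentially on their boundaries.

```python
import math

def welded_beam(x):
    """Welded beam problem: fabrication cost f(x) and constraints g_i(x) <= 0."""
    x1, x2, x3, x4 = x  # weld thickness, weld length, bar height, bar thickness
    P, L, E, G = 6000.0, 14.0, 30e6, 12e6
    tau_max, sigma_max, delta_max = 13600.0, 30000.0, 0.25

    tau_p = P / (math.sqrt(2.0) * x1 * x2)                    # primary shear stress
    M = P * (L + x2 / 2.0)                                    # bending moment at weld
    R = math.sqrt(x2 ** 2 / 4.0 + ((x1 + x3) / 2.0) ** 2)
    J = 2.0 * math.sqrt(2.0) * x1 * x2 * (x2 ** 2 / 12.0 + ((x1 + x3) / 2.0) ** 2)
    tau_pp = M * R / J                                        # secondary (torsional) shear
    tau = math.sqrt(tau_p ** 2 + 2.0 * tau_p * tau_pp * x2 / (2.0 * R) + tau_pp ** 2)
    sigma = 6.0 * P * L / (x4 * x3 ** 2)                      # bending stress in the bar
    delta = 4.0 * P * L ** 3 / (E * x3 ** 3 * x4)             # end deflection
    Pc = (4.013 * E * math.sqrt(x3 ** 2 * x4 ** 6 / 36.0) / L ** 2) \
         * (1.0 - x3 / (2.0 * L) * math.sqrt(E / (4.0 * G)))  # buckling load

    f = 1.10471 * x1 ** 2 * x2 + 0.04811 * x3 * x4 * (14.0 + x2)
    g = [tau - tau_max, sigma - sigma_max, x1 - x4,
         0.10471 * x1 ** 2 + 0.04811 * x3 * x4 * (14.0 + x2) - 5.0,
         0.125 - x1, delta - delta_max, P - Pc]
    return f, g

cost, cons = welded_beam([0.205730, 3.470489, 9.036624, 0.205730])
print(cost)
```

Because the stress and buckling constraints are active at the optimum, a practical constraint handler should compare g_1, g_2, and g_7 against a small positive tolerance rather than exactly zero.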

[1] Lee KS, Geem ZW. A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice. Comput Meth Appl Mech Eng 2005;194:3902–33.
[2] Holland J. Adaptation in natural and artificial systems. Ann Arbor, MI: University of Michigan Press; 1975.
[3] Goldberg D. Genetic algorithms in search, optimization and machine learning. Reading, MA: Addison-Wesley; 1989.
[4] Kennedy J, Eberhart R. Particle swarm optimization. In: Proceedings of the IEEE international conference on neural networks. Perth, Australia; 1995. p. 1942–8.
[5] Kirkpatrick S, Gelatt C, Vecchi M. Optimization by simulated annealing. Science 1983;220:671–80.
[6] Coello CAC. Theoretical and numerical constraint-handling techniques used with evolutionary algorithms: a survey of the state of the art. Comput Meth Appl Mech Eng 2002;191:1245–87.
[7] Areibi S, Moussa M, Abdullah H. A comparison of genetic/memetic algorithms and other heuristic search techniques. Las Vegas, Nevada: ICAI; 2001.
[8] Elbeltagi E, Hegazy T, Grierson D. Comparison among five evolutionary-based optimization algorithms. Adv Eng Inf 2005;19:43–53.
[9] Youssef H, Sait SM, Adiche H. Evolutionary algorithms, simulated annealing and tabu search: a comparative study. Eng Appl Artif Intell 2001;14:167–81.
[10] Giraud-Moreau L, Lafon P. Comparison of evolutionary algorithms for mechanical design components. Eng Optim 2002;34:307–22.
[11] Chootinan P, Chen A. Constraint handling in genetic algorithms using a gradient-based repair method. Comput Oper Res 2006;33:2263–81.
[12] Trelea IC. The particle swarm optimization algorithm: convergence analysis and parameter selection. Inform Process Lett 2003;85:317–25.
[13] He Q, Wang L. An effective co-evolutionary particle swarm optimization for engineering optimization problems. Eng Appl Artif Intell 2006;20:89–99.
[14] Gomes HM. Truss optimization with dynamic constraints using a particle swarm algorithm. Expert Syst Appl 2011;38:957–68.


[15] David S. The water cycle, illustrations by John Yates. New York: Thomson Learning; 1993.
[16] Strahler AN. Dynamic basis of geomorphology. Geol Soc Am Bull 1952;63:923–38.
[17] Montes EM, Coello CAC. An empirical study about the usefulness of evolution strategies to solve constrained optimization problems. Int J Gen Syst 2008;37:443–73.
[18] Kaveh A, Talatahari S. A particle swarm ant colony optimization for truss structures with discrete variables. J Const Steel Res 2009;65:1558–68.
[19] Koziel S, Michalewicz Z. Evolutionary algorithms, homomorphous mappings, and constrained parameter optimization. IEEE Trans Evol Comput 1999;7:19–44.
[20] Ben Hamida S, Schoenauer M. ASCHEA: new results using adaptive segregational constraint handling. IEEE Trans Evol Comput 2002:884–9.
[21] Tang KZ, Sun TK, Yang JY. An improved genetic algorithm based on a novel selection strategy for nonlinear programming problems. Comput Chem Eng 2011;35:615–21.
[22] Michalewicz Z. Genetic algorithms, numerical optimization, and constraints. In: Esheman L, editor. Proceedings of the sixth international conference on genetic algorithms. San Mateo: Morgan Kauffman; 1995. p. 151–8.
[23] Mezura-Montes E, Coello CAC. A simple multimembered evolution strategy to solve constrained optimization problems. IEEE Trans Evol Comput 2005;9:1–17.
[24] Tessema B, Yen GG. A self adaptive penalty function based algorithm for constrained optimization. IEEE Trans Evol Comput 2006:246–53.
[25] Liu H, Cai Z, Wang Y. Hybridizing particle swarm optimization with differential evolution for constrained numerical and engineering optimization. Appl Soft Comput 2010;10:629–40.
[26] Runarsson TP, Xin Y. Stochastic ranking for constrained evolutionary optimization. IEEE Trans Evol Comput 2000;4:284–94.
[27] Lampinen J. A constraint handling approach for the differential evolution algorithm. IEEE Trans Evol Comput 2002:1468–73.
[28] Becerra R, Coello CAC. Cultured differential evolution for constrained optimization. Comput Meth Appl Mech Eng 2006;195:4303–22.
[29] Renato AK, Leandro Dos Santos C. Coevolutionary particle swarm optimization using Gaussian distribution for solving constrained optimization problems. IEEE Trans Syst Man Cybern Part B Cybern 2006;36:1407–16.
[30] Zhang M, Luo W, Wang X. Differential evolution with dynamic stochastic selection for constrained optimization. Inform Sci 2008;178:3043–74.
[31] Runarsson TP, Xin Y. Search biases in constrained evolutionary optimization. IEEE Trans Syst Man Cybern Part C Appl Rev 2005;35:233–43.
[32] Wang Y, Cai Z, Zhou Y, Fan Z. Constrained optimization based on hybrid evolutionary algorithm and adaptive constraint handling technique. Struct Multidisc Optim 2009;37:395–413.
[33] Takahama T, Sakai S. Constrained optimization by applying the α constrained method to the nonlinear simplex method with mutations. IEEE Trans Evol Comput 2005;9:437–51.
[34] Zavala AEM, Aguirre AH, Diharce ERV. Constrained optimization via particle evolutionary swarm optimization algorithm (PESO). In: Proceedings of the 2005 conference on genetic and evolutionary computation. New York, USA; 2005. p. 209–16.
[35] Huang FZ, Wang L, He Q. An effective co-evolutionary differential evolution for constrained optimization. Appl Math Comput 2007;186(1):340–56.
[36] Karaboga D, Basturk B. Artificial bee colony (ABC) optimization algorithm for solving constrained optimization problems. LNAI 2007;4529:789–98. Berlin: Springer-Verlag.
[37] Coello CAC, Becerra RL. Efficient evolutionary optimization through the use of a cultural algorithm. Eng Optim 2004;36:219–36.
[38] He Q, Wang L. A hybrid particle swarm optimization with a feasibility-based rule for constrained optimization. Appl Math Comput 2007;186:1407–22.
[39] Amirjanov A. The development of a changing range genetic algorithm. Comput Meth Appl Mech Eng 2006;195:2495–508.
[40] Wang L, Li LP. An effective differential evolution with level comparison for constrained engineering design. Struct Multidisc Optim 2010;41:947–63.
[41] Zahara E, Kao YT. Hybrid Nelder–Mead simplex search and particle swarm optimization for constrained engineering design problems. Expert Syst Appl 2009;36:3880–6.
[42] Rao RV, Savsani VJ, Vakharia DP. Teaching–learning-based optimization: a novel method for constrained mechanical design optimization problems. Comput Aided Des 2011;43:303–15.
[43] Ray T, Liew KM. Society and civilization: an optimization algorithm based on the simulation of social behavior. IEEE Trans Evol Comput 2003;7:386–96.
[44] Mezura-Montes E, Coello CAC. Useful infeasible solutions in engineering optimization with evolutionary algorithms, MICAI 2005. Lect Notes Artif Int 2005;3789:652–62.
[45] Montes E, Reyes JV, Coello CAC. Modified differential evolution for constrained optimization. In: IEEE congress on evolutionary computation. CEC; 2006. p. 25–32.
[46] Montes E, Coello CAC, Reyes JV. Increasing successful offspring and diversity in differential evolution for engineering design. In: Proceedings of the seventh international conference on adaptive computing in design and manufacture; 2006. p. 131–9.
[47] Kannan BK, Kramer SN. An augmented Lagrange multiplier based method for mixed integer discrete continuous optimization and its applications to mechanical design. J Mech Des 1994;116:405–11.
[48] Coello CAC. Use of a self-adaptive penalty approach for engineering optimization problems. Comput Ind 2000;41:113–27.
[49] Coello CAC, Mezura-Montes E. Constraint-handling in genetic algorithms through the use of dominance-based tournament selection. Adv Eng Inf 2002;16:193–203.
[50] Coelho LDS. Gaussian quantum-behaved particle swarm optimization approaches for constrained engineering design problems. Expert Syst Appl 2010;37:1676–83.
[51] Arora JS. Introduction to optimum design. New York: McGraw-Hill; 1989.
[52] Yuan Q, Qian F. A hybrid genetic algorithm for twice continuously differentiable NLP problems. Comput Chem Eng 2010;34:36–41.
[53] Coello CAC. Constraint-handling using an evolutionary multiobjective optimization technique. Civ Eng Environ Syst 2000;17:319–46.
[54] Gupta S, Tiwari R, Shivashankar BN. Multi-objective design optimization of rolling bearings using genetic algorithm. Mech Mach Theory 2007;42:1418–43.
[55] Osyczka A. Evolutionary algorithms for single and multicriteria design optimization: studies in fuzzyness and soft computing. Heidelberg: Physica-Verlag; 2002.
[56] Deb K, Srinivasan A. Innovization: innovative design principles through optimization. Kanpur Genetic Algorithms Laboratory (KanGAL), Indian Institute of Technology Kanpur; KanGAL report number 2005007; 2005.
