Struct Multidisc Optim 25, 54–61 © Springer-Verlag 2003. DOI 10.1007/s00158-002-0272-5

Optimal design of a welded I-section frame using four conceptually different optimization algorithms

K. Jármai, J.A. Snyman, J. Farkas, G. Gondos

Abstract The purpose of this study is to investigate the suitability of four conceptually different optimization algorithms specifically for the optimal design of welded I-section frames. The cost function to be minimized is the volume of the frame. Constraints on lateral-torsional buckling, as well as local buckling of the beam and column webs and flanges, are taken into consideration. The algorithms evaluated include a genetic algorithm, a novel leap-frog gradient method without line searches, an orthogonal search method requiring no gradients, and the differential evolution technique.

Key words nonlinear optimization algorithms, welded frames, structural optimization, genetic algorithm, differential evolution

Received September 26, 2001

K. Jármai¹, J.A. Snyman², J. Farkas³, G. Gondos³
¹,³ University of Miskolc, 3515 Hungary, e-mail: [email protected]
² University of Pretoria, South Africa, e-mail: [email protected]

Part of this paper was presented at the World Congress on Structural and Multidisciplinary Optimization, June 4–8, 2001, Dalian, China.

1 Introduction

The aim of this paper is to demonstrate the application of efficient and conceptually different mathematical methods to a structural optimization problem and to compare their relative performances. The frame to be investigated is shown in Fig. 1. It is a simple braced (non-sway) frame with rigid (welded) joints. The objective function to be minimized is the volume of the structure. The stress constraints for the columns and beam are formulated according to Eurocode 3 (1992), considering the overall buckling and lateral-torsional buckling, as well as the local buckling of the flanges and webs. The design variables are the profile dimensions of the columns and the beam. It is assumed that the flange widths of the two profiles are the same, i.e. b1 = b2 = b. Furthermore, the local buckling constraints of the flanges are assumed to be active, i.e. tf1 = tf2 = δb, where 1/δ = 28ε, ε = (235/fy)^0.5 and fy is the yield stress. Thus the vector of design variables is x = (h1, h2, b, tw1, tw2), as shown in Fig. 1. Since the constraints are highly non-linear and the problem may therefore have multiple local minima, four different optimization algorithms are applied to the volume minimization problem in an attempt to obtain a reliable solution, and to establish a preferred optimal design procedure. To facilitate the further presentation of the methods and the discussion of the implementations, the general design optimization problem is formally stated here as: find x = (x1, x2, ..., xn) ∈ Rⁿ

that minimizes a cost function f(x), subject to the inequality constraints

gj(x) ≤ 0,  j = 1, 2, ..., m,

and the equality constraints

hj(x) = 0,  j = 1, 2, ..., r,

where f(x), gj(x), and hj(x) are scalar functions of the design variables x. The optimum solution is usually denoted by x*. Over the past thirty years many powerful iterative numerical algorithms have been developed to solve the above general problem. It is, however, true to say that no single algorithm is superior to all others across the different subclasses of the general problem. Depending on the degree of non-linearity, the presence of noise or discontinuities in the functions, the number and nature (discrete or continuous) of the variables involved, the existence of multiple local minima and the time required to evaluate the functions, different methods may be preferable, according to the efficiency, accuracy and reliability required.
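To make the method descriptions that follow concrete, the general problem above can be expressed in code. The following minimal sketch is our illustration only; the placeholder function bodies are hypothetical and not from the paper:

```python
import numpy as np

def f(x):
    """Cost function f(x) to be minimized (placeholder example)."""
    return float(np.sum(x ** 2))

def g(x):
    """Inequality constraints g_j(x) <= 0, returned as a vector."""
    return np.array([x[0] + x[1] - 1.0])

def h(x):
    """Equality constraints h_j(x) = 0, returned as a vector."""
    return np.array([])  # none in this toy example

def is_feasible(x, tol=1e-8):
    """Feasibility check: all g_j(x) <= tol and |h_j(x)| <= tol."""
    return bool(np.all(g(x) <= tol) and np.all(np.abs(h(x)) <= tol))
```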


Fig. 1 The frame structure and the cross section of columns and beams

2 Optimization methods

2.1 Genetic algorithm

2.1.1 Principle of a genetic algorithm

The first method to be applied is a genetic algorithm (Goldberg 1997). This method belongs to the class of evolutionary methods that mimic certain phenomena occurring in nature, such as natural selection and multiplication. Here the mass of an engineering structure is determined by its dimensional design variables. The members of a population are therefore chosen to be the various combinations of the design variables. In the genetic algorithm these members are binary coded and called chromosomes. Every chromosome has a fitness value, representing the extent of mass minimization subject to constraints, which is compared to that of all other chromosomes. In other words, the lower the objective function value, the higher the fitness value of the corresponding chromosome. In the optimization process, in each generation, parents are selected to create offspring for the successive population using crossover and mutation. The aim is to create a member that has the lowest objective function value and can be considered as representing the global optimum.

2.1.2 Applied genetic algorithm

The genetic algorithm (GA) used in this work is essentially based on the basic operators developed by Goldberg (1997), namely selection, crossover, and mutation. However, the use of these operators alone is not sufficiently effective in solving optimization problems subject to constraints. Additional operators are needed to speed up convergence to a local optimum and, in particular, to find the global optimum with high probability. Advanced operators that do this are known. Many reference textbooks and papers

have dealt with different genetic operators, such as the update operator, viral infection and the elitist operator, that may be adopted for the current problem and can easily be installed into our computer program. For this study we have also developed new operators suited to the optimization problem to be solved here. In particular, two new operators have been developed and used: the making-clones operator, which is a modified elitist operator, and the laboratory operator. To explain the mechanism of the current algorithm, the new operators are discussed in detail, whereas existing, conventional operators are only briefly mentioned. More information about these basic operators can be found in Goldberg's book (Goldberg 1997).

2.1.3 Genetic operators

Selection operator. Selection is based on the principle of a biased roulette wheel, where each current string in the population has a roulette-wheel slot sized in proportion to its fitness. The higher the fitness value of an individual chromosome, the higher its chance of reproducing a new string in the next generation.

Crossover operator. Crossover is an exchange of sections of the parents' chromosomes; this operation is accomplished by deleting the crossover fragment of the first parent and then inserting the crossover fragment of the second parent. The second offspring is produced in a symmetrical manner.

Mutation operator. Mutation is a random modification of the chromosome. This operator improves the exploration of the global design space.
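As an illustration of these three basic operators on binary chromosomes, consider the following generic toy sketch of roulette-wheel selection, one-point crossover and bit-flip mutation. It is not the authors' code; the mutation rate is an arbitrary assumption, and fitness values are assumed positive.

```python
import random

def roulette_select(population, fitness):
    """Biased roulette wheel: selection probability proportional to fitness."""
    total = sum(fitness)
    pick = random.uniform(0.0, total)
    cum = 0.0
    for chrom, fit in zip(population, fitness):
        cum += fit
        if cum >= pick:
            return chrom
    return population[-1]

def crossover(parent1, parent2):
    """One-point crossover: exchange chromosome tails at a random cut."""
    cut = random.randint(1, len(parent1) - 1)
    return (parent1[:cut] + parent2[cut:],
            parent2[:cut] + parent1[cut:])

def mutate(chrom, rate=0.01):
    """Bit-flip mutation: each gene flips with a small probability."""
    return [1 - gene if random.random() < rate else gene for gene in chrom]
```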

2.1.4 Making-clones operator

The fittest member in the population should be selected. To prevent this chromosome from disappearing, it is deliberately copied two or more times into the next generation. These copies can be referred to as clones. For optimum performance the number of clones should not be more than 1–2% of the population size.

2.1.5 Laboratory operator

Experience shows that the GA is prone to get stuck at a member having a relatively high fitness value. To force the GA to search for better members surrounding the current fittest member, the following strategy is adopted. The design variables (which must take discrete values) are arranged in an increasing sequence. Each variable value therefore has a position number, depending on how many values precede the actual value in the sequence. The fittest member in the current population is put into a "laboratory". This chromosome is representative of the current optimal design vector. The values of its design variables are now changed randomly by small perturbations of their respective position numbers. This results in a new chromosome being formed, with a corresponding new objective function value. If this value is lower than that of the original chromosome, a clone of the new member is put into the next generation. Thousands of new individuals can be created in the above manner, but usually only a few of these chromosomes yield better function values. Unfortunately, because of the computational expense, this operator cannot be applied in cases where the number of design variables exceeds 5 or 6. In optimization problems with up to 6 design variables, the use of the laboratory operator decreases the time required to seek the optimum and gives more reliable results.
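One possible reading of this operator in code is sketched below; the trial count, perturbation width and data layout are our own assumptions, not taken from the paper.

```python
import random

def laboratory(best, value_lists, objective, trials=1000, width=2):
    """Perturb the position numbers of the fittest member's discrete values
    and collect improved clones found near the current best.

    best        -- list of position indices into value_lists
    value_lists -- for each variable, its discrete values in increasing order
    objective   -- callable mapping actual variable values to a cost
    """
    best_val = objective([vals[i] for vals, i in zip(value_lists, best)])
    clones = []
    for _ in range(trials):
        # small random shift of each position number, clipped to valid range
        cand = [min(max(i + random.randint(-width, width), 0), len(vals) - 1)
                for vals, i in zip(value_lists, best)]
        val = objective([vals[i] for vals, i in zip(value_lists, cand)])
        if val < best_val:  # a better member found near the current best
            clones.append(cand)
    return clones
```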

2.2 The leap-frog method

The dynamic trajectory method, more commonly known as the leap-frog method, was originally proposed by Snyman (1982, 1983) for the unconstrained minimization of a scalar function f(x) of n real variables x = (x1, x2, ..., xn). The algorithm has recently been modified to handle constraints by means of a penalty function formulation (Snyman 2000). The method possesses the following characteristics: it uses only function gradient information ∇f(x), requires no explicit line searches, is extremely robust, and handles steep valleys and discontinuities in functions and gradients with ease. The method seeks low local minima and can thus be used as a basic component in a methodology for global optimization. Although, for very high accuracy, it may not be as efficient as classical methods when applied to smooth and near-quadratic functions, it is particularly robust and reliable in the presence of numerical noise in the objective and constraint functions. The method

usually converges very quickly to the neighbourhood of the optimum, because the fundamental physical principles underlying the method ensure controlled and stable convergence along a dynamic trajectory towards the optimum.

2.2.1 Basic dynamic model

The algorithm is modeled on the motion of a particle of unit mass in an n-dimensional conservative force field with potential energy at x given by f(x). At x, the force on the particle is given by

a = ẍ = −∇f(x),  (1)

from which it follows that, for the time interval [0, t],

½‖ẋ(t)‖² − ½‖ẋ(0)‖² = f(x(0)) − f(x(t)),  i.e.  T(t) − T(0) = f(0) − f(t),  (2)

or

f(t) + T(t) = constant  (3)

(conservation of energy), where T denotes the kinetic energy of the particle. Note that since Δf = −ΔT, as long as T increases, f decreases. This forms the basis of the dynamic algorithm.

2.2.2 LFOP: Basic algorithm for unconstrained problems

Given f(x) and a starting point x(0) = x0:

– Compute the dynamic trajectory by solving the initial value problem (IVP)

ẍ(t) = −∇f(x(t)),  ẋ(0) = 0,  x(0) = x0.  (4)

– Monitor ẋ(t) = v(t). Clearly, as long as T = ½‖v(t)‖² increases, f(x(t)) decreases, as desired.
– When ‖v(t)‖ decreases, apply some interfering strategy to extract energy and thereby increase the likelihood of subsequent descent.
– In practice a numerical integration "leap-frog" scheme is used to integrate the IVP (Snyman 1982): compute, for k = 0, 1, 2, ... and time step Δt,

xk+1 = xk + vk Δt,  vk+1 = vk + ak+1 Δt,  (5)

where

ak = −∇f(xk),  v0 = ½ a0 Δt.  (6)

– A typical interfering strategy is: if ‖vk+1‖ ≥ ‖vk‖, continue; else set

vk = (vk+1 + vk)/4,  xk = (xk+1 + xk)/2,  (7)

compute new vk+1 and continue.
– Further heuristics are used to determine an initial Δt, to allow for magnification and reduction of Δt, and to control the step size.

2.2.3 LFOPC: Modification for constrained problems

Constrained optimization problems are solved by the application of LFOP in three phases to a penalty function formulation of the problem (Snyman 2000). Given a function f(x), with equality constraints hi(x) = 0 (i = 1, 2, ..., r) and inequality constraints gj(x) ≤ 0 (j = 1, 2, ..., m), and penalty parameter µ ≫ 0, the penalty function problem is to minimize

P(x, µ) = f(x) + Σ_{i=1}^{r} µ hi²(x) + Σ_{j=1}^{m} βj gj²(x),  (8)

where

βj = 0 if gj(x) ≤ 0,  βj = µ if gj(x) > 0.  (9)

Phase 0: given some x0, with the overall penalty parameter µ = µ0 (= 10²), apply LFOP to P(x, µ0) to give x*(µ0).

Phase 1: with x0 = x*(µ0) and µ = µ1 (= 10⁴), apply LFOP to P(x, µ1) to give x*(µ1), and identify the active constraints ia = 1, 2, ..., na, for which g_{ia}(x*(µ1)) > 0.

Phase 2: with x0 = x*(µ1), use LFOP to minimize

Pa(x, µ1) = Σ_{i=1}^{r} µ1 hi²(x) + Σ_{ia=1}^{na} µ1 g_{ia}²(x),  (10)

to give x*.

2.3 The method of Rosenbrock

The third algorithm applied is Rosenbrock's method, which has also been modified to be able to handle discrete values (Rosenbrock 1960; Farkas and Jármai 1997). This is a direct search mathematical programming method requiring no derivatives. Instead of continuous line searches, the algorithm takes discrete steps during searches in orthogonal search directions. In each iteration, the procedure searches successively along n linearly independent and orthogonal directions; after each iteration k, it locates the new point x^(k+1) by completing unidimensional searches from the previous point x^(k) along this set of orthonormal directions. When a new point is reached at the end of an iteration, a new set of orthogonal search vectors is constructed: instead of continually searching in the coordinate space corresponding to the directions of the independent variables, the method achieves an improvement after one cycle of coordinate searches by lining the search directions up into an orthogonal system, with the overall step of the previous stage as the first building block for the new set of orthogonal directions. Boundary zones are introduced to slow down the algorithm when it approaches the constraint boundaries too closely; within these zones a modified objective function, formed using penalty functions, is used to handle the constraints. No gradient calculation is needed, and the available computer code is very easy to apply to engineering problems. The method may, however, find local minima instead of the global minimum.
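A simplified sketch of the rotating-directions idea (without the discrete steps and boundary zones of the modified method) might look as follows; the step factors alpha = 3 and beta = -0.5 are the classical choices, not values from the paper.

```python
import numpy as np

def rosenbrock_search(f, x0, step=0.5, alpha=3.0, beta=-0.5,
                      stages=50, tol=1e-8):
    """Simplified Rosenbrock rotating-directions search (no gradients)."""
    n = len(x0)
    x = np.asarray(x0, dtype=float)
    dirs = np.eye(n)                       # start along the coordinate axes
    steps = np.full(n, step)
    for _ in range(stages):
        x_start = x.copy()
        for _cycle in range(10):           # several coordinate cycles per stage
            for i in range(n):
                trial = x + steps[i] * dirs[i]
                if f(trial) < f(x):
                    x = trial
                    steps[i] *= alpha      # success: enlarge the step
                else:
                    steps[i] *= beta       # failure: reverse and shrink
        overall = x - x_start
        if np.linalg.norm(overall) < tol:
            break
        # Gram-Schmidt: the overall stage step leads the new orthogonal set
        basis = [overall] + [dirs[i] for i in range(n - 1)]
        new_dirs = []
        for v in basis:
            w = v - sum((v @ u) * u for u in new_dirs)
            if np.linalg.norm(w) > 1e-12:
                new_dirs.append(w / np.linalg.norm(w))
        while len(new_dirs) < n:           # pad degenerate directions
            new_dirs.append(np.eye(n)[len(new_dirs)])
        dirs = np.array(new_dirs)
        steps = np.full(n, step)           # reset step lengths for the stage
    return x
```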

2.4 Differential evolution technique

Price and Storn introduced the differential evolution (DE) algorithm in the 1990s (Storn and Price 1995). This method was originally designed to operate on continuous variables, but its improved variant is capable of handling discrete as well as mixed design variables. The scheme of DE is the following. Generally, the function to be optimized, f, is of the form

f(xi),  i = 1, ..., D,  (11)

where the xi are continuous design variables and D is the number of design variables. In order to establish a starting point for optimum seeking, the population must be initialized. The initial population, P_{G=0}, is chosen with random values from within the given boundary constraints:

P0 = { xi,j,0 = randj[0,1] · (xj^(U) − xj^(L)) + xj^(L) },  i = 1, ..., NP,  j = 1, ..., D,  (12)

where randj[0,1] denotes a uniformly distributed random value within the range [0.0, 1.0]; xj^(U) is the upper and xj^(L) the lower boundary value for xj. Here xj^(L) should be lower than the lowest discrete value. NP is the size of the population, which remains constant during the search process. DE's self-referential population reproduction scheme is different from that of other evolutionary algorithms. From the first generation onward, vectors in the current population, PG, are randomly sampled and combined to create candidate vectors for the subsequent generation, P_{G+1}. The population of candidate or "trial" vectors, P_{G+1} = {Ui,G+1} = {ui,j,G+1}, is generated as follows:

ui,j,G+1 = vi,j,G+1 = xr3,j,G + F · (xr1,j,G − xr2,j,G)  if randj[0,1] ≤ CR or j = k,
ui,j,G+1 = xi,j,G  otherwise,  (13)

with i = 1, ..., NP and j = 1, ..., D, where k ∈ {1, ..., D} is a random parameter index chosen once for each i; r1, r2, r3 ∈ {1, ..., NP} are randomly selected such that r1 ≠ r2 ≠ r3 ≠ i; CR ∈ [0, 1] and F ∈ (0, 1+]; and the trial parameters are clipped to the bounds:

ui,j,G+1 = xj^(L)  if ui,j,G+1 < xj^(L),
ui,j,G+1 = xj^(U)  if ui,j,G+1 > xj^(U),
ui,j,G+1 unchanged otherwise.  (14)

F and CR are DE control parameters. Like NP, both values remain constant during the search process. The upper limit on F has been determined empirically. CR is a real-valued crossover factor that controls the probability that a trial vector parameter comes from the randomly chosen, mutated vector vi,j,G+1 instead of the current vector xi,j,G. Generally, both F and CR affect the convergence velocity and robustness of the search process (Gondos 2001). The population for the next generation, P_{G+1}, is selected from the current population, PG, and the child population according to the following rule:

Xi,G+1 = Ui,G+1  if f(Ui,G+1) ≤ f(Xi,G),
Xi,G+1 = Xi,G  otherwise.  (15)
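The scheme (12)–(15) translates almost directly into code. The sketch below is a generic DE/rand/1/bin implementation with bound clipping per (14); the control parameter values shown are common defaults, not those used by the authors.

```python
import numpy as np

rng = np.random.default_rng()

def differential_evolution(f, lower, upper, NP=30, F=0.8, CR=0.9, gens=200):
    """Basic DE scheme following (12)-(15)."""
    D = len(lower)
    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
    pop = lower + rng.random((NP, D)) * (upper - lower)      # init, eq. (12)
    cost = np.array([f(ind) for ind in pop])
    for _ in range(gens):
        for i in range(NP):
            r1, r2, r3 = rng.choice([r for r in range(NP) if r != i],
                                    size=3, replace=False)
            v = pop[r3] + F * (pop[r1] - pop[r2])            # mutation, eq. (13)
            k = rng.integers(D)
            mask = rng.random(D) <= CR
            mask[k] = True                                   # j = k always crosses
            u = np.where(mask, v, pop[i])                    # binomial crossover
            u = np.clip(u, lower, upper)                     # bounds, eq. (14)
            cu = f(u)
            if cu <= cost[i]:                                # selection, eq. (15)
                pop[i], cost[i] = u, cu
    return pop[np.argmin(cost)]
```

Discrete variables can be handled within this framework, e.g. by rounding trial parameters to the nearest allowed value before evaluation; this is one common approach, not necessarily the one used in the improved variant mentioned above.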

3 I-section frame design problem

The frame to be investigated is shown in Fig. 1. It is a simple braced (non-sway) frame with rigid (welded) joints. Forces and moments in the joints can be calculated using formulas given by Glushkov (1975).

3.1 Applied section and structural geometry

A single-bay frame is investigated to find the sizes providing minimal volume. Optimal dimensions are determined using normal steel. Figure 1 shows the applied section geometry and the main joints. In the description of the optimization process, subscript 1 is used for the columns and subscript 2 for the beam. Note that for fabrication reasons b1 = b2 = b.

3.2 Optimum design of I-section frame

Beam-columns of welded I-sections are discussed in detail by Jármai and Farkas (2000). Information on the computation of cross-sectional characteristics can be found in the book by Farkas and Jármai (1997). Eurocode 3 (1992) formulas are used for the stability calculations of the I-section frame.

3.2.1 Constraint considering overall and lateral-torsional buckling for the column (Constraint 1)

We have

N1/(χy1 A1 fy1) + kLT1 MC/(χLT1 Wx1 fy1) ≤ 1,  (16)

where fy1 is the yield stress reduced by the safety factor γM1, i.e. fy1 = fy/γM1; χy1 is the overall buckling factor; kLT1 is an increasing factor; χLT1 is the lateral-torsional buckling factor; MC is the bending moment in joint C; N1 is the axial force in the column; A1 is the cross-sectional area; and Wx1 is the section modulus. Also,

N1 = F/2,  (17)

MC = FL/(4D1),  D1 = (H/L)(Ix2/Ix1) + 2,  (18)

Ix ≈ h³tw/12 + b tf h²/2,  (19)

Wx ≈ h²tw/6 + b tf h,  (20)

A = h tw + 2 b tf,  (21)

χy1 = 1/[Φ1 + (Φ1² − λ̄y1²)^0.5],  (22)

Φ1 = 0.5[1 + 0.49(λ̄y1 − 0.2) + λ̄y1²],  λ̄y1 = 0.7H/(r1 λE),  r1 = (Iy1/A1)^0.5,  λE = π(E/fy)^0.5,  E = 2.1 × 10⁵ MPa,  (23)

kLT1 = 1 − µLT1 N1/(χy1 A1 fy),  (24)

µLT1 = 0.15 λ̄y1 βM1 − 0.15,  βM1 = 2.15,  (25)

χLT1 = 1/[ΦLT1 + (ΦLT1² − λ̄LT1²)^0.5],  (26)

ΦLT1 = 0.5[1 + 0.49(λ̄LT1 − 0.2) + λ̄LT1²],  (27)

λ̄LT1 = (Wx1 fy/Mcr1)^0.5,  (28)

Mcr1 = 2.704 (π²E Iy1/H²) [Iω1/Iy1 + H² G It1/(π² E Iy1)]^0.5,  G/E = 1/2.6,  (29)

Iy = b³tf/6,  Iω = h² tf b³/24,  (30)

It = 0.5 (h tw³ + 2 b tf³).  (31)

3.2.2 Constraint considering local buckling of the column web (Constraint 2)

Given the definition

ψ1 = [−MC/Wx1 + F/(2A1)] / [MC/Wx1 + F/(2A1)],  (32)

for ψ1 > −1 we have

h1/tw1 ≤ 42ε/(0.67 + 0.33ψ1),  ε = (235/fy)^0.5,  (33)

and for ψ1 ≤ −1,

h1/tw1 ≤ 62ε(1 − ψ1)(−ψ1)^0.5.  (34)

3.2.3 Constraint considering overall and lateral-torsional buckling for the beam (Constraint 3)

We have

N2/(χy2 A2 fy1) + kLT2 MD/(χLT2 Wx2 fy1) ≤ 1,  (35)

where

N2 = HA = 3MA/H,  MA = −MC/2,  MD = (FL/4)(1 − 1/D1).  (36)

All parameter definitions and calculations for the beam are similar to those for the column, with the subscripts changed from 1 to 2, except for the following:

λ̄y2 = L/(r2 λE),  (37)

µLT2 = 0.15 λ̄y2 βMLT2 − 0.15,  βMLT2 = 1.4,  (38)

Mcr2 = 1.365 (π²E Iy2/L²) [Iω2/Iy2 + L² G It2/(π² E Iy2)]^0.5.  (39)

3.2.4 Constraint considering local buckling of the beam web (Constraint 4)

Given the definition

ψ2 = (−MD/Wx2 + HA/A2) / (MD/Wx2 + HA/A2),  (40)

for ψ2 > −1 we have

h2/tw2 ≤ 42ε/(0.67 + 0.33ψ2),  (41)

and for ψ2 ≤ −1,

h2/tw2 ≤ 62ε(1 − ψ2)(−ψ2)^0.5.  (42)

3.2.5 Constraint considering local buckling of the flanges (Constraint 5)

We have

b/tf ≤ 1/δ = 28ε.  (43)

We consider this constraint as active, thus tf = δb.

3.3 Main data for the frame with welded I-sections

Geometrical data are given by (see Fig. 1): H = 9 m; L = 12 m; F = 750 kN; fy = 235 MPa. Ranges of the section dimensions (in mm):

h1 = 400–710,  h2 = 700–1010,  b = 350–660,  tw1 = 1–16,  tw2 = 1–16.  (44)

The objective function may be written in the form

V = 2A1H + A2L.  (45)
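As an illustration of how a candidate design enters these formulas, the sketch below evaluates the approximate section properties (19)–(21), the active flange rule tf = δb from (43), and the objective (45). It is our own illustration, not the authors' program, and it omits the buckling constraints (16)–(42).

```python
import math

FY = 235.0                     # yield stress, N/mm^2
H, L = 9000.0, 12000.0         # frame height and span, mm

EPS = math.sqrt(235.0 / FY)    # epsilon in (33)
DELTA = 1.0 / (28.0 * EPS)     # active flange constraint (43): tf = delta * b

def section(h, tw, b):
    """Approximate cross-sectional properties (19)-(21); dimensions in mm."""
    tf = DELTA * b
    A = h * tw + 2.0 * b * tf                        # eq. (21)
    Ix = h ** 3 * tw / 12.0 + b * tf * h ** 2 / 2.0  # eq. (19)
    Wx = h ** 2 * tw / 6.0 + b * tf * h              # eq. (20)
    return A, Ix, Wx

def volume(x):
    """Objective (45) for x = (h1, h2, b, tw1, tw2); result in mm^3."""
    h1, h2, b, tw1, tw2 = x
    A1, _, _ = section(h1, tw1, b)
    A2, _, _ = section(h2, tw2, b)
    return 2.0 * A1 * H + A2 * L

# the leap-frog optimum of Table 1:
print(volume((686.0, 1003.0, 463.0, 6.6, 7.3)) / 1e9)   # ~0.629 (m^3)
```

Evaluating the leap-frog solution of Table 1 in this way reproduces its volume of about 0.629 m³, which also confirms that the volumes in the table are expressed in m³.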

3.4 Computational results

The best local optima computed by the different algorithms are listed in Table 1. The number of function evaluations required for convergence by the genetic algorithm was around 5–7 thousand; for the leap-frog method the number of gradient vector evaluations was 200–600 (it often reaches the neighbourhood of x* within 10–20 iterations); and for Rosenbrock's method 700–1000 function evaluations were required.

Table 1 Frame local optima obtained using the genetic, leap-frog, Rosenbrock, and differential evolution algorithms

x* in mm   Genetic (discrete)   Leap-frog   Rosenbrock   Rosenbrock (discrete)   Differential Evolution
h1         650                  686         636.1        630                     686.6
h2         950                  1003        981.8        980                     1010
b          470                  463         484.6        490                     460
tw1        7                    6.6         6.1          7                       6.65
tw2        7                    7.3         8.6          9                       7.75
V in m³    0.6411               0.6288      0.6754       0.6997                  0.6296

Fig. 2 Convergence history using the genetic algorithm

Fig. 4 Convergence history using the unscaled leap-frog method with x0 = (710, 1010, 660, 16, 16)

The differential evolution method required more than 12 000 function evaluations. These values varied depending on the starting point, the specified convergence criteria and other parameters unique to the different algorithms. The problem appears to have a number of local optima in the relatively flat neighbourhood of the global optimum. Consequently, if the cost (time) of a function evaluation is large, the techniques that need more evaluations become less efficient. Figures 2–6 show the computed convergence histories using, respectively, the genetic, Rosenbrock, leap-frog (unscaled and scaled) and differential evolution algorithms. In the scaled application of the leap-frog method, variables 4 and 5 were scaled so that their allowable ranges [1–16 in (44)] were increased by a factor of 100 to 100–1600.
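Such scaling can be realized by letting the optimizer work on transformed variables; the wrapper below is a hypothetical illustration (only the factor 100 for variables 4 and 5 comes from the text).

```python
import numpy as np

SCALE = np.array([1.0, 1.0, 1.0, 100.0, 100.0])   # scale tw1, tw2 by 100

def scaled(objective):
    """Return a version of `objective` defined on y = SCALE * x, so the
    web-thickness ranges 1-16 appear to the optimizer as 100-1600 and
    all five variables have comparable magnitudes."""
    def wrapped(y):
        return objective(np.asarray(y, dtype=float) / SCALE)
    return wrapped

# e.g. run the optimizer on scaled(volume) with the bounds multiplied by SCALE
```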

Fig. 5 Convergence history using scaled leap-frog method with x0 = (710, 1010, 660, 16, 16)


Fig. 3 Convergence history using Rosenbrock’s algorithm with starting point x0 = (700, 1000, 560, 7, 9)

Fig. 6 Convergence history using the differential evolution method

Table 2 Comparison of the different algorithms

Method                   Function evaluations                  Advantages                                                            Disadvantages
Genetic                  several thousands                     handles non-convex functions; gives discrete values                   great number of function evaluations
Leap-frog                50–1200 (gradient evaluations only)   robust; accepts an infeasible starting point                          small violation of constraints; somewhat sensitive to scaling
Rosenbrock's             700–2000                              quick; gives discrete values; usually does not violate constraints    terminates at relatively high local optima
Differential Evolution   12 400                                can handle mixed (continuous/discrete) variables                      great number of function evaluations

4 Conclusions

All four algorithms find acceptable solutions, and Table 2 enables a comparative evaluation of them. Here the function evaluations are relatively inexpensive, and therefore their number has little effect on the computational cost. The leap-frog method appears to be the most robust of the methods and easily finds local optima even from infeasible starting points. The new genetic operators introduced, such as crossover with alternating directions, making-clones and the laboratory operator, are effective in speeding up the convergence of the genetic algorithm. Because of its inherent discrete nature, the genetic algorithm gives discrete solutions. The Rosenbrock algorithm is relatively fast, but is inclined to terminate at local optima with relatively high cost function values; it is therefore advisable to use it in a multi-start mode. It can be used to give both discrete and continuous solutions. In some cases the solution of the genetic algorithm may violate some of the constraints. The violation of constraints by the leap-frog method is negligible, whilst the Rosenbrock method always guarantees exact satisfaction of all the constraints. DE gives consistent results, which prevents time-consuming reruns; thus DE combined with finite element analysis can be more reliable. It should be mentioned that DE has the further considerable advantage of handling continuous and mixed variables, which could be important in other optimization problems. Many advanced operators and improved GA variants described in the literature could make the GA even more effective.

Acknowledgements This research work was supported by the Hungarian Scientific Research Foundation grants OTKA T38058 and T37941, and by the Fund for the Development of Higher Education project FKFP 8/2000. The project was also supported by the Hungarian-South African Intergovernmental S&T Co-operation programme. The Hungarian partner is the Ministry of Education, R&D Deputy Undersecretary of State; the South African partner is the Foundation for Research Development.

References

Eurocode 3 1992: Design of steel structures, Part 1.1. Brussels: European Committee for Standardization (CEN)

Farkas, J.; Jármai, K. 1997: Analysis and optimum design of metal structures. Rotterdam-Brookfield: Balkema

Glushkov, G. 1975: Formulas for design frames. Moscow: Mir

Goldberg, D.E. 1997: Genetic algorithms in search, optimization & machine learning. Addison-Wesley

Gondos, Gy. 2001: Optimum design of trusses by evolutionary algorithms. In: 3rd International Conference of PhD Students (Miskolc, Hungary, 2001), Vol. I, pp. 149–156

Jármai, K.; Farkas, J. 2000: Optimum design of compression columns of welded I-section and comparison with rolled profiles. In: Iványi, M.; Muzeau, J.P.; Topping, B.H.V. (eds.) Computational Steel Structures Technology, pp. 119–129. Edinburgh: Civil-Comp Press

Rosenbrock, H.H. 1960: An automatic method for finding the greatest or least value of a function. Computer Journal 3, 175–184

Snyman, J.A. 1982: A new and dynamic method for unconstrained minimization. Appl. Math. Modelling 6, 449–462

Snyman, J.A. 1983: An improved version of the original leap-frog method for unconstrained minimization. Appl. Math. Modelling 7, 216–218

Snyman, J.A. 2000: The LFOPC leap-frog method for constrained optimization. Comp. Math. Applic. 40, 1085–1096

Storn, R.; Price, K. 1995: Differential evolution: a simple and efficient adaptive scheme for global optimization over continuous spaces. Technical Report TR-95-012, ICSI, March 1995

Storn, R. 1996: On the usage of differential evolution for function optimization. In: NAFIPS 1996, pp. 519–523, Berkeley
