A survey of non-gradient optimization methods in structural engineering


Advances in Engineering Software 59 (2013) 19–28

Contents lists available at SciVerse ScienceDirect

Advances in Engineering Software journal homepage: www.elsevier.com/locate/advengsoft

A survey of non-gradient optimization methods in structural engineering

Warren Hare (a), Julie Nutini (b), Solomon Tesfamariam (c,*)

(a) Mathematics, University of British Columbia, Kelowna, BC, Canada
(b) Computer Science, University of British Columbia, Vancouver, BC, Canada
(c) School of Engineering, University of British Columbia, Kelowna, BC, Canada

Article history:
Received 8 November 2012
Received in revised form 11 March 2013
Accepted 11 March 2013

Keywords: Optimization; Structural engineering; Non-gradient methods; Heuristic methods; Swarm methods; Derivative-free optimization

Abstract

In this paper, we present a review of non-gradient optimization methods with applications to structural engineering. Due to their versatility, heuristic optimization methods are widely used in structural engineering. However, heuristic methods do not guarantee convergence to (locally) optimal solutions. As such, there has recently been increasing use of derivative-free optimization techniques that do provide such guarantees. For each method, we provide a pseudocode and a list of references with structural engineering applications. Strengths and limitations of each technique are discussed. We conclude with some remarks on the value of using methods customized for a desired application.

© 2013 Elsevier Ltd. All rights reserved.

1. Introduction

Optimization is the process of minimizing or maximizing an objective function (e.g., cost or weight). Three main types of optimization problems arise in structural engineering [1,2]: sizing optimization, shape optimization, and topology optimization. Sizing optimization entails determining the member area of each element. Shape optimization entails optimizing the profile/shape of the structure. Topology optimization is associated with the connectivity of structural elements. Traditionally, the three optimization problems were solved independently (e.g., [3]); however, recent work shows that simultaneous optimization of sizing, shape, and topology provides better results [2,4]. In this paper, we consider optimization problems of the form

    minimize_x   f(x)
    subject to   c(x) ≤ 0,                (1)
                 l ≤ x ≤ u,

where f : R^n → R, c(x) = (c_1(x), ..., c_m(x)), and ≤ is interpreted coordinate-wise. We permit l_j, u_j = ±∞, j ∈ {1, ..., n}, to allow for the possibility of unbounded variables.

Corresponding author: S. Tesfamariam ([email protected]). doi: 10.1016/j.advengsoft.2013.03.001

If the objective function of an optimization problem is smooth (i.e., differentiable) and gradient information is reliable,

then gradient-based optimization algorithms present an extremely powerful collection of tools for solving the problem. However, in some structural engineering problems, such as when simulations are employed to imitate problem conditions, gradient information may not be available. Even if gradient information is available, it can be unreliable or difficult to compute. Thus, non-gradient methods are extremely useful optimization tools. As the name suggests, non-gradient methods do not require gradient information to converge to a solution; rather, they use only function evaluations of the objective function. We note that if gradient information is available for a well-behaved problem, then a gradient-based method should be used. However, when gradient information is not available, non-gradient methods are practical alternatives.

Several reviews of non-gradient methods for optimization problems in structural engineering have been published, the majority of which focus on heuristic methods. In 1991, a review of genetic algorithms for structural optimization was published [5]. In 2002, a more general review of evolutionary algorithms for structural optimization was published [6]. In 2007, a review focused on the design of steel frames via stochastic search methods was published [7]. In 2008, a review on the use of simulated annealing methods for structural optimization [8] and a general review of publications on structural engineering applications using particle swarm optimization [9] were published. In 2009, a review on the use of harmony search methods in structural design optimization was published


[10]. In 2011, a review focused on the design of skeletal structures using a variety of heuristic techniques was published [11]. Most recently, in 2012, a comprehensive review of stochastic search heuristics was published [12].

In this paper, we present a detailed review of non-gradient methods for structural optimization, along with a list of references that utilize these methods. We include the methods covered in the review papers mentioned above, as well as several other methods. We also include the more recent Derivative-free Optimization (DFO) methods that have become increasingly popular in optimization applications. Unlike the general category of non-gradient methods, DFO methods are supported by mathematical convergence theories, which ensure that the algorithms converge to a local minimizer of the objective function. Due to their practical utility and the numerous problems suited to them, new non-gradient algorithms are frequently developed; for example, the very recent magnetic charged system search [13], which adapts the (also recent) charged system search [14]. In this paper, we choose to focus on methods that appear frequently in the literature; thus, we exclude some recently proposed methods. However, we emphasize that new non-gradient methods regularly provide improved solutions to many structural engineering problems.

The remainder of this paper is organized as follows. In Section 2, we present some of the most popular heuristic methods used in structural engineering: evolutionary algorithms. These methods use techniques that imitate natural evolution. In Section 3, we present heuristic methods inspired by physical and stochastic processes. In Section 4, we present heuristic methods inspired by self-organizing systems. These methods are often referred to as swarm algorithms, as they are often inspired by how animal swarms employ simple rules to develop favorable system behavior.
In Section 5, we present formalized methods that are strengthened by mathematical convergence theory. We refer to these methods as Derivative-free Optimization, as is commonly used in the mathematical community. Each of these sections is broken into several subsections that describe examples of specific algorithms. In Section 6, we consider our observations from the previous sections, present our conclusions and provide a summary table of the methods discussed.

2. Evolutionary algorithms

Evolutionary algorithms are a class of non-gradient population-based algorithms used in many areas of engineering optimization. These methods use techniques that imitate natural evolution. They follow the four general steps of reproduction, mutation, recombination and selection, and use a fitness function to determine the conditions that support survival.

2.1. Genetic algorithm

A Genetic Algorithm (GA) is probably the most commonly used evolutionary algorithm and one of the more common non-gradient methods. These methods were originally proposed by John Holland in 1975 [15]. A GA maintains, at each iteration t, a population of potential solutions, say P(t), to the problem at hand. Using stochastic transformations, some solutions undergo a mutation or crossover step. These new potential solutions are referred to as the offspring, say C(t). From both P(t) and C(t), the 'most fit' solutions (those with the better objective values) are selected to form the new population P(t + 1). After the evaluation of several generations, the algorithm ideally converges to an optimal or near-optimal solution. Generally, the structure of the GA is as follows [16]:

procedure GeneticAlgorithm
begin
  Initialize and evaluate P(t);
  while (not termination condition) do
  begin
    Recombine P(t) to yield C(t);
    Evaluate C(t);
    Select P(t + 1) from P(t) and C(t);
  end
end

GAs have been applied to numerous structural engineering applications: structural reliability [17,18]; bridge design, structure, maintenance and repair [19–21]; design of welded steel plate girder bridges [22]; seismic zoning [23]; seismic design of lifeline systems [24]; truss structure optimization [25–27]; the size, shape and topology of skeletal structures [28]; and design optimization of steel structures [29–32], reinforced concrete flat slab buildings [33], steel telecommunication poles [34] and viscous dampers [35]. See also [36–49,31,50–58].

2.2. Evolutionary strategies

Evolutionary strategies (ES) are a subclass of evolutionary algorithms whose structure was originally developed by Rechenberg and Schwefel [59–62]. To start, an ES defines an initial parent population B_p^(0) of size μ that consists of potential solutions to the problem at hand. Each individual a_k in B_p^(0) is comprised of a parameter set y_k, its objective value F_k := F(y_k) and an evolvable set of strategy parameters s_k. Next, the parent population reproduces, generating λ offspring, where λ is a fixed parameter of the method. To do this, there is first a marriage step, in which a family C of size ρ is randomly chosen from the parent population B_p^(t) at time t. The strategy and object parameters of the individuals in family C are then recombined, and the resulting parameters are mutated, forming the offspring population B_o^(t) at time t. Finally, the selection step forms a new parent population B_p^(t+1). There are two main types of ESs for different numbers of parents and offspring, namely the (μ + λ)-ES and the (μ, λ)-ES. Generally, the structure of the ES is as follows [63]:

procedure EvolutionStrategy
begin
  Initialize parent population B_p^(0);
  while (not termination condition) do
  begin
    for n = 1 : λ do
    begin
      C_n = Marriage(B_p^(t), ρ);
      s_n = s_recombination(C_n);
      y_n = y_recombination(C_n);
      s̃_n = s_mutation(s_n);
      ỹ_n = y_mutation(y_n, s̃_n);
    end
    Update B_o^(t);
    Perform selection and update parent population B_p^(t);
  end
end

Evolutionary strategies appear often in the literature and have been applied to several structural engineering problems: optimizing truss structures [64–66], optimizing a connection rod shape and minimizing the volume of a square plate with a central cutout [67], and the design of a cantilever beam [68], steel frames [30] and cylindrical shells [69].
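As an illustration, the (μ + λ) loop above can be sketched in Python. The concrete operators below (midpoint recombination, a single log-uniform step-size mutation, and a sphere test objective) are simplified illustrative stand-ins, not the specific operators of [59–62]:

```python
import random

def evolution_strategy(f, x0, mu=5, lam=20, sigma=0.5, iters=200):
    """Minimal (mu + lambda)-ES sketch.

    Each individual carries its object parameters x and a single mutable
    step size s, a scalar stand-in for the strategy parameters s_k.
    """
    pop = [(list(x0), sigma) for _ in range(mu)]
    for _ in range(iters):
        offspring = []
        for _ in range(lam):
            # marriage + recombination: average two random parents
            (xa, sa), (xb, sb) = random.sample(pop, 2)
            x = [(a + b) / 2 for a, b in zip(xa, xb)]
            s = (sa + sb) / 2
            # mutate the strategy parameter first, then the object parameters
            s *= 2.0 ** random.uniform(-1, 1)
            x = [xi + s * random.gauss(0, 1) for xi in x]
            offspring.append((x, s))
        # (mu + lambda) selection: keep the best mu of parents and offspring
        pop = sorted(pop + offspring, key=lambda ind: f(ind[0]))[:mu]
    return pop[0][0]

sphere = lambda x: sum(xi * xi for xi in x)
best = evolution_strategy(sphere, [3.0, -2.0])
```

Because selection here is elitist ((μ + λ) rather than (μ, λ)), the best objective value never worsens between generations.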

Author's personal copy

W. Hare et al. / Advances in Engineering Software 59 (2013) 19–28

2.3. Strengths and limitations of evolutionary algorithms

As a class of heuristic algorithms, evolutionary algorithms have no mathematical convergence theory, and thus no assurance of optimality of the final solution found. However, in practice, they can be very successful in finding good solutions quickly. The number of function evaluations evolutionary algorithms require can be scaled independently of the dimension of the problem, making them versatile for large-scale problems. However, due to the need to maintain large populations of candidate solutions, evolutionary algorithms can be cumbersome for small-scale problems. Most papers that compare the performance of non-gradient methods on one application include at least one comparison to an evolutionary algorithm (see Table 1). Evolutionary algorithms are often considered a baseline approach, and the method to beat if one wishes to claim a new algorithm is of high quality.

3. Physical algorithms

3.1. Harmony search

The Harmony Search (HS) algorithm was first introduced by Geem, Kim and Loganathan in 2001 [70]. As the name suggests, this algorithm mimics the evolution of a harmony relationship between several sound waves of differing frequencies played simultaneously. In music, a best state (an aesthetically pleasing harmony) is desired; in optimization, the 'best state' is achieved at the global optimum. The processes of random selection, memory consideration and pitch adjustment are all incorporated in this algorithm. There are two main parameters used in the HS algorithm: the Harmony Memory (HM) accepting rate, denoted r_accept, and the pitch adjustment rate, denoted r_pa. As their names suggest, r_accept is the rate at which a value from the HM is used in a new harmony, and r_pa controls the degree to which the pitch can be adjusted.
The basic structure of an HS algorithm is as follows [71]:

procedure HarmonySearch
begin
  Initialize parameters, including r_accept and r_pa;
  Generate initial HM with random harmonies;
  while t < max number of iterations do
    while i ≤ number of variables do
      if random value rand < r_accept then
        Choose value from HM for the variable i;
        if rand < r_pa then
          Adjust the value by adding a small amount;
        end if
      else
        Choose a new random value;
      end if
    end while
    Accept the new harmony if better;
  end while
  Find the current best harmony (solution);
end
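A minimal Python sketch of the loop above, not the implementation of [70]; the `bandwidth` parameter, replacement of the worst harmony, and the test objective are illustrative assumptions:

```python
import random

def harmony_search(f, lower, upper, hms=10, r_accept=0.9, r_pa=0.3,
                   bandwidth=0.1, iters=2000):
    """Minimal harmony search sketch for a box-constrained minimization."""
    n = len(lower)
    rand_x = lambda: [random.uniform(lower[j], upper[j]) for j in range(n)]
    hm = [rand_x() for _ in range(hms)]              # harmony memory
    for _ in range(iters):
        new = []
        for j in range(n):
            if random.random() < r_accept:
                xj = random.choice(hm)[j]            # memory consideration
                if random.random() < r_pa:           # pitch adjustment
                    xj += bandwidth * random.uniform(-1, 1)
                    xj = min(max(xj, lower[j]), upper[j])
            else:
                xj = random.uniform(lower[j], upper[j])  # random selection
            new.append(xj)
        # accept the new harmony if it beats the worst one in memory
        worst = max(range(hms), key=lambda i: f(hm[i]))
        if f(new) < f(hm[worst]):
            hm[worst] = new
    return min(hm, key=f)

best = harmony_search(lambda x: (x[0] - 1) ** 2 + x[1] ** 2, [-5, -5], [5, 5])
```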

Several examples of structural engineering problems that have been solved using an HS algorithm are: optimization of truss structures [50,72], optimization of pin connected structures [73], minimum cost design of steel frames [41], and optimum design of steel frames [40,30,45], steel sway frames [57], cellular beams [74] and reinforced concrete frames [75]. Several structural design optimization problems are tackled using an HS algorithm, including sizing and configuration for a truss structure, pressure vessel design, and welded beam design, in [51]. See also [44,46,31,76,77].

3.2. Simulated annealing

A Simulated Annealing (SA) algorithm [78,79] is a probabilistic heuristic that mimics the annealing process used in materials science. During this process, a material is heated to high temperatures, causing atoms to move from their initial positions and randomly pass through higher energy states. As the temperature is slowly lowered, the atoms settle into a new configuration that hopefully has a lower internal energy. Translating this process to an optimization problem, the initial state can be thought of as a local minimum. The heating of the material translates to replacing the current solution with a new random solution. The new solution may be accepted according to a probability based on the resulting change in function value and on a 'temperature' measure, which slowly decreases as iterations continue. The temperature parameter allows solutions with a higher objective value to be accepted, thus enabling escape from local minima. The basic structure of a SA algorithm is as follows [80]:

procedure SimulatedAnnealing
begin
  Select an initial state i ∈ S and initial temperature T > 0;
  Set temperature change counter t := 0;
  while (not termination condition) do
  begin
    Set repetition counter n := 0;
    repeat
      Generate state j, a randomly chosen neighbor of i;
      Calculate Δ = f(j) − f(i);
      if Δ < 0 then i ← j
      else if random(0, 1) < exp(−Δ/T) then i ← j;
      n ← n + 1;
    until n = N(t);
    t ← t + 1;
    T ← T(t);
  end
end
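A minimal Python sketch of the SA loop, assuming a geometric cooling schedule and a Gaussian neighbor move; both are common illustrative choices, not prescribed by [78,79]:

```python
import math
import random

def simulated_annealing(f, x0, t0=1.0, cooling=0.95, n_inner=50, t_min=1e-3):
    """Minimal simulated annealing sketch with geometric cooling T <- 0.95 T."""
    x, fx, t = list(x0), f(x0), t0
    best, fbest = list(x), fx
    while t > t_min:
        for _ in range(n_inner):
            # neighbor: random perturbation, roughly scaled by temperature
            y = [xi + random.gauss(0, 1) * max(t, 0.05) for xi in x]
            fy = f(y)
            d = fy - fx
            # always accept downhill moves; accept uphill with prob exp(-d/T)
            if d < 0 or random.random() < math.exp(-d / t):
                x, fx = y, fy
                if fx < fbest:
                    best, fbest = list(x), fx
        t *= cooling
    return best

best = simulated_annealing(lambda x: x[0] ** 2 + x[1] ** 2, [4.0, -3.0])
```

Tracking `best` separately from the current state is a practical safeguard, since the uphill acceptance rule means the final state is not necessarily the best one visited.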

SA algorithms for structural engineering have been used mainly in design optimization. Some examples are optimizing tensegrity systems [81] and the design optimization of truss structures [26], laminated composite structures [82,36], steel frames [83,30], cross-sections [84] and concrete frames [85]. See also [86,42,77,58].

3.3. Ray optimization

Inspired by the laws that govern the transition of a light ray from one medium to another, Ray Optimization (RO) is relatively new to structural engineering. The algorithm employs a number of agents that search the space. Each agent can be thought of as a particle of light, with a location and a direction. At each iteration, each agent computes an 'origin', a point defined as the average of the globally best known solution and the agent's own best known solution. Using Snell's refraction law and a small random perturbation, each agent's direction is then adjusted to move towards the 'origin', and the agent's location is updated by moving in the new direction. The basic structure of RO is as follows [87]:


procedure RayOptimization
begin
  Generate initial positions and directions for agents i;
  while (not termination condition) do
  begin
    if an agent violates a boundary, then fix its position;
    Evaluate objective for each agent;
    Determine the so-far best global solution g;
    For each agent, determine the so-far best position and store as local best b_i;
    Check stopping conditions;
    Compute origin O_i for each agent: O_i = (b_i + g)/2;
    Apply Snell's refraction law and a random perturbation to determine each agent's movement towards its origin;
  end
end

Ray optimization has been successfully applied to spring design, welded beam design, and truss design [87,88].
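The following Python sketch captures only the 'move toward the origin O_i with a random perturbation' idea; the Snell-refraction direction update of [87] is deliberately omitted, so this is a simplified stand-in rather than the RO algorithm itself, with illustrative step and noise parameters:

```python
import random

def ray_optimization_sketch(f, lower, upper, n_agents=20, iters=300):
    """Simplified RO-style search: agents step halfway toward the origin
    O_i = (b_i + g)/2, plus a small random perturbation."""
    n = len(lower)
    clip = lambda x: [min(max(x[j], lower[j]), upper[j]) for j in range(n)]
    agents = [[random.uniform(lower[j], upper[j]) for j in range(n)]
              for _ in range(n_agents)]
    pbest = [list(a) for a in agents]            # local bests b_i
    g = min(pbest, key=f)                        # global best
    for _ in range(iters):
        for i, a in enumerate(agents):
            o = [(pbest[i][j] + g[j]) / 2 for j in range(n)]   # origin O_i
            step = [0.5 * (o[j] - a[j]) + 0.05 * random.uniform(-1, 1)
                    for j in range(n)]
            agents[i] = clip([a[j] + step[j] for j in range(n)])
            if f(agents[i]) < f(pbest[i]):
                pbest[i] = list(agents[i])
        g = min(pbest + [g], key=f)
    return g

best = ray_optimization_sketch(lambda x: x[0] ** 2 + x[1] ** 2, [-5, -5], [5, 5])
```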

3.4. Tabu search

The Tabu Search (TS) method, formally proposed by Glover in 1989 [89], is a local search heuristic that works with other algorithms to overcome the restrictions of local optimality. It is applied to constrained combinatorial optimization problems that are discrete in nature. To describe the process of a tabu search, we select an initial solution x ∈ X, where X is the feasible set. We let S(x) be the set of moves that take x to an adjacent extreme point. Let T ⊆ S, where T is the set of tabu moves. The set T is determined by a function that employs information from the search process up to t iterations prior to the current iteration. To determine membership in T, there may be an itemized list or a set of tabu conditions, i.e.,

T(x) = {s ∈ S : s violates the tabu conditions}.

As a pseudocode, the TS method has the following form [89]:

procedure TabuSearch
begin
  Select an initial x ∈ X;
  x* := x, T := ∅, k ← 0;
  while (not termination condition) do
  begin
    if S(x) − T is empty, stop;
    else
      Set k ← k + 1;
      Select s_k ∈ S(x) − T such that s_k(x) = OPTIMUM(s(x) : s ∈ S(x) − T);
    Let x := s_k(x);
    if f(x) < f(x*) then x* := x;
    Check stopping conditions;
    Update T;
  end
end

The TS allows an algorithm to store past information and use it to improve the steps taken; it can prevent an algorithm from converging back to a local optimum. In structural engineering, it has been used to optimize the structural weight of frames [90], to optimize the design of steel structures [83,30] and truss structures [26], and to evaluate the seismic performance of optimized frame structures [91]. See also [45].

3.5. Strengths and limitations of physical and stochastic algorithms

Like evolutionary algorithms, physical and stochastic algorithms have no mathematical convergence theory. These algorithms are designed to break free from local minimizers and have often been found successful in acquiring better global solutions than other algorithms. However, the tendency to leave local minimizers can cause difficulty converging. As such, physical and stochastic algorithms are often used in conjunction with other algorithms that are designed to zoom in on local solutions. Because these methods are stochastic, their results can be difficult to reproduce: running the same algorithm on the same problem may yield widely different answers. It has been argued that this contradicts the scientific desire for reproducibility of experiments. This can be overcome by careful programming and retention of the random number streams used.

4. Swarm algorithms

Swarm algorithms imitate the processes of decentralized, self-organized systems, which can be either natural or artificial in nature. The most commonly used swarm algorithms in structural engineering model biological systems that use simple rules, which result in the development of an 'intelligent' system behavior. The following swarm algorithms will be discussed in the subsequent sub-sections: ant colony optimization, particle swarm optimization, shuffled frog-leaping, and artificial bee colony.

4.1. Ant colony optimization

As the name suggests, an Ant Colony Optimization (ACO) algorithm follows the processes of an ant colony searching for food. This algorithm is a stochastic combinatorial optimization method that uses mathematical principles from graph theory. Basically, it models the process of ant foraging by pheromone communication through path formation. A detailed description of the Ant System, as originally named by Dorigo, Maniezzo and Colorni, can be found in [92].

To discuss ACO, it is necessary to define some terms from graph theory. A mathematical graph can be thought of as a collection of dots connected by a series of lines. Mathematically, the dots are called nodes (or vertices) and represented by an index i. A line connecting node i to node j is called an edge and represented by a pair of indices (i, j). A path is a way of getting from one node to another node by traveling along the edges. In ACO, the optimization problem is formulated in terms of determining the shortest path on a graph.

In brief, the positions for each ant are selected and the pheromone trail intensities at iteration t = 0, denoted τ_ij(t), for each edge (i, j) are initialized. Thereafter, every ant moves from its current position to another position based on a probability function, which depends on two 'desirability measures': pheromone trail intensity and visibility [92]. For a trail travelled by many ants, the pheromone intensity will be strong, indicating a favorable path. The visibility measure favors proximity, making closer positions more desirable. After a set number of iterations, all ants will have completed a tour of positions and a measure of the change in pheromone trail intensities will be updated. This cycle continues until either a maximum number of cycles has been completed or every ant has followed the same tour. The general structure of an ACO algorithm is as follows [92]:

procedure AntColony
begin
  For every edge (i, j), initialize τ_ij(t);
  Place m ants on n nodes;
  while (not termination condition) do
  begin
    for s = 1, ..., n − 1 do
      for k = 1, ..., m do
        Move the kth ant to node j using probability function p_ij^k(t);
      end
    end
    Move each ant to its corresponding starting node;
    Calculate the length L_k of each ant's tour;
    Update shortest path;
    Update pheromone trail;
  end
end

There are three processes that account for the successful nature of this algorithm: positive feedback, distributed computation, and a constructive greedy heuristic. Positive feedback is used as a search and optimization tool: if a choice is made between different 'path' options and the result is good, then that choice will be more favorable in the future. This results in the quick discovery of good solutions. Distributed computation mimics the increased effectiveness of a search carried out by a population of ants working cooperatively compared to the same number of ants working individually. By incorporating this idea into the algorithm, premature convergence is avoided. A greedy heuristic, i.e., one in which only locally optimal moves are allowed, is used to ensure that reasonable solutions are found early in the search process.

Some structural engineering applications that ACO has been applied to include optimizing bridge deck rehabilitation [20], minimum weight and compliance problems in structural topology design [93] and design optimization of truss structures [72], concrete frames [75] and steel frames [39,30,31,49,94]. See also [40,45,46,76,52,58].

4.2. Particle swarm optimization

Particle Swarm Optimization (PSO) algorithms mimic animal flocking behaviors. These algorithms, originally attributed to Eberhart, Kennedy and Shi [95,96], have a stochastic nature similar to GAs and, like GAs, work with a set of potential solutions and the concept of 'fitness'.
Essentially, particles (candidate solutions) move around the search space, iteratively improving their fitness value according to a given quality measure. Each particle is influenced by its neighbors. Simple mathematical formulas for position x_id and velocity v_id are used to move the particles i through the d-dimensional hyperspace, accelerating towards 'better' solutions pbest_i. For a detailed description of PSO algorithms, see [95]. The general structure of a PSO algorithm is as follows [97]:

procedure ParticleSwarm
begin
  Initialize x_id, v_id and pbest_i for each particle i;
  while (not termination condition) do
  begin
    for each particle i
      Evaluate f(x_i);
      Update pbest_i;
    end
    for each particle i
      Set g equal to the index of the neighbor with the best pbest_i;
      Use g to calculate v_id;
      Update x_id = x_id + v_id;
      Evaluate f(x_i);
      if f(x_i) < pbest_i then update pbest_i;
    end
  end
end

PSO algorithms have been applied to several structural engineering problems, such as the optimization of a transport aircraft wing [98], optimizing bridge deck rehabilitation [20], optimization of pin connected structures [73], structural damage identification [86,47], continuum structural topology design [99] and optimum design of reinforced concrete frames [75], cellular beams [74], steel structures [30] and truss structures [43,54,72,46,52]. See also [37,44,45,76,53,77,58].

4.3. Shuffled frog-leaping

The Shuffled Frog-Leaping (SFL) method is a local search heuristic proposed by Eusuff, Lansey and Pasha in 2006 [100]. It belongs to the recent group of evolutionary memetic algorithms. A memetic algorithm, like other swarm algorithms, is a population-based approach influenced by natural memetics. As the name suggests, the SFL method mimics the actions of frogs in a swamp. Each stone is a discrete location and the frogs are trying to find the stone with the largest food source. The frogs are allowed to communicate with other frogs to improve their position. Basically, the SFL algorithm allows for the separate evolution of communities, and then shuffles these communities. The shuffling process results in local search information being exchanged between communities. This exchange of information helps the algorithm move towards a global optimum. In general, the global exploration SFL algorithm has the following form [100]:

procedure ShuffledFrogLeaping
begin
  Initialize the number of memeplexes m and the number of frogs n in each memeplex;
  Sample F = mn virtual frogs U(1), ..., U(F);
  Compute performance value f(i) for each frog U(i);
  Sort frogs in order of decreasing performance, store in array X;
  Set PX equal to the best frog's position;
  while (not termination condition) do
  begin
    Partition frogs into memeplexes Y_1, ..., Y_m (n frogs in each) according to
      Y_k = [U(j)_k, f(j)_k | U(j)_k = U(k + m(j − 1)), f(j)_k = f(k + m(j − 1)), j = 1, ..., n];
    Perform memetic evolution within each memeplex (for details of the local exploration, see [100]);
    Replace Y_1, ..., Y_m into X in order of decreasing performance;
    Update PX;
  end
end


SFL methods have been applied to the optimization of pipe sizes for water distribution network design [42] and bridge deck repairs [20,21].

4.4. Artificial bee colony

The Artificial Bee Colony (ABC) algorithm, proposed by Karaboga in 2005 [101], follows the food foraging behavior of honey bee swarms. There are three groups of bees in the model: the scout bees, which fly randomly in the search space; the employed bees, which select a random solution to be perturbed based on the exploitation of the neighborhood of their food sources; and the onlooker bees, which are placed on food sources according to a probability-based selection process [102]. The algorithm is based on the amount of nectar at each of the n food sources, with onlookers preferring food sources with high probability values. If a new source has a higher nectar amount than a source in a bee's memory, then the new position is remembered and the previous position is forgotten. If a predetermined number of trials, controlled by the parameter limit, shows no improvement to a solution, then the food source is abandoned and the corresponding employed bee becomes a scout bee. The general structure of the ABC algorithm is as follows [102]:

procedure ArtificialBeeColony
begin
  Initialize n, limit, and food positions x_i for i = 1, ..., n, each with dimension d;
  Evaluate the fitness of each food position;
  while (not termination condition) do
  begin
    Employed phase:
      Produce new solutions with k ∈ {1, ..., n}, j ∈ {1, ..., d}, φ ∈ [0, 1] chosen at random, according to v_ij = x_ij + φ_ij (x_ij − x_kj);
      Evaluate solutions;
      Apply greedy selection for employed bees;
    Onlooker phase:
      Calculate probability values for each solution x_i according to p_i = f_i / Σ_{j=1}^{n} f_j;
      Produce new solutions from the x_i selected using p_i;
      Evaluate these solutions;
      Apply greedy selection for onlooker bees;
    Scout phase:
      Find abandoned solutions: if limit is exceeded, replace with a new random solution;
    Update best solution;
  end
end

For an introduction and references to the different bee optimization methods, see the introduction of [77]. The ABC algorithm described above has been applied to structural optimization problems involving truss structures [44,77,58], laminated composite components [53], inverse analysis of dam-foundation systems [48] and welded beam and coil spring design [55].

4.5. Strengths and limitations of swarm algorithms

Like other heuristic methods, swarm algorithms have no mathematical convergence theory. Swarm algorithms are often designed with very specific problems in mind, and as a result may be ineffective on problems with different structures. However, when applied to the specific problem they are designed for, swarm algorithms have been found to be highly effective.

5. Direct search methods

The research area of Derivative-free Optimization has blossomed in recent years. As previously stated, these methods do not require derivative information and are supported by mathematical convergence theory. The following Derivative-free Optimization algorithms will be discussed in the subsequent sub-sections: directional direct search, simplicial direct search, simplex gradient methods and trust region methods.

5.1. Directional direct search

In Directional Direct Search (DDS) methods, a set of directions with suitable features is used to generate a finite set of points at which the objective function is evaluated. An example of such a set of directions is a positive basis. A finite or an infinite set of positive bases may be used during the algorithm. Another example is an integer lattice, which is constructed from a positive basis. A well-known class of mesh-based directional direct search methods is Mesh Adaptive Direct Search (MADS), proposed by Audet and Dennis in 2006 [103]. The general structure of a DDS method is as follows [104]:

procedure DirectSearch
begin
  Initialize x_0 and a set of directions D;
  while (not termination condition) do
  begin
    Search for a point with f(x) < f(x_k) (optional);
    Poll points from {x_k + α_k d : d ∈ D_k ⊆ D};
    if f(x_k + α_k d_k) < f(x_k) then
      Stop polling;
      x_{k+1} ← x_k + α_k d_k;
    else
      x_{k+1} ← x_k;
    Update mesh parameter α_k;
  end
end
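A minimal directional direct search sketch using the positive basis {±e_1, ..., ±e_n} (compass search); MADS [103] uses richer, mesh-adapted direction sets and a more careful step-size update than the simple doubling/halving assumed here:

```python
def coordinate_search(f, x0, alpha=1.0, tol=1e-6):
    """Compass search: poll along +/- coordinate directions, expand the
    step after a successful poll, contract it after a failed one."""
    x, fx, n = list(x0), f(x0), len(x0)
    while alpha > tol:
        improved = False
        for j in range(n):
            for s in (+1, -1):                 # poll x_k + alpha * (+/- e_j)
                y = list(x)
                y[j] += s * alpha
                fy = f(y)
                if fy < fx:
                    x, fx, improved = y, fy, True
                    break
            if improved:
                break
        if improved:
            alpha *= 2.0                       # expand after success
        else:
            alpha *= 0.5                       # contract after a failed poll
    return x

best = coordinate_search(lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2, [0.0, 0.0])
```

On this smooth test function the method stops only once the step size alpha is below `tol`, at which point no polled direction improves the objective.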

DDS methods have been applied to structural engineering problems such as optimizing braced steel frameworks [105], structural damage detection [106], and design optimization of reinforced concrete flat slab buildings [33] and viscous dampers [35]. In [82], a set of current configurations is used within a simulated annealing framework to create a direct search simulated annealing (DSA) method for design optimization of laminated composite structures.

5.2. Simplicial direct search

In Simplicial Direct Search (SDS) methods, the algorithm evaluates the function at a set of points that forms a simplex and uses those function values to decide the next move. A simplex in Rn is the convex hull of a set of n + 1 affinely independent points. By evaluating the function at a set of points that forms a simplex, the algorithm collects sufficient information from around the current iterate. (A shifted set of n + 1 affinely independent points forms a set of linearly independent points, i.e., the shifted set spans

Rn.) The most well-known simplex-based simplicial direct search method is the Nelder-Mead method [107] (also known as the NM, the amoeba or the adaptive simplex method). We note that the original Nelder-Mead method proposed by Nelder and Mead in 1965 [107] does not have convergence theory, but many variants of the method do. SDS methods have been used in several structural engineering applications, including structural damage identification [86], truss design optimization [27] and estimation of a crack location and depth in a cantilever beam [37]. See also [36].
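The classic Nelder-Mead moves (reflection, expansion, contraction, shrink) can be sketched as follows. This is a deliberately minimal version, without the safeguards that provably convergent variants add, and with a simplified contraction rule:

```python
def nelder_mead(f, x0, step=0.5, tol=1e-10, max_iter=500):
    """Minimal Nelder-Mead sketch: reflect, expand, contract, shrink."""
    n = len(x0)
    # Initial simplex: x0 plus a step along each coordinate direction.
    simplex = [list(x0)] + [
        [x0[j] + (step if j == i else 0.0) for j in range(n)] for i in range(n)
    ]
    fvals = [f(x) for x in simplex]
    for _ in range(max_iter):
        # Order vertices from best to worst.
        order = sorted(range(n + 1), key=lambda i: fvals[i])
        simplex = [simplex[i] for i in order]
        fvals = [fvals[i] for i in order]
        if fvals[-1] - fvals[0] < tol:
            break
        # Centroid of all vertices except the worst.
        c = [sum(v[j] for v in simplex[:-1]) / n for j in range(n)]
        xr = [c[j] + (c[j] - simplex[-1][j]) for j in range(n)]   # reflection
        fr = f(xr)
        if fvals[0] <= fr < fvals[-2]:
            simplex[-1], fvals[-1] = xr, fr
        elif fr < fvals[0]:                                       # expansion
            xe = [c[j] + 2.0 * (c[j] - simplex[-1][j]) for j in range(n)]
            fe = f(xe)
            simplex[-1], fvals[-1] = (xe, fe) if fe < fr else (xr, fr)
        else:                                                     # contraction
            xc = [c[j] + 0.5 * (simplex[-1][j] - c[j]) for j in range(n)]
            fc = f(xc)
            if fc < fvals[-1]:
                simplex[-1], fvals[-1] = xc, fc
            else:                                                 # shrink toward best
                best = simplex[0]
                simplex = [best] + [
                    [best[j] + 0.5 * (v[j] - best[j]) for j in range(n)]
                    for v in simplex[1:]
                ]
                fvals = [fvals[0]] + [f(v) for v in simplex[1:]]
    i = min(range(n + 1), key=lambda i: fvals[i])
    return simplex[i], fvals[i]

# Usage on a smooth test function.
xmin, fmin = nelder_mead(lambda x: (x[0] - 3) ** 2 + 10 * (x[1] - 1) ** 2,
                         [0.0, 0.0])
```

Each iteration needs only one or two new function evaluations (except a shrink), which is why the method remains popular when evaluations are expensive.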

5.3. Simplex gradient methods

A simplex gradient method (SGM) uses a simplex gradient instead of the true gradient to generate search directions that point towards nearby (local) minimizers. A simplex gradient is the gradient of the linear interpolation of f over a set of n + 1 points in Rn. Unlike SDS methods, which use simplices to provide a set of points at which to evaluate the function, SGMs calculate simplex gradients to find descent directions. The general structure of an SGM is as follows [104]:

procedure SimplexGradient
begin
  Initialize x0, simplex Y0, search radius Δ0 and simplex accuracy measure μ0;
  Initialize Armijo-like line search parameter η;
  while (not termination condition) do
  begin
    Compute a simplex gradient ∇Sf(Yk) such that Δk ≤ μk ‖∇Sf(Yk)‖;
    Line search: find tk > 0 such that f(xk − tk ∇Sf(Yk)) < f(xk) − η tk ‖∇Sf(Yk)‖²;
    if no such tk is found
      Decrease μk;
    else
      Let xk+1 = argmin{f(y) : y ∈ Sk}, where Sk contains all f-evaluations from this iteration;
  end
end

An application of an SGM in structural engineering is seen in [55] for welded beam and coil spring design. The Robust Approximate Gradient Sampling (RAGS) algorithm is a novel derivative-free optimization algorithm for finite minimax problems, proposed by Hare and Nutini in 2012 [108]. The RAGS method is an improvement on SGMs for structured functions. By exploiting the substructure of the finite max function, the RAGS algorithm is able to minimize along non-differentiable ridges of nonsmooth functions and converge to minima of the objective function. The general structure of the RAGS algorithm is as follows [108]:

procedure RAGS
begin
  Initialize x0, search radius Δ0, Armijo-like parameter η and other parameters;
  begin
    Generate a set of n + 1 points;
    Use the points to generate the robust approximate subdifferential GkY;
    Set search direction dkY = Proj(0 | GkY), the projection of 0 onto GkY;
    if Δk small, but |dkY| large
      Carry out line search: find tk > 0 such that f(xk + tk dkY) < f(xk) − η tk |dkY|²;
      Success: update xk and loop;
      Failure: decrease accuracy measure and loop;
    else if Δk large
      Decrease Δk and loop;
    else
      Terminate;
  end
end
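To make the simplex-gradient machinery used by both methods above concrete, the following sketch builds the gradient of the linear interpolant of f over a coordinate simplex of radius h and uses it in an Armijo-style backtracking line search. It is a simplified illustration: the simplex is fixed, whereas the methods above adapt the search radius and accuracy measure.

```python
def solve(A, b):
    """Tiny Gaussian elimination with partial pivoting (for the sketch only)."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            m = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= m * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def simplex_gradient(f, y0, h=1e-3):
    """Gradient of the linear interpolant of f over {y0, y0 + h e_1, ..., y0 + h e_n}."""
    n = len(y0)
    f0 = f(y0)
    A, b = [], []
    for i in range(n):
        yi = [y0[j] + (h if j == i else 0.0) for j in range(n)]
        A.append([yi[j] - y0[j] for j in range(n)])  # rows y_i - y_0
        b.append(f(yi) - f0)                         # entries f(y_i) - f(y_0)
    return solve(A, b)

def sgm_step(f, x, eta=1e-4):
    """One Armijo-style backtracking step along the negative simplex gradient."""
    g = simplex_gradient(f, x)
    g2 = sum(gi * gi for gi in g)
    t = 1.0
    while t > 1e-12:
        y = [xi - t * gi for xi, gi in zip(x, g)]
        if f(y) < f(x) - eta * t * g2:   # sufficient decrease condition
            return y
        t *= 0.5
    return x                             # no acceptable step found

# Usage: repeated steps drive the iterate toward the minimizer of a quadratic.
f = lambda v: v[0] ** 2 + 2.0 * v[1] ** 2
x = [4.0, -3.0]
for _ in range(100):
    x = sgm_step(f, x)
```

Because the simplex gradient only matches the true gradient to within O(h), the iterates stagnate near an h-dependent neighborhood of the minimizer; this is exactly why the pseudocode above ties the search radius to an accuracy measure and shrinks it on line-search failure.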

In [38], the RAGS algorithm is shown to be a quickly converging and efficient method for solving the problem of minimizing the maximum inter-story drift between two buildings.

5.4. Trust-region methods

Trust-region (TR) methods locally minimize quadratic models of the objective function over regions where the quadratic model is ‘‘trusted’’ to be accurate. TR methods are Derivative-free Optimization methods with an abundance of publications establishing the supporting convergence analysis and theory. An example of a TR method used in a practical application can be found in [109], where the design of a vehicle door is optimized. To the authors’ knowledge, TR methods have not been applied in structural engineering to date.

5.5. Strengths and limitations of derivative-free optimization

Derivative-free Optimization’s strongest aspect is its mathematical convergence theory, which guarantees the quality of the final solution. This makes Derivative-free Optimization very well suited for use as a final step to ensure local optimality. However, Derivative-free Optimization methods typically scale poorly with dimension, so many require very large numbers of function calls on problems with a large number of variables.

6. Discussion and conclusion

In the previous sections, we provided multiple references that use non-gradient methods in structural engineering applications. We provide a summary of the methods in Table 2. In Table 1 below, we summarize a few of the papers that compared the performance of several non-gradient methods on one application. While many other papers compare various non-gradient methods, we limit ourselves to those that make the comparison on a structural engineering problem using at least two algorithms discussed in this article.

Notes for the ‘Other’ column of Table 1 (• compared, ✓ winning):
1. • Other memetic methods.
2. • Directional direct search (MADS) and ✓ RAGS.
3. ✓ Evolutionary strategies.
4. ✓ Adaptive harmony search.
5. • Branch-and-bound metaheuristic.
6. ✓ Bee colony optimization and • simplex method.

Non-gradient methods are clearly in wide use in structural engineering applications. As seen in Table 1, three papers compare non-gradient methods with gradient-based methods [43,54,55]. In [43], the presented PSO method is shown to generate results comparable to several gradient-based methods.


Table 1
Comparison of methods: reference, application description and algorithms compared (• indicates that the corresponding algorithm was compared, ✓ indicates the ‘winning’ algorithm(s)). The algorithms compared are gradient-based methods, GA, HS, TS, SA, ACO, PSO, SFL and other methods (identified in the numbered notes).

Refs.  Application
[20]   Bridge deck rehabilitation (note 1)
[38]   Seismic dampers (note 2)
[40]   Steel frames
[30]   Steel frames (note 3)
[45]   Steel frames (note 4)
[83]   Steel frames
[43]   Truss structures
[26]   Truss structures (note 5)
[54]   Truss structures
[42]   Water distribution network design
[55]   Welded beam/coil design (note 6)

In [55], the presented bee colony algorithm is also shown to generate results comparable to a gradient-based method. In [54], the gradient-based methods win. As stated before, non-gradient methods are useful when gradient information is unavailable, unreliable or expensive in terms of computation time. However, when compared against a gradient-based method on a function with gradient information, a non-gradient method will almost always come up short. We also observe from Table 1, as well as Section 2, that evolutionary algorithms are the most commonly used non-gradient methods in structural optimization. Evolutionary algorithms can handle both discrete and continuous problems, whether constrained or unconstrained. Indeed, GAs are very versatile with respect to the types of problems they can be applied to. (For a complete summary table of the methods presented in this paper and the types of problems they can be applied to, see Table 2 at the end of this section.) However, this does not necessarily imply that evolutionary algorithms are the most appropriate method for black-box problems in structural engineering. In fact, we see multiple papers using evolutionary algorithms as benchmark methods to compare other methods against (see the end of Section 2.1). In all of these


papers, evolutionary algorithms are shown to perform comparably or worse with respect to efficiency and solution quality. This observation suggests that evolutionary algorithms may be overused, specifically for continuous problems. As evolutionary algorithms were originally designed for discrete problems, it is not surprising to see an evolutionary algorithm come in second place to an algorithm designed for continuous problems. That being said, it would be inaccurate to conclude that evolutionary algorithms are ‘bad’. Most of the papers in question focus on newly designed algorithms by the papers’ authors; as such, the evolutionary algorithms used may not have been optimally tuned to the problem in question. It is safe to say that it may be beneficial to use a method designed to deal with the specific structure of the problem under consideration. We see methods designed for specific problems in the area of Derivative-free Optimization. Since these algorithms have supporting convergence theory, specific assumptions are usually made about the objective function. Supporting convergence theory allows us to escape the uncertainty of a heuristic method: we know that when a Derivative-free Optimization algorithm terminates, it has found a locally optimal (or, in the case of a convex function, globally optimal) solution. Furthermore, Derivative-free Optimiza-

Table 2
Summary of methods: section number, algorithm, abbreviation, description of the algorithm, whether the algorithm is used for constrained or unconstrained and discrete or continuous optimization problems (bold in the original indicates the primary problem type), and any additional assumptions on the problem.

Section | Algorithm (Abbr.) | Description | Constrained/unconstrained | Discrete/continuous | Other assumptions
2.1 | Genetic algorithm (GA) | Evolutionary gene-based heuristic | Constrained and unconstrained | Discrete and continuous | –
3.1 | Harmony search (HS) | Music-inspired heuristic | Constrained and unconstrained | Discrete and continuous | –
3.2 | Simulated annealing (SA) | Materials-science based heuristic | Constrained and unconstrained | Discrete and continuous | Large search space
3.3 | Ray optimization (RO) | Light-ray based heuristic | Constrained | Continuous | Non-linear
3.4 | Tabu search (TS) | Local search heuristic | Constrained | Discrete and continuous | Combinatorial
4.1 | Ant colony optimization (ACO) | Stochastic pheromone-mimicking heuristic | Unconstrained and constrained | Discrete | Stochastic combinatorial
4.2 | Particle swarm optimization (PSO) | Flocking-behavior based heuristic | Unconstrained and constrained | Continuous and discrete | –
4.3 | Shuffled frog-leaping (SFL) | Evolutionary meme-based heuristic | Constrained and unconstrained | Discrete | Combinatorial
4.4 | Bee colony optimization (BCO) | Bee-foraging based heuristic | Unconstrained and constrained | Continuous and discrete | Functional, combinatorial
5.1 | Directional direct search (DDS) | DFO mesh/lattice based algorithm | Unconstrained and constrained | Continuous | –
5.2 | Simplicial direct search (SDS) | DFO simplex based algorithm | Unconstrained and constrained | Continuous | –
5.3 | Robust approx. grad. sampling (RAGS) | DFO substructure exploiting algorithm | Unconstrained | Continuous | Finite minimax problem


tion methods can easily incorporate a heuristic on top of their regular structure to decrease convergence time and increase solution quality. As seen in Table 1, heuristics such as SA and TS are commonly used in structural engineering. Like evolutionary algorithms, a heuristic places very few limitations on the type of problem it can be applied to. As stated above, heuristics are often used in conjunction with other algorithms, which is another way that algorithms can easily be tailored to the problem at hand. In conclusion, non-gradient methods are widely used in structural engineering applications. Most dominantly, we see heuristics being applied to various problems. The strengths of these methods include their flexibility and versatility in being applied to many different problem types. For difficult, restrictive problems, these methods are easy to implement and can provide reasonable solutions. However, tailoring an optimization method to the problem at hand, or using a method already tailored to it, can yield a significant increase in solution quality and algorithm efficiency.

References

[1] Hasançebi O, Erbatur F. Layout optimisation of trusses using simulated annealing. Adv Eng Softw 2002;33(7–10):681–96. [2] Tang W, Tong L, Gu Y. Improved genetic algorithm for design optimization of truss structures with sizing, shape and topology variables. Int J Numer Methods Eng 2005;62(13):1737–62. [3] Adeli H, Kamal O. Efficient optimization of plane trusses. Adv Eng Softw Workst 1991;13(3):116–22. [4] Miguel LFF, Lopez RH, Miguel LFF. Multimodal size, shape, and topology optimisation of truss structures using the firefly algorithm. Adv Eng Softw 2013;56:23–37. [5] Jenkins WM. Towards structural optimization via the genetic algorithm. Comput Struct 1991;40(5):1321–7. [6] Lagaros ND, Papadrakakis M, Kokossalakis G. Structural optimization using evolutionary algorithms. Comput Struct 2002;80(7–8):571–89. [7] Saka MP.
Optimum design of steel frames using stochastic search techniques based on natural phenomena: a review. Civil engineering computations: tools and techniques. Stirlingshire, UK: Saxe-Coburg Publications; 2007. p. 105–47 [chapter 6]. [8] Sonmez FO. Structural optimization using simulated annealing. InTech; 2008. p. 281–306 [chapter 14]. [9] Poli R. Analysis of the publications on the applications of particle swarm optimisation. J Artif Evol Appl 2008:10. [10] Geem ZW. Harmony search algorithms for structural design optimization. 1st ed. Springer Publishing Company, Incorporated,; 2009. [11] Lamberti L, Pappalettere C. Metaheuristic design optimization of skeletal structures: a review. Comput Technol Rev 2011;4:1–32. [12] Saka MP, Dogan E. Recent developments in metaheuristic algorithms: a review. Comput Technol Rev 2012;5:31–78. [13] Kaveh A, Motie S, Mohammad A, Moslehi M. Magnetic charged system search: a new meta-heuristic algorithm for optimization. Acta Mech 2013;224:85–107. [14] Kaveh A, Talatahari S. A novel heuristic optimization method: charged system search. Acta Mech 2010;213:267–89. [15] Holland JH. Adaptation in natural and artificial systems. Ann Arbor, MI, USA: University of Michigan Press; 1975. [16] Gen M, Cheng R. Genetic algorithms and engineering optimization (engineering design and automation). Wiley-Interscience; 1999. [17] Deng L, Ghosn M, Shao S. Development of a shredding genetic algorithm for structural reliability. Struct Safety 2005;27(2):113–31. [18] Wang J, Ghosn M. Linkage-shredding genetic algorithm for reliability assessment of structural systems. Struct Safety 2005;27(1):49–72. [19] Furuta H, Maeda K, Watanabe E. Application of genetic algorithm to aesthetic design of bridge structures. Comput-Aid Civil Infrastruct Eng 1995;10(6): 415–21. [20] Elbeltagi E, Elbehairy H, Hegazy T, Grierson D. Evolutionary algorithms for optimizing bridge deck rehabilitation. In: Soibelman Lucio, Pena-Mora Feniosky, editors. 
Proceedings of the 2005 ASCE international conference on computing in civil engineering, vol. 179. ASCE; 2005. 12 pp. [21] Elbehairy H, Elbeltagi E, Hegazy T, Soudki K. Comparison of two evolutionary algorithms for optimization of bridge deck repairs. Comput-Aid Civil Infrastruct Eng 2006;21(8):561–72. [22] Fu K, Zhai Y, Zhou S. Optimum design of welded steel plate girder bridges using a genetic algorithm with elitism. J Bridge Eng 2005;10(3):291–301. [23] García-Pérez J, Castellanos F, Díaz O. Optimum seismic zoning for multiple types of structures. Earthq Eng Struct Dynam 2003;32(5):711–30. [24] Li J, Liu W, Bao Y. Genetic algorithm for seismic topology optimization of lifeline network systems. Earthq Eng Struct Dynam 2008;37(11):1295–312.


[25] Dede T, Bekirog˘lu S, Ayvaz Y. Weight minimization of trusses with genetic algorithm. Appl Soft Comput 2011;11(2):2565–75. [26] Manoharan S, Shanmuganathan S. A comparison of search mechanisms for structural optimization. Comput Struct 1999;73(15):363–72. [27] Rahami H, Kaveh A, Aslani M, Najian Asl R. A hybrid modified genetic-Nelder Mead simplex algorithm for large-scale truss optimization. Iran Univ Sci Technol 2011;1(1):29–46. [28] Balling R, Briggs R, Gillman K. Multiple optimum size/shape/topology designs for skeletal structures using a genetic algorithm. J Struct Eng 2006;132(7): 1158–65. [29] Burns SA, editor. State of the art on the use of genetic algorithms in design of steel structures. ASCE; 2002. p. 55–77 [chapter 3]. [30] Hasançebi O, Çarbasß S, Dog˘an E, Erdal F, Saka MP. Comparison of nondeterministic search techniques in the optimum design of real size steel frames. Comput Struct 2010;88(17–18):1033–48. [31] Kaveh A, Talatahari S. An improved ant colony optimization for the design of planar steel frames. Eng Struct 2010;32(3):864–73. [32] Park H, Kwon Y, Seo J, Woo B. Distributed hybrid genetic algorithms for structural optimization on a pc cluster. J Struct Eng 2006;132(12):1890–7. [33] Sahab MG, Ashour AF, Toropov VV. A hybrid genetic algorithm for reinforced concrete flat slab buildings. Comput Struct 2005;83(8–9):551–9. [34] Khedr MAH. Optimum design of steel telecommunication poles using genetic algorithms. Can J Civil Eng 2007;34(12):1567–76. [35] Bigdeli K, Hare W, Tesfamariam S. Optimal design of viscous damper connectors for adjacent structures using genetic algorithm and NelderMead algorithm. In: Proceedings of SPIE conference on smart structures and materials. SPIE; 2012. [36] Akbulut M, Sonmez FO. Design optimization of laminated composites using a new variant of simulated annealing. Comput Struct 2011;89(17–18): 1712–24. [37] Vakil Baghmisheh MT, Peimani M, Sadeghi MH, Ettefagh MM, Tabrizi AF. 
A hybrid particle swarmNelderMead optimization method for crack detection in cantilever beams. Appl Soft Comput 2012;12(8):2217–26. [38] Bigdeli K, Hare W, Nutini J, Tesfamariam S. Optimal design of damper connectors for adjacent buildings. Comput Struct, submitted for publication. 20 pp. [39] Camp CV, Bichon BJ, Stovall SP. Design of steel frames using ant colony optimization. J Struct Eng 2005;131(3):369–79. [40] Degertekin SO. Optimum design of steel frames using harmony search algorithm. Struct Multidiscip Optimiz 2007;36(4):393–401. [41] Degertekin SO, Hayalioglu MS. Harmony search algorithm for minimum cost design of steel frames with semi-rigid connections and column bases. Struct Multidiscip Optimiz 2010;42(5):755–68. [42] Eusuff MM, Lansey KE. Optimization of water distribution network design using the shuffled frog leaping algorithm. J Water Resour Plan Manage 2003;129(3):210–25. [43] Fourie PC, Groenwold AA. The particle swarm optimization algorithm in size and shape optimization. Struct Multidiscip Optimiz 2002;23(4): 259–67. [44] Hadidi A, Kazemzadeh Azad S, Kazemzadeh Azad S. Structural optimization using artificial bee colony algorithm. In: 2nd International conference on engineering optimization; September 2010. [45] Hasançebi O, Erdal F, Saka M. Adaptive harmony search method for structural optimization. J Struct Eng 2010;136(4):419–31. [46] Jansen PW, Perez RE. Constrained structural design optimization via a parallel augmented lagrangian particle swarm optimization approach. Comput Struct 2011;89(13–14):1352–6. [47] Kang F, Li J, Xu Q. Damage detection based on improved particle swarm optimization using vibration data. Appl Soft Comput 2012;12(8):2329–35. [48] Kang F, Li J, Xu Q. Structural inverse analysis by hybrid simplex artificial bee colony algorithms. Comput Struct 2009;87(13–14):861–70. [49] Kaveh A, Farahmand Azar B, Hadidi A, Rezazadeh Sorochi F, Talatahari S. 
Performance-based seismic design of steel frames using ant colony optimization. J Construct Steel Res 2010;66(4):566–74. [50] Lee KS, Geem ZW. A new structural optimization method based on the harmony search algorithm. Comput Struct 2004;82(910):781–98. [51] Lee KS, Geem ZW. A new meta-heuristic algorithm for continuous engineering optimization: harmony search theory and practice. Comput Methods Appl Mech Eng 2005;194(3638):3902–33. [52] Luh GC, Lin CY. Optimal design of truss-structures using particle swarm optimization. Comput Struct 2011;89(2324):2221–32. [53] Omkar SN, Senthilnath J, Khandelwal R, Narayana Naik G, Gopalakrishnan S. Artificial bee colony (ABC) for multi-objective design optimization of composite structures. Appl Soft Comput 2011;11(1):489–99. [54] Perez RE, Behdinan K. Particle swarm approach for structural design optimization. Comput Struct 2007;85(19–20):1579–88. [55] Pham DT, Ghanbarzadeh A, Otri S, Koç E. Optimal design of mechanical components using the bees algorithm. Proc Inst Mech Eng, Part C: J Mech Eng Sci 2009;223(5):1051–6. [56] Rao ARM, Shyju PP. A meta-heuristic algorithm for multi-objective optimal design of hybrid laminate composite structures. Comput-Aid Civil Infrastruct Eng 2010;25(3):149–70. [57] Saka MP. Optimum design of steel sway frames to bs5950 using harmony search algorithm. J Construct Steel Res 2009;65(1):36–43. [58] Sonmez M. Discrete optimum design of truss structures using artificial bee colony algorithm. Struct Multidiscip Optimiz 2011;43(1):85–97.


[59] Rechenberg I. Cybernetic solution path of an experimental problem. Library Trans 1965;1122. [60] Rechenberg I. Evolutionsstrategie: optimierung technischer systeme nach prinzipien der biologischen evolution. Ph.D. thesis; 1971. [61] Schwefel H-P. Kybernetische evolution als strategie der exprimentellen forschung in der strömungstechnik. M.Sc. thesis; 1965. [62] Schwefel H-P. Evolutionsstrategie und numerische optimierung. Dissertation; 1975. [63] Beyer H-G, Schwefel H-P. Evolution strategies: a comprehensive introduction. Nat Comput: Int J 2002;1(1):3–52. [64] Thierauf G, Cai J. Parallel evolution strategy for solving structural optimization. Eng Struct 1997;19(4):318–24. [65] Hasançebi O. Optimization of truss bridges within a specified design domain using evolution strategies. Eng Optimiz 2007;39(6):737–56. [66] Hasançebi O. Adaptive evolution strategies in structural optimization: enhancing their computational performance with applications to largescale structures. Comput Struct 2008;86(1–2):119–32. [67] Papadrakakis M, Lagaros ND, Tsompanakis Y. Structural optimization using evolution strategies and neural networks. Comput Methods Appl Mech Eng 1998;156(1–4):309–33. [68] Chen TY, Chen HC. Mixed-discrete structural optimization using a rank-niche evolution strategy. Eng Optimiz 2009;41(1):39–58. [69] Muc A, Muc-Wierzgon´ M. An evolution strategy in structural optimization problems for plates and shells. Compos Struct 2012;94(4):1461–70. [70] Geem ZW, Kim J, Loganathan GV. A new heuristic optimization algorithm: harmony search. Trans Soc Model Simul Int 2001;76(2):60–8. [71] Geem Zong Woo. Music-inspired harmony search algorithm: theory and applications. Studies in computational intelligence, vol. 191. Springer; 2009. 206 pp. [72] Kaveh A, Talatahari S. Particle swarm optimizer, ant colony strategy and harmony search scheme hybridized for optimization of truss structures. Comput Struct 2009;87(56):267–83. [73] Li LJ, Huang ZB, Liu F, Wu QH. 
A heuristic particle swarm optimizer for optimization of pin connected structures. Comput Struct 2007;85(78):340–9. [74] Erdal F, Dog˘an E, Saka MP. Optimum design of cellular beams using harmony search and particle swarm optimizers. J Construct Steel Res 2011;67(2): 237–47. [75] Kaveh A, Sabzi O. A comparative study of two meta-heuristic algorithms for optimum design of reinforced concrete frames. Int J Civil Eng 2011;9(3):193–206. [76] Kaveh A, Talatahari S. Charged system search for optimal design of frame structures. Appl Soft Comput 2012;12(1):382–93. [77] Sonmez M. Artificial bee colony algorithm for optimization of truss structures. Appl Soft Comput 2011;11(2):2406–18. [78] Kirkpatrick S, Gelatt CD, Vecchi MP. Optimization by simulated annealing. Science 1983;220(4598):671–80. ˇ erny´ V. Thermodynamical approach to the traveling salesman problem: an [79] C efficient simulation algorithm. J Optimiz Theory Appl 1985;45(1):41–51. [80] Eglese RW. Simulated annealing: a tool for operational research. Eur J Oper Res 1990;46(3):271–81. [81] Xu X, Luo Y. Force finding of tensegrity systems using simulated annealing algorithm. J Struct Eng 2010;136(8):1027–31. [82] Akbulut M, Sonmez FO. Optimum design of composite laminates for minimum thickness. Comput Struct 2008;86(21–22):1974–82. [83] Ohsaki M, Kinoshita T, Pan P. Multiobjective heuristic approaches to seismic design of steel frames with standard sections. Earthq Eng Struct Dynam 2007;36(11):1481–95. [84] Serra M. Optimum design of thin-walled closed cross-sections: a numerical approach. Comput Struct 2005;83(4–5):297–302. [85] Paya I, Yepes V, González-Vidosa F, Hospitaler A. Multiobjective optimization of concrete frames by simulated annealing. Comput-Aid Civil Infrastruct Eng 2008;23(8):596–610.

[86] Begambre O, Laier JE. A hybrid particle swarm optimization – simplex algorithm (PSOS) for structural damage identification. Adv Eng Softw 2009;40(9):883–91. [87] Kaveh A, Khayatazad M. A new meta-heuristic method: ray optimization. Comput Struct 2012;112–113:283–94. [88] Kaveh A, Khayatazad M. Ray optimization for size and shape optimization of truss structures. Comput Struct 2013;117:82–94. [89] Glover F. Tabu search – part 1. ORSA J Comput 1989;1(2):190–206. [90] Kargahi M, Anderson JC, Dessouky MM. Structural weight optimization of frames using tabu search. I: Optimization procedure. J Struct Eng 2006;132(12):1858–68. [91] Kargahi M, Anderson JC. Structural weight optimization of frames using tabu search. II: Evaluation and seismic performance. J Struct Eng 2006;132(12): 1869–79. [92] Dorigo M, Maniezzo V, Colorni A. The ant system: optimization by a colony of cooperating agents. IEEE Trans Syst Man Cybernet – Part B 1996;26(1):29–41. [93] Luh GC, Lin CY. Structural topology optimization using ant colony optimization algorithm. Appl Soft Comput 2009;9(4):1343–53. _ Saka MP. Ant colony optimization of irregular steel frames [94] Aydog˘du I, including elemental warping effect. Adv Eng Softw 2012;44(1):150–69. CIVIL-COMP. [95] Kennedy J, Eberhart R. Particle swarm optimization. Proceedings of IEEE international conference on neural networks, 1995, vol. 4. IEEE; 1995. p. 1942–8. [96] Shi Y, Eberhart R. A modified particle swarm optimizer. In: The 1998 IEEE International Conference on IEEE World Congress on Computational Intelligence. Evolutionary Computation Proceedings; 1998. p. 69–73. [97] Kennedy J. Swarm intelligence. In: Handbook of nature-inspired and innovative computing. Springer; 2006. p. 187–219. [98] Venter G, Sobieszczanski-Sobieski J. Multidisciplinary optimization of a transport aircraft wing using particle swarm optimization. Struct Multidiscip Optimiz 2004;26(1–2):121–31. [99] Luh GC, Lin CY, Lin YS. 
A binary particle swarm optimization for continuum structural topology optimization. Appl Soft Comput 2011; 11(2):2833–44. [100] Eusuff M, Lansey K, Pasha F. Shuffled frog-leaping algorithm: a memetic meta-heuristic for discrete optimization. Eng Optimiz 2006;38(2): 129–54. [101] Karaboga D. An idea based on honey bee swarm for numerical optimization. Technical report TR06, Erciyes University; October 2005. [102] Parpinelli RS, Benitez CMV, Lopes HS. Parallel approaches for the artificial bee colony algorithm. Springer; 2011. p. 329–345. [103] Audet C, Dennis Jr JE. Mesh adaptive direct search algorithms for constrained optimization. SIAM J Optimiz 2006;17(1):188–217. [104] Conn A, Scheinberg K, Vicente L. Introduction to derivative-free optimization. MPS/SIAM series on optimization, vol. 8. SIAM; 2009. [105] Baldock R, Shea K, Eley D. Evolving optimized braced steel frameworks for tall buildings using modified pattern search. In: Soibelman Lucio, Pena-Mora Feniosky, editors. Proceedings of the 2005 ASCE international conference on computing in civil engineering, vol. 179. ASCE; 2005. 12 pp. [106] Kourehli SS, Ghodrati Amiri G, Ghafory-Ashtiany M, Bagheri A. Structural damage detection based on incomplete modal data using pattern search algorithm. J Vib Contr 2012. [107] Nelder JA, Mead R. A simplex method for function minimization. Comput J 1965;7(4):308–13. [108] Hare W, Nutini J. A derivative-free approximate gradient sampling algorithm for finite minimax problems. Comput Optimiz Appl 2013, accepted for publication. 33 pp. [109] Chen G, Han X, Liu G, Jiang C, Zhao Z. An efficient multi-objective optimization method for black-box functions using sequential approximate technique. Appl Soft Comput 2012;12(1):14–27.
