Struct Multidisc Optim (2013) 48:821–836 DOI 10.1007/s00158-013-0932-7
RESEARCH PAPER
Adaptive heuristic search algorithm for discrete variables based multi-objective optimization Long Tang · Hu Wang · Guangyao Li · Fengxiang Xu
Received: 29 November 2012 / Revised: 1 March 2013 / Accepted: 26 March 2013 / Published online: 30 April 2013 © Springer-Verlag Berlin Heidelberg 2013
Abstract Although the metamodel technique has been successfully used to enhance the efficiency of multi-objective optimization (MOO) with black-box objective functions, the metamodel can become less accurate or even unavailable when the design variables are discrete. In order to overcome this bottleneck, this work proposes a novel random search algorithm for discrete variables based multi-objective optimization with black-box functions, named k-mean cluster based heuristic sampling with Utopia-Pareto directing adaptive strategy (KCHS-UPDA). This method constructs a few adaptive sampling sets in the solution space and draws samples according to a heuristic probability model. Several benchmark problems are used to test the performance of KCHS-UPDA in terms of closeness, diversity, efficiency and robustness. It is verified that KCHS-UPDA can generally converge to the Pareto frontier with a small number of function evaluations. Finally, a vehicle frontal member crashworthiness optimization is successfully solved by KCHS-UPDA.

Keywords Discrete variables based multi-objective optimization · Random search · UPDA strategy · KCHS method
L. Tang · H. Wang · G. Li · F. Xu
State Key Laboratory of Advanced Design and Manufacturing for Vehicle Body, Hunan University, Changsha 410082, China
e-mail: [email protected]
List of key symbols

v           Choice vector of the discrete variables
x           Solution
F           Vector of the objective functions
n           Number of the discrete variables
n_i         Number of the parameters of the ith discrete variable
n_p         Dimension of the solutions
m           Number of the objective functions
N_i         Number of candidate choices of the ith discrete variable
S^T         Set of total solutions
S^F         Set of feasible solutions
S^E_iter    Set of evaluated solutions until the iterth iteration
x^Mi_iter   Evaluated solution with the minimum value of the ith objective until the iterth iteration (feature solution)
x^UPF_iter  Evaluated solution corresponding to the current UPF until the iterth iteration (feature solution)
S^Mi_iter   Sampling set corresponding to x^Mi_iter
S^UPF_iter  Sampling set corresponding to x^UPF_iter
x^nor       Solution after normalization
nk          Number of the clusters in the k-mean cluster
c           Vector of the cluster centers in the k-mean cluster
P           Probability distribution
G           Cumulative distribution
1 Introduction

In MOO problems, the design variables can be continuous or discrete. This paper is concerned with the discrete case.
Discrete variables can generally be categorized into two classes. Some discrete variables are values taken from continuous intervals, such as the thickness of a vehicle member or the size of a screw (Sharif et al. 2008); they are selected from continuous intervals to meet special commercial or industrial requirements. Other discrete variables are choice variables, whose candidates can be described by a choice set, such as the type of material or the selection of a motor for a car. In fact, a choice variable implies several components that originate from the reformulation of discrete multidimensional sets (Fuchs and Neumaier 2010): for each choice there is a corresponding parameter vector, and the objectives are evaluated based on the parameter vector rather than on the choice itself. In this work, an interval discrete variable is regarded as a choice variable with a single parameter, so that the two types of discrete variable are unified. For an optimization including a series of discrete variables, a solution can be expressed as follows:

$$x = \Phi(v) = \left[\phi_1(v_1)^T, \phi_2(v_2)^T, \dots, \phi_n(v_n)^T\right]^T \quad (1)$$
where $v = (v_1, v_2, \dots, v_n)^T$ is the vector of choice variables; $\phi_i: \mathbb{R} \to \mathbb{R}^{n_i}$ $(i = 1, 2, \dots, n)$ is a mapping that converts the ith choice to the corresponding parameter vector; $n_i$ is the number of parameters of the ith choice variable; and $\Phi: \mathbb{R}^n \to \mathbb{R}^{n_p}$, $n_p = \sum n_i$, converts the choice vector to the entire parameter vector.
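To make the mapping in (1) concrete, here is a minimal sketch (our own illustration; the choice sets, values and names are hypothetical, not taken from the paper):

```python
import numpy as np

# Hypothetical choice sets: for discrete variable i, row v_i of the array
# is the parameter vector phi_i(v_i) of that choice.
CHOICE_SETS = [
    np.array([[1.0], [1.2], [1.5]]),          # interval variable: 1 parameter
    np.array([[0.9, 0.1], [0.4, 0.7]]),       # choice variable: 2 parameters
]

def solution(v):
    """Eq. (1): x = Phi(v), concatenating phi_i(v_i) over all variables."""
    return np.concatenate([CHOICE_SETS[i][vi] for i, vi in enumerate(v)])

print(solution([2, 0]))  # -> [1.5 0.9 0.1]
```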
For discrete variables based multi-objective optimization (DMOO) problems, it is desirable to identify a series of discrete points as an approximation of the Pareto frontier. The most widely used approaches are evolutionary algorithms (Schaumann et al. 1998; Deb 1999; Deb and Goel 2001; Deb et al. 2003; Srinivas and Deb 1994; Nain and Deb 2002; Luh et al. 2003; Cetin and Saitou 2004), which need no prior knowledge about the objectives and can easily deal with discrete variables. Although evolutionary algorithms have an advantage in global optimization, they usually need a large number of function evaluations to converge to the Pareto set. However, most engineering optimizations require complicated finite element analysis (FEA), a process that is very time-consuming and entirely "black-box", with only the input and output available. The expensive computational cost makes evolutionary algorithms impractical in engineering optimization.

Many scholars have successfully used the metamodel technique to enhance the efficiency of continuous variables based MOO problems: they model the objective functions and then approximate the Pareto frontier using the metamodels of the objective functions (Li et al. 1998; Tappeta and Renaud 2001; Wilson et al. 2001; Yang et al. 2002; Shan and Wang 2005; Su et al. 2011; Li 2011; Kotinis and Kulkarni 2012). The quality of the Pareto frontier depends on the accuracy of the metamodels. In DMOO, however, according to (1), when choice variables exist, the solutions used to construct the metamodel may become very high-dimensional, so the accuracy of the metamodels can hardly be guaranteed. Therefore, the metamodel technique is difficult to apply to DMOO problems.

An efficient means of tackling discrete variables based optimization is random search. Random search algorithms generally draw samples from the neighborhood of a feature solution, and many such algorithms have been developed in recent decades. Classical random search algorithms have an off-line neighborhood structure: the neighborhood structure is pre-defined once and used for the entire optimization procedure, meaning that when a solution is revisited, candidate solutions are generated from the same neighborhood. Representative random search algorithms with off-line neighborhood structures include Alrefaei and Andradottir (1999, 2001), Andradottir (1995, 1996), Gong et al. (1999) and Yan and Mukai (1992). Various on-line neighborhood schemes have been developed to improve random search algorithms, such as the nested partitions method (Shi and Olafsson 2000; Pichitlamken and Nelson 2003) and COMPASS (Hong and Nelson 2006); such schemes appropriately alter the neighborhood structures according to prior knowledge. Compared with off-line neighborhood structures, on-line neighborhood strategies have better performance. However, these methods are designed for single objective optimization (SOO) problems, so a random search method for DMOO problems is essential.

This paper proposes a Utopia-Pareto directing adaptive (UPDA) strategy for DMOO problems. According to the characteristics of the Pareto frontier, UPDA defines several feature solutions with adaptive on-line neighborhood structures (Hong and Nelson 2006). The solutions in these neighborhood structures constitute different sampling sets from which new samples are generated. A k-mean cluster based heuristic sampling (KCHS) method is also proposed to promote UPDA's convergence.

The rest of this paper is organized as follows. Section 2 introduces the basic principles of the UPDA strategy and the KCHS method. The UPDA strategy is integrated with the KCHS method in Section 3, where a classical test problem is also used to observe the optimization process of KCHS-UPDA. To verify its feasibility, KCHS-UPDA is tested on several benchmark problems in Section 4. In Section 5, a vehicle frontal member crashworthiness optimization is successfully solved by KCHS-UPDA. Finally, Section 6 gives the conclusion.
2 Basic theory

2.1 Utopia-Pareto directing adaptive strategy

According to (1), a general DMOO problem can be expressed in the following form:

$$\begin{aligned}
\min_v \; & F(x) = [f_1(x), f_2(x), \dots, f_m(x)] \\
\text{s.t.} \; & L_i \le h_i(x) \le U_i \quad (i = 1, 2, \dots, q) \\
& x = \Phi(v) \\
& v = (v_1, v_2, \dots, v_n)^T \\
& v_j \in \{choice_j^1, choice_j^2, \dots, choice_j^{N_j}\} \quad (j = 1, 2, \dots, n)
\end{aligned} \quad (2)$$

where $f_i(x)$ $(i = 1, 2, \dots, m)$ are the objectives; $h_i(x)$ $(i = 1, 2, \dots, q)$ are the constraints; and $N_j$ is the number of candidate choices of the jth variable. If some constraints require time-consuming evaluations, they can be handled as additional special objective functions (Audet and Dennis 2004) or added to the existing objectives as penalty terms (Michalewicz 1995). In this work, we focus on DMOO problems with expensive black-box objectives and inexpensive constraints.

We use $S^T$, $S^F$ and $S^E_{iter} = \{x_1, x_2, \dots, x_{N_{iter}}\}$ to denote, respectively, the set of total solutions, the set of feasible solutions (solutions satisfying the constraints), and the set of evaluated solutions (solutions that have been evaluated by the objectives) up to the iterth iteration. According to (2), $S^T$ is a finite set with $\prod_{k=1}^{n} N_k$ members. In this work, all evaluated solutions are selected from $S^F$, so $S^E_{iter} \subset S^F \subset S^T$. As mentioned in the introduction, random search generally collects samples based on a feature solution. For SOO problems, the solution with the best objective value is naturally selected as the feature solution. MOO, in contrast, pursues an optimal Pareto frontier rather than a single optimum. Therefore, the UPDA strategy is proposed in this work. UPDA defines several different feature points based on the Pareto set. A solution of $S^E_{iter}$ is a Pareto solution if there is no other solution in $S^E_{iter}$ that decreases some objective without increasing any other objective. In order to identify the Pareto set conveniently, the following fitness function (Shan and Wang 2005) is defined:

$$G_i = 1 - \max_{j \ne i}\left[\min\left(f_{s1}^i - f_{s1}^j,\; f_{s2}^i - f_{s2}^j,\; \dots,\; f_{sm}^i - f_{sm}^j\right)\right] \quad (3)$$

where $G_i$ is the fitness value of the ith solution; i and j are two different solutions in $S^E_{iter}$; and $f_{sk}^i$ is the scaled kth $(k = 1, 2, \dots, m)$ objective function value of the ith solution. The objectives are all scaled to the range [0, 1]:

$$f_{sk}^i = \frac{f_k(x_i) - f_{k,\min}}{f_{k,\max} - f_{k,\min}} \quad (4)$$

where $f_{k,\min}$ and $f_{k,\max}$ are, respectively, the minimum and maximum values of the kth objective in $S^E_{iter}$. If the kth objective is constant in $S^E_{iter}$, $f_{sk}^i$ defaults to 1. It can be deduced that the fitness value of a solution in the Pareto set lies in the range [1, 2], while the fitness value of a solution outside the Pareto set lies in the range [0, 1). Thus, the Pareto set $\{x^{i_1}, x^{i_2}, \dots, x^{i_l}\}$ $(1 \le i_1 < i_2 < \dots < i_l \le N_{iter})$ can be easily identified. From the Pareto set, the Pareto frontier in the objective space is obtained, as shown in Fig. 1. Commonly, the minima of the individual objectives are taken as feature points. The minima reflect the boundary information well, but can they capture all the characteristics of the Pareto frontier? No definite answer can be given, because only the individual objectives are considered. In order to establish a robust search scheme, another feature point is selected by the following minimum distance criterion:

$$\min \; D = \left[\sum_{k=1}^{m}\left(f_{sk}^{i_j} - \min_{1 \le i \le N_{iter}} f_{sk}^i\right)^2\right]^{1/2} \quad (j = 1, 2, \dots, l) \quad (5)$$

D is the distance from a Pareto frontier point to the "Utopia" point given by the minima of the individual objectives (Fig. 1); the feature point is selected as the Pareto frontier point closest to the "Utopia" point, termed the UPF.

[Fig. 1 Illustration of the Pareto frontier: Pareto and non-Pareto frontier points in the (f1, f2) objective space, with the minima of f1 and f2, the Utopia point and the UPF marked]
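As an illustration of this identification step, the following sketch (our own, assuming the objective values of all evaluated solutions are stored row-wise in a NumPy array) computes the scaling (4), the fitness (3) and the UPF selection (5):

```python
import numpy as np

def scale(F):
    """Eq. (4): scale each objective column to [0, 1]; constant columns default to 1."""
    fmin, fmax = F.min(axis=0), F.max(axis=0)
    span = fmax - fmin
    Fs = np.ones_like(F, dtype=float)
    ok = span > 0
    Fs[:, ok] = (F[:, ok] - fmin[ok]) / span[ok]
    return Fs

def pareto_mask(Fs):
    """Eq. (3): G_i in [1, 2] marks Pareto solutions, G_i in [0, 1) the rest."""
    N = len(Fs)
    G = np.empty(N)
    for i in range(N):
        others = np.delete(Fs, i, axis=0)
        G[i] = 1.0 - np.max(np.min(Fs[i] - others, axis=1))
    return G >= 1.0

def upf_index(Fs, mask):
    """Eq. (5): Pareto point closest to the Utopia point (column-wise minima)."""
    utopia = Fs.min(axis=0)
    d = np.sqrt(((Fs - utopia) ** 2).sum(axis=1))
    d[~mask] = np.inf
    return int(np.argmin(d))
```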
Based on the feature points, a method originating from COMPASS (Hong and Nelson 2006) is adopted to construct the corresponding sampling sets. We use $x_{iter}^{Mi}$ $(i = 1, 2, \dots, m)$ to denote the solution with the minimum value of the ith objective among all $x \in S^E_{iter}$, and $x_{iter}^{UPF}$ to denote the solution corresponding to the UPF; $x_{iter}^{Mi}$ $(i = 1, 2, \dots, m)$ and $x_{iter}^{UPF}$ are the feature solutions. For each feature solution, the corresponding sampling set is constructed by the following rule: if a feasible solution is at least as close to the feature solution as it is to any other evaluated solution, it is accepted as a member of the sampling set; otherwise it is discarded. Figure 2 illustrates this sampling rule for 2-dimensional solutions. Mathematically, the sampling sets are defined as follows:

$$S_{iter}^{Mi} = \left\{x \mid x \in S^F - S^E_{iter},\; \forall y \in S^E_{iter} - \{x_{iter}^{Mi}\},\; d(x, y) \ge d(x, x_{iter}^{Mi})\right\} \quad (6)$$

$$S_{iter}^{UPF} = \left\{x \mid x \in S^F - S^E_{iter},\; \forall y \in S^E_{iter} - \{x_{iter}^{UPF}\},\; d(x, y) \ge d(x, x_{iter}^{UPF})\right\} \quad (7)$$

where $d(\cdot, \cdot)$ is the distance between two solutions. In this work, d(x, y) is calculated as:

$$d(x, y) = \left[\sum_{i=1}^{n_p}\left(x_i^{nor} - y_i^{nor}\right)^2\right]^{1/2} \quad (8)$$

where $n_p$ is the length of the solutions. According to (8), before the calculation, solutions x and y are respectively normalized to $x^{nor}$ and $y^{nor}$. For $x_j^{nor}$,

$$x_j^{nor} = \frac{R(x_j, H_j)}{|H_j| - 1} \in [0, 1], \qquad H_j = \left\{h \mid h = z_j,\; z \in S^T\right\} \quad (9)$$

The set $H_j$ includes all possible values of the jth component of the solutions in $S^T$. If several solutions share the same value of the jth component, the value appears in $H_j$ only once; thus, any two members of $H_j$ are different. $|H_j|$ denotes the number of members of $H_j$ $(|H_j| > 1)$, and $R(x_j, H_j)$ denotes the number of members of $H_j$ that are smaller than $x_j$. A simple example clarifies (9). Suppose that $S^T$ contains the five solutions listed in the first two rows of Table 1. We have $H_1 = \{-1, 5, 7, 2, 0\}$, $|H_1| = 5$ and $H_2 = \{2, 5, 4, 11\}$, $|H_2| = 4$. The middle two and last two rows of Table 1 list, respectively, the values of $R(\cdot, \cdot)$ and the solutions after normalization.

[Fig. 2 Illustration of the sampling rule for UPDA: a candidate is accepted if its distance to the feature solution is the minimum (min(d)) over all evaluated solutions; otherwise it is discarded]

Table 1 Example for the solution normalization

x1           −1     5     7     2     0
x2            2     5     4    11     4
R(x1, H1)     0     3     4     2     1
R(x2, H2)     0     2     1     3     1
x1^nor        0    3/4    1    1/2   1/4
x2^nor        0    2/3   1/3    1    1/3

The sampling sets defined in (6) and (7) require no manual intervention and are fully adaptive. From each sampling set, a few solutions are selected as new samples; they are evaluated and become members of the set of evaluated solutions. Figure 3 shows the basic procedure of UPDA. Obviously, if there is no variation in $x^{Mi}$ $(i = 1, 2, \dots, m)$ or $x^{UPF}$ during several consecutive iterations, the corresponding sampling set constantly shrinks until it is empty:

$$S_{iter}^{Mi} \supset S_{iter+1}^{Mi} \supset S_{iter+2}^{Mi} \supset \cdots \quad (i = 1, 2, \dots, m) \quad (10)$$

$$S_{iter}^{UPF} \supset S_{iter+1}^{UPF} \supset S_{iter+2}^{UPF} \supset \cdots \quad (11)$$
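A minimal sketch of the normalization (9), the distance (8) and the membership rule (6)-(7) (our own illustration; function names and the array layout are ours):

```python
import numpy as np

def normalize(X):
    """Eq. (9): rank-based normalization, column by column.

    X holds one solution per row. Here H_j is built from the rows of X;
    in the paper it is built from the total solution set S^T.
    searchsorted gives R(x_j, H_j): the count of members of H_j below x_j."""
    Xn = np.zeros_like(X, dtype=float)
    for j in range(X.shape[1]):
        H = np.unique(X[:, j])                  # distinct values, sorted
        Xn[:, j] = np.searchsorted(H, X[:, j]) / (len(H) - 1)
    return Xn

def sampling_set(cand_n, eval_n, feat_idx):
    """Eqs. (6)-(7): keep a candidate if it is at least as close to the
    feature solution as to every other evaluated solution, using the
    normalized Euclidean distance of eq. (8)."""
    d_feat = np.linalg.norm(cand_n - eval_n[feat_idx], axis=1)
    others = np.delete(eval_n, feat_idx, axis=0)
    d_other = np.linalg.norm(cand_n[:, None, :] - others[None, :, :], axis=2)
    return np.flatnonzero((d_other >= d_feat[:, None]).all(axis=1))

# Reproduces Table 1:
X = np.array([[-1, 2], [5, 5], [7, 4], [2, 11], [0, 4]])
print(normalize(X))  # rows -> [0,0],[3/4,2/3],[1,1/3],[1/2,1],[1/4,1/3]
```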
If a sampling set becomes empty, the corresponding feature point has reached a local optimum. Despite the adaptive search mode of UPDA, an explicit sampling method for generating solutions from the sampling sets is still lacking. Therefore, a k-mean cluster based heuristic sampling (KCHS) method is proposed in this work.

2.2 K-mean cluster based heuristic sampling method

In this section, the KCHS method is introduced. For a sampling set $S^*$, KCHS selects solutions according to a fitness function defined as:

$$g(x) = \max_{y \in S^*} d(y, x^*) - d(x, x^*) + \varepsilon_0, \quad x \in S^* \quad (12)$$

where $x^*$ is the corresponding feature solution and $\varepsilon_0$ is a very small positive number ensuring that the fitness function is always positive; the value of $\varepsilon_0$ can be set by the user (in this work, the default is $10^{-6}$). The sampling principle of KCHS is that solutions with larger g(x) values are more likely to be sampled.
In practice, the solutions of $S^*$ are grouped into several different clusters according to their g(x) values, and the sampling is carried out based on these clusters. A general example illustrates the sampling process. The range of g(x) can be denoted by $[\min_{x \in S^*} g(x), \max_{x \in S^*} g(x)]$, and it can be divided into nk subintervals:

$$\left[\min_{x \in S^*} g(x),\, s_1\right),\; [s_1, s_2),\; [s_2, s_3),\; \dots,\; \left[s_{nk-1},\, \max_{x \in S^*} g(x)\right] \quad (13)$$

For each subinterval, there is a corresponding cluster. If the g(x) value of a solution belongs to a subinterval, the solution is assigned to the corresponding cluster. Thus, the solutions can be grouped into nk different clusters. Let the mean g(x) values of these nk clusters be $\bar{g}_1, \bar{g}_2, \dots, \bar{g}_{nk}$. Then, a discrete probability distribution can be constructed as:

$$P = \{p_1, p_2, \dots, p_{nk}\} = \left\{\frac{\bar{g}_1}{\sum_{k=1}^{nk} \bar{g}_k},\; \frac{\bar{g}_2}{\sum_{k=1}^{nk} \bar{g}_k},\; \dots,\; \frac{\bar{g}_{nk}}{\sum_{k=1}^{nk} \bar{g}_k}\right\} \quad (14)$$

Stochastic sampling can then be carried out according to the discrete probability distribution P and a uniform distribution within each cluster. In summary, there are two main steps in KCHS: clustering and sampling.

[Fig. 3 Basic procedures of the UPDA: given the set of evaluated solutions S^E, identify the Pareto set and the feature points of the Pareto frontier (the minima of the individual objectives and the UPF), find the corresponding feature solutions in the feasible solution space, construct the sampling sets S^M1, ..., S^Mm, S^UPF according to (6) and (7), and select new solutions from them; the algorithm converges once all sampling sets are empty]

[Fig. 4 Flowchart of KCHS-UPDA: randomly sample 2n feasible solutions; identify the current Pareto frontier and feature solutions; generate N feasible solutions and delete those overlapping current samples; construct the sampling sets S^M1, ..., S^Mm, S^UPF (Utopia-Pareto directing adaptive strategy); if all are empty, end; otherwise obtain initial cluster centers c = (c1, c2, ..., cnk), calculate the g(x) values of all points, and update the cluster centers until the total dissimilarity converges (k-mean cluster); finally construct the cumulative distribution {G0, G1, G2, ..., Gnk} and draw new samples from it (heuristic sampling)]

Clustering In the above example, the partition points $\{s_1, s_2, \dots, s_{nk-1}\}$ significantly influence the grouping result, so how to determine them is a key problem. The sampling is carried out based on the clusters, so solutions having close g(x) values should be grouped together. In order to obtain a reasonable grouping scheme, a classical clustering algorithm, the k-mean cluster (MacQueen 1967), is therefore adopted in this work. The k-mean cluster automatically identifies the natural structure of a given data set and finds the most reasonable clusters. The algorithm is based on minimizing the dissimilarity of the data within each cluster. In this work, the total dissimilarity can be represented as:

$$J(w, c) = \sum_{x \in S^*} \sum_{j=1}^{nk} w_j(x)\left(g(x) - c_j\right)^2 \quad (15)$$

where $c = [c_1, c_2, \dots, c_{nk}]$ are the nk cluster centers and $w_j(x)$ denotes the relationship between x and the jth cluster: if $\min_{i=1,2,\dots,nk} |g(x) - c_i| = |g(x) - c_j|$, then $w_j(x) = 1$; otherwise $w_j(x) = 0$.
[Fig. 5 The optimization process of the FON: samples and Pareto frontier points at iterations 4, 10, 17 and 30, shown both in the objective space (f1, f2) and in the design space (x1, x2, x3)]

[Fig. 6 Variation of the number of feasible solutions in the sampling sets S^UPF, S^M1 and S^M2 over the iterations]
The aim is to find appropriate w and c to minimize (15). The key steps of the k-mean cluster are as follows:

Step 1 Randomly select nk points with different g(x) values and define these values as the initial cluster centers.
Step 2 Calculate the current total dissimilarity J(w, c).
Step 3 If the variation of the total dissimilarity over two consecutive iterations is less than the threshold, stop; else, go to Step 4.
Step 4 Update the cluster centers as:

$$c_j = \frac{\sum_{x \in S^*} w_j(x)\, g(x)}{\sum_{x \in S^*} w_j(x)} \quad (j = 1, 2, \dots, nk) \quad (16)$$

Step 5 Go back to Step 2.

Sampling After clustering, stochastic sampling is carried out by the following steps:

Step 1 Based on the nk clusters, obtain the discrete probability distribution $P = \{p_1, p_2, \dots, p_{nk}\}$ according to (14).
Step 2 Construct the discrete cumulative distribution:

$$G = \{G_0, G_1, G_2, \dots, G_{nk}\} = \left\{0,\; p_1,\; \sum_{k=1}^{2} p_k,\; \dots,\; \sum_{k=1}^{nk-1} p_k,\; 1\right\} \quad (17)$$

Step 3 Find the index i satisfying $G_{i-1} < \mathrm{rand}(0, 1) \le G_i$ and select a solution by the discrete uniform distribution on the ith cluster, where rand(0, 1) returns a random value drawn from the uniform distribution on the unit interval (0, 1).

[Fig. 7 Illustration of hyper volume: the region of the (f1, f2) space between the Pareto frontier points A, B, C and the reference point W]

Table 2 Test results of the FON problem (the number in brackets is the number of function evaluations)

Algorithm           HV mean  HV std   GD mean  GD std   IGD mean  IGD std
KCHS-UPDA [115.8]   0.2586   0.0143   0.0068   0.0023   0.0106    0.0023
NSGAII [150]        0.1669   0.0413   0.0397   0.0131   0.0286    0.0114
NSGAII [200]        0.2480   0.0233   0.0136   0.0054   0.0116    0.0027
NSGAII [250]        0.2848   0.0127   0.0068   0.0021   0.0102    0.0045

3 K-mean cluster based heuristic sampling with Utopia-Pareto directing adaptive strategy

In this section, the UPDA strategy is combined with the KCHS method. The remarkable characteristic of the UPDA strategy is to adaptively establish different sampling sets based on the feature solutions; promising solutions are assigned to the sampling sets according to distance. Within each sampling set, KCHS draws samples based on a probability model that is also constructed according to distance. Therefore, UPDA and KCHS can be seamlessly integrated. In actual operation, it is inefficient to find all the members of the sampling sets, particularly when the solutions are numerous. In this study, a fixed number of solutions are randomly generated in each iteration, and those that fall into the sampling sets are picked up as the candidate solutions for KCHS. The details of KCHS-UPDA are described as follows (iter denotes the current iteration):
Step 1 Select 2n (n is the number of the discrete variables) solutions stochastically as the initial samples and evaluate them. Set iter = 0.
Step 2 Set iter = iter + 1; identify the current Pareto set and obtain the current Pareto frontier.
Step 3 Based on the Pareto frontier, update the feature solutions $x_{iter}^{Mi}$ $(i = 1, 2, \dots, m)$ and $x_{iter}^{UPF}$. If two feature solutions overlap, they are regarded as one.
Step 4 Generate N solutions randomly (the value of N should be set according to the scale of the problem; in this paper, the default is 100000); delete those violating the constraints or overlapping the evaluated solutions.
Step 5 According to (6) and (7), assign the remaining solutions to the corresponding sampling sets $S_{iter}^{Mi}$ $(i = 1, 2, \dots, m)$ and $S_{iter}^{UPF}$. Sometimes a solution belongs to more than one sampling set; for example, if $x \in S_{iter}^{M1} \cap S_{iter}^{M2}$, x is assigned to $S_{iter}^{M1}$ or $S_{iter}^{M2}$ randomly. If a solution does not belong to any sampling set, delete it.
Step 6 If all the sampling sets are empty, the algorithm converges; else, go to Step 7.
Step 7 For each non-empty sampling set:

Step 7.1 Set k = 1. Randomly select nk (in this work, the default is 5) solutions with different g(x) values and define these values as the initial cluster centers. Here nk can be at most the number of distinct g(x) values (denoted by nd), so nk is set to the minimum of nd and the default.
Step 7.2 Calculate the total dissimilarity $J^k(w, c)$ according to (15).
Step 7.3 If $J^k(w, c) - J^{k-1}(w, c) \le 0.001$ (k > 1), go to Step 7.5; else go to Step 7.4.
Step 7.4 Set k = k + 1, update the cluster centers according to (16) and go back to Step 7.2.
Step 7.5 Obtain the discrete probability distribution $\{p_1, p_2, \dots, p_{nk}\}$ according to (14).
Step 7.6 Build the cumulative distribution $\{G_0, G_1, G_2, \dots, G_{nk}\}$ according to (17).
Step 7.7 Draw new samples from the sampling set. A new sample can be denoted by $x_{IJ}$ (the Jth solution of the Ith cluster), with the two indices determined according to (18) and (19):

$$I = \arg_i\left(G_{i-1} < \mathrm{rand}(0, 1) \le G_i\right) \quad (18)$$

$$J = \left\lfloor nc_I \cdot \mathrm{rand}(0, 1)\right\rfloor + 1 \quad (19)$$

where $nc_I$ is the number of solutions in the Ith cluster.

Step 8 Go back to Step 2.

[Fig. 8 Pareto frontiers of the FON: TPF and the frontiers obtained by KCHS-UPDA, NSGAII[150], NSGAII[200] and NSGAII[250] in the (f1, f2) space]

Table 4 Discrete variables and corresponding parameters

Discrete variable   Parameter 1   Parameter 2
v1                  x1            x2
v2                  x3            x4
v3                  x5            x6
v4                  x7            x8
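Steps 7.1-7.7 can be sketched as follows (our own minimal illustration of the k-mean iteration (15)-(16) and the heuristic draw (12), (14), (17)-(19) on one sampling set; apart from the defaults nk = 5, ε0 = 10^-6 and the 0.001 threshold taken from the text, the names and structure are ours):

```python
import numpy as np

def kchs_draw(S, x_star, nk=5, eps0=1e-6, tol=1e-3, rng=None):
    """Draw one new sample from a sampling set following (12)-(19).

    S: candidate solutions of one sampling set, one (normalized) row each;
    x_star: the corresponding feature solution."""
    rng = rng or np.random.default_rng()
    d = np.linalg.norm(S - x_star, axis=1)
    g = d.max() - d + eps0                        # eq. (12): closer -> larger fitness
    vals = np.unique(g)
    nk = min(nk, len(vals))                       # Step 7.1: nk = min(nd, default)
    c = rng.choice(vals, size=nk, replace=False)  # initial cluster centers
    J_prev = np.inf
    while True:
        labels = np.argmin(np.abs(g[:, None] - c[None, :]), axis=1)  # w_j(x)
        J = np.sum((g - c[labels]) ** 2)          # eq. (15): total dissimilarity
        if J_prev - J <= tol:                     # Step 7.3: convergence check
            break
        J_prev = J
        for j in range(nk):                       # eq. (16): update the centers
            if np.any(labels == j):
                c[j] = g[labels == j].mean()
    gbar = np.array([g[labels == j].mean() if np.any(labels == j) else 0.0
                     for j in range(nk)])
    p = gbar / gbar.sum()                         # eq. (14)
    G = np.concatenate(([0.0], np.cumsum(p)))     # eq. (17)
    I = np.searchsorted(G, rng.random()) - 1      # eq. (18)
    members = np.flatnonzero(labels == I)
    return S[rng.choice(members)]                 # eq. (19): uniform within cluster
```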
Table 3 Test problems

Problem  Objective functions (variable bounds x ∈ [0, 1] in both cases)

F1: $f_1(x) = \sum_{i=1}^{7}\left(\dfrac{x_i^2}{x_{i+1}^2 + 1} + \dfrac{x_{i+1}^2}{x_i^2 + 1}\right)$, $\quad f_2(x) = \sum_{i=1}^{7}\left(\dfrac{1}{2}\left(x_{i+1} - x_i^2\right)^2 + (x_i - 1)^2\right)$

F2: $f_1(x) = \dfrac{1}{2} x_1 x_2 h(x)$, $\quad f_2(x) = \dfrac{1}{2} x_1 (1 - x_2) h(x)$, $\quad f_3(x) = \dfrac{1}{2}(1 - x_1) h(x)$, with $h(x) = \sum_{i=3}^{7}\left(4x_i^2 - 2.1x_i^4 + \dfrac{1}{3}x_i^6 + x_i x_{i+1} - 4x_{i+1}^2 + 4x_{i+1}^4\right)$

The flowchart of KCHS-UPDA is shown in Fig. 4. To clarify KCHS-UPDA further, an entire optimization process of a classical test problem, FON (Khokhar et al. 2010), is illustrated. The two objective functions of FON are:

$$f_1(x) = 1 - \exp\left(-\sum_{i=1}^{3}\left(x_i - \frac{1}{\sqrt{3}}\right)^2\right), \quad f_2(x) = 1 - \exp\left(-\sum_{i=1}^{3}\left(x_i + \frac{1}{\sqrt{3}}\right)^2\right) \quad (20)$$

FON has three continuous variables in the range [−4, 4]; in this work, they are discretized with a step of 0.1. Several representative iterations of the sampling process are illustrated in Fig. 5. In iteration 4, the two objective values of most samples are both very close to 1 and the Pareto frontier includes only two points; by iteration 10, the number of Pareto points has increased gradually; from iteration 17 there is almost no variation in the Pareto frontier; and the algorithm converges after 30 iterations. In total, 108 function evaluations are used and 27 Pareto points are obtained. The variation of the number of solutions in the sampling sets is illustrated in Fig. 6: overall, each sampling set gradually shrinks during the iterations until it is empty.
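For reference, the discretized FON can be written as the following sketch (our own illustration; the 0.1 grid reproduces the discretization described above):

```python
import numpy as np

def fon(x):
    """Eq. (20): the two FON objectives for a 3-dimensional solution x."""
    f1 = 1.0 - np.exp(-np.sum((x - 1.0 / np.sqrt(3.0)) ** 2))
    f2 = 1.0 - np.exp(-np.sum((x + 1.0 / np.sqrt(3.0)) ** 2))
    return f1, f2

# Each variable is discretized with step 0.1, giving 81 choices per variable.
choices = np.round(np.arange(-4.0, 4.0 + 1e-9, 0.1), 1)
print(len(choices) ** 3)  # size of the total solution set S^T: 531441
```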
[Fig. 9 Distribution of the discrete variables: the 20 candidate choices of v1, v2, v3 and v4 plotted in their parameter spaces (x1, x2), (x3, x4), (x5, x6) and (x7, x8)]
4 Performance assessment with benchmarks

4.1 Performance indicators

In MOO, the quality of the optimal Pareto set involves two aspects: the closeness of the Pareto frontier to the true Pareto frontier (TPF), and the extent of the Pareto frontier. We refer to these two properties as closeness and diversity (Aittokoski and Miettinen 2008), respectively. In this section, three performance indicators (Aittokoski and Miettinen 2008; Khokhar et al. 2010), namely hyper volume (HV), generational distance (GD) and inverted generational distance (IGD), are employed to assess the performance.

Hyper volume

$$HV = \mathrm{volume}\left(\bigcup_{i=1}^{K} v_i\right) \quad (21)$$

where $v_i$ is the hypercube constructed with the ith Pareto frontier point and a reference point W (determined by the worst function values) as its diagonal corners, and K is the number of Pareto frontier points. HV measures the volume in the objective space covered by the evaluated solutions (shown in Fig. 7), so it reflects both closeness and diversity. A larger value of HV is more desirable.

Table 5 Test results of F1 and F2 (the number in brackets is the number of function evaluations)

Problem  Algorithm           HV mean  HV std   GD mean  GD std   IGD mean  IGD std
F1       KCHS-UPDA [141.2]   0.5733   0.0258   0.0365   0.0127   0.0282    0.0060
         NSGAII [200]        0.4942   0.0191   0.1188   0.0186   0.0468    0.0065
         NSGAII [500]        0.5378   0.0299   0.0627   0.0136   0.0347    0.0112
         NSGAII [1000]       0.5908   0.0184   0.0378   0.0087   0.0219    0.0059
F2       KCHS-UPDA [99.6]    1        0        0.0097   0.0052   0.0125    0.0032
         NSGAII [200]        0.9955   0.0086   0.1106   0.0426   0.0451    0.0138
         NSGAII [500]        1        0        0.0252   0.0072   0.0183    0.0037
         NSGAII [1000]       1        0        0.0109   0.0042   0.0100    0.0037
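For two objectives, the HV of (21) reduces to a sum of rectangle areas. A minimal sketch (our own illustration, assuming a mutually nondominated front sortable by f1 and a reference point W no better than any front point):

```python
import numpy as np

def hypervolume_2d(front, ref):
    """Eq. (21) for m = 2: area dominated by the front up to reference point W."""
    pts = front[np.argsort(front[:, 0])]  # sort by f1 ascending
    hv, f2_prev = 0.0, ref[1]
    for f1, f2 in pts:                    # f2 decreases along a 2-D front
        hv += (ref[0] - f1) * (f2_prev - f2)
        f2_prev = f2
    return hv

print(hypervolume_2d(np.array([[1.0, 3.0], [2.0, 1.0]]), ref=(4.0, 4.0)))  # 7.0
```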
[Fig. 10 Pareto frontiers of F1 and F2: (a) the (f1, f2) frontier of F1 and (b) the (f1, f2, f3) frontier of F2, comparing the TPF with KCHS-UPDA, NSGAII[200], NSGAII[500] and NSGAII[1000]]
Table 6 Variables and corresponding candidate choices of FES1 (six candidate choices per variable)

Variable  Parameter  Candidate choices
v1        x1     0.8147  0.4218  0.2769  0.7094  0.8909  0.3517
v2        x2     0.9058  0.9157  0.0462  0.7547  0.9593  0.8308
v3        x3     0.1270  0.7922  0.0971  0.2760  0.5472  0.5853
v4        x4     0.9134  0.9595  0.8235  0.6797  0.1386  0.5497
v5        x5     0.6324  0.6557  0.6948  0.6551  0.1493  0.9172
          x6     0.0975  0.0357  0.3171  0.1626  0.2575  0.2858
          x7     0.2785  0.8491  0.9502  0.1190  0.8407  0.7572
v6        x8     0.5469  0.9340  0.0344  0.4984  0.2543  0.7537
          x9     0.9575  0.6787  0.4387  0.9597  0.8143  0.3804
          x10    0.9649  0.7577  0.3816  0.3404  0.2435  0.5678
v7        x11    0.1576  0.7431  0.7655  0.5852  0.9292  0.0759
          x12    0.9706  0.3922  0.7952  0.2238  0.3500  0.0540
          x13    0.9572  0.6555  0.1869  0.7513  0.1966  0.5308
v8        x14    0.4854  0.1712  0.4898  0.2551  0.2511  0.7792
          x15    0.8003  0.7060  0.4456  0.5060  0.6160  0.9340
          x16    0.1419  0.0318  0.6463  0.6991  0.4733  0.1299
Table 7 Test results of FES1 (the number in brackets is the number of function evaluations)

Algorithm           HV mean  HV std   GD mean  GD std   IGD mean  IGD std
KCHS-UPDA [177.9]   0.2668   0.0196   0.0614   0.0277   0.0672    0.0207
NSGAII [500]        0.2188   0.0101   0.2069   0.0322   0.1118    0.0185
NSGAII [1000]       0.2550   0.0078   0.1005   0.0271   0.0792    0.0181
NSGAII [1500]       0.2669   0.0114   0.0562   0.0160   0.0482    0.0068
[Fig. 11 Pareto frontiers for FES1: TPF and the frontiers obtained by KCHS-UPDA, NSGAII[500], NSGAII[1000] and NSGAII[1500] in the (f1, f2) space]

[Fig. 12 Illustration of the vehicle frontal member crashworthiness problem]
Generational distance

$$GD = \frac{\left(\sum_{i=1}^{K} r_i^2\right)^{1/2}}{K} \quad (22)$$

where $r_i$ is the distance between the ith Pareto frontier point and the nearest TPF point (for a DMOO problem, there is a limited number of TPF points, which can be identified in advance), and K is the number of Pareto frontier points. GD measures the distances from the Pareto frontier to the TPF and gives information about closeness. GD = 0 indicates that all points of the Pareto frontier lie on the TPF.

Inverted generational distance

$$IGD = \frac{\left(\sum_{j=1}^{L} q_j^2\right)^{1/2}}{L} \quad (23)$$

where $q_j$ is the distance between the jth TPF point and the nearest Pareto frontier point, and L is the number of TPF points. IGD measures the distances from the TPF to the Pareto frontier, so it reflects both closeness and diversity: if the Pareto frontier is incomplete or very poorly approximated, the value of IGD increases. IGD = 0 indicates that all Pareto frontier points lie on the TPF and that the Pareto frontier covers the entire TPF.

4.2 Benchmarks

To verify the performance of KCHS-UPDA, DMOO problems based on single-parameter and multi-parameter discrete variables are considered in turn. KCHS-UPDA is compared with the controlled elitist non-dominated sorting genetic algorithm (NSGAII) (Deb and Goel 2001); for a detailed comparison, NSGAII is executed with different numbers of function evaluations. In order to avoid unrepresentative numerical results, each problem is run 10 times, and the mean and standard deviation of the above performance indicators are calculated.

Case 1 Single-Parameter Discrete Variables based Problems

The FON problem is employed to show the performance of KCHS-UPDA on single-parameter discrete variable problems. NSGAII is carried out with 150, 200 and 250 function evaluations, respectively. The results are listed in Table 2. Over the 10 runs, KCHS-UPDA converges with 115.8 function evaluations on average. In terms of HV, GD and IGD, KCHS-UPDA is much better than NSGAII [150] and NSGAII [200]; when the function evaluations increase to 250, NSGAII catches up, with values of 0.2848, 0.0068 and 0.0102, respectively, which are very close to those of KCHS-UPDA (0.2586, 0.0068 and 0.0106). Figure 8 shows the Pareto frontiers obtained by KCHS-UPDA and the NSGAIIs. Clearly, the Pareto frontiers obtained by NSGAII [150] and NSGAII [200] lie apart from the TPF, whereas KCHS-UPDA generates Pareto frontier points very close to the TPF.
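For reference, (22) and (23) can be sketched as follows (our own illustration; both fronts are arrays with one objective vector per row):

```python
import numpy as np

def gd(front, tpf):
    """Eq. (22): root of summed squared nearest-TPF distances, divided by K."""
    r = np.linalg.norm(front[:, None, :] - tpf[None, :, :], axis=2).min(axis=1)
    return np.sqrt(np.sum(r ** 2)) / len(front)

def igd(front, tpf):
    """Eq. (23): the same measure taken from the TPF toward the obtained front."""
    q = np.linalg.norm(tpf[:, None, :] - front[None, :, :], axis=2).min(axis=1)
    return np.sqrt(np.sum(q ** 2)) / len(tpf)
```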
Table 8 Comparison of numbers of function evaluations

KCHS-UPDA   PSP (Khokhar et al. 2010)   CK-MOGA (Li 2011)
133.6       303.5                       289.4

[Fig. 13 Structure of the vehicle frontal member: upper and lower parts of the foreside and rearward, joined by the welding line and welding spots]
Fig. 14 Illustrations of the vehicle frontal member before and after impact
Case 2 Multi-Parameter Discrete Variables based Problems

Two test problems are presented in Table 3. Both have eight continuous variables in the range [0, 1]. In this case, the multi-parameter discrete variable is considered: there are four discrete variables {v1, v2, v3, v4}, and the eight continuous variables are regarded as their parameters, as listed in Table 4. The number of candidates of each discrete variable is 20. Figure 9 shows the distribution of the candidates of each discrete variable in the corresponding parameter space. NSGAII is carried out with 200, 500 and 1000 function evaluations, respectively. The results are described in Table 5. For F1 and F2, KCHS-UPDA takes 141.2 and 99.6 function evaluations on average, respectively, to obtain the Pareto frontiers. According to Table 5, KCHS-UPDA has a weak advantage in HV compared with the NSGAIIs. In terms of GD, KCHS-UPDA is clearly superior to the NSGAIIs: for F1 and F2, the values of KCHS-UPDA are 0.0365 and 0.0097, while those of NSGAII [1000] (the best) are 0.0378 and 0.0109. In terms of IGD, KCHS-UPDA has a large advantage over NSGAII [200] and NSGAII [500]; the values of NSGAII [1000] (0.0219 and 0.0100) are slightly better than those of KCHS-UPDA (0.0282 and 0.0125). The visual results are shown in Fig. 10. In general, KCHS-UPDA is able to generate Pareto frontier points very close to the TPF.

Case 3 Problems with Mixed Single-Parameter and Multi-Parameter Discrete Variables

Another example, FES1 (Huband et al. 2006), is used to investigate the applicability of the proposed algorithm to problems with mixed single-parameter and multi-parameter discrete variables. The two objectives and the range of the variables are shown in (24):
$$\mathrm{FES1}: \quad f_1(x) = \sum_{i=1}^{16}\left|x_i - \exp\left((i/n)^2\right)/3\right|^{0.5}, \quad f_2(x) = \sum_{i=1}^{16}\left(x_i - 0.5\cos(10\pi i/n) - 0.5\right)^2, \quad x \in [0, 1] \quad (24)$$

where n = 16.
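A sketch of (24) with n = 16 (our own illustration; the |·|^0.5 exponent in f1 follows the original FES1 definition of Huband et al. 2006):

```python
import numpy as np

def fes1(x, n=16):
    """Eq. (24): the two FES1 objectives for a length-16 solution x."""
    i = np.arange(1, n + 1)
    f1 = np.sum(np.abs(x - np.exp((i / n) ** 2) / 3.0) ** 0.5)
    f2 = np.sum((x - 0.5 * np.cos(10.0 * np.pi * i / n) - 0.5) ** 2)
    return f1, f2
```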
In this case, there are eight discrete variables (four single-parameter discrete variables and four multi-parameter discrete variables), and each variable has six candidate choices (Table 6). Table 7 shows the results. NSGAII is carried out with 500, 1000 and 1500 function evaluations, respectively. The average number of function evaluations for KCHS-UPDA is 177.9. For HV, KCHS-UPDA (0.2668) surpasses NSGAII [500] (0.2188) and NSGAII [1000] (0.2550), and is very close to NSGAII [1500] (0.2669). For GD and IGD, KCHS-UPDA (0.0614 and 0.0672) is much better than NSGAII [500] (0.2069 and 0.1118) and NSGAII [1000] (0.1005 and 0.0792) but slightly worse than NSGAII [1500] (0.0562 and 0.0482). The Pareto frontiers obtained by KCHS-UPDA and the NSGAIIs are illustrated in Fig. 11. Clearly, KCHS-UPDA can generate Pareto frontier points very close to the TPF. Furthermore, we also use the test results reported by Khokhar et al. (2010) and Li (2011) to show KCHS-UPDA's advantage in efficiency. Table 8 lists the mean numbers of function evaluations for two metamodel-assisted MOO methods applied to different continuous test problems.
Table 9 Design variables and corresponding candidate choices

Part                     Variable        Candidate choices
Upper part of foreside   Material        TRIP590, DP590, DP780, DP980
                         Thickness (mm)  1.0, 1.2, 1.5, 1.8, 2.0, 2.2
Lower part of foreside   Material        TRIP590, DP590, DP780, DP980
                         Thickness (mm)  1.0, 1.2, 1.5, 1.8, 2.0, 2.2
Upper part of rearward   Material        TRIP590, DP590, DP780, DP980
                         Thickness (mm)  1.0, 1.2, 1.5, 1.8, 2.0, 2.2
Lower part of rearward   Material        TRIP590, DP590, DP780, DP980
                         Thickness (mm)  1.0, 1.2, 1.5, 1.8, 2.0, 2.2
Fig. 15 Illustration of the FE model
According to Table 8, it can be clearly seen that, compared with the metamodel-assisted methods, KCHS-UPDA generally converges with a smaller number of function evaluations.
Table 10 JC parameters of the candidate materials

Material  A (MPa)  B (MPa)  n       C
TRIP590   345.5    598.1    0.4291  0.0162
DP590     371.9    845.1    0.4517  0.0151
DP780     452.0    987.5    0.3716  0.0138
DP980     562.3    1094.6   0.2630  0.0172
5 Crashworthiness optimization

5.1 Problem description

In this section, the proposed KCHS-UPDA is applied to a vehicle frontal member crashworthiness problem, illustrated in Fig. 12. Two identical frontal members are fixed at the front of a frame, and the entire structure impacts a rigid wall with an initial velocity. Figure 13 shows the structure of the vehicle frontal member. It is composed of four different parts: the welding line connects the foreside with the rearward, and the upper and lower parts are joined together by welding spots. The frontal members before and after impact are shown in Fig. 14. The objectives are to maximize the absorbed energy and to minimize the maximum rigid wall force:

$$\left.\begin{aligned} &\max(E_{int}) \\ &\min(\max(F_{rw})) \end{aligned}\right\} \;\Rightarrow\; \left.\begin{aligned} &\min(-E_{int}) \\ &\min(\max(F_{rw})) \end{aligned}\right\} \quad (25)$$
[Fig. 16 Comparison of the acceleration curves for experiment and simulation]

[Fig. 17 Pareto frontier for the crashworthiness problem: samples and Pareto frontier points plotted as maximum rigid wall force (kN) versus internal energy (kJ) and in the scaled objective space (fs1, fs2), with the UPF and the Utopia point marked]
Table 11 Optimization results

Design variable                           Initial design  Optimum design
Upper part of foreside  Thickness (mm)    1.8             1.8
                        Material          DP980           TRIP590
Lower part of foreside  Thickness (mm)    2.0             1.0
                        Material          DP980           DP780
Upper part of rearward  Thickness (mm)    1.0             2.2
                        Material          DP590           DP590
Lower part of rearward  Thickness (mm)    1.5             2.2
                        Material          DP980           TRIP590
The thickness and material of each individual part of the frontal member are regarded as independent discrete variables. For each part, there are six thicknesses and four types of AHSS to select from, summarized in Table 9.

5.2 Finite element modeling and experimental validation

The popular commercial software HYPERMESH is used to construct the FE model. Due to the symmetry of the structure (Fig. 12), a simplified FE model is adopted, illustrated in Fig. 15: only one frontal member is considered and the mass of the frame is halved, so that much computational time is saved. The individual parts of the frontal member are modeled with quadrilateral shell elements, while the welding line and welding spots are modeled as a rigid body and solid elements, respectively. The entire FE model is composed of 13719 nodes and 12994 elements (of which 12954 are shell elements). In this work, the Johnson-Cook (JC) material constitutive model is employed to represent the relationship between stress and strain. The JC model, proposed by Johnson and Cook (1983), has a simple mathematical form and can well simulate the behavior of metals subjected to different high strain rates.
According to the JC model, the equivalent von Mises flow stress is given by:

$$\sigma = \left(A + B\varepsilon_p^n\right)\left(1 + C\ln\dot{\varepsilon}^*\right) \quad (26)$$

In (26), $A + B\varepsilon_p^n$ and $(1 + C\ln\dot{\varepsilon}^*)$ respectively describe the hardening effect and the strain rate effect; $\varepsilon_p$ is the equivalent plastic strain; and $\dot{\varepsilon}^* = \dot{\varepsilon}/\dot{\varepsilon}_0$ is the dimensionless plastic strain rate with $\dot{\varepsilon}_0 = 0.001\,\mathrm{s}^{-1}$. A, B, n and C are four material parameters to be determined: A is the yield stress, B and n are the hardening coefficients, and C is the strain rate effect coefficient. The parameters of the candidate materials, obtained by experiment, are listed in Table 10.

A try-out experiment is implemented to validate the FE model. A thickness of 1.0 mm and AHSS DP590 are adopted for the two foreside parts, and a thickness of 1.5 mm and AHSS DP780 for the two rearward parts. The mass of the frame is 536 kg, and the initial velocity of the entire structure is 30 km/h. The acceleration curves of the frame during the impact obtained by experiment and by FE simulation are compared in Fig. 16. Despite some differences, the basic trend, the peak and the peak moment of the two curves are in general agreement, which indicates that the FE model is effective and can be used in the optimization analysis.

5.3 Optimization and results

For this problem, the computational time of one FE simulation is about 0.25 h. It takes 130 evaluations and 35 iterations for KCHS-UPDA to converge, and 12 Pareto solutions are found in total, as shown in Fig. 17. The UPF of the Pareto frontier is selected as the optimum, and the corresponding design variables are listed in Table 11. The changes in the internal energy and the rigid wall force of the initial design and the optimum design are shown in Fig. 18. After the optimization, the energy absorption ability of the frontal member is improved, and the optimum design also effectively reduces the peak rigid wall force during the impact.

[Fig. 18 Comparisons of the internal energy and rigid wall force before and after optimization]
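To make the material model concrete, here is a minimal sketch evaluating the JC flow stress (26) for the Table 10 materials (our own illustration; the function name and sample inputs are ours, with stress in MPa):

```python
import numpy as np

# Table 10: (A [MPa], B [MPa], n, C) per candidate material
JC = {"TRIP590": (345.5, 598.1, 0.4291, 0.0162),
      "DP590":   (371.9, 845.1, 0.4517, 0.0151),
      "DP780":   (452.0, 987.5, 0.3716, 0.0138),
      "DP980":   (562.3, 1094.6, 0.2630, 0.0172)}

def jc_stress(material, eps_p, eps_dot, eps_dot0=1e-3):
    """Eq. (26): von Mises flow stress with hardening and strain rate terms."""
    A, B, n, C = JC[material]
    return (A + B * eps_p ** n) * (1.0 + C * np.log(eps_dot / eps_dot0))

print(jc_stress("DP590", eps_p=0.1, eps_dot=100.0))  # MPa
```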
6 Conclusion

In this paper, an adaptive heuristic search algorithm, KCHS-UPDA, is proposed for DMOO problems. Different from popular MOO algorithms, KCHS-UPDA dispenses with the metamodel technique and thus handles discrete variables more appropriately; compared with metamodel-assisted methods, it also has better efficiency. Tested on nonlinear benchmark problems, KCHS-UPDA proves suitable and promising for DMOO problems, and a representative crashworthiness problem is also successfully solved by the proposed method. From the numerical results, the following conclusions can be drawn:

• UPDA defines several feature points to search the Pareto frontier. Assisted by the feature points, the sampling process can be adaptively guided into promising solution sets. KCHS further improves the sampling efficiency using a heuristic probability model. Generally, KCHS-UPDA is able to converge to the TPF with a small number of function evaluations.

• A tailor-welded vehicle frontal member for crashworthiness is studied, with the energy absorption and the rigid wall force as the two objectives and with the thicknesses (single-parameter discrete variables) and materials (multi-parameter discrete variables) as the design variables. Both objectives are clearly improved by the optimum design.

Furthermore, in future work this method should be completed with an effective means of handling expensive black-box constraints, and mixed variables should be considered.

Acknowledgments This work is supported by the Project of National Science Foundation of China (NSFC) under grant numbers 11172097 and 61232014; the Program for New Century Excellent Talents in University under grant number NCET-11-0131; the National 973 Program of China under grant number 2010CB328005; and the Hunan Provincial Natural Science Foundation of China under grant number 11JJA001.
References

Aittokoski T, Miettinen K (2008) Efficient evolutionary method to approximate the Pareto optimal set in multiobjective optimization. In: Proceedings of the international conference on engineering optimization. Rio de Janeiro, Brazil
Alrefaei MH, Andradottir S (1999) A simulated annealing algorithm with constant temperature for discrete stochastic optimization. Manage Sci 45:748–764 Alrefaei MH, Andradottir S (2001) A modification of the stochastic ruler method for discrete stochastic optimization. Eur J Oper Res 133:160–182 Andradottir S (1995) A method for discrete stochastic optimization. Manage Sci 41:1946–1961 Andradottir S (1996) A global search method for discrete stochastic optimization. SIAM J Optim 6:513–530 Audet C, Dennis JE (2004) A pattern search filter method for nonlinear programming without derivatives. SIAM J Optim 14(4):980–1010 Cetin OL, Saitou K (2004) Decomposition-based assembly synthesis for structural modularity. ASME J Mech Des 126:234–243 Deb K (1999) Evolutionary algorithms for multi-criterion optimization in engineering design. In: Proceedings of evolutionary algorithms in engineering and computer science. Eurogen-99 Deb K, Goel T (2001) Controlled elitist non-dominated sorting genetic algorithms for better convergence. Lecture Notes in Computer Science 1993/2001, pp 67–81 Deb K, Mohan M, Mishra S (2003) A fast multi-objective evolutionary algorithm for finding well-spread Pareto-optimal solutions. Indian Institute of Technology Kanpur, report no. 2003002 Fuchs M, Neumaier A (2010) Discrete search in design optimization. In: Complex system design & management, pp 113–122 Gong WB, Ho YC, Zhai W (1999) Stochastic comparison algorithm for discrete optimization with estimation. SIAM J Optim 10:384–404 Hong LJ, Nelson BL (2006) Discrete optimization via simulation using COMPASS. Oper Res 54(1):115–129 Huband S, Hingston P, Barone L, While L (2006) A review of multiobjective test problems and a scalable test problem toolkit. IEEE Trans Evol Comput 10(5):477–506 Johnson GR, Cook WH (1983) A constitutive model and data for metals subjected to large strains, high strain rates, and temperatures. In: International symposium on ballistics. The Hague, The Netherlands, pp 1–7 Khokhar ZO, Vahabzadeh H, Ziai A, Wang GG, Menon C (2010) On the performance of the PSP method for mixed-variable multi-objective design optimization. ASME J Mech Des 132:071009-1-11 Kotinis M, Kulkarni A (2012) Multi-objective shape optimization of transonic airfoil sections using swarm intelligence and surrogate models. Struct Multidiscip Optim 45:747–758 Li M (2011) An improved Kriging-assisted multi-objective genetic algorithm. J Mech Des 133:07100801–07100811 Li Y, Fadel GM, Wiecek MM (1998) Approximating Pareto curves using the hyper-ellipse. In: Seventh AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization. St Louis, MO, paper no AIAA-98-4961 Luh GC, Chueh CH, Liu WW (2003) MOIA: multi-objective immune algorithm. Eng Optim 35(2):143–164 MacQueen JB (1967) Some methods for classification and analysis of multivariate observations. In: Proceedings of 5th Berkeley symposium on mathematical statistics and probability, vol 1. University of California Press, pp 281–297 Michalewicz Z (1995) A survey of constraint handling techniques in evolutionary computation methods. In: Proceedings of the fourth annual conference on evolutionary programming. MIT Press, Cambridge, pp 135–155 Nain PKS, Deb K (2002) A computationally effective multiobjective search and optimization technique using coarse-to-fine
grain modeling. Indian Institute of Technology Kanpur, report no 2002005 Pichitlamken J, Nelson BL (2003) A combined procedure for optimization via simulation. ACM Trans Model Comput Simul 13:155–179 Schaumann EJ, Balling RJ, Day K (1998) Genetic algorithms with multiple objectives. In: Seventh AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization. St. Louis, pp 2114–2123 Shan S, Wang GG (2005) An efficient Pareto set identification approach for multi-objective optimization on black-box functions. ASME J Mech Des 127(5):866–874 Sharif B, Wang GG, Elmekkawy TY (2008) Mode pursuing sampling method for discrete variable optimization on expensive black-box functions. J Mech Des 130:021402-1-11 Shi L, Olafsson S (2000) Nested partitions method for stochastic optimization. Methodol Comput Appl Probab 2:271–291
Srinivas N, Deb K (1994) Multi-objective optimization using nondominated sorting in genetic algorithms. Evol Comput 2(3):221–248 Su RY, Gui LJ, Fan ZJ (2011) Multi-objective optimization for bus body with strength and rollover safety constraints based on surrogate models. Struct Multidiscip Optim 44:431–441 Tappeta RV, Renaud JE (2001) Interactive multi-objective optimization design strategy for decision based design. ASME J Mech Des 123:205–215 Wilson B, Cappelleri DJ, Simpson TW, Frecker MI (2001) Efficient Pareto frontier exploration using surrogate approximations. Optim Eng 2:31–50 Yan D, Mukai H (1992) Stochastic discrete optimization. SIAM J Control Optim 30:594–612 Yang BS, Yeun YS, Ruy WS (2002) Managing approximation models in multi-objective optimization. Struct Multidiscip Optim 24:141–156