Available online at www.sciencedirect.com

Computers & Industrial Engineering 55 (2008) 503–532 www.elsevier.com/locate/caie

A metamodel optimization methodology based on multi-level fuzzy clustering space reduction strategy and its applications

Wang Hu *, Li Enying, G.Y. Li, Z.H. Zhong

Key Laboratory of Advanced Technology for Vehicle Body Design & Manufacture, M.O.E, Hunan University, Yuelu Mountain, Changsha, Hunan 410082, PR China

Received 12 January 2007; received in revised form 17 January 2008; accepted 17 January 2008; available online 26 January 2008

Abstract

This paper proposes a metamodel optimization methodology based on a multi-level fuzzy-clustering space reduction strategy with Kriging interpolation. The proposed methodology is composed of three levels. In the first level, the initial samples are partitioned into several clusters according to the design variables by a fuzzy-clustering method. In the second level, only some of the clusters are involved in building metamodels locally. Finally, the best optimized result is collected from all metamodels in the third level. Nonlinear multi-hump test functions are used to demonstrate the accuracy and efficiency of the proposed method. Practical nonlinear engineering problems are also optimized by the suggested methodology, and satisfactory results are obtained.
© 2008 Elsevier Ltd. All rights reserved.

Keywords: Multi-level; Metamodel; Fuzzy clustering; Nonlinear; Kriging interpolation

1. Introduction

When simulations become computationally expensive, the number of simulation-based function evaluations required for optimization must be carefully controlled. With the growing complexity and scale of industrial optimization, much of today's design involves highly computation-intensive analyses and simulations. Incorporating such simulations in design optimization imposes daunting computational challenges, since at least one function – the objective function or a constraint function – requires a computation-intensive process for each evaluation, as in metal forming and crashworthiness optimization problems. To that end, researchers have explored the use of metamodels, namely simpler approximate models calibrated to sample runs of the original simulation. The approximate model, or metamodel, can replace the original one, thus reducing the computational burden of evaluating numerous designs. Metamodels can be used to improve the efficiency of computationally expensive optimization algorithms in a variety of applications.

Corresponding author. E-mail address: [email protected] (W. Hu).

0360-8352/$ - see front matter © 2008 Elsevier Ltd. All rights reserved. doi:10.1016/j.cie.2008.01.011


These techniques are currently being employed to develop inexpensive surrogates of such analyses and simulations. Many metamodeling techniques in engineering design and other disciplines are well developed; recent reviews can be found in Simpson, Peplinski, Koch, and Allen (2001), Haftka, Scott, and Cruz (1998), and Sobieszczanski-Sobieski and Haftka (1997). Among the various metamodeling techniques for optimization, response surface methodology (RSM) and Kriging attract the most attention. RSM explores the relationships between several explanatory variables and one or more response variables. The method was introduced by Box and Wilson (1951). The main idea of RSM is to use a sequential experimental procedure to obtain an optimal response. Box and Wilson suggested using a first-degree polynomial model for this purpose; they acknowledged that such a model is only an approximation, but used it because it is easy to estimate and apply, even when little is known about the process. RSM originated from the formal design of experiments theory (Box, Hunter, & Hunter, 1978; Myers & Montgomery, 1995). Kriging is a regression technique used in geostatistics to approximate or interpolate data. The theory of Kriging was developed from the seminal work of its inventor, Danie G. Krige, and further developed by Matheron (1963). In the statistical community it is also known as Gaussian process regression, and Kriging is also a reproducing kernel method (like splines and support vector machines). Comparisons of the performance of these and other metamodeling methods have been reported in Simpson, Mauery, Korte, and Mistree (1998). Generally, RSM employing low-order polynomial functions (second-order polynomials are the most common) can efficiently model low-order problems, and fitting an RS model is fast and cheap. In addition, RSM facilitates the understanding of engineering problems by comparing parameter coefficients and aids in eliminating unimportant design variables. Low-order polynomial response surfaces, however, are not good for highly nonlinear problems, especially those with wavy (multi-hump) behaviors. On the contrary, Kriging models can accurately approximate an unknown system even with high nonlinearity, and the number of samples needed to construct a Kriging model is, theoretically, lower than that for RS (Koch, Simpson, Allen, & Mistree, 1999). However, the computational effort required to fit a Kriging model is much greater, and the interpretation of Kriging model parameters is not intuitive. Kriging models can be used for screening, but the procedure is not as straightforward as it is with RS (Welch et al., 1992). For Kriging models, the modeling efficiency and accuracy are directly related to the design space. An active branch of research in metamodeling is in designing methods that can gradually reduce the design space to improve modeling accuracy, as shown in Fig. 1.

Fig. 1. Example of one-dimensional data interpolation by Kriging, with confidence intervals. Squares indicate the location of the data. The Kriging interpolation is in red. The confidence intervals are in green (http://en.wikipedia.org/wiki/Kriging).
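To make the RSM side of the comparison concrete, the sketch below fits a full second-order polynomial response surface by ordinary least squares. It is an illustration added for this edit, not code from the paper: the sample size, the smooth test response, and the helper names are assumptions.

```python
import numpy as np

def quadratic_rsm_fit(X, y):
    """Fit y ~ b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2 by least squares."""
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return beta

def quadratic_rsm_predict(beta, X):
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    return A @ beta

# Usage on an assumed smooth response over [0, 1]^2
rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, size=(30, 2))
y = (X[:, 0] - 0.3) ** 2 + 2.0 * X[:, 1] + 0.5 * X[:, 0] * X[:, 1]
beta = quadratic_rsm_fit(X, y)
print(quadratic_rsm_predict(beta, np.array([[0.5, 0.5]])))
```

For a smooth, low-order response such a quadratic surface is cheap to fit and its coefficients are easy to interpret, which is the advantage of RSM noted above; its limitation for multi-hump functions is illustrated later for the Rastrigin function.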


There are several design space reduction schemes in the literature. Box and Draper (1969) suggested a method to gradually refine the response surface to better capture the real function by screening out unimportant variables (those with little sensitivity to the objective function). The essence of this method is to reduce the dimensionality of the design space by reducing the number of design variables. Another type of design space reduction seeks to reduce the size of the design space while assuming that the dimensionality cannot be further reduced. Such a strategy mainly depends on the experience of engineers and is essentially an empirical method. Since the combined range of the design variables dictates the size of the design space, the larger the range for each design variable, the larger the design space; and the larger the design space, the more difficult and costly it is to construct accurate metamodels. Engineers tend to give very conservative upper and lower bounds for design variables at the initial stage of setting up a design optimization problem. This is often due to a lack of sufficient knowledge of function behavior and of interactions between objective and constraint functions in the early stages of problem definition. In fields with enough accumulated experience, this method is very effective and easy to employ. Within this branch of research, several methods have been introduced in the literature. Chen, Allen, Schrage, and Mistree (1997) developed heuristics to lead the surface refinement to a smaller design space. Wujek and Renaud (1998) and Wujek et al. (1995) compared several move-limit strategies that focus on controlling the function approximation in a more meaningful design space. Toropov and his co-workers advocated a sequential metamodeling approach using move limits (Toropov, van Keulen, Markine, & de Doer, 1996), and trust regions were also implemented in this vein by Alexandrov, Dennis, Lewis, and Torczon (1998) and Alexandrov (1998). More recently, an adaptive experimental strategy for reducing the number of sample points needed to maintain accuracy of second-order response surface approximations during optimization was presented by Pérez, Renaud, and Watson (2002). A small response surface design for metamodel estimation was developed by Batmaz and Tunali (2003). Osio and Amon (1996) developed a multi-stage Kriging strategy to sequentially update and improve the accuracy of metamodels as additional sample points are obtained. Schonlau, Welch, and Jones (1998) described a sequential algorithm to balance local and global searches using metamodels during constrained optimization. Gary Wang et al. (2004) proposed fuzzy-clustering-based hierarchical metamodeling for design space reduction and optimization; this methodology can intuitively capture promising design regions and can efficiently identify the global or near-global design optimum in the presence of highly nonlinear constraints. Wang and Li developed intelligent sampling schemes based on "nature based" methods such as the particle swarm method (Hu, Li, & Zhong, 2008) and boundary-based methods (Wang, Li, Li, & Zhong, 2007).

In this paper, we present a metamodeling methodology based on a multi-level fuzzy-clustering strategy for nonlinear optimization problems. First, the optimization domain is partitioned into several clusters according to the design variables by FCM, and then some of the clusters are discarded according to the optimization objective. The remaining clusters are involved in building patch metamodels for optimization. The major advantage of the proposed strategy is that it searches a sub-domain containing the optimal solution; because the interval of the sub-domain is limited, the corresponding metamodel is easy to approximate. Since the quality of the optimal result is determined by the accuracy of the metamodel, optimization test problems serve as benchmarks for the proposed strategy. Several highly nonlinear multi-hump functions are employed to validate the proposed methodology. Finally, practical engineering problems are also optimized by the advised methodology, and satisfactory results are obtained.

2. Kriging modeling approach

Computer analysis codes are deterministic and thus not subject to measurement error, since the same input (experimental points) always yields the same output (the response matrix). Hence, the usual measures of uncertainty derived from least-squares residuals have no obvious meaning, and some statisticians (Peplinski, Koch, & Allen, 1996; Simpson, Peplinski, Koch, & Allen, 2001; Welch et al., 1992) have argued against using them for a deterministic analysis. Consequently, the following model was suggested by Sacks, Schiller, and Welch (1989) to model the deterministic computer response Y(x) as

506

W. Hu et al. / Computers & Industrial Engineering 55 (2008) 503–532

Y(x) = f^T(x)\beta + Z(x)   (1)

with f(x) = [f_1(x), ..., f_m(x)]^T and \beta = [\beta_1, ..., \beta_m]^T, where m denotes the number of basis functions in the regression model, Y(x) is the unknown function of interest, f(x) is a known function of x, β is the regression coefficient vector, and Z(x) is assumed to be a Gaussian stationary process with zero mean and covariance

cov(x_i, x_j) = \sigma^2 R(x_i, x_j),  i, j = 1, 2, ..., n   (2)

where n denotes the number of experiments (sampled points), R(·,·) is a correlation function, σ² is the process variance, x_i and x_j are two different points, and cov(x_i, x_j) is their covariance. The term f^T(x)β in Eq. (1) represents a global model of the design space, similar to the polynomial model in RSM. The second part of Eq. (1) models the deviation from f^T(x)β so that the whole model interpolates the experimental points generated according to a design of experiments approach.

The construction of a Kriging model can be explained as follows. For a certain experimental design, such as Bucher's design or a central composite design, a set of experimental points is generated as

x = \{x_1, x_2, ..., x_n\},  x_i \in R^p   (3)

where p is the number of design variables; thus x is an n × p design matrix. The resulting outputs from the function to be modeled are given as

Y = \{y_1(x), y_2(x), ..., y_n(x)\}   (4)

From these outputs the unknown parameters β and σ² can be estimated:

\hat{\beta} = (F^T R^{-1} F)^{-1} F^T R^{-1} Y   (5)

\hat{\sigma}^2 = \frac{1}{n} (Y - F\hat{\beta})^T R^{-1} (Y - F\hat{\beta})   (6)

where F is a vector containing the value of f(x) evaluated at each of the experimental points and R is the correlation matrix, composed of the correlation function evaluated at each possible combination of the experimental points:

R = \begin{bmatrix} R(x_1, x_1) & \cdots & R(x_1, x_n) \\ \vdots & \ddots & \vdots \\ R(x_n, x_1) & \cdots & R(x_n, x_n) \end{bmatrix}   (7)

However, before calculating the estimates of β and σ², the unknown parameters of the correlation function have to be estimated. Using maximum likelihood estimation, they result from the minimization of (Welch et al., 1992)

\frac{1}{2}\left(n \ln \hat{\sigma}^2 + \ln \det R\right)   (8)

which is a function of only the correlation parameters and the response data. Once these parameters are estimated, the best linear unbiased prediction of the response is

\hat{y}(x) = f^T(x)\hat{\beta} + r^T(x)\hat{a}   (9)

where the column vector \hat{a} is defined by

\hat{a} = R^{-1}(Y - F\hat{\beta})   (10)

and r^T(x) is a vector representing the correlation between an arbitrary point x and all known experimental points:

r^T(x) = \{R(x, x_1), R(x, x_2), ..., R(x, x_n)\}   (11)

The second part r^T(x)\hat{a} of Eq. (9) is in fact an interpolation of the residuals of the regression model f^T(x)β. Thus, all response data are exactly predicted.


The stochastic process part of the model given in Eq. (9) includes a correlation function, which affects the smoothness of the model (Martin & Simpson, 2003). In the literature following the method of Sacks et al. (1989), a correlation function of the type R(x_i, x_j) = R(x_i − x_j) is generally selected, and a product correlation rule is used for mathematical convenience:

R(x_i, x_j) = \prod_{k=1}^{p} R(x_i^k - x_j^k)   (12)

where x_i^k and x_j^k denote the kth components of the experimental points x_i and x_j. The correlation function R(x_i, x_j) is specified by the user, and several correlation functions exist in the literature. Among them, the following function is most widely used:

R(x_i, x_j) = \exp\left( -\sum_{k=1}^{p} \theta_k \, |x_i^k - x_j^k|^2 \right)   (13)

which permits control of both the range of influence and the smoothness of the approximation function. In this paper, the correlation function given in Eq. (13) is used for building metamodels.
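As a concrete illustration of Eqs. (5)–(13), the sketch below builds a simple ordinary-Kriging-style predictor with a constant regression term and the Gaussian correlation of Eq. (13). It is a minimal illustration written for this edit, not the authors' code: the fixed correlation parameter theta, the nugget regularization, the test function, and all helper names are assumptions, and in practice theta would be estimated by minimizing Eq. (8).

```python
import numpy as np

def corr_matrix(X1, X2, theta):
    # Gaussian correlation of Eq. (13): R = exp(-sum_k theta_k |x1_k - x2_k|^2)
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2 * theta).sum(axis=-1)
    return np.exp(-d2)

def kriging_fit(X, y, theta, nugget=1e-10):
    n = len(y)
    R = corr_matrix(X, X, theta) + nugget * np.eye(n)   # Eq. (7), regularized
    F = np.ones((n, 1))                                  # constant regression basis f(x) = 1
    Ri = np.linalg.inv(R)
    beta = np.linalg.solve(F.T @ Ri @ F, F.T @ Ri @ y)   # Eq. (5)
    a = Ri @ (y - F @ beta)                              # Eq. (10)
    sigma2 = (y - F @ beta) @ Ri @ (y - F @ beta) / n    # Eq. (6)
    return beta, a, sigma2

def kriging_predict(Xnew, X, beta, a, theta):
    r = corr_matrix(Xnew, X, theta)                      # Eq. (11)
    return (np.ones((len(Xnew), 1)) @ beta) + r @ a      # Eq. (9)

# Tiny usage example on an assumed 1D test function
rng = np.random.default_rng(1)
X = rng.uniform(-2.0, 2.0, size=(15, 1))
y = np.sin(3.0 * X[:, 0]) + 0.3 * X[:, 0] ** 2
theta = np.array([2.0])                                  # assumed, not MLE-optimized
beta, a, sigma2 = kriging_fit(X, y, theta)
print(kriging_predict(np.array([[0.5]]), X, beta, a, theta))
```

Because r^T(x)â interpolates the residuals, the predictor reproduces the training responses exactly, which is the interpolation property noted at the end of this section.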

3. Fuzzy C-means method (FCM)

Clustering of numerical data forms the basis of many classification and system modeling algorithms. The purpose of clustering is to identify natural groupings of data in a large data set so as to produce a concise representation of a system's behavior (Mathworks, 2001). Clustering can also be considered the most important unsupervised learning problem, because no information is provided about the "right answer" for any of the objects; it classifies a set of observations and finds a reasonable structure in the data set. Fuzzy C-means is a data clustering technique originally introduced by Bezdek (1981) as an improvement on earlier clustering methods. It provides a method for grouping data points that populate some multidimensional space into a specified number of clusters. The algorithm is based on the minimization of the overall dissimilarity of all cluster data. When a cluster number c is selected, the algorithm automatically identifies the cluster centers and distributes all given data into the appropriate clusters. For a specified number of sub-sets c, the overall dissimilarity between the data points and the fuzzy prototypes can be represented as

J(U, D) = \sum_{k=1}^{n} \sum_{i=1}^{c} (u_{ik})^m \, \|x_k - d_i\|^2   (14)

where U is a fuzzy c-partition of the n data points x_k (k = 1, 2, ..., n; x_k ∈ R^p), D = (d_1, d_2, ..., d_c) with d_i the ith cluster center (1 ≤ i ≤ c), m is a constant greater than 1 (typically m = 3), and u_{ik} is the degree of membership of the kth data point in the ith cluster:

u_{ik} = \frac{1}{\sum_{j=1}^{c} \left( \|x_k - d_i\| / \|x_k - d_j\| \right)^{2/(m-1)}}   (15)

and the cluster centers d_j are given by

d_j = \frac{\sum_{i=1}^{N} u_{ij}^m \, x_i}{\sum_{i=1}^{N} u_{ij}^m}   (16)

The iteration stops when \max_{ij} |u_{ij}^{(t+1)} - u_{ij}^{(t)}| < \varepsilon, where ε ∈ [0, 1] is the termination criterion. The optimal solution U* and D* is obtained from

\min \{ J(U, D) \}   (17)


The FCM clustering algorithm always converges to a strict local minimum of J. Given a group of data points and the number of clusters, the cluster centers D* and the degree of membership of each point in each cluster u_{ik} (stored in U*) are computed. It is easy to show that the memberships satisfy

u_{ik} \in [0, 1]  and  \sum_{i=1}^{c} u_{ik} = 1   (18)

The fuzzy-clustering method is used in this work because of its simplicity, robustness, and convenience. The suggested methodology does not dictate the exclusive use of fuzzy clustering, and other clustering methods may be equally acceptable; for a review of other clustering methods, please refer to Bezdek (1981).
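To make Eqs. (14)–(18) concrete, the following sketch implements the basic FCM update loop. It is a minimal illustration, not the toolbox implementation cited above; the random test data, the fuzziness exponent m, the tolerance, and the function name are assumptions.

```python
import numpy as np

def fcm(X, c, m=2.0, eps=1e-5, max_iter=200, seed=0):
    """Basic fuzzy C-means: returns cluster centers D (c, p) and memberships U (c, n)."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    U = rng.random((c, n))
    U /= U.sum(axis=0)                                        # each point's memberships sum to 1, Eq. (18)
    for _ in range(max_iter):
        Um = U ** m
        D = (Um @ X) / Um.sum(axis=1, keepdims=True)          # cluster centers, Eq. (16)
        dist = np.linalg.norm(X[None, :, :] - D[:, None, :], axis=-1) + 1e-12
        U_new = 1.0 / (dist ** (2.0 / (m - 1.0)))
        U_new /= U_new.sum(axis=0)                            # membership update, Eq. (15)
        if np.max(np.abs(U_new - U)) < eps:                   # termination criterion
            U = U_new
            break
        U = U_new
    return D, U

# Usage on two assumed Gaussian blobs
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 0.3, (50, 2)), rng.normal(2.0, 0.3, (50, 2))])
D, U = fcm(X, c=2)
print(D)
```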

4. Multi-level methodology based on fuzzy C-means clustering

4.1. Fuzzy-clustering-based hierarchical metamodeling

Gary Wang and Simpson proposed a fuzzy-clustering-based hierarchical metamodeling method (Gary Wang et al., 2004). The methodology is based on the observation that design engineers tend to give very conservative lower and upper bounds for design variables at the initial stage of setting up a design optimization problem. The key idea is to limit each design variable to a small interval so as to avoid multi-hump behavior, so that the metamodel is easy to construct. To overcome the bottleneck of this algorithm, they proposed a hierarchical metamodeling methodology by which more insight into the design problem can be gained in the initial design space; more details can be found in Gary Wang et al. (2004). As a test case, the six-hump camel-back (SC) function given by Eq. (19) is a well-known example for testing global optimization algorithms (Branin & Hoo, 1972):

f_{sc}(x) = 4x_1^2 - 2.1x_1^4 + \tfrac{1}{3}x_1^6 + x_1 x_2 - 4x_2^2 + 4x_2^4,  x_{1,2} \in [-2, 2]   (19)

The first phase of the above methodology builds a metamodel by RSM, and all subsequent steps are based on this RSM metamodel. This brings two difficulties: (1) how many sample points need to be distributed, and (2) how good are the estimates produced by RSM? It is difficult to give definite answers to either question. The RSM function often cannot represent an accurate global approximation even when a large number of points and a high-order approximation are employed, because some objectives simply cannot be represented in this way. So a necessary requirement of this method is that the objective problem can at least be expressed approximately by a mathematical function. Reconsider the nonlinear function shown in Fig. 2. It is easy to see that the SC function can be approximated given enough points: although the SC function is nonlinear, the amplitude of its fluctuation can be approximated or interpolated well by second-order polynomial regression or Kriging methods.

There are several aspects to take into account when attempting to apply metamodel-based optimization to crashworthiness design and metal forming problems. First, it is common that the optimization is carried out with a sub-set of conditions, e.g., load cases, so the optimization will render a sub-optimal solution. Second, it might be difficult to interpret a demand on the structure as a mathematical expression or a numerical value because of the high nonlinearity of the problems. Third, even if metamodeling methodologies can build efficient metamodels for highly nonlinear problems, a large number of forward computations must be performed first.


Fig. 2. Six-hump camel-back (SC) function.


Several optimization algorithms based on metamodel methodologies have been applied to highly nonlinear problems such as crashworthiness and metal forming (Etman, Adriaens, van Slagmaat, & Schoofs, 1996; Etman, 1997; Schramm & Thomas, 1998; Schramm, 2001; Sobieszczanski-Sobieski, Kodiyalam, & Yang, 2000; Yamazaki & Han, 1998; Yang, Gu, Tho, & Sobieszczanski-Sobieski, 2001; Ohata, Nakamura, Katayama, & Nakamachi, 2003; Lei, Hwang, & Kang, 2000; Guo, Batoz, Bouabdallah, & Naceur, 2000; Kurtaran, Eskandarian, Marzougui, & Bedewi, 2002). Almost all of these works share the following characteristics:

(a) only a small number of parameters is considered;
(b) the ranges of the design parameters must be determined rationally beforehand; and
(c) a large amount of computation is required.

For these reasons, the efficiency and accuracy of metamodels for highly nonlinear problems are difficult to guarantee; this is the bottleneck of optimization based on metamodel methodologies.

4.2. Proposed methodology

The hierarchical clustering methodology proposed by Gary Wang et al. (2004) clusters the region solely according to the value of the objective function. As a consequence, a rigorous constraint must be defined and an RSM function is needed to construct the metamodel over the global domain. To address this defect, this paper proposes a fuzzy C-means multi-level clustering methodology based on both the objective function and the design variables. Consider the highly nonlinear Rastrigin function. For two independent variables, the Rastrigin function is defined as

f(x_1, x_2) = 20 + x_1^2 + x_2^2 - 10(\cos 2\pi x_1 + \cos 2\pi x_2),  x_{1,2} \in [-5, 5]   (20)

The Rastrigin function (Törn & Zilinskas, 1989) has many local minima – the "valleys" shown in Fig. 3. However, the function has just one global minimum, at the point [0, 0] in the x–y plane, where the value of the function is 0. At any local minimum other than [0, 0], the value of the Rastrigin function is greater than 0, and the farther the local minimum is from the origin, the larger the function value at that point. Obviously, the Rastrigin function is difficult to approximate by RSM except in a small region such as [-1, 1]. When FCM is applied directly, the clustering result shown in Fig. 4 is obtained: none of the clusters can be approximated by RSM, which also shows that the hierarchical clustering methodology proposed by Wang cannot solve such multi-hump problems. To illustrate this kind of problem, a one-dimensional example is presented in Fig. 5. According to the hierarchical clustering methodology, the points of the domain are clustered into two groups and the gray points above the cut line are eliminated. If only the remaining points are used to approximate the function, the approximate curve (the bold dotted curve in Fig. 5) breaks the continuity of the original function and yields wrong optimization results.

For such problems, we propose a multi-level FCM method based on patches, as shown in Fig. 6. Only the design variables are considered in the first clustering step. For instance, as shown in Fig. 6a, design variables x_1 and x_2 are partitioned into Clusters I and II. The points of both clusters naturally generate two surfaces; Fig. 6b shows the surfaces mapped to spatial coordinates. A difficulty of FCM is that the number of clusters is required as an input; subtractive clustering can estimate the number of clusters in a point set provided the radius of influence of each cluster is given, so the key factor is how to estimate the number of clusters. Thus, a saddle point strategy is suggested for determining the number of clusters in the domain. To illustrate the strategy, a one-dimensional problem is shown in Fig. 7. The proposed strategy first searches for saddle points. For instance, A, B, and C are initial samples in the domain; if the objective is to obtain the minimum in the domain, point B is a saddle point. In this phase, thresholds on the angles (angle 1 and angle 2 in Fig. 7) must be determined, and how to determine the threshold values is an important factor in this strategy. If the threshold angle is too small, a large number of clusters is produced; if it is too large, only a small number of clusters is generated. In the first case, much more time is required to construct each metamodel from only a few points, and some clusters could in fact be merged. In the second case, some of the saddle points may be missed, so the accuracy of the metamodel in each cluster often cannot be guaranteed in practice.
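The claim that a quadratic response surface represents the Rastrigin function only in a small neighborhood of the origin can be checked numerically. The sketch below is an illustration added for this edit, not the authors' experiment: it fits a full second-order polynomial by least squares over the whole domain and over an assumed small sub-region (here [-0.4, 0.4]^2, an arbitrary choice inside the central basin) and compares the residuals; sample counts and helper names are also assumptions.

```python
import numpy as np

def rastrigin(x1, x2):
    # Eq. (20)
    return 20 + x1**2 + x2**2 - 10 * (np.cos(2 * np.pi * x1) + np.cos(2 * np.pi * x2))

def quad_fit_rms_error(lo, hi, n=400, seed=0):
    """Fit a quadratic surface on [lo, hi]^2 by least squares and return the RMS residual."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(lo, hi, size=(n, 2))
    y = rastrigin(X[:, 0], X[:, 1])
    A = np.column_stack([np.ones(n), X[:, 0], X[:, 1],
                         X[:, 0]**2, X[:, 1]**2, X[:, 0] * X[:, 1]])
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    return np.sqrt(np.mean((A @ beta - y) ** 2))

print("RMS error on [-5, 5]^2:    ", quad_fit_rms_error(-5.0, 5.0))
print("RMS error on [-0.4, 0.4]^2:", quad_fit_rms_error(-0.4, 0.4))
```

The large-domain fit is dominated by the cosine ripples and remains poor, while the fit restricted to a single basin is far more accurate, which motivates the space reduction pursued below.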


Fig. 3. An illustration of Rastrigin function in 3D and 2D style.


Fig. 4. Clusters of Rastrigin function by FCM.

For the above reasons, the threshold values must guarantee the construction of a suitable number of sufficiently accurate metamodels. They should therefore be flexible and adjusted from case to case; that is, the threshold value is case dependent and sensitive to the degree of nonlinearity. For highly nonlinear solution domains, we suggest default values for the angle threshold selection. In order to estimate the threshold values and provide a clear guideline to users, a fuzzy-based strategy is implemented in this scheme. The proposed strategy is composed of two steps. In the first step, the angle between each pair of nearest samples in the design space is measured as Agl_i, i = 1, ..., n, where n denotes the number of samples. In the second step, FCM is applied to cluster the samples according to Agl_i; by default, three clusters are generated. The upper and lower bounds of each cluster are then easy to obtain as Thres^k_upper and Thres^k_lower, where k is the index of the cluster. In our self-developed metamodel program, the threshold range is thus divided into three intervals, [Thres^1_lower, Thres^1_upper], [Thres^2_lower, Thres^2_upper], and [Thres^3_lower, Thres^3_upper], and the angle of each sample is classified by the following rules:


Fig. 5. An illustration of wrong multi-hump function approximation by hierarchical clustering method.

Fig. 6. An illustration of multi-level FCM method for two design variables’ problems.


Fig. 7. An illustration of saddle points searching strategy.

Fig. 8. An illustration of trends of found saddle points of Rastrigin function in x1.

1. If the current angle lies in [Thres^1_lower, Thres^1_upper], the current point is not a saddle point.
2. If the current angle lies in [Thres^1_upper, Thres^2_upper], the value of the current angle is kept and the process continues until the next point is found. If the next angle also lies in [Thres^1_upper, Thres^2_upper], the current point is not regarded as a saddle point; otherwise it is a saddle point.
3. If the current angle lies in [Thres^2_upper, Thres^3_upper], the current point is a saddle point.

The saddle point searching strategy for one dimension is summarized as follows:

1. Sort the points of the domain according to the coordinate x.
2. Connect neighboring samples and calculate the angle and distance between the two points (angle 1 and angle 2 in Fig. 7).
3. In this step, one of two strategies can be chosen (in this study, both are applied for the benchmarks):


(a) If both angles are beyond the threshold and the distance is larger than 10% of the interval of the design variable, the current sample is marked as a saddle point.
(b) All distances between nearest samples are computed and their mean value is calculated; if the current distance is larger than the mean, the current sample is marked as a saddle point.

4. Number of clusters = number of clusters + 1.
5. Go to step 2 until all points have been processed.

For the Rastrigin function, the saddle points found in this way are shown in Fig. 8. This process is called design variables' clustering and constitutes the 1st level of the proposed method. In strategy 3(a), the distance criterion for a new saddle point can also be set freely: within the limited range of the design variable constraints, a 10% span of the design space can, in our experience, be approximated well. The user may set a value above 10% when the surface or curve is smoother and there are enough samples in the design parameter domain; on the contrary, if the approximation is not good enough, a distance below 10% should be set, and correspondingly more clusters will be generated. The user can also apply strategy 3(b) for controlling the number of saddle points, which may give the same results as the previous strategy.

For the Rastrigin function, 121 saddle points are found by the proposed strategy in the interval [-5, 5], so the Rastrigin function would be divided into 121 clusters. Subsequently, a metamodel must be built in each cluster; in this case, 121 metamodels would need to be constructed, which is a very complicated and time-consuming procedure in practice and obviously not the intended result. In order to save computation, clusters whose results are poor compared with the mean of the problem should be neglected, so a threshold value for the objective needs to be defined. For instance, setting a high threshold value during minimization can immediately eliminate points that are far from the optimum or are infeasible if a penalty function is combined with the objective function to ensure constraint satisfaction. As seen in Fig. 9, the threshold can be any value between its upper and lower limits. The threshold value is obtained from the maximum and minimum of the cluster-averaged function values and can be expressed as

f_{Threshold} = \frac{\bar{f}_{Max} + \bar{f}_{Min}}{2}(1 + \alpha),  \alpha \in [-0.5, 0.5]   (21)

where \bar{f}_{Max} and \bar{f}_{Min} denote the maximum and minimum of the average objective values of the clusters and α is a coefficient that controls the threshold value. With regard to Fig. 9, α is set to zero; clusters I, II, and V are eliminated, and only clusters III and IV are involved in building metamodels. This phase is called filtering clusters and constitutes the 2nd level of the suggested method. A simplified sketch of the saddle point search and cluster filtering is given below.
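The following is a minimal one-dimensional sketch of the 1st and 2nd levels described above. It is an illustration written for this edit, not the authors' program: the angle computation, the fixed angle threshold (used instead of the FCM-derived intervals), the 10% distance rule, and Eq. (21) with α = 0 are implemented in simplified form, and all function names and the example data are assumptions.

```python
import numpy as np

def find_saddle_points(x, y, angle_threshold_deg=150.0, min_dist_frac=0.10):
    """Strategy 3(a): mark sample i as a saddle point if the polyline angle at i
    is sharper than the threshold and its neighbors are far enough apart."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    span = x[-1] - x[0]
    saddles = []
    for i in range(1, len(x) - 1):
        v1 = np.array([x[i - 1] - x[i], y[i - 1] - y[i]])
        v2 = np.array([x[i + 1] - x[i], y[i + 1] - y[i]])
        cosang = v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2))
        angle = np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))
        if angle < angle_threshold_deg and (x[i + 1] - x[i - 1]) > min_dist_frac * span:
            saddles.append(x[i])
    return np.array(saddles)

def filter_clusters(cluster_means, alpha=0.0):
    """2nd level, Eq. (21): keep clusters whose mean objective is below the threshold."""
    f_thr = 0.5 * (max(cluster_means) + min(cluster_means)) * (1.0 + alpha)
    return [k for k, m in enumerate(cluster_means) if m <= f_thr]

# Usage on a coarse sampling of an assumed multi-hump 1D function
x = np.linspace(-5.0, 5.0, 41)
y = 10 + x**2 - 10 * np.cos(2 * np.pi * x)          # a 1D analogue of Eq. (20)
print(find_saddle_points(x, y))
print(filter_clusters([19.7, 20.6, 16.3, 21.2, 19.9]))
```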

Fig. 9. An illustration of filtering clusters screening.


The 3rd phase is to collect the best value from all involved patches. For this problem, after the saddle point searching method is applied to the Rastrigin function, five clusters are chosen to build metamodels and the five metamodels are used for optimization. The optimization result can be expressed as

f_{opt} = \min \{ f_{cluster}^{1}, f_{cluster}^{2}, ..., f_{cluster}^{n} \}   (22)

where n is the number of involved clusters. The method used for generating the metamodels deserves discussion. In this study, Kriging interpolation is adopted; of course, other methods such as RSM can also be applied for approximation. The essential role of the 1st level of the proposed method is to convert multi-hump problems into approximately convex problems. It must be noted that an approximately convex problem is not truly convex: "valleys" may still exist within a local patch, so Kriging interpolation is a better choice for such nonlinear problems. Additionally, regarding convergence compared with methods such as RSM, CG, etc., Kriging is the fastest algorithm, as mentioned in Etman et al. (1996).

4.3. Summary of proposed methodology

Based on the above discussion, the proposed multi-level clustering methodology is summarized in Fig. 10. The first level of the proposed method applies the saddle point searching method and FCM; some clusters are then discarded because of their poor mean objective values, and the remaining clusters are used to generate metamodels on the corresponding patches.

Fig. 10. Flowchart and basic frame of proposed multi-level clustering method.

5. Numerical examples

5.1. One-dimensional function optimization

The one-dimensional test function is given by

\min f_1(x) = \sum_{i=1}^{n} \left( -x_i \sin \sqrt{|x_i|} \right)   (23)


where x_i ∈ [-500, 500] and n = 10,000; this function is employed first to test the proposed scheme.
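Before the level-by-level results below, here is a compact, self-contained sketch of how the three levels could be chained on the one-dimensional form of Eq. (23). It is an illustration written for this edit, not the authors' implementation: saddle points are detected simply as interior local maxima of a coarse sampling, cluster filtering follows Eq. (21) with α = 0, and a dense evaluation of the true function inside each kept cluster merely stands in for the Kriging metamodel of the 3rd level.

```python
import numpy as np

def f1(x):
    # Eq. (23) in one dimension
    return -x * np.sin(np.sqrt(np.abs(x)))

def multilevel_optimize(f, lo, hi, n_samples=41, keep_alpha=0.0):
    # Level 1: coarse sampling; split the domain at interior local maxima ("saddle" points)
    xs = np.linspace(lo, hi, n_samples)
    ys = f(xs)
    saddle_idx = [i for i in range(1, n_samples - 1)
                  if ys[i] > ys[i - 1] and ys[i] > ys[i + 1]]
    edges = np.concatenate([[lo], xs[saddle_idx], [hi]])
    clusters = list(zip(edges[:-1], edges[1:]))

    # Level 2: filter clusters by their mean objective value, Eq. (21) with alpha = keep_alpha
    means = [f(np.linspace(a, b, 50)).mean() for a, b in clusters]
    f_thr = 0.5 * (max(means) + min(means)) * (1.0 + keep_alpha)
    kept = [c for c, m in zip(clusters, means) if m <= f_thr]

    # Level 3: optimize locally in each kept cluster (dense evaluation stands in for the
    # Kriging metamodel here) and collect the best result, Eq. (22)
    best = []
    for a, b in kept:
        xx = np.linspace(a, b, 2000)
        yy = f(xx)
        best.append((yy.min(), xx[yy.argmin()]))
    return min(best)

print(multilevel_optimize(f1, -500.0, 500.0))
```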

Level 1: The saddle point searching strategy described in Section 4.2 is performed. In Step 3 both strategies are implemented and give the same results: seven saddle points are found in the interval [-500, 500], so this function is partitioned into seven clusters.

Level 2: The design space is reduced to [-430, -200] and [300, 500], and two clusters are chosen to build metamodels. The surfaces generated by the Kriging method are presented in Fig. 11, and a summary of the cluster properties for the 1D function is listed in Table 1.

Level 3: According to Eq. (22), the optimization result is f(420) = -418,860.

5.2. Rastrigin function optimization

As mentioned in the previous section, we apply the multi-level FCM method to solve this problem.

Level 1: 121 saddle points are found in the interval [-5, 5] by the saddle point searching strategy with Step 3(a), and 52 saddle points are obtained with Step 3(b). Thus, the Rastrigin function is partitioned into 121 or 52 clusters.

Level 2: The design space is reduced to [-1, 1] based on the 121 and 52 clusters; five clusters are finally chosen to build metamodels. The surfaces generated by the Kriging method are presented in Figs. 12 and 13, and a summary of the cluster properties for the Rastrigin function is listed in Table 2.

Level 3: According to Eq. (22), the optimization result is f(0, 0) = 0.00.

5.3. Rosenbrock function optimization

The two-dimensional Rosenbrock function (Rosenbrock, 1960) has two design variables and is given by

Fig. 11. An illustration of metamodels for 1D problem according to each cluster based on Kriging method.


Table 1
Summary of the cluster properties for the 1D function

Item               Cluster 1        Cluster 2
Center             (-430, -200)     (300, 500)
Number of points   12               11
Minimum            -293,470         -418,860
Mean               -44,663          -75,412
Range x1           [-430, -200]     [300, 500]
Optimum at         (-300)           (420)

f(x_1, x_2) = (1 - x_1)^2 + 100(x_2 - x_1^2)^2   (24)

This function is often used as a test problem for optimization algorithms (Germundsson, 2000). It has a global minimum of 0 at the point (1, 1), as shown in Fig. 14.
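As a quick, self-contained check of Eq. (24) and its reported optimum (added for this edit; the grid resolution is an arbitrary choice), the banana-shaped valley can be evaluated directly:

```python
import numpy as np

def rosenbrock(x1, x2):
    # Eq. (24)
    return (1 - x1) ** 2 + 100 * (x2 - x1 ** 2) ** 2

print(rosenbrock(1.0, 1.0))          # 0.0 at the global minimum (1, 1)

# Coarse grid search over [-5, 5]^2 as a sanity check
g = np.linspace(-5, 5, 1001)
X1, X2 = np.meshgrid(g, g)
F = rosenbrock(X1, X2)
i, j = np.unravel_index(F.argmin(), F.shape)
print(F[i, j], X1[i, j], X2[i, j])
```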

Level 1: Three saddle points are found in the interval [-5, 5], so the Rosenbrock function is partitioned into three clusters, as shown in Fig. 15.

Level 2: Because the number of clusters is small, this step can be skipped. The surfaces generated by the Kriging method are presented in Fig. 16, and a summary of the cluster properties for the Rosenbrock function is listed in Table 3.

Level 3: According to Eq. (22), the optimization result is f(1, 1) = 0.00.

5.4. Summary

In this section, different types of nonlinear functions were used to validate the proposed methodology, and all of them were solved successfully by the introduced strategy. In contrast, the functions in Sections 5.1 and 5.2 would break down if treated with the scheme proposed in Gary Wang et al. (2004), because of the large variable ranges involved. This demonstrates that the proposed multi-level strategy has a wider range of applicability and is more robust than the hierarchical clustering methodology (Gary Wang et al., 2004).

Fig. 12. An illustration of reduced clusters for building metamodel.


Fig. 13. An illustration of metamodels according to each cluster based on Kriging method.

6. Practical engineering optimization problems with high nonlinearity

6.1. Problem 1: crashworthiness for energy absorption

A square tube is crushed by a moving rigid wall with an initial velocity of 10 m/s. The initial geometry of the tube is shown in Fig. 17.


Table 2
Summary of the cluster properties for the Rastrigin function

Item               Cluster 1          Cluster 2          Cluster 3          Cluster 4          Cluster 5
Center             (0.6057, 0.6085)   (0.6057, 0.6085)   (0.0117, 0.0000)   (0.6063, 0.6036)   (0.6063, 0.6036)
Number of points   89                 89                 85                 85                 93
Minimum            1.9198             1.9198             0.0000             1.9198             1.9198
Mean               19.7810            20.6120            16.3002            21.2897            19.9927
Range x1           [0, 1]             [0, 1]             [-0.6, 0.6]        [-1, 0.1]          [-1, 0]
Range x2           [0, 1]             [-1, 0]            [-0.6, 0.6]        [0, 1]             [-1, 0]
Optimum at         (0, 1.0)           (0, 1.0)           (0, 0)             (1, 0)             (0, 1)

The angle between the corners is 110°, the height of the tube is 272.483 mm, the distance between the concave sections is 23.2 mm, and the thickness of the shell is 2.0 mm. The problem, although simple, presents some challenging features encountered in vehicle crashes, such as nonlinearity, buckling, and dynamics. In order to find the optimal parameters such as the angle, length, and thickness of the tube, different criteria can be considered, such as minimum rigid wall force, higher energy absorption capability, and stable crash behavior, also with regard to the different impact conditions considered in this work. The optimization problem in standard mathematical format is:

Maximize: internal energy (absorption behavior of the tube).

Constraints:

Rigid wall force: F_rig ≤ 40 kN   (25)

Design range:

100 ≤ Angle ≤ 120   (26.1)
20.2 ≤ d ≤ 26.2   (26.2)
1.8 ≤ t ≤ 2.2   (26.3)

where the design variables Angle and d represent the angle and the distance of the tube, respectively, and t is the thickness of the shell. For this problem, HyperMorph (Hyperworks Help, 2004) is used to alter the model in useful, logical, and intuitive ways while keeping mesh distortion to a minimum, and the FEM model with morphing is shown in Fig. 17b. The auto-shape feature of HyperMorph is applied to alter the angle and distance design variables. The initial change of angle and distance is 10° and 3 mm, so the constraints on the morphed variables are converted to the normalized form of Eqs. (27):

-1 ≤ d ≤ 1   (27.1)
-1 ≤ t ≤ 1   (27.2)

Latin HyperCube design randomly samples the entire design space, which is broken down into equal-probability regions. Latin HyperCube design can be viewed as a stratified Monte-Carlo sampling in which the pairwise correlations can be minimized to a small value or set to a desired value. Latin HyperCube design is especially useful for exploring the interior of the parameter space and for limiting the experiment to a fixed number of runs (36 in this example). HyperStudy attempts to bring the pairwise correlation values close to zero. According to the first step of the proposed method, a Latin HyperCube with 36 runs is employed to generate the initial samples. The distributions of the design variable samples are illustrated in Fig. 18. The samples are then partitioned into four clusters, as shown in Fig. 19, and clusters I and II are selected as patches to build the metamodel. The correlation function R of the Kriging interpolation is chosen according to the distribution of the response; for this problem, the Gaussian function is employed because of the approximately normal distribution shown in Fig. 20. The patch metamodel of each cluster is built with the Kriging method as shown in Fig. 21. According to Eq. (22), the optimized design variables (2.012, 0.2204, and 0.9308) are obtained, and the corresponding physical values are (2.012 mm, 112.204°, and 26.014 mm); the internal energy and rigid wall force estimated by the Kriging metamodel are 1879.178 kJ and 37.978 kN. Finally, the optimized values are applied to the FEM model, and the corresponding simulation results are 1812.561 kJ and 37.612 kN.


Fig. 14. An illustration of Rosenbrock function in 3D and 2D style.


Fig. 15. An illustration of the clusters obtained by FCM.

6.2. Problem 2: topological optimization of the initial blank shape

This section deals with the determination of the optimal shape of the initial blank. Fig. 22 shows the initial FEM model of the outer flank of a vehicle forming problem. The material data are as follows: Young's modulus E = 206,000 MPa, initial thickness h_0 = 1.00 mm, Lankford coefficient r̄ = 1.77, friction coefficient ν = 0.144, and stress–strain curve σ = 567.29 (ε_p + 0.007127)^0.2637 MPa. The elements under the blank-holder are considered as design variables. The objective of the optimization is to search for the best shape under the blank-holder that allows a drawn part to be obtained while avoiding defects such as rupture or wrinkles. In order to define the design variables, the initial blank shape is taken as a rectangle matching the boundary of the blank-holder. The sheet forming process is simulated with LS-DYNA, and the final deformation is obtained as shown in Fig. 23a. Fourteen key points (nodes) are selected for defining the design variables, as presented in Fig. 23b. The key points inside the blank are fixed and the others may move in a fixed direction, so the distances between connected key points are defined as design variables (D1–D7), as shown in Fig. 23b. The constraints on the design variables are defined as


Fig. 16. An illustration of metamodels according to each cluster based on Kriging method.

Table 3
Summary of the cluster properties for the Rosenbrock function

Item               Cluster 1            Cluster 2            Cluster 3
Center             (0.9344, 0.9438)     (0.3925, 1.1797)     (1.1826, 0.3748)
Number of points   510                  576                  595
Minimum            1.30                 0.00                 1.000
Mean               712.3747             143.2447             648.9756
Range x1           [0.4, 2]             [-1.9, 2]            [-2, 0]
Range x2           [-2, 0.4]            [0, 2]               [-2, 2]
Optimum at         (0.0002, 0.0003)     (1, 1)               (0.0012, 0.0012)

0 < D1 < 683,  0 < D2 < 410,  0 < D3 < 1105,  0 < D4 < 680,  0 < D5 < 440,  0 < D6 < 996,  0 < D7 < 638   (28)

The first objective function of the sheet forming problem is based on the thickness variation between the initial and final states and is given by (Hyperworks Help, 2004):


Fig. 17. FEM model of tube impacting rigid wall.

f_h(x) = \left( \sum_{e=1}^{n_e} f_h^e(x) \right)^{1/n}  with  f_h^e(x) = \left( \frac{h_e - h_0}{h_0} \right)^n   (29)

where h_0 and h_e are the initial and final thicknesses of element e (their ratio is the principal stretch along the thickness direction) and the sum runs over all n_e elements of the blank. The coefficient n = 2, 4, 6, ... is introduced to emphasize the extremes of the objective function. Minimizing the objective function of Eq. (29) attenuates the thickness variations.


Fig. 18. An illustration of the distribution of the design variable samples.

Fig. 19. An illustration of clusters by FCM in crashworthiness problem.

It therefore helps improve the workpiece feasibility, because in practice rupture is always preceded by strong thinning and wrinkling is always preceded by strong thickening. In order to evaluate the possibility of wrinkling, cracking, etc., the strains in the formed component are analyzed and compared against the forming limit curve (FLC), which defines the second objective function. This curve is extracted from biaxial strain tests, for example via the Erichsen test, in which the test specimen is drawn until fracture or diffuse necking. Following Jakumeit, Herdy, and Nitsche (2005), we define two FLCs in the principal plane of logarithmic strains.


Fig. 20. Empirical CDF of the internal energy with a fitted normal distribution (mean 1616, StDev 163.3, N = 36).

Fig. 21. An illustration of selected clusters for generating metamodel by Kriging interpolation.

\varepsilon_1 = \varphi_s(\varepsilon_2),  \varepsilon_1 = \varphi_w(\varepsilon_2)   (30)

where φ_s is the FLC that controls the rupture phenomenon and φ_w is the FLC controlling wrinkling. Both depend on the material and are generally given as knot data in tables. They are then converted into safety FLCs by the following equations:

\varepsilon_1 = \varphi_s(\varepsilon_2) - s,  \varepsilon_1 = \varphi_w(\varepsilon_2) + s   (31)

where s is a safety tolerance measured from the true FLC; this tolerance is constant during the optimization process and is defined by the engineers. The second objective function is then defined, for each element with principal strains (ε_1^e, ε_2^e), by the distance between the computed strain ε_1^e and the safety FLC at the given strain ε_2^e:

f_e = \left( \sum_{e=1}^{n_e} f_e^e \right)^{1/n}  with  f_e^e = \begin{cases} |\varepsilon_1^e - \varphi_s(\varepsilon_2^e)|^n & \text{for } \varepsilon_1^e > \varphi_s(\varepsilon_2^e) \\ |\varepsilon_1^e - \varphi_w(\varepsilon_2^e)|^n & \text{for } \varepsilon_1^e < \varphi_w(\varepsilon_2^e) \\ 0 & \text{otherwise} \end{cases}   (32)


Fig. 22. An illustration of the FEM model of the sheet forming problem in different views.

The combined multi-objective function of Eq. (33) is used in this paper:

f = C_1 |f_h| + C_2 |f_e|   (33)

in which C_1 and C_2 are weighting factors and f_h and f_e are the mean-square values obtained for each design variable condition; in this study, C_1 and C_2 are both set to 1. A Latin HyperCube with 49 runs is employed to generate the initial samples in this case. The distributions of the design variable samples are illustrated in Fig. 24. The initial samples are partitioned into four clusters according to the saddle searching strategy; because of the number of design variables, the clusters cannot be visualized directly in spatial coordinates after completing the forward problems. According to the proposed scheme, only clusters 1 and 4 (listed in Table 4) are involved in building the metamodel, and the optimum results are obtained as shown in Table 5. The predicted objective value is 62.179.
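The objective of Eqs. (29), (32), and (33) can be assembled as in the sketch below. It is an illustration written for this edit, not the LS-DYNA post-processing used by the authors: the element arrays, the placeholder FLC curves, the exponent n = 2, the safety tolerance, and the unit weights are all assumptions.

```python
import numpy as np

def thickness_objective(h, h0=1.0, n=2):
    # Eq. (29): accumulate ((h_e - h0)/h0)^n over all elements
    return (np.sum(((h - h0) / h0) ** n)) ** (1.0 / n)

def flc_objective(eps1, eps2, flc_s, flc_w, s=0.05, n=2):
    # Eq. (32) using the safety FLCs of Eq. (31); flc_s / flc_w map eps2 -> eps1 limits
    upper = flc_s(eps2) - s     # safety rupture limit
    lower = flc_w(eps2) + s     # safety wrinkling limit
    fe = np.where(eps1 > upper, np.abs(eps1 - upper) ** n,
         np.where(eps1 < lower, np.abs(eps1 - lower) ** n, 0.0))
    return np.sum(fe) ** (1.0 / n)

def combined_objective(h, eps1, eps2, flc_s, flc_w, c1=1.0, c2=1.0):
    # Eq. (33)
    return c1 * abs(thickness_objective(h)) + c2 * abs(flc_objective(eps1, eps2, flc_s, flc_w))

# Usage with made-up element results and simple placeholder FLC curves
rng = np.random.default_rng(0)
h = rng.normal(1.0, 0.05, 500)                 # final element thicknesses (mm)
eps2 = rng.normal(0.0, 0.1, 500)               # minor principal strains
eps1 = np.abs(rng.normal(0.2, 0.1, 500))       # major principal strains
flc_s = lambda e2: 0.35 + 0.5 * np.abs(e2)     # assumed rupture FLC
flc_w = lambda e2: -0.05 - 0.5 * np.abs(e2)    # assumed wrinkling FLC
print(combined_objective(h, eps1, eps2, flc_s, flc_w))
```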


Fig. 23. Shape optimization of addendum surface of drawn piece.


Fig. 24. Dot plot of design variables (D1, D2, D3, D4, D5, D6, D7).

Table 4
Sample data of the clusters for the sheet forming problem (12 samples per cluster; values listed per design variable)

Cluster 1
D1: 26.411, 75.528, 311.75, 438, 574.35, 607.99, 359.37, 517.52, 558.99, 262.92, 640.86, 211.64
D2: 110.62, 127.31, 13.97, 209.58, 255.39, 339.83, 161.1, 392.31, 245.05, 47.34, 297.79, 153.3
D3: 839.37, 1032.3, 885.97, 1100.7, 640.05, 1057.3, 1007.2, 907.95, 789.68, 865.5, 976.55, 931.03
D4: 437.16, 103.05, 465.64, 580.21, 34.824, 156.61, 65.936, 254.66, 381.72, 42.486, 304.13, 2.1605
D5: 71.777, 244.26, 123.58, 404.64, 304.45, 27.123, 366.8, 41.283, 152.57, 10.234, 351.5, 327.08
D6: 59.172, 353.91, 423.34, 452.44, 131.94, 21.227, 529.88, 278.14, 193.04, 340.34, 75.465, 170.18
D7: 237.53, 154.48, 106.32, 157.59, 351.86, 59.072, 142.08, 488.83, 408.38, 561.97, 90.923, 192.02

Cluster 4
D1: 417.77, 462.48, 40.264, 12.019, 451.79, 247.66, 149.49, 272.86, 282.04, 176.3, 368.2, 89.362
D2: 23.572, 359.58, 101.08, 377.71, 87.026, 287.39, 207.33, 329.22, 364.4, 199.29, 314.05, 272.65
D3: 782.68, 731.82, 601.45, 702.89, 553.94, 685.64, 747.59, 956.25, 655.33, 831.1, 1066, 585.55
D4: 493.25, 389.39, 133.78, 646.47, 282.63, 521.72, 535.66, 507.22, 89.652, 166.69, 364.18, 199.09
D5: 202.26, 175.44, 214.8, 224.6, 224.35, 141.3, 237.69, 61.109, 194.94, 399.02, 275.67, 391.81
D6: 793.88, 657.24, 769.31, 287.68, 435.95, 482.83, 599.35, 569.35, 671.12, 617.81, 850.44, 734.61
D7: 460.72, 282.5, 307.72, 388.25, 46.39, 637.57, 223.75, 15.527, 328.85, 260.15, 443.68, 475.78

Table 5
The optimum results of problem 2

D1       D2     D3       D4       D5       D6      D7       f
560.126  288.0  949.537  518.135  192.513  825.99  476.124  62.179

In order to verify the validity and effect of the results, the final design is applied to the forward problem. The objective value obtained from the FEM simulation is 64.675. Figs. 25 and 26 present the formability and FLC diagrams before and after the optimization process with the proposed method, respectively. Clearly, the nodes of the optimized deformed blank lie in the safety region, and the formability of the optimized result is much improved.


Fig. 25. The status of blank before optimization.

Fig. 26. The status of blank after optimization.

7. Conclusion

In this study, a multi-level metamodeling method for design space reduction with Kriging interpolation is suggested. The proposed method is composed of three levels. In the 1st level, the initial samples are partitioned into several clusters according to the design variables by the fuzzy clustering method. In the 2nd level, only some of the clusters are involved in building metamodels locally. Finally, the best optimized result is collected from all metamodels in the 3rd level. Nonlinear multi-hump test problems are used to demonstrate the accuracy and efficiency of the proposed method. Compared with the hierarchical metamodeling strategy, the proposed method preserves the continuity of the objective function and can solve highly nonlinear problems such as the Rastrigin function. Practical nonlinear engineering problems such as crashworthiness and sheet metal forming are also optimized successfully by the proposed method. All this evidence shows that it is an effective method for nonlinear engineering problems in practice.


In this paper, the threshold values are determined by the users or by default settings, and the default values of the proposed optimization system are given first. It is difficult to specify a single definitive value for either threshold: both are case-dependent and should be adjusted for different cases. Future work therefore requires summarizing and extracting the characteristics of different practical engineering problems, classifying them into different types, and finally assigning threshold values according to these types.

Acknowledgements

This work is supported by the National 973 Program of China under grant number 2004CB719402, the Outstanding Youth Foundation of NSFC under grant number 50625519, and the Program for Changjiang Scholars and Innovative Research Team in University.

References

Alexandrov, N. M. (1998). On managing the use of surrogates in general nonlinear optimization and MDO. In Proceedings of the 7th AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization (Vol. 2, pp. 720–729). St. Louis, MO: AIAA. Alexandrov, N., Dennis, J. E., Jr., Lewis, R. M., & Torczon, V. (1998). A trust region framework for managing the use of approximation models in optimization. Structural Optimization, 15(1), 16–23. Batmaz, I., & Tunali, S. (2003). Small response surface designs for metamodel estimation. European Journal of Operational Research, 145, 455–470. Bezdek, J. C. (1981). Pattern recognition with fuzzy objective function algorithms. New York: Plenum Press. Box, G. E. P., & Draper, N. R. (1969). Evolutionary operation: A statistical method for process management. New York: John Wiley & Sons. Box, G. E. P., Hunter, W. G., & Hunter, J. S. (1978). Statistics for experimenters: An introduction to design, data analysis, and model building. New York: John Wiley & Sons. Box, G. E. P., & Wilson, K. B. (1951). On the experimental attainment of optimum conditions (with discussion). Journal of the Royal Statistical Society Series B, 13(1), 1–45. Branin, F. H., & Hoo, S. K. (1972). A method for finding multiple extrema of a function of n variables. In F. Lootsma (Ed.), Numerical methods for non-linear optimization (pp. 231–237). New York: Academic Press. Chen, W., Allen, J. K., Schrage, D. P., & Mistree, F. (1997). Statistical experimentation methods for achieving affordable concurrent systems design. AIAA Journal, 35(5), 893–900. Etman, L. F. P. (1997). Optimization of multibody systems using approximations concepts. Eindhoven: Technical University Eindhoven. Etman, L. F. P., Adriaens, J. M. T. A., van Slagmaat, M. T. P., & Schoofs, A. J. G. (1996). Crashworthiness design optimization using multipoint sequential linear programming. Structural Optimization, 12, 222–228. Gary Wang, G., & Simpson, T. W. (2004). Fuzzy clustering based hierarchical metamodeling for design space reduction and optimization. Engineering Optimization, 36(3), 313–335. Germundsson, R. (2000). Mathematica Version 4. Mathematica Journal, 7, 497–524. Guo, Y. Q., Batoz, J. L., Bouabdallah, S., & Naceur, H. (2000). Recent developments on the analysis and optimum design of sheet metal forming parts using a simplified Inverse Approach. Computers and Structures, 78, 133–148. Haftka, R., Scott, E. P., & Cruz, J. R. (1998). Optimization and experiments: A survey. Applied Mechanics Review, 51(7), 435–448. Available from http://en.wikipedia.org/wiki/Kriging. Hu, W., Li, G. Y., & Zhong, Z. H. (2008). Optimization of sheet metal forming processes by adaptive response surface based on intelligent sampling method.
Journal of Materials Processing Technology, 197(1–3), 77–88. Hyperworks Help. (c) (2004). Altair Engineering, Inc. Jakumeit1, J., Herdy, M., & Nitsche, M. (2005). Parameter optimization of the sheet metal forming process using an iterative parallel Kriging algorithm. Structural and Multidisciplinary Optimization, 29, 498–507. Koch, P. N., Simpson, T. W., Allen, J. K., & Mistree, F. (1999). Statistical approximations for multidisciplinary optimization: The problem of size. Special Multidisciplinary Design Optimization Issue of Journal of Aircraft, 36(1), 275–286. Kurtaran, H., Eskandarian, A., Marzougui, D., & Bedewi, N. E. (2002). Crashworthiness design optimization using successive response surface approximations. Computational Mechanics, 29, 409–421. Lei, L. P., Hwang, S. M., & Kang, B. S. (2000). Finite element analysis and design in stainless steel sheet forming and its experimental comparison. Journal of Materials Processing Technology, 110, 70–77. Martin. H. D., & Simpson, T.W. (2003). A study on the use of Kriging models to approximate deterministic computer models. In Proceedings of DETC’03 ASME 2003 design engineering technical conferences and computers and information in engineering conference. Chicago, IL, USA. Matheron, G. (1963). Principles of geostatistics. Economic Geology, 58, 1246–1266. Mathworks. (2001). Matlab 6.0 User Guide: Fuzzy Logic Toolbox. Myers, R. H., & Montgomery, D. C. (1995). Response surface methodology: process and product optimization using designed experiments. NewYork: John Wiley & Sons.


Ohata, T., Nakamura, Y., Katayama, T., & Nakamachi, E. (2003). Development of optimum process design system for sheet fabrication using response surface method. Journal of Materials Processing Technology, 143–144, 667–672. Osio, I. G., & Amon, C. H. (1996). An engineering design methodology with multistage Bayesian surrogates and optimal sampling. Research in Engineering Design, 8(4), 189–206. Peplinski, J., Koch, P. N., & Allen, J. K. (1996). On the use of statistics in design and the implications for deterministic computer experiments. In Proceedings of the 6th AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization (Vol. 2, pp. 1535–1545). Bellevue (WA): AIAA, 4–6 September. P’erez, V. M., Renaud, J. E., & Watson, L. T. (2002). Reduced sampling for construction of quadratic response surface approximations using adaptive experimental design. In Proceedings of the 43st AIAA/ASME/ASCE/AHS/ASC structures, structural dynamics, and materials conference, AIAA 2002-1587. Denver, Colorado, April 22–25. P’erez, V. M., Renaud, J. E., & Watson, L. T. (2002). Adaptive experimental design for construction of response surface approximations. AIAA Journal, 40(12), 2495–2503. Rosenbrock, H. H. (1960). An automatic method for finding the greatest or least value of a function. Computer Journal, 3, 175–184. Sacks, J., Schiller, S. B., & Welch, W. J. (1989). Design for computer experiment. Technometrics, 31(1), 41–47. Schonlau, M., Welch, W. J., & Jones, D. R. (1998). Global versus local search in constrained optimization of computer models. In N. Flournoy, W. F. Rosenberger, & W. K. Wong (Eds.), New developments and applications in experimental design (pp. 11–25). Institute of Mathematical Statistics. Schramm, U. (2001). Multi-disciplinary optimization for NHV and crashworthiness. In K. J. Bathe (Ed.), The first MIT conference on computational fluid and solid mechanics (pp. 721–724). Boston, Oxford: Elsevier. Schramm, U., & Thomas, H. (1998). Crashworthiness design using structural optimization. AIAA Paper No. 98-4729. Simpson, T. W., Mauery, T. M., Korte, J. J., & Mistree, F. (1998). Comparison of response surface and Kriging models for multidisciplinary design optimization. In Proceedings of the 7th AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization (Vol. 1, pp. 381–391). St. Louis, MO: AIAA. Simpson, T. W., Peplinski, J., Koch, P. N., & Allen, J. K. (2001). Metamodels for computer-based engineering design: Survey and recommendations. Engineering with Computers, 17(2), 129–150. Simpson, T. W., Peplinski, J., Koch, P. N., & Allen, J. K. (2001). Metamodels for computer-based engineering design: Survey and recommendations. Engineering with Computers, 17, 129–150. Sobieszczanski-Sobieski, J., Kodiyalam, S., & Yang, R.-J. (2000). Optimization of car body under constraints of noise, vibration, and harshness (NVH), and crash. AIAA Paper No. 2000-1521. Sobieszczanski-Sobieski, J., & Haftka, R. T. (1997). Multidisciplinary aerospace design optimization: Survey of recent developments. Structural Optimization, 14(1), 1–23. To¨rn, A., & Zilinskas, A. (1989). Global optimization. Lecture notes in computer science (Vol. 350). Berlin: Springer-Verlag. Toropov, V., van Keulen, F., Markine, V., & de Doer, H. (1996). Refinements in the multi-point approximation method to reduce the effects of noisy structural responses. In Proceedings of the 6th AIAA/USAF/NASA/ISSMO symposium on multidisciplinary analysis and optimization (Vol. 2, pp. 941–951). Bellevue, WA: AIAA. 
Wang, Hu., Li, G. Y., Li, Enying., & Zhong, Z. H. (2007). Development of metamodeling based optimization system for high nonlinear engineering problems. Advances in Engineering Software, doi:10.1016/j.advengsoft.2007.10.001. Welch, W. J., Buck, R. J., Sacks, J., Wynn, H. P., Mitchell, T. J., & Morris, M. D. (1992). Screening, predicting, and computer experiments. Technometrics, 34(1), 15–25. Welch, W. J., Buck, R. J., Sacks, J., Wynn, H. P., Mitchell, T. J., & Morris, M. D. (1992). Screening, predicting and computer experiments. Technometrics, 34(1), 15–25. Wujek, B. A., Renaud, J. E., Batill, S. M., & Brockman, J. B. (1995). Concurrent subspace optimization using design variable sharing in a distributed computing environment. In Proceedings of the 1995 design engineering technical conference (pp. 181–188). Boston, Massachusetts. Wujek, B. A., & Renaud, J. E. (1998). New adaptive move-limit management strategy for approximate optimization, Part 1. AIAA Journal, 36(10), 1911–1921. Yamazaki, K., & Han, J. (1998). Maximization of the crushing energy absorption of tubes. Structural Optimization, 16, 37–46. Yang, R.-J., Gu, L., Tho, C. H., & Sobieszczanski-Sobieski, J. (2001). Multidisciplinary design optimization of a full vehicle with high performance computing. AIAA Paper No. 2001-1273.
