Engineering Computations
Development of a computational efficient tool for robust structural optimization
Renato de Siqueira Motta, Silvana Maria Bastos Afonso, Paulo Roberto Lyra, Ramiro Brito Willmersdorf

Article information:

Downloaded by UFPE At 05:04 04 June 2015 (PT)

To cite this document: Renato de Siqueira Motta, Silvana Maria Bastos Afonso, Paulo Roberto Lyra, Ramiro Brito Willmersdorf, (2015), "Development of a computational efficient tool for robust structural optimization", Engineering Computations, Vol. 32 Iss 2, pp. 258-288. Permanent link to this document: http://dx.doi.org/10.1108/EC-06-2013-0172. Downloaded on: 04 June 2015, At: 05:04 (PT). References: this document contains references to 34 other documents.


EC 32,2

Development of a computational efficient tool for robust structural optimization


Renato de Siqueira Motta and Silvana Maria Bastos Afonso

Received 4 July 2013
Revised 10 February 2014, 13 March 2014
Accepted 20 March 2014

Department of Civil Engineering, Federal University of Pernambuco, Recife, Brazil, and

Paulo Roberto Lyra and Ramiro Brito Willmersdorf
Department of Mechanical Engineering, Federal University of Pernambuco, Recife, Brazil

Abstract


Purpose – Optimization under a deterministic approach generally leads to a final design in which the performance may degrade significantly and/or constraints may be violated because of perturbations arising from uncertainties. The purpose of this paper is to obtain, through a better strategy, an optimum design which is less sensitive to changes in uncertain parameters. The process of finding these optima is referred to as robust design optimization (RDO), in which improvement of the performance and reduction of its variability are sought while maintaining the feasibility of the solution. This overall process is very time consuming, requiring a robust tool to conduct the optimum search efficiently.

Design/methodology/approach – In this paper, the authors propose an integrated tool to efficiently obtain RDO solutions. The tool encompasses suitable multiobjective optimization (MO) techniques (Normal-Boundary Intersection, Normalized Normal-Constraint, the weighted sum method and the min-max method), a surrogate model using the reduced order method for cheap function evaluations, and an adequate procedure for uncertainty quantification (the Probabilistic Collocation Method).

Findings – To illustrate the application of the proposed tool, 2D structural problems are considered. The integrated tool proves to be very effective, reducing the computational time by up to five orders of magnitude when compared to the solutions obtained via classical standard approaches.

Originality/value – The proposed combination of methodologies described in the paper leads to a very powerful tool for structural optimum design considering uncertain parameters, which can be extended to deal with other classes of applications.

Keywords Robust optimization, Multiobjective optimization, Probabilistic collocation method, Reduced basis method, Uncertainty propagation

Paper type Research paper

Engineering Computations: International Journal for Computer-Aided Engineering and Software, Vol. 32 No. 2, 2015, pp. 258-288. © Emerald Group Publishing Limited, 0264-4401. DOI 10.1108/EC-06-2013-0172

1. Introduction
The research area of optimal structural design has received increasing attention from both academia and industry over the past three decades, in order to improve structural performance and to reduce design costs. With the rapid growth of computational power, remarkable progress has been made in the field (Haftka and Grandhi, 1986; Sobieszczanski-Sobieski and Haftka, 1997; Simpson et al., 2001; Spillers and MacBain, 2009; Venkata and Savsani, 2012), mainly under the consideration of deterministic models and deterministic parameters.

(Acknowledgments: The authors acknowledge the Brazilian research agency CNPq and the Pernambuco state research agency FACEPE for the financial support of various research projects developed in this area by the PADMEC Research Group.)

However, in the real world, uncertainty and randomness are prevalent in the context of structural design due to the lack of accurate data in the


structural design, manufacturing process, material models and properties, among other uncertainty sources (Bucher, 2009). Commonly, such uncertainties are included in the design process by introducing simplified hypotheses and safety or design factors. This tendency has been changing due to the increasing use of adequate numerical tools for dealing with this class of problems. It is well known (Marczyk, 2000; Beyer and Sendhoff, 2007; Schuëller and Jensen, 2008; Doltsinis and Kang, 2004) that optimization under a deterministic approach generally leads to a final design whose performance may degrade significantly and/or that can violate constraints because of perturbations arising from uncertainties.

In this scenario a better target is indicated: one that provides an optimal design with a high degree of robustness, that is, a feasible design which is relatively invariant with respect to changes in the uncertain parameters. The process of finding such an optimum is referred to as Robust Design Optimization (RDO) (Beyer and Sendhoff, 2007; Schuëller and Jensen, 2008; Doltsinis and Kang, 2004; Papadrakakis et al., 2005), in which feasibility improvement and variability reduction of the performance are the targets. Several robustness measures have been proposed in the literature; in particular, the expected value and the standard deviation are considered here (Bucher, 2009). When these two robustness measures are combined, the mathematical formulation of the RDO problem emerges as a multiobjective optimization (MO) problem, in which both the expected value and the standard deviation of the output of interest have to be optimized (for instance, minimized). In addition, robustness in terms of feasibility conditions is also taken into account, considering the variability of some of the constraints.
As general purpose optimizers do not directly solve MO problems, methodologies based on the Pareto concept (Collette and Siarry, 2003) are used here to obtain robust designs subject to several constraints. For that, efficient techniques such as the Normal-Boundary Intersection (NBI) (Das and Dennis, 1998) and the Normalized Normal-Constraint (NNC) (Messac et al., 2003) methods are implemented (Motta et al., 2012). Apart from these two strategies, two other approaches commonly used in the literature, the weighted sum (WS) method and the min-max method, are also considered (Hwang et al., 1980; Huang et al., 2008). To compute the objective functions of the RDO problem, two non-intrusive methods for uncertainty propagation analysis are used: the Monte Carlo (MC) method (Rubinstein and Kroese, 2007) and the probabilistic collocation method (PCM) (Webster et al., 1996). These approaches treat the computational system (code) as a black box, which returns function values and their gradients for given input data. As the generation of Pareto points and the uncertainty analysis can be very costly, an approximation technique based on the Reduced Basis (RB) methodology (Afonso et al., 2010) is also incorporated in our procedure. The purpose of this scheme is to obtain high-fidelity (HF) model information at low computational expense. In the RB procedure, a parameter separability strategy together with the affine decomposition allows the development of an efficient offline/online calculation strategy for the computational implementation of the method. This makes it a very attractive tool for optimum design purposes, as the offline calculations are computed only once and used subsequently in the online stage for each new function evaluation, error estimation and sensitivity calculation.

This paper focuses on the main issues related to the topics described above, considered to build a computationally efficient integrated tool to carry out robust designs of structures. The developed tool can be applied to a wide range of structures. Here, 2D problems are considered to demonstrate its efficiency.


The remainder of this paper is organized as follows: Section 2 describes robust optimization, presenting its mathematical formulation, the MO solution techniques and the methodologies employed for the required statistics; Section 3 briefly describes the governing equations of the problems to be solved; the reduced basis method (RBM) is presented in Section 4; Section 5 presents the proposed procedure to combine all these methods into a single efficient computational tool to obtain suitable solutions of RDO structural engineering problems; in Section 6 two model examples illustrate the application of the proposed methodology, and the performance of the developed tool and of the different strategies is discussed and compared; finally, in Section 7 the main conclusions of the present paper are drawn.

2. Robust optimization
As already mentioned, RDO considers problem uncertainties to obtain a design less susceptible to variability. In this work, two objectives will be considered: the mean and the standard deviation of a selected output function. Under such consideration, when the expected value is minimized a less conservative design is found, while when the standard deviation is minimized a design with a much smaller range of variation is obtained (Motta, 2009). The task of RDO is therefore to obtain the trade-off between the two above aims. Such compromise solutions are obtained by MO techniques (Beyer and Sendhoff, 2007; Schuëller and Jensen, 2008).

2.1 Problem statement
The RDO problem can be mathematically formulated as:

$$ \text{Minimize: } \mathbf{F}(\mathbf{x}) = \left\{ E\big(F(\mathbf{x},\mathbf{U})\big),\ \sigma\big(F(\mathbf{x},\mathbf{U})\big) \right\} \tag{1} $$

$$ \text{subject to: } \begin{aligned} g_i(\mathbf{x},\mathbf{U}) &\le 0, \quad & i &= 1,\dots,m \\ h_j(\mathbf{x},\mathbf{U}) &= 0, \quad & j &= 1,\dots,\ell \\ x_k^l \le x_k &\le x_k^u, \quad & k &= 1,\dots,ndv \end{aligned} \tag{2} $$

where x is the design variables vector, U is the random variable vector, F(x) is the set of objective functions to be minimized, E(·) is the expected value, σ(·) is the standard deviation, F(x, U) is the selected output, g_i(x, U) and h_j(x, U) are inequality and equality constraints, respectively, that may depend on U, x_k^l and x_k^u are, respectively, the lower and upper bounds of a typical design variable, and m, ℓ and ndv are the numbers of inequality constraints, equality constraints and design variables, respectively. The MO problem presented above is solved using the techniques based on the Pareto minima concept that are described in Section 2.3.

2.2 Statistics calculations
In the literature there are two groups of methodologies used to conduct uncertainty analysis through computational models: non-intrusive (black-box) and intrusive or physics-based methods (Keane and Nair, 2005). The statistics chosen for the RDO problem formulation are the expected value and the standard deviation, defined, respectively, as:

$$ \bar{f} = E[f(U)] = \int_{-\infty}^{+\infty} f(U)\, P(U)\, dU \tag{3} $$

$$ SD_f^2 = \int_{-\infty}^{+\infty} \big( f(U) - \bar{f} \big)^2 P(U)\, dU = E[f(U)^2] - E[f(U)]^2 \tag{4} $$

In the above equations f is the output of interest, U is a random variable and P(U) is

its associated Probability Density Function (PDF). In this work, U represents the perturbations arising from uncertainties. To compute the statistics indicated in (3) and (4) we will consider two non-intrusive uncertainty analysis techniques, namely, the MC method (Rubinstein and Kroese, 2007) and the PCM (Webster et al., 1996). These methodologies exploit existing deterministic computer models and are described in the following subsections. Prior to being used in the RDO context, the implementations of both techniques were exhaustively tested (Motta, 2009).

2.2.1 The MC method
The MC method is the most popular non-intrusive method and can be used for any problem related to uncertainty propagation. Given the joint probability distribution function of the involved random variables, the MC method can approximate the statistics of the response of a particular quantity, including its distribution, with arbitrary accuracy, as long as a sufficient number of sampling points is used. In this method the function of interest f(U) is calculated at several random points U_i, generated taking into account their probability distributions P(U); the integrals of Equations (3) and (4) are then, respectively, approximated as:

$$ \bar{f} \approx \bar{f}_{MC} = \frac{1}{m} \sum_{i=1}^{m} f(U_i) \tag{5} $$

$$ SD[f(U)]^2 = SD_f^2 \approx \hat{\sigma}_f^2 = \frac{1}{m} \left[ \sum_{i=1}^{m} f(U_i)^2 - m \bar{f}_{MC}^2 \right] \tag{6} $$

in which m is the number of sampling points, f̄_MC is the MC approximation for the mean value of f(U) and σ̂_f is the MC approximation for the standard deviation of f(U). If f is integrable in U, then f̄_MC → f̄ and σ̂_f → SD_f as m → ∞. The variance calculation given by Equation (6) can be used to verify the approximation f̄_MC, and the error can be estimated by SD_f/√m (Keane and Nair, 2005), which depends on the number of samples m. The major problem of the MC method is its convergence rate, which is very slow, as the error is of order O(1/√m). For instance, to improve the approximation by one decimal digit a 100 times larger sample is necessary. To overcome the high computational costs associated with the method, high performance computing can be used, since the method is very easy to parallelize: the calculations of f(U_i) can be done independently. Moreover, for that purpose, surrogate models are considered in this paper.

From Equations (5) and (6) the gradients of the mean and standard deviation with respect to any variable (stochastic or deterministic) can be calculated. These gradients will be necessary during the optimization process and are obtained by direct differentiation of Equations (5) and (6) with respect to the design variables x, giving:

$$ \frac{d\bar{f}}{dx} \approx \frac{d\bar{f}_{MC}}{dx} = \frac{1}{m} \sum_{i=1}^{m} \frac{df(U_i)}{dx} \tag{7} $$

$$ \frac{d\big(SD_f^2\big)}{dx} \approx \frac{d\big(\hat{\sigma}_f^2\big)}{dx} = \frac{1}{m} \left[ \sum_{i=1}^{m} 2 f(U_i) \frac{df(U_i)}{dx} - 2 m \bar{f}_{MC} \frac{d\bar{f}_{MC}}{dx} \right] \tag{8} $$

By differentiating $\hat{\sigma}_f = \sqrt{\hat{\sigma}_f^2}$ and using Equation (8), it results in:

$$ \frac{d\hat{\sigma}_f}{dx} = \frac{1}{2} \big(\hat{\sigma}_f^2\big)^{-1/2} \frac{d\big(\hat{\sigma}_f^2\big)}{dx} = \frac{1}{\hat{\sigma}_f\, m} \left[ \sum_{i=1}^{m} f(U_i) \frac{df(U_i)}{dx} - m \bar{f}_{MC} \frac{d\bar{f}_{MC}}{dx} \right] \tag{9} $$
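The MC estimators of Equations (5)-(9) translate directly into a few lines of NumPy. The sketch below is illustrative only (not the authors' code); the toy output f(x, U) = x·U² and its derivative are assumptions chosen so that the exact statistics (mean 1, standard deviation √2 at x = 1) are known:

```python
import numpy as np

def mc_stats(f, dfdx, samples):
    """Monte Carlo estimates of the mean, std and their x-gradients, Eqs (5)-(9)."""
    m = len(samples)
    fv = f(samples)                          # f(U_i)
    gv = dfdx(samples)                       # df(U_i)/dx
    f_mc = fv.mean()                         # Eq (5)
    var = ((fv**2).sum() - m*f_mc**2) / m    # Eq (6)
    sd = np.sqrt(var)
    df_mc = gv.mean()                        # Eq (7)
    dvar = ((2*fv*gv).sum() - 2*m*f_mc*df_mc) / m   # Eq (8)
    dsd = 0.5 * dvar / sd                    # Eq (9)
    return f_mc, sd, df_mc, dsd

# assumed toy output: f(x, U) = x*U^2 with U ~ N(0,1); at x = 1, E[f] = 1, SD = sqrt(2)
rng = np.random.default_rng(0)
U = rng.standard_normal(200_000)
x = 1.0
stats = mc_stats(lambda u: x*u**2, lambda u: u**2, U)
```

Note that the sample U is generated once and reused, mirroring the fixed-sample strategy the paper adopts during optimization.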

Despite the drawback of requiring a large number of samples, this technique is commonly used as a "benchmark" to validate other techniques.

Sampling techniques. In the MC method the use of an efficient sampling technique is advisable, as such techniques in general provide better distributions of the sample points and, as a consequence, improve the MC convergence rate. In the present work, pseudo-random and Latin Hypercube Sampling (LHS) techniques (Forrester et al., 2008) were used for uniformly distributed sample generation. In the first approach, the "Mersenne Twister" (MT) algorithm (Matsumoto and Nishimura, 1998) was used, followed by a polar algorithm (Barón and Mac Núñez Leod, 1999) to obtain the normal distribution samples. Both algorithms used are from the MATLAB 7.14 platform. The same samples are used during the optimization process. Thus, a strategy for sampling selection was applied in which the best sampling is chosen from several samplings generated considering different seeds (Motta, 2009).

2.2.2 The PCM
The basic idea of PCM is to approximate the function f(ξ) by a series of orthogonal polynomial functions and to evaluate the integrals of Equations (3) and (4) by Gaussian quadrature (Webster et al., 1996; Ramamurthy, 2005). In numerical integration by Gaussian quadrature for integrals of the form:

$$ \int_{\Phi} f(x)\, P(x)\, dx \tag{10} $$

the function f(x) is approximated by a polynomial of order (2n−1) from an orthonormal basis of the polynomial space H (Nist/Sematech, 2009; Stoer and Bulirsch, 1991), as follows:

$$ f(x) \approx \hat{f}(x) = \sum_{i=0}^{n-1} b_i h_i(x) + h_n(x) \left( \sum_{i=0}^{n-1} c_i h_i(x) \right) \tag{11} $$

where h_i(x) are the orthonormal polynomials with respect to the weighting function P(x), and c_i, b_i are the unknowns of the approximation. In Equation (11), the subscript i indicates the polynomial degree. Hence, by orthogonality, Equation (10) can be approximated as follows (Nist/Sematech, 2009):

$$ \int_{\Phi} f(x)\, P(x)\, dx \approx \int_{\Phi} b_0 h_0\, P(x)\, dx \tag{12} $$

Equation (12) does not involve the coefficients c_i, only b_0, and as a consequence it only requires the calculation of the function f(x) at the n roots x_i^* of h_n(x), cancelling in this way the second part of Equation (11), as h_n(x_i^*) = 0, i = 1,…,n. For more details

concerning the evaluation of the coefficients, see Ramamurthy (2005), Nist/Sematech (2009) and Stoer and Bulirsch (1991). The evaluation of statistics with PCM consists of a direct application of Gaussian quadrature, considering the random variable space U and its PDF as the weighting function. The orthonormal polynomials are defined for each PDF; therefore we have $\int_{\Phi} P(U)\, dU = 1$ and $h_0 = 1$ (Nist/Sematech, 2009). It follows that the mean value and the standard deviation of an output of interest are approximated by PCM as:

$$ \bar{f}_{PC} = \sum_{i=1}^{n} P_i\, f(U_i^*) \tag{13} $$

$$ \hat{\sigma}_{PC}^2 = \sum_{i=1}^{n} P_i\, f(U_i^*)^2 - \bar{f}_{PC}^2 \tag{14} $$
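For a single Gaussian random variable, the collocation points U_i* and weights P_i of Equations (13)-(14) reduce to Gauss-Hermite quadrature in its probabilists' form. A minimal NumPy sketch (the test function is an assumption; the raw quadrature weights are normalized so that the P_i sum to one, consistent with the PDF integrating to one):

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss

def pcm_stats(f, n):
    """PCM mean and std (Eqs 13-14) of f(U), U ~ N(0,1), with n collocation points."""
    u, w = hermegauss(n)             # nodes/weights for the weight function exp(-u^2/2)
    P = w / np.sqrt(2*np.pi)         # normalize so that sum(P) == 1
    fv = f(u)
    mean = P @ fv                    # Eq (13)
    var = P @ fv**2 - mean**2        # Eq (14)
    return mean, np.sqrt(var)

# assumed test output f(U) = U^2 + 1: exact mean 2, exact std sqrt(2);
# n = 4 points integrate f^2 (degree 4) exactly, since degree 2n-1 = 7 >= 4
mean, sd = pcm_stats(lambda u: u**2 + 1, 4)
```

The exactness here illustrates why very few model evaluations suffice per random variable, at the price of the "curse of dimensionality" discussed below.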

where U_i^* are the roots of the orthogonal polynomials and P_i are the corresponding quadrature weights. The gradients of the above quantities will be required by the optimizer. Here they are calculated through direct differentiation of Equations (13) and (14), leading to:

$$ \frac{d\bar{f}}{dx} \approx \frac{d\bar{f}_{PC}}{dx} = \sum_{i=1}^{n} P_i \frac{df(U_i^*)}{dx} \tag{15} $$

$$ \frac{d\big(SD_f^2\big)}{dx} \approx \frac{d\big(\hat{\sigma}_{PC}^2\big)}{dx} = \sum_{i=1}^{n} 2 P_i\, f(U_i^*) \frac{df(U_i^*)}{dx} - 2 \bar{f}_{PC} \frac{d\bar{f}_{PC}}{dx} \tag{16} $$

and by differentiating $\hat{\sigma}_{PC} = \sqrt{\hat{\sigma}_{PC}^2}$, we get:

$$ \frac{d\hat{\sigma}_{PC}}{dx} = \frac{1}{2} \big(\hat{\sigma}_{PC}^2\big)^{-1/2} \frac{d\big(\hat{\sigma}_{PC}^2\big)}{dx} = \frac{1}{\hat{\sigma}_{PC}} \left[ \sum_{i=1}^{n} P_i\, f(U_i^*) \frac{df(U_i^*)}{dx} - \bar{f}_{PC} \frac{d\bar{f}_{PC}}{dx} \right] \tag{17} $$

A drawback of Gaussian quadrature, and consequently of PCM, is the so-called "curse of dimensionality," as the number of integration points increases exponentially with the problem dimensionality. This means that in the PCM context the number of random variables must be small. To tackle large multidimensional problems, the use of numerical integration on sparse grids (Heiss and Winschel, 2008) might mitigate this problem.

2.3 MO strategies
The Pareto optimality concept (Collette and Siarry, 2003) is used here to obtain MO solutions. The Pareto minima are points x_p such that no other point x exists that satisfies:

$$ \begin{aligned} (a)\quad & f_k(x) \le f_k(x_p), \quad k = 1,\dots,nobj \\ (b)\quad & f_j(x) < f_j(x_p), \quad \text{for at least one objective function } f_j \end{aligned} \tag{18} $$

A detailed discussion about this concept can be found elsewhere (Collette and Siarry, 2003; Das and Dennis, 1998; Messac et al., 2003; Motta et al., 2012; Hwang et al., 1980; Huang et al., 2008). The solution of the problem defined in Equations (1) and (2) is very difficult to obtain because, in general, the objective functions conflict with each other. Using the Pareto


concept, the designer has to identify as many Pareto minima points as possible. These points can be used to construct a point-wise approximation to the Pareto curve or surface. There are several techniques to obtain the set of Pareto minima (Collette and Siarry, 2003; Das and Dennis, 1998; Messac et al., 2003; Motta et al., 2012; Hwang et al., 1980; Huang et al., 2008). In this work, we utilize the WS method, the min-max method (Hwang et al., 1980), the NBI method (Das and Dennis, 1998) and the NNC method (Messac et al., 2003). Currently, in the literature, the latter two strategies are shown to have more success in obtaining Pareto curves (bi-objective problems). For problems with more than three objective functions, a novel scheme proposed in Motta et al. (2012) appears to be an adequate choice. The main features of the NNC and NBI methodologies are described next. A more detailed description of both techniques, as well as of other existing techniques, can be found elsewhere (Motta et al., 2012).

2.3.1 The NBI method
The NBI method (Das and Dennis, 1998) is based on the parameterization of the Pareto front and produces an evenly spread point distribution. The geometric representation of the NBI method is shown in Figure 1, which illustrates the objective function space and its feasible region. The Pareto points are obtained at the intersection of the quasi-normal lines emanating from the Convex Hull of Individual Minima (CHIM) and the boundary of the feasible objective function space (δF). The figure illustrates a set of quasi-normal lines; each line is associated with a specific coefficient vector and, as can be seen, different (intersection) solutions are obtained (in most cases, Pareto points). A modification of this method, presented in Motta et al. (2012), which allows points not on the quasi-normal lines by means of an inequality constraint, will be considered.

2.3.2 The NNC method
The NNC method, introduced by Messac et al. (2003), works in a similar manner to the NBI method, and its graphical representation can be seen in Figure 2, which illustrates the feasible space and the corresponding Pareto frontier for a bi-objective case. The utopia line indicated in Figure 2 (analogous to the CHIM in the NBI method) is the line joining the two individual minima points (or end points of the Pareto frontier). To obtain the Pareto points for a bi-objective case, a set of points Xpj is created on the utopia line. Through an iterative process using a pre-selected point Xpj, a normal line is computed to reduce the feasible space, as indicated in Figure 2. Minimizing f2 results in the Pareto point f*; consequently, after translating the normal line through all points Xpj, the whole set of Pareto solutions will be found.
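The NBI parameterization can be illustrated without running the optimizer: given the (shifted) individual minima of each objective, the payoff matrix Φ spans the CHIM, each coefficient vector β defines a point Φβ on it, and a quasi-normal line emanates from that point toward the Pareto front. A small sketch, with two assumed toy anchor points (not values from the paper), of the geometry each NBI subproblem starts from:

```python
import numpy as np

def nbi_lines(payoff, k=5):
    """CHIM points and quasi-normal direction for a bi-objective NBI setup.

    payoff: 2x2 matrix whose column i is F(x_i*) shifted by the utopia point.
    Returns the k evenly spread CHIM points payoff @ beta and the quasi-normal
    direction -payoff @ 1 along which each NBI subproblem searches.
    """
    betas = [np.array([b, 1.0 - b]) for b in np.linspace(1.0, 0.0, k)]
    chim = np.array([payoff @ b for b in betas])   # evenly spread points on the CHIM
    normal = -payoff @ np.ones(2)                  # points toward the Pareto front
    return chim, normal

# assumed toy anchors: shifted individual minima F(x1*) = (0, 1), F(x2*) = (1, 0)
Phi = np.array([[0.0, 1.0],
                [1.0, 0.0]])
chim, normal = nbi_lines(Phi, k=5)
```

In the full method, each CHIM point plus the quasi-normal direction defines one constrained scalar subproblem whose solution is (in most cases) a Pareto point.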

[Figure 1. The geometric representation of the NBI method for bi-objective problems. Source: Das and Dennis (1998)]

[Figure 2. Graphical representation of the NNC method for bi-objective problems. Source: Messac et al. (2003)]
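Whichever strategy generates the candidate points, the dominance test of Equation (18) decides which of them belong to the Pareto set. A small illustrative filter (not from the paper) that keeps the non-dominated rows of a set of objective vectors:

```python
import numpy as np

def pareto_filter(F):
    """Boolean mask of the Pareto-minimal rows of F (one objective per column), Eq (18)."""
    F = np.asarray(F, dtype=float)
    n = len(F)
    keep = np.ones(n, dtype=bool)
    for i in range(n):
        if not keep[i]:
            continue
        # point i is dominated if some other point is <= in every objective
        # and strictly < in at least one (conditions (a) and (b) of Eq (18))
        dominated_by = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
        if dominated_by.any():
            keep[i] = False
    return keep

pts = np.array([[1.0, 4.0], [2.0, 2.0], [4.0, 1.0], [3.0, 3.0]])
mask = pareto_filter(pts)   # the point (3, 3) is dominated by (2, 2)
```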

3. Governing equations
Adopting a standard Galerkin finite element formulation (Zienkiewicz and Taylor, 2000), the discrete static elastic equation can be written in compact form as:

$$ \mathbf{K}^* \mathbf{u}^* = \mathbf{F}^* \tag{19} $$

where K* is the structural stiffness matrix, u* is the vector of unknown nodal displacements and F* is the vector of nodal loads. The stiffness matrix K* and the load vector F* are given by:

$$ \mathbf{K}^* = \int_{V^*} \mathbf{B}^{*T} \mathbf{D}^* \mathbf{B}^* \, dV^* \tag{20} $$

$$ \mathbf{F}^* = \int_{V^*} \mathbf{N}^{*T} \mathbf{b}^* \, dV^* + \int_{\Gamma_n^*} \mathbf{N}^{*T} \mathbf{f}_n^* \, d\Gamma^* + \int_{\Gamma_n^*} \mathbf{N}^{*T} \mathbf{f}_t^* \, d\Gamma^* \tag{21} $$

In the above equations, D* is the elasticity matrix, b* is the body force vector, f*_n and f*_t are the vectors of acting tractions (normal and tangential, respectively), N* is the matrix of shape functions and B* is the matrix which relates the displacements to their derivatives. Also, V* represents the discrete domain and Γ*_n the boundary portion on which the tractions are applied.

4. RBM
The central idea of the RBM (Afonso et al., 2010; Motta, 2009; Prud'homme et al., 2002) is the recognition that the field variables depend on a reduced solution subspace dictated by a parametric dependence. To couple efficiency and reliability, the procedure encompasses several attributes, namely: dimension reduction, a posteriori error estimators and an effective off-line/on-line computational strategy.

4.1 RB governing equations
The RBM is a data-fit-based approach which encompasses, as a first stage, the generation of the samples. The most efficient way to do that is through the application of design of


experiments (DOE) techniques (Motta, 2009). Here, LHS is adopted. In this work, the total set of samples is defined as:

$$ S_r = \left\{ (\mu_1, \dots, \mu_{nv})_1, \dots, (\mu_1, \dots, \mu_{nv})_s \right\} \tag{22} $$

where s is the total number of samples, each (μ_1,…,μ_nv)_i belongs to the design and random domain D, and nv is the total number of variables (random and design variables), nv = nrv + ndv. These are the parameters, which in the context of this paper can be deterministic or stochastic. The approximation of the solution field u, considering the RB method, is written as:

$$ u^N(\mu) = \sum_{j=1}^{s} \alpha(\mu)_j\, \zeta_j, \quad \alpha \in \mathbb{R}^s \tag{23} $$

where the index N indicates that the variable is related to the reduced space W_N, which is formed by the vectors ζ_i = u*(μ_i), i = 1,…,s. Alternatively, in matrix form:

$$ u^N(\mu) = Z\, \alpha(\mu) \tag{24} $$

in which:

$$ Z = [\zeta_1, \dots, \zeta_s] \tag{25} $$

Considering the FE governing Equation (19) and using the approximated form u^N instead of u*, it follows that:

$$ \mathbf{K}^*(\mu)\, Z\, \alpha(\mu) = \mathbf{F}^*(\mu) \tag{26} $$

By pre-multiplying both sides of the above equation by Z^T, the unknowns α(μ) satisfy the following governing equation in the reduced space W_N:

$$ \mathbf{K}^N(\mu)\, \alpha(\mu) = \mathbf{F}^N(\mu) \tag{27} $$

in which:

$$ \mathbf{K}^N(\mu) = Z^T \mathbf{K}^*(\mu)\, Z \in \mathbb{R}^{s \times s}, \quad \mathbf{F}^N(\mu) = Z^T \mathbf{F}^*(\mu) \in \mathbb{R}^s \tag{28} $$

In this work, the structural compliance (C) and the von Mises stress (σ) are the investigated outputs of the RDO problem. Considering RB approximations, they are respectively computed as follows:

$$ C^N(\mu) = \alpha(\mu)^T \mathbf{F}^N, \quad \sigma^N(\mu) = \mathbf{Y}\, \alpha(\mu) \tag{29} $$

in which Y is the set of von Mises stress solutions calculated for the points in the samples, Y = [σ_1,…,σ_s]. The RB equations can be solved in a very effective way by applying the mapping transformations and the separability concept described in Afonso et al. (2010) and Motta (2009).
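The reduced equations (24)-(29) amount to a Galerkin projection onto the snapshot basis Z. A self-contained NumPy sketch of this projection (the "full" model here is an assumed toy parameterized SPD system standing in for the FE plate, not the authors' model); at a snapshot parameter the exact solution lies in span(Z), so the RB solution reproduces it:

```python
import numpy as np

rng = np.random.default_rng(1)
ndof = 50
A = rng.standard_normal((ndof, ndof))
K0 = A @ A.T + ndof*np.eye(ndof)              # parameter-independent SPD part (stand-in)
K1 = np.diag(rng.uniform(1.0, 2.0, ndof))     # parameter-dependent part (stand-in)
F = rng.standard_normal(ndof)

K = lambda mu: K0 + mu*K1                     # toy affine "stiffness", Eq (19) analogue

# off-line: snapshot solutions at sample parameters form the basis Z, Eq (25)
mus = [0.5, 1.0, 2.0]
Z = np.column_stack([np.linalg.solve(K(mu), F) for mu in mus])

# on-line: reduced s x s system, Eqs (27)-(28), and compliance output, Eq (29)
def rb_solve(mu):
    KN = Z.T @ K(mu) @ Z
    FN = Z.T @ F
    alpha = np.linalg.solve(KN, FN)
    return Z @ alpha, alpha @ FN              # u^N(mu), C^N(mu)

u_rb, C_rb = rb_solve(1.0)                    # mu = 1.0 is a snapshot parameter
u_full = np.linalg.solve(K(1.0), F)
```

The on-line solve involves only an s x s system (here 3 x 3) instead of the full ndof system, which is the source of the method's speed-up.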

The mapping transformation allows the use of a unique computational (reference) design (Afonso et al., 2010), which is mapped to the real domain as this repeatedly changes in an optimization and/or stochastic process. The use of the separability concept means splitting the contributions of the stiffness matrix and the loading vector into distinct terms, which are either dependent on or independent of the variables:

$$ \mathbf{K}^N(\mu) = \sum_{r=1}^{R} \sum_{j=1}^{nt} \beta_j^r(\mu)\, \mathbf{K}_j^{rN}, \quad \mathbf{F}^N(\mu) = \sum_{r=1}^{R} \sum_{j=1}^{nt} \varphi_j^r(\mu)\, \mathbf{F}_j^{rN} \tag{30} $$

in which the index r refers to each region of the domain and R is the number of regions. Each region has its own transformation, depending on the changes produced by the variables μ, and nt is the number of terms of the transformation equation. The functions β_j^r(μ) and φ_j^r(μ) are the parameter-dependent terms related to these transformations; K_j^{rN} and F_j^{rN} are the parameter-independent terms of the reduced stiffness matrix and reduced load vector of each region, respectively. The sensitivities can be computed as:

$$ \mathbf{K}^N \alpha_{,\mu} = \mathbf{F}^N_{,\mu} - \mathbf{K}^N_{,\mu}\, \alpha, \quad C^N_{,\mu} = \alpha_{,\mu}^T \mathbf{F}^N + \alpha^T \mathbf{F}^N_{,\mu}, \quad \sigma^N_{,\mu} = \mathbf{Y}\, \alpha_{,\mu} \tag{31} $$

where:

$$ \mathbf{K}^N_{,\mu_i} = \sum_{r=1}^{R} \sum_{j=1}^{nt} \beta^r_{j,\mu_i}(\mu)\, \mathbf{K}_j^{rN}, \quad \mathbf{F}^N_{,\mu_i}(\mu) = \sum_{r=1}^{R} \sum_{j=1}^{nt} \varphi^r_{j,\mu_i}(\mu)\, \mathbf{F}_j^{rN} \tag{32} $$

As a consequence of both of the above considerations, the stiffness and load terms explicitly encompass the parameter-dependent terms in μ. This allows the development of an efficient off-line/on-line calculation strategy for the computational implementation of the method, which follows the computational algorithm shown below. Details of the overall procedure are explained in Afonso et al. (2010).

Algorithm RBM, off-line stage (independent of μ):
1. Choose a sample: S_N = {(μ1,…,μndv, U1,…,Unrv)_1, …, (μ1,…,μndv, U1,…,Unrv)_s};
2. Construct the matrix of FE solutions: Z = [ζ1,…,ζs];
3. Construct the reduced split stiffness matrices: K_j^{rN} = Z^T K_j^r Z;
4. Construct the reduced split load vectors: F_j^{rN} = Z^T F_j^r;
5. Store the set of von Mises stress solutions: Y = [σ1,…,σs].


Algorithm RBM, on-line stage (for a new vector μ):
1. Form the RB matrix: K^N(μ) = Σ_{r=1}^{R} Σ_{j=1}^{nt} β_j^r(μ) K_j^{rN};
2. Form the RB load vector: F^N(μ) = Σ_{r=1}^{R} Σ_{j=1}^{nt} φ_j^r(μ) F_j^{rN};
3. Solve: K^N(μ) α(μ) = F^N(μ);
4. Evaluate: u^N(μ) = Z α(μ);
5. Evaluate: C^N(μ) = α(μ)^T F^N(μ);
6. Evaluate: σ^N(μ) = Y α(μ);
7. Compute the sensitivities: C^N(μ)_{,x_k} = α_{,x_k}^T F^N + α^T F^N_{,x_k};
8. Compute the sensitivities: σ^N(μ)_{,x_k} = Y α_{,x_k};
9. Compute the output error as in Afonso et al. (2010).
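The payoff of the affine splitting in Equation (30) is that the expensive projections Z^T K_j^r Z are computed once off-line, while the on-line stage only sums small s x s matrices scaled by the scalar coefficients β_j^r(μ). A toy sketch of this off-line/on-line split (single region, two affine terms; the matrices and the β functions are assumed random stand-ins, not the paper's model):

```python
import numpy as np

rng = np.random.default_rng(2)
ndof, s = 40, 3
Z = rng.standard_normal((ndof, s))                               # RB basis (stand-in)
K_terms = [rng.standard_normal((ndof, ndof)) for _ in range(2)]  # parameter-independent K_j
betas = lambda mu: np.array([1.0, mu**2])                        # assumed beta_j(mu)

# OFF-LINE (once): project each affine term, Algorithm RBM off-line, step 3
KN_terms = [Z.T @ Kj @ Z for Kj in K_terms]

# ON-LINE (per new mu): cheap s x s assembly, Algorithm RBM on-line, step 1
def assemble_online(mu):
    b = betas(mu)
    return sum(bj*KNj for bj, KNj in zip(b, KN_terms))

mu = 1.7
KN_online = assemble_online(mu)
# reference: project the assembled full-size matrix directly (what on-line avoids)
KN_direct = Z.T @ (betas(mu)[0]*K_terms[0] + betas(mu)[1]*K_terms[1]) @ Z
```

By linearity the two assemblies coincide, but the on-line path never touches an ndof-sized matrix.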

5. The integrated approach
The proposed integration of the methods described previously to solve RDO problems is done with an off-line stage to compute the RBM data and a two-loop procedure in the on-line stage. An inner loop performs the statistical analysis (using PCM), in which the function evaluations are computed via the RBM. The main (outer) loop is the optimization procedure, in which the search for the optimum design variables is carried out. Figure 3 shows the flow chart of the macro computational implementation of the proposed RDO procedure. Some additional information is presented in the algorithms that follow.

The off-line stage is done once:
(1) define the standard problem (geometry, boundary conditions and loads, design variables, objective(s)/constraints);
(2) define the random variables;
(3) define the samples using a DOE technique considering the design and random variable spaces (Algorithm RBM, off-line);
(4) obtain the solutions at the sample points (deterministic analyses) (Algorithm RBM, off-line);
(5) save the RBM data (the RB matrix Z, the split reduced stiffness matrices K_j^{rN}, the split reduced load vectors F_j^{rN}) (Algorithm RBM, off-line); and
(6) determine the initial guess design.

The on-line stage is where the optimization procedure takes place:
1. MO loop (MO strategies, Section 2.3)
  1.1. Initial candidate design;
  1.2. Optimization loop (search for the optimum design):
    1.2.1. Set a new design;
    1.2.2. Stochastic loop (stochastic analysis, PCM, Section 2.2):
      1.2.2.1. Run the structural analysis via RBM (Algorithm RBM, on-line);
      1.2.2.2. Run the sensitivity analysis via RBM (Algorithm RBM, on-line);
    1.2.3. Obtain the statistics of the responses and their gradients via PCM, Equations (13)-(17);
    1.2.4. Convergence check;
  1.3. Save the Pareto point.

Development of computational efficient tool

Figure 3. General flow chart of the RDO integrated procedure (off-line stage: define random variables and samples, solve FEM and save RBM data; on-line stage: multiobjective optimization with stochastic and sensitivity analyses via RBM, repeated for each Pareto point)

EC 32,2


Remarks: (1) the initial designs of the scalar optimizations (1.1) are chosen from one of the previously found Pareto solutions, if there are any; (2) in the first iteration of the on-line stage, the initial guess design is adopted as the new design; (3) the scalar optimization method used in this work was the Sequential Quadratic Programming (SQP) routine available in the MATLAB optimization toolbox; and (4) an important aspect when using the MC method (1.2.2) is the use of the same samples for all the statistical analyses during the optimization procedure.
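Remark (4) corresponds to the common random numbers technique: drawing the sample once and reusing it for every candidate design makes the estimated statistics a deterministic, smooth function of the design, which a gradient-based SQP requires. A toy illustration (the response and the distribution parameters are arbitrary, not taken from the examples below):

```python
import numpy as np

rng = np.random.default_rng(seed=42)                        # fixed seed
sample_E3 = rng.lognormal(mean=10.0, sigma=0.2, size=1000)  # drawn once, reused

def compliance_stats(design, sample=sample_E3):
    """Toy MC statistics: the response grows with the design value and with 1/E."""
    c = design[0] / sample
    return c.mean(), c.std()

# every candidate design sees the same sample, so repeated evaluations
# at the same design return bit-identical statistics (no sampling noise)
s1 = compliance_stats([2.0])
s2 = compliance_stats([2.0])
```

Had the sample been redrawn inside `compliance_stats`, the objective would fluctuate between calls and finite-difference or line-search steps in the optimizer would become unreliable.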


6. Examples
The procedure described in this work is now applied to obtain optimal designs with reduced variability of the structural performance. For each example, before presenting the RDO results, sampling selection studies are performed in two scenarios. In the first, the sampling used to build the RB approximations (in which the HF model, FEM, is used) has to be determined; RBM accuracy studies varying the number of samples are conducted. In the second scenario, the sampling used for the statistics calculation (in which the surrogate model, RBM, is used) has to be obtained. Again, in order to provide accurate statistics, a convergence study over the samples for both the MC and PCM methods is carried out.
6.1 Square plate with a central hole
The first problem analyzed is a square plate with a central hole under plane stress conditions. Due to the double symmetry, only a quarter of the domain is modeled, as shown in Figure 4. The problem geometry, boundary conditions and design variables are also shown in this figure. The Young's modulus of region 3 (see Figure 5) is a random variable with a lognormal distribution, with mean value 5×10^4 MPa and standard deviation 10^4 MPa. The other material properties and geometric dimensions are: the Young's modulus of regions 1 and 2 is E = 10^5 MPa, the Poisson coefficient is ν = 0.3, the plate thickness is t = 1 mm, the lateral length is 100 mm and the distributed load is p = 1 N/mm. The central hole dimensions are chosen as design variables for optimization. Their initial values are μ1 = μ2 = 50 mm, and the lower and upper bounds are 25 and 75 mm, respectively. Two stochastic objectives are considered: minimization of the mean and minimization of the standard deviation of the total strain energy. The total volume is constrained to be

Figure 4. A quarter of a square plate with a central hole: problem description (100 mm × 100 mm quarter domain, distributed load p = 1 N/mm, hole dimensions μ1 and μ2, regions 1-3)

less than or equal to its initial value. In addition, the mean value of the von Mises stress plus three times its standard deviation is required to be less than or equal to 7.0 N/mm² (σ̄ + 3SD_σ). The MO solutions will be obtained using 15 Pareto points. The robust optimization problem for this particular application is formulated as:

Minimize: [C̄(μ), SD_C(μ)]   (33)


Subject to:
σ̄_i + 3SD_σ(i) ≤ 7 MPa,   i = 1, …, nel
V(μ) ≤ V0   (34)
25 mm ≤ μ_k ≤ 75 mm,   k = 1, …, ndv

in which C̄(μ) = E(C(μ, E3)), SD_C = SD(C(μ, E3)) and C is the strain energy. The mean and standard deviation of the von Mises stress at element i are σ̄_i and SD_σ(i), respectively, V(μ) is the total structure volume, V0 is the initial volume, nel is the total number of elements and ndv is the number of design variables.
6.1.1 Sampling definitions for the RBM. A convergence study of the structural compliance for different numbers of samples is conducted. This is important because this selection determines the total number of expensive HF simulations required for this specific problem. Once the accuracy check is performed, the selected sample size is used to build the RB-based surrogate. For this particular application, different sizes for the bases W_N are considered. The computational domain is indicated in Figure 5. The results obtained, in which the random variable is treated as deterministic, are shown in Figure 6: the approximated compliance C_N (Nmm) is plotted against several values of the sample size N. A rapid convergence with N is noted; as the number of sample points increases, the approximated results converge to the FE solution. The output evaluation using the RBM with N = 10 is more than 10^3 times faster than obtaining the FEM solution (off-line stage not included), and the associated relative error of the approximation is O(10^-5).
6.1.2 Sampling definition for statistics calculations. The mean and standard deviation of the compliance will be calculated considering both MC and PCM. A convergence study is performed to define the number of points to be used in the

Figure 5. A quarter of a square plate with a central hole: computational domain and adopted mesh


computations for each technique. Such calculations use the RB model (N = 10) built previously. For the MC method, the LHS sampling size (m) varies from 202 to 20,480; the number of samples follows the empirical rule m = ⌊160·2^(k/3)⌋, where k varies in the range [1, 2, …, 21]. It is well known that MC results can present a substantial variability, due to the random character of the sampling. In order to quantify this, a total of one hundred computations were performed for each sample size to obtain the statistics of the structural compliance (Nmm). Then, the mean and the standard deviation of the result r (E[r] and SD_r) were calculated, and the results are represented graphically in Figures 7(a) and (b), respectively. The dashed gray line shows the mean of the 100 computations for each m value (sample size), and the dark black lines represent the variability of the MC response (E[r] + SD_r and E[r] − SD_r). This exercise was only possible due to the almost instantaneous computations of the RBM. Besides that, for each sample size m, a sample was selected using a procedure described in Motta (2009) and Motta et al. (2009) (continuous gray line in Figure 7), in which the marginal distributions of the samples are compared through their statistical moments. From that, seven different sample sizes, cases C1 to C7, indicated in Table I with their respective values, were analyzed. From this study, it is recognized that with around 1,000 points both the mean (C̄) and the standard deviation (SD_C) vary only slightly. For this reason, this is the sampling considered for the multiple statistics calculations required in the solution of Equations (33) and (34) when using the MC methodology. Figure 8 shows the error of the mean value of the compliance evaluated using both the PCM and MC methodologies. A fast convergence is obtained using PCM even with few points, since the total structural compliance varies smoothly with the Young's modulus of region 3 of the plate.
An error of the order of 10^-5 is obtained when MC is adopted, for the selected LHS sampling with 1,000 points discussed previously. An error of the same order is achieved with only three integration points when using PCM. However, as an even smaller error can be achieved using a few more points, with a small computational time overhead, the optimization process using PCM will be performed adopting five collocation points, which corresponds to an error of the order of 10^-11.
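The contrast between the two convergence rates can be reproduced on a one-variable toy problem: for a smooth output of a Gaussian parameter, Gaussian collocation (the 1-D building block of PCM) converges almost immediately, while the MC error decays only as O(m^{-1/2}). A sketch with an arbitrary smooth response, not the plate model:

```python
import numpy as np

# smooth output of one standard normal variable; its exact mean is known
f = lambda z: np.exp(0.3 * z)
exact = np.exp(0.3 ** 2 / 2)                      # E[exp(a Z)] = exp(a^2 / 2)

# PCM-style estimate: 5-point Gauss-Hermite quadrature (probabilists' weight)
x, w = np.polynomial.hermite_e.hermegauss(5)
pcm_mean = (w @ f(x)) / np.sqrt(2 * np.pi)        # weights integrate exp(-x^2/2)

# plain MC estimate with 1,000 samples
rng = np.random.default_rng(0)
mc_mean = f(rng.standard_normal(1000)).mean()

pcm_err = abs(pcm_mean - exact)                   # typically ~1e-10
mc_err = abs(mc_mean - exact)                     # typically ~1e-2
```

Five collocation points already reach near machine precision here, mirroring the gap between the PCM and MC curves in Figure 8.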

Figure 6. Square plate with central hole: convergence of the approximated compliance C_N (Nmm) against sample size (N)

Figure 7. Square plate with central hole: convergence of the mean (C̄) and standard deviation (SD_C) of the compliance for different MC sample sizes. Notes: (a) mean of C_N (Nmm); (b) standard deviation of C_N (Nmm)

Table I. Square plate with central hole: MC results of the compliance using selected samplings

Case | Sampling size (m) | Mean (Nmm)        | SD (Nmm)
C1   | 202               | 0.308530959713673 | 0.035326555620201
C2   | 403               | 0.308550213518812 | 0.035410907482470
C3   | 806               | 0.308556674596121 | 0.035455905892811
C4   | 1,613             | 0.308558482597648 | 0.035455246318295
C5   | 3,225             | 0.308559106575475 | 0.035453460837231
C6   | 6,451             | 0.308559457041847 | 0.035451940103915
C7   | 12,902            | 0.308559963766802 | 0.035455220139521
Source: Motta (2009) and Motta et al. (2009)

6.1.3 Robust optimization results. The MO techniques presented in this paper are used to obtain the Pareto solution of this specific problem, in which the two considered objectives are calculated using both the MC and PCM schemes. Figure 9 presents the distribution of Pareto points obtained using the different MO methodologies, considering 15 different β weight vectors. As already


mentioned, the MC and PCM methods were used with 1,000 and five points, respectively. The Pareto frontiers using MC and PCM are in good agreement, even with such different numbers of sampling points. As can be observed, the solutions using NBI and NNC present evenly distributed points along the whole Pareto frontier. Table II summarizes the computational performance of each of the analyzed methods. The solutions computed using PCM are at least two orders of magnitude faster than those obtained by MC, with an error five orders of magnitude smaller.
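As an illustration of how one Pareto point is generated by the WS scalarization, the sketch below uses SciPy's SLSQP (an SQP implementation standing in for the MATLAB routine used by the authors) on deliberately artificial closed-form stand-ins for C̄ and SD_C; it reproduces only the structure of the scalarized problem, not the plate model:

```python
import numpy as np
from scipy.optimize import minimize

# artificial stand-ins for the compliance statistics (not the plate responses)
mean_c = lambda mu: 100.0 / (mu[0] * mu[1])   # decreases as mu grows
sd_c = lambda mu: 0.002 * (mu[0] + mu[1])     # increases as mu grows

def ws_point(beta, mu0=(50.0, 50.0)):
    """One Pareto point from the weighted-sum scalarization of Eqs (33)-(34)."""
    obj = lambda mu: beta * mean_c(mu) + (1.0 - beta) * sd_c(mu)
    cons = [{"type": "ineq", "fun": lambda mu: 100.0 - mu[0] - mu[1]}]  # toy volume cap
    res = minimize(obj, mu0, method="SLSQP",
                   bounds=[(25.0, 75.0)] * 2, constraints=cons)
    return res.x

# sweeping the weight traces the (toy) Pareto frontier
pareto = [ws_point(b) for b in np.linspace(0.1, 0.9, 5)]
```

Because the two objectives pull the design in opposite directions, different weights β land on different trade-off designs; NBI and NNC replace the weighted objective with geometric constraints that space these points evenly.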

Figure 8. Square plate with central hole: convergence of the mean computed by the PCM and MC methods (mean value error against number of points)

Figure 9. Pareto solutions (legends: MC-5,000, PC-5). Notes: (a) WS (Nmm); (b) Min-Max (Nmm); (c) NBI (Nmm); (d) NNC (Nmm)

Table II. Square plate with central hole – total computational time (s)

Methods           | WS (s) | Min-Max (s) | NBI (s) | NNC (s)
MC (1,000 points) | 7,724  | 5,297       | 3,896   | 3,726
PCM (5 points)    | 60     | 43          | 31      | 28


These results show the tremendous advantage of using PCM for this class of problems, i.e. with few random variables and smooth functions. The integration of all the methodologies described allows the computation of the robust optimization problem, using a finite element model with 3,900 degrees of freedom, in a practical time (less than a minute using an Intel i7-2.8 GHz CPU with 8 GB RAM). Figure 10 presents the Pareto optimum designs obtained via NBI. The Pareto designs related to the six circled points among the 15 Pareto optimum points are presented in Figure 11, considering the deformed configuration and the von Mises stress distribution. The histograms of the compliance, considering the designs obtained by individually minimizing the mean and the standard deviation of the compliance, are presented in Figure 12. As can be noted, in this case minimizing the variability of the output (line with dots) leads to a loss in the overall performance (line with "x"s). However, for that design, a smaller range of variation of the structural compliance is obtained. This means that in terms of robustness this design is better, since the compliance is less sensitive to the parameter variation and has a smaller scatter when compared with the optimum design found by minimizing the mean of the compliance. In summary: for an error tolerance of 10^-5, an RB model built using just ten samples was used to approximate an output of a HF model with 3,900 degrees of freedom. The computational time to obtain results by RBM was about 1 percent of the computational time to obtain results via direct FEM, demonstrating the effectiveness of the method. The statistics computation via PCM requires at least a hundred times fewer integration points than the MC method, even for a relative error six orders of magnitude smaller. For this bi-objective example, the most efficient MO methods were the NBI and NNC methods, both obtaining an even distribution of the Pareto points.

Figure 10. Square plate with central hole – NBI Pareto optimum design variable values, in which x_i = 50 – μ_i (mm), i = 1, 2

Figure 11. Square plate with central hole – Pareto NBI designs (MPa): deformed configurations and von Mises stress distributions for Pareto points 1, 3, 6, 9, 12 and 15

6.2 2D truss
A 2D truss under static load conditions, with a total of 1,210 degrees of freedom (Motta, 2009), is used as the next example. The geometric configuration and boundary conditions are presented in Figure 13, which also shows the random variables (U) and the design variables (x). Two random variables are considered: the vertical load on the top of the structure (U1) and the horizontal load on the top-left side of the structure (U2). For the first random variable (U1), a log-normal distribution is adopted with mean Ū1 = 4 kN/cm and standard deviation SD(U1) = 2 kN/cm. For the second one (U2), a normal distribution is considered with mean Ū2 = 0 and standard deviation SD(U2) = 1 kN/cm.
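For these two random variables, the PCM tensor grid used later (3 × 3, Section 6.2.2) pairs 1-D Gauss points of each distribution. The sketch below builds such a grid, mapping Gauss-Hermite points through the lognormal transformation; the moment-matching conversion of the mean-4, SD-2 lognormal to its underlying normal parameters is standard, and the variable names are illustrative:

```python
import numpy as np

# 1-D Gauss-Hermite (probabilists') points for a standard normal variable
z, w = np.polynomial.hermite_e.hermegauss(3)
w = w / np.sqrt(2 * np.pi)                  # normalize: weights now sum to 1

# U1 ~ lognormal with mean 4 and SD 2 (kN/cm): moment-matched underlying normal
sigma2 = np.log(1.0 + (2.0 / 4.0) ** 2)     # variance of ln(U1)
mu_ln = np.log(4.0) - sigma2 / 2.0          # mean of ln(U1)
u1 = np.exp(mu_ln + np.sqrt(sigma2) * z)    # collocation values of U1

# U2 ~ normal(0, 1) (kN/cm): Gauss points used directly
u2 = z.copy()

# 3 x 3 tensor-product grid: 9 collocation points with product weights
points = np.array([(a, b) for a in u1 for b in u2])
weights = np.array([wa * wb for wa in w for wb in w])

# sanity check: the grid recovers E[U2] exactly and E[U1] to about 0.01 percent
mean_u1 = weights @ points[:, 0]
mean_u2 = weights @ points[:, 1]
```

The structural response is then evaluated at the nine points and the statistics are recovered as weighted sums, which is why the number of analyses per design stays in the single digits.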


Figure 12. Square plate with central hole – histograms of the compliance obtained by individually minimizing the mean and the standard deviation of the compliance

Figure 13. Plane truss: problem definition (dimensions in cm; loads U1 and U2; design variables x1, x2, x3; output u(U, x))


As can be observed, three design variables are considered, namely the cross-sectional areas of the bars in three regions, as shown in Figure 13. The initial design variable values are equal to one, and they are bounded by 0.1 ≤ x ≤ 10. The robust optimization problem is formulated as:

Minimize: [ū(x), SD_u(x)]   (35)

Subject to:
vol*(x) ≤ 1
0.1 ≤ x_i ≤ 10,   i = 1, …, 3


in which u(U, x) is the horizontal displacement of the top-left corner of the structure, ū is its mean value and SD_u is its standard deviation. The quantity vol*(x) is the relative volume of the structure, i.e. the current volume divided by the initial volume (the initial volume is 12,371 cm³).
6.2.1 Setting the RBM basis. As in the previous example, a convergence study for different numbers of sample points was conducted for the output of interest, which is the top-left horizontal displacement (u) of the structure. The convergence of the reduced-basis approximation is shown in Figure 14, in which the error of the top-left horizontal displacement (u) is plotted against N. An exponential convergence can be observed in this figure. An RB with 15 components (error magnitude of 10^-6) was considered. This corresponds to a 1,210 × 15 RB matrix (Z), obtained through FEM analyses of 15 different cases (considering different values of the design variables and of the random variables). The computational time for the statistical evaluation at the initial design using the RBM was ten times lower than using the full FEM.
6.2.2 Sampling definition for statistics calculations. The PCM solution for different approximation degrees was compared against the MC solution using different sample sizes at the initial design. Figure 15 shows the errors of the mean value of the displacement u evaluated using both the PCM and MC methodologies. As in example 6.1, a faster convergence is obtained using PCM. The PCM with a 3 × 3 integration grid (approximation of 5th degree) achieves a relative error of about 10^-3. An error magnitude of 10^-2 via MC is obtained

Figure 14. 2D truss: convergence of the approximated displacement (u_N) error against basis size (N)

using 1,000 points. However, the computational time via MC is more than 100 times greater than via PCM at these error levels. Thus, for the RMO study, the statistics will be computed using PCM with nine collocation points.
6.2.3 Robust optimization study. The RMO problem was solved using the PCM approximations to evaluate the statistics of the structure. The analyses were performed via RBM (using N = 15). The Pareto points obtained via the various MO methods presented here are shown in Figure 16. As expected, the results via NBI and NNC agree closely; the best Pareto point distribution was obtained by these two methods. The MO performance of the four investigated techniques is shown in Table III. In that table, the number of function evaluations (F. Count) is the total number of


Figure 15. 2D truss: convergence of the statistical computation (mean value error against number of points, PCM and MC)

Figure 16. 2D truss: Pareto solutions (E[u] against SD_u for WS, Min-Max, NBI and NNC)

Table III. 2D truss: RMO performance considering PCM with RBM methods

MO Method | Time (s) | F. Count | Evenness
WS        | 274.4    | 174      | 0.277
Min-Max   | 672.9    | 296      | 0.908
NBI       | 192.3    | 149      | 0.111
NNC       | 417.5    | 193      | 0.111


non-deterministic analyses evaluated to obtain the Pareto points. The evenness parameter in Table III indicates the quality of the uniformity of the Pareto point distribution: the closer to zero, the better (Motta et al., 2012; Messac and Mattson, 2004). The most efficient method in this example was the NBI method, about two times faster than the others and with a fine evenness value. The results were obtained using an Intel i7-2.8 GHz CPU with 8 GB RAM. In summary: for an error tolerance of 10^-5, an RB of just 15 components was used to approximate an output with 1,210 components. The computational time to obtain the results using RBM was at least 1/10 of the computational time to obtain the results via FEM, demonstrating the effectiveness of the method. The statistics computation via PCM requires about a hundred times fewer integration points than the MC method, for an even lower relative error. For this bi-objective example, the most efficient MO method was the NBI method, being about two times faster than the others and obtaining an even distribution of the Pareto points.
6.3 3D truss problem
This non-dimensional problem was proposed by Doltsinis and Kang (2004), in which the robust optimization of the structural compliance of a 25-bar truss structure (see Figure 17) is to be carried out. The design variables are the cross-sectional areas of six sets of bars; the six independent design variables are selected by linking various member sizes, as shown in Figure 17. The mass density of the material is 0.1. All four nodes of the base are fixed in the three degrees of freedom. Four nodal forces with values pAy = pBy = pAz = pBz = 10^4 are imposed at the first and second nodes (A and B). Additionally, forces with random values are applied to nodes C and D along the x-direction.
Doltsinis and Kang (2004) consider 14 random variables, as follows: the two nodal forces applied at nodes C and D, the six Young's moduli and the six cross-sectional areas of the grouped bars, as indicated in Figure 17. The statistics of each random variable (mean, standard deviation (SD) and coefficient of variation (COV)) are shown in Table IV. Although not directly stated in the original paper, here normal distributions were used for all random variables. Due to the small number of degrees of freedom of the structure (DoF = 18), the solutions of this problem were based on the full FEM.

Figure 17. 3D truss: structure and groups of bars (bar groups 1-6; nodes A and B at the top, nodes C and D loaded along x)
Impf_{k,i} = Δf_{k,i} / f_k,   with Δf_{k,i} = (df_k / dU_i) SD_{U_i}

The importance factors (output variation, percent) of the mean and SD of the compliance obtained through this procedure are presented in Figure 18. Such

Table IV. 3D truss: random variable parameters

Number | Variables | Mean  | SD         | COV
1-5    | EI-EV     | 10^7  | 2 × 10^5   | –
6      | EVI       | 10^7  | 1.5 × 10^5 | –
7, 8   | pCx, pDx  | 500   | 50         | –
9-14   | AI-AVI    | μ1-μ6 | –          | 0.05


6.3.1 Random variables' importance quantification. In this work, a simple procedure is used to rank the importance of the random variables, so that only the most relevant ones are taken into account when computing the statistics of the structural compliance. Reducing the number of random variables in this way allows the PCM to be used efficiently, without significant loss of accuracy. A better way to use PCM for high-dimensional random spaces could be to combine it with sparse grids, as proposed in Heiss and Winschel (2008), but this is not considered in this work. To quantify the variation of a function of interest due to a given variability of a random variable at a specific design point, a sensitivity analysis of the objective functions (mean and standard deviation of the compliance) with respect to the random variables was performed. The sensitivity value was then multiplied by the standard deviation of the random variable. Thus, the importance factor (Impf_{k,i}) of the output f_k with respect to the random variable U_i can be evaluated as shown in the expression for Impf_{k,i} above.
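A minimal sketch of this ranking procedure, using central finite differences for the sensitivities (the response function and its parameters below are placeholders, not the truss model):

```python
import numpy as np

def importance_factors(f, U_mean, U_sd, h=1e-6):
    """Impf_{k,i} = (df/dU_i) * SD(U_i) / f, sensitivities by central differences."""
    U_mean = np.asarray(U_mean, dtype=float)
    f0 = f(U_mean)
    impf = np.empty(len(U_mean))
    for i, sd in enumerate(U_sd):
        up, dn = U_mean.copy(), U_mean.copy()
        up[i] += h
        dn[i] -= h
        dfdUi = (f(up) - f(dn)) / (2.0 * h)   # sensitivity w.r.t. variable i
        impf[i] = dfdUi * sd / f0             # scaled by the variable's SD
    return impf

# toy response: only the first variable matters strongly
f = lambda U: U[0] ** 2 + 0.01 * U[1]
imp = importance_factors(f, U_mean=[2.0, 3.0], U_sd=[0.5, 0.5])
```

Sorting the resulting factors gives the estimated order of importance used to decide which variables enter the reduced PCM grid.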

Figure 18. 3D truss: random variables importance (output variation, percent, of the mean and SD of the compliance at the initial design)


calculations were performed at the initial design (μ_i = 2.3, i = 1, …, 6). From these graphs, the estimated order of importance of the random variables was determined. An analogous procedure was carried out for the computation of the stress statistics, and the results obtained were similar. A set of approximations of the mean and SD of the compliance was evaluated considering different random variables and different methods for the statistics computation. The results are shown in Table V, where the design related to the initial point was considered. The results are in acceptable agreement with the ones presented in the literature (Doltsinis and Kang, 2004). The results shown for case 13 will be considered as exact for our purposes. As can be seen, case 6, with just four (out of fourteen) random variables, achieves an error of just 1 percent when computing the SD of the compliance, using just 81 (PCM) integration points against 5 × 10^5 via MC. Therefore, this is the case chosen to perform the robust optimization study in the next section.
6.3.2 Robust optimization study. In the RO, the constraints considered were the mean weight of the structure (w̄), the stresses in the bars and the limits of the design variables. The optimization problem is stated as follows:

Minimize: [C̄(μ), SD_C(μ)],   μ ∈ R^{1×6}

Subject to:
|σ̄_i| + 3SD_σ(i) ≤ σ_max,   i = 1, 2, …, 25
w̄(μ) ≤ 750
0.05 ≤ μ_k ≤ 10,   k = 1, …, 6

The stress constraint parameter in the original problem (Doltsinis and Kang, 2004) is set as σ_max = 5,000. In order to allow the solutions obtained by Doltsinis and Kang

Table V. 3D truss: statistics calculation for different methods and random variables

Case    | Random variables | Method        | F. Count | Mean (×10^3) | SD (×10^3)
0       | –                | Deterministic | 1        | 7.7024       | –
1       | 14               | PCM – 3       | 3        | 7.7085       | 0.1659
2       | 6                | PCM – 3       | 3        | 7.7595       | 0.5204
3       | 6,14             | PCM – 3       | 9        | 7.7656       | 0.5466
4       | 6,14,13          | PCM – 3       | 27       | 7.7678       | 0.5533
5       | 6,14,13,5        | PCM – 3       | 81       | 7.7681       | 0.5543
6       | 6,14,13,11       | PCM – 3       | 81       | 7.7709       | 0.5612
7       | 6,14,13,11       | PCM – 5       | 625      | 7.7710       | 0.5625
8       | 6,14,13,11       | MC – 5e3      | 5,000    | 7.7711       | 0.5651
9       | 6,14,13,11,5     | PCM – 3       | 243      | 7.7712       | 0.5623
10      | 6,14,13,11,5     | MC – 5e3      | 5,000    | 7.7714       | 0.5637
11      | 1-14             | MC – 5e3      | 5,000    | 7.7722       | 0.5713
12      | 1-14             | MC – 5e4      | 50,000   | 7.7729       | 0.5667
13      | 1-14             | MC – 5e5      | 500,000  | 7.7730       | 0.5670
Paper^a | 1-14             | MC – 3e3      | 3,000    | 7.763        | 0.557
^a Source: results from Doltsinis and Kang (2004)


(2004) to be reproduced, the stress constraint had to be modified: we found that some of the optimum solutions obtained in Doltsinis and Kang (2004) are not feasible, due to a high stress in bar number 13. Taking this finding into account, we consider σ_max = 12,500. The solutions of the multiobjective robust optimization obtained for this problem are presented in Figure 19 and Table VI. Again, the results via NBI and NNC agree closely with each other, and these two methods obtained the Pareto point distributions with the lowest evenness values. Additionally, as can be seen in Figure 19, some solutions obtained by the Min-Max method are not Pareto optimal. Table VII presents the results for the ten Pareto points: the respective values of the objective functions (mean and SD of the compliance) obtained via NBI (using case 6 of Table V), and the statistics re-computed through MC with 5 × 10^4 points. Solutions number 1 and 10 are the optima of the SD and of the mean of the compliance, respectively. The solutions P*(SD) and P*(Mean) are the optima of the SD and of the mean of the compliance, respectively, given in the literature (Doltsinis and Kang, 2004), where these statistics were obtained via MC with 3,000 points. It is worth remembering that here the stress constraint was modified (σ_max = 12,500); the solution of the problem as stated in Doltsinis and Kang (2004) (σ_max = 5,000) is presented in what follows. Note that the improvement in the structure variability (SD of the compliance) was more than 30 percent when comparing the individual optimum solutions. The problem is now considered with the "original" stress constraint (σ_max = 5,000). The solutions of the multiobjective robust optimization are presented in Figure 20 and Table VIII. Again, the results via NBI and NNC agree closely with each other, and these two methods obtained the Pareto point distributions with the lowest evenness values.

Figure 19. 3D truss: Pareto solutions (mean against SD of the compliance for WS, Min-Max, NBI and NNC)

Table VI. 3D truss: RMO performance considering PCM with RBM methods

MO Method | Time (s) | F. Count | Evenness
WS        | 338      | 730      | 0.4683
Min-Max   | 177      | 454      | 0.56072
NBI       | 181      | 519      | 0.23522
NNC       | 454      | 921      | 0.22504

Table VII. 3D truss: RMO Pareto solutions considering PCM with RBM methods

Solution n. | μ1     | μ2     | μ3     | μ4     | μ5     | μ6     | Mean (c. 6) | SD (c. 6) | Mean (MC) | SD (MC)
P*(SD)^a    | 0.147  | 0.566  | 3.465  | 0.822  | 8.048  | 0.672  | –           | –         | 6.184     | 0.287
1 (SD)      | 0.2420 | 0.5693 | 3.5243 | 1.0858 | 7.6058 | 0.3087 | 6.1334      | 0.2868    | 6.1362    | 0.2967
2           | 0.0718 | 0.5548 | 4.0281 | 1.1669 | 7.2030 | 0.1360 | 5.9542      | 0.2883    | 5.9569    | 0.2977
3           | 0.0500 | 0.5406 | 4.3690 | 1.2682 | 6.7505 | 0.0500 | 5.8296      | 0.2928    | 5.8320    | 0.3009
4           | 0.0500 | 0.5271 | 4.7735 | 1.2987 | 6.3515 | 0.0500 | 5.7180      | 0.3002    | 5.7204    | 0.3070
5           | 0.0500 | 0.6522 | 4.8287 | 1.2652 | 6.3279 | 0.0500 | 5.6200      | 0.3092    | 5.6223    | 0.3156
6           | 0.0500 | 0.8469 | 4.9631 | 1.1292 | 6.4801 | 0.0500 | 5.5280      | 0.3187    | 5.5305    | 0.3257
7           | 0.0500 | 1.0567 | 5.0234 | 1.0938 | 6.4101 | 0.0500 | 5.4477      | 0.3296    | 5.4500    | 0.3346
8           | 0.0500 | 1.3117 | 5.1304 | 1.0563 | 6.2830 | 0.0500 | 5.3843      | 0.3423    | 5.3866    | 0.3486
9           | 0.0500 | 1.4534 | 5.4264 | 1.0471 | 5.9915 | 0.0500 | 5.3415      | 0.3573    | 5.3438    | 0.3628
10 (Mean)   | 0.0500 | 1.6588 | 5.6769 | 1.0457 | 5.6795 | 0.0500 | 5.3257      | 0.3752    | 5.3278    | 0.3801
P*(Mean)^a  | 0.0500 | 1.72   | 5.74   | 1.05   | 5.574  | 0.0500 | –           | –         | 5.322     | 0.377
^a Source: results from Doltsinis and Kang (2004)

Table IX presents the design results for the ten Pareto points and the respective values of the objective functions (mean and SD of the compliance) obtained via NBI. As can be seen, when this stricter constraint (σ_max = 5,000) is assumed, the feasible space becomes narrower, reducing the Pareto region. Note that the relative difference in the SD between the individual optima is
