WORST CASE PROPAGATED UNCERTAINTY OF MULTIDISCIPLINARY SYSTEMS IN ROBUST DESIGN OPTIMIZATION

Xiaoyu Gu*, John E. Renaud†, Stephen M. Batill‡ and Raymond M. Brach‡
Department of Aerospace and Mechanical Engineering, University of Notre Dame, Notre Dame, Indiana

Amarjit S. Budhiraja¤
Department of Mathematics, University of Notre Dame, Notre Dame, Indiana

* graduate research assistant, † associate professor, ‡ professor, ¤ assistant professor

Abstract: While simulation based design tools continue to be advanced at unprecedented rates, little attention has been paid to how these tools interact with other advanced design tools and how that interaction influences the multidisciplinary system analysis and design processes. In this research an investigation of how uncertainty propagates through a multidisciplinary system analysis, subject to the bias errors associated with the disciplinary design tools and the precision errors in the inputs, is undertaken. A rigorous derivation for estimating the worst case propagated uncertainty in multidisciplinary systems is developed and validated using Monte Carlo simulation in application to a small analytic problem and an Autonomous HoverCraft (AHC) problem. The method of worst case estimation of uncertainty is then integrated into a robust optimization framework. In robust optimization, both the objective function and the constraints consist of two parts: the original or conventional functions and an estimate of the variation of the functions. In robust optimization the engineer must trade off an increase in the objective function value for a decrease in variation. The robust optimization approach is tested in application to the AHC problem and the corresponding results are discussed.

Introduction: To date most simulation based design tools have been developed as single discipline design tools (i.e., FEA, CFD, etc.). The model-predicted performance estimated by a design tool and the actual system performance will deviate to some degree. Generally tool developers and designers have focused on being able to quantify the uncertainty associated with a single discipline performance prediction. The goal of our research is to address a much more challenging problem by considering that most engineering systems are multidisciplinary in nature and that most system models are developed using a variety of disciplinary design tools/models, each having some uncertainty associated with its performance. These multidisciplinary system models are often highly coupled, where the performance predictions of one discipline may be used as inputs by another discipline and vice versa. Thus, the execution of these multidisciplinary system models is iterative by nature. The resulting uncertainty in the performance predictions for such a system can no longer be quantified by the single discipline designer; instead the uncertainty must be estimated by propagating the uncertainty of each discipline through the multidisciplinary system model. Based on work developed in preliminary studies7, we develop a worst case estimate of the propagated uncertainty in the

states output from a multidisciplinary system analysis. The derivation is based on the methodology of propagation of error in physical measurements. Having developed an estimate of the worst case uncertainties, the next problem becomes how to use this additional information to assist in making design decisions. Increasing emphasis in engineering design is focused on accounting for manufacturing variations and functional variances in operation. Engineers have traditionally relied on post-optimality analysis to evaluate the sensitivity of their designs to variations in the design variables. In this paper the worst case estimates of uncertainty are incorporated in a robust optimization framework which provides for trade-offs between improving performance and reducing variations.

Sources of Uncertainty in Single Discipline Simulation Based Design Tools: Understanding the sources of uncertainty in single discipline simulation based design tools is the first step in studying the uncertainty in the states output from a multidisciplinary system analysis. Numerical simulation tools, such as finite element (FEM) or computational fluid dynamics (CFD) software, are widely used by engineers to assist in the design

process. The errors which produce uncertainty in the outputs of numerical simulation tools come from multiple sources, and we can classify these errors into two categories: the bias error associated with the tool and the precision error of the input data (see Figure 1).

Fig. 1 Sources and Classification of Error [diagram: scientific theory error, essentially impossible to estimate; approximation error and algorithmic error, the "bias error" associated with the tool; computational error, possible to maintain at a negligible level; data error, the "precision error" in the input]

The bias error results from two sources, namely the approximation error and the algorithmic error. Approximation error is the error induced in transforming the physical principles of scientific theory into analytic or raw models for engineering use. For example, simple analytic models for beam theory make the assumption that plane sections through a beam, taken normal to its axis, remain plane after the beam is subjected to bending. A more rigorous solution from the mathematical theory of elasticity would show that a slight warpage of these planes can occur. Therefore the simple analytic model produces results which are generally accurate but carry some approximation error where the analytic modeling assumptions are violated. Algorithmic error is another source of bias error. Algorithmic error is induced in transforming the analytic or raw models into numerical simulation models. These algorithmic errors can be characterized as occurring at two

stages. For example, in the development of a commercial finite element code, certain decisions or assumptions are made, such as how to develop elements which best describe the analytic models when viewed as an assembly of discrete elements, how many types of elements to offer, etc. These decisions or assumptions with respect to finite element code development are an obvious cause of algorithmic error. A second source of algorithmic error is introduced when the end user sits down at her terminal and begins to build a finite element model for a specific design application. The end user chooses how many elements to use, their type, location and size, and other algorithm inputs such as convergence criteria. Each of these decisions ultimately impacts the accuracy of the resulting finite element model and contributes to the bias error. Computational errors, such as round-off errors in a computer, are another source of error in simulation based design and are believed to be negligible with respect to the bias error. The third source of error which contributes to the uncertainty of the state outputs or attributes is the precision error in the design variables input to the numerical simulation tool. This type of error, which is attributed to the manufacturing variability of producing the actual artifact, is probabilistic and has been dealt with in many studies, including the robust optimization studies of Su and Renaud13. This section has illustrated that the uncertainty in a single discipline simulation tool results from three sources of error: approximation error, algorithmic error and precision error. In the next section we discuss how these single discipline simulation tools interact in a multidisciplinary design environment, resulting in the need to propagate uncertainties through the complex coupled system of simulation tools.

Uncertainty in Multidisciplinary Systems: The process of predicting the performance of

an engineering artifact using numerical simulation is referred to in the context of multidisciplinary design as a system analysis. In a system analysis various numerical simulations are invoked in a structured sequence. Using a set of independent variables which uniquely define the artifact, referred to as design variables x, the system analysis is intended to determine the corresponding set of attributes which characterize the system performance, referred to as states y herein. This notation is common in much of the literature related to multidisciplinary design. Figure 2 is a schematic representation of a multidisciplinary system analysis in which the system analysis is decomposed into three disciplines or contributing analyses (CA's). Each discipline or CA makes use of a simulation based discipline design tool. Performance prediction information is exchanged between the discipline experts who contribute to the overall system analysis. Much effort has been expended in the past to develop and improve the modeling, analysis and simulation capabilities of the technical disciplines represented in Figure 2. The exponential growth in computing capabilities of the past few decades has led to the development of complex simulation capabilities which can be used to augment the traditional handbooks, databases and "rules of thumb" used by engineering designers. The effective exploitation of these new and complex tools as part of a multidisciplinary system is one of the primary goals of the emerging discipline of multidisciplinary design optimization (Sobieszczanski-Sobieski and Haftka12). In a multidisciplinary system analysis such as that illustrated in Figure 2, there are a number of disciplines between which information is exchanged. Therefore the uncertainty in the states computed by a given discipline is dependent on the variability in the design variables, the bias errors in the states computed by other

disciplines and the bias error of the discipline tool itself. This exchange of state information often leads to iterative system analysis solution strategies whose convergence can be influenced by the uncertainty of individual disciplines. The extension of current methods of system analysis to include the "propagation" of uncertainty through the multidisciplinary system analysis is the primary purpose of this research.

Fig. 2 Model of a Multidisciplinary System Analysis [diagram: design variables x with variability ∆x feed three contributing analyses, CA1 (Tool A), CA2 (Tool B) and CA3 (Tool C), each with its own bias error; the CA's exchange the states ya, yb and yc, and the system outputs the state variables ya, yb, yc together with the propagated uncertainty ∆ya, ∆yb, ∆yc]

Propagation of Uncertainty in Multidisciplinary Systems: There exists an important body of pertinent experience related to the role of uncertainty in complex systems. The experimental community has a long history of quantifying the uncertainty in complex physical systems used in experimental research. These complex physical systems are often composed of subsystems, each interacting with other subsystems. Note, we are not making an analogy between experiment and design but rather between the complex coupled physical systems used in experiments and the complex coupled systems of simulation based design tools used in multidisciplinary design. There has been much effort expended to develop techniques for determining and quantifying the uncertainty associated with the information resulting from experimental measurements (Kline and McClintock8, Taylor14, Bevington and Robinson2,3, Coleman and Steele6). It has been long recognized that decisions made using information derived from experimental sources require a quantitative assessment of the uncertainty associated with those measurements. A similar recognition exists for information derived from numerical modeling and simulation, and there is a need to develop methods which can be used to quantify this information and efficiently use it in the process of making design decisions.

A variety of techniques are used in the planning, execution and evaluation of results in experimental measurement systems. A system level uncertainty assessment for a complex experimental facility, like the one recently conducted for the NASA National Transonic Facility at the Langley Research Center (Batill1), is used not only to provide quantitative uncertainty information regarding the results of a particular test but also to evaluate the entire process, and is intended to enhance the productivity of these facilities. A similar objective can be achieved in the planning, execution and evaluation of results for simulation-based, multidisciplinary design. A key issue in experimental uncertainty assessment is determining the most appropriate tools to use, defining procedures to follow and identifying the primary sources of uncertainty. Therefore one of the first tasks in the proposed research program is to extend existing techniques from experimental uncertainty analysis to allow for the quantification of uncertainty in system states for coupled, multidisciplinary systems. Once the uncertainty of the system states can be quantified, then, and only then, can non-deterministic methods be implemented to include that information in the design decision process. Uncertainty in the results of a multidisciplinary system analysis can be classified as resulting from three sources. These are:

• Errors/variability in the information used as input to the simulation,
• Errors associated with each numerical simulation tool (i.e., approximations, theories or models),
• Errors attributed to the numerical procedures.


Using the nomenclature of experimental uncertainty analysis, the first of these sources is related to “precision error” while the second and third are sources of “bias error.” Both of these “error sources” are then combined to provide the uncertainty in the results. It is proposed that the quantification of uncertainty in simulation-based system analysis be based upon the methodology of propagation of uncertainty developed for experimental systems. This will be an extension of the methods of Kline and McClintock8, as demonstrated for complex coupled systems by Batill1. This requires the computation of state sensitivities which will be accomplished using the Global Sensitivity Equations (GSE’s) developed by Sobieszczanski-Sobieski11. Worst Case Propagation of Uncertainty in Multidisciplinary Systems: In this research final results for propagating uncertainties in multidisciplinary systems are presented. This investigation is focused on developing a worst case sensitivity based estimate of the variation in performance predictions (i.e., states y) due to uncertainties both in the design variable inputs x and in the simulation based design tools T. The derivation is tested in application to two multidisciplinary system design problems. Monte Carlo simulation is used to evaluate the validity of the propagated uncertainties. In order to develop an estimate of the propagated uncertainty in the performance predictions of a system analysis, the following assumptions will be made. (1)

The bias error associated with a given simulation tool varies as a function of the tool's inputs (refer to Figure 3).
(2) Given the same input, the simulation tool will give the same output and, consequently, the same bias error in the corresponding output.
(3) If the tool input changes, in general, the bias error will change. However, if one is performing a sensitivity analysis, where the change in the inputs is very small, the bias error can be assumed to be fixed.
(4) During a coupled system analysis, the tool used in each discipline remains the same.
(5) The resolution of each simulation tool is high enough so that its effect on the convergence of the coupled system analysis is negligible.
(6) The tool uncertainty information (i.e., bias error) is provided by the tool developer.

Fig. 3 Example of Tool Bias Error [plot of output y versus input x near a nominal input x0, showing the true output, the tool output, and a fixed bias error approximation of the gap between them]
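The role these assumptions play can be sketched numerically. In the toy model below, both the "ideal" model and the actual tool are invented for illustration (in practice the ideal tool does not exist); the bias is simply their gap, which varies with the input (Assumption 1) but is nearly constant over a small neighborhood of a nominal input x0 (Assumption 3):

```python
# Toy illustration of Assumptions 1-3: the bias error delta(x) is the
# gap between a hypothetical ideal model and the tool. It varies with
# the input, but is nearly fixed over a small change about x0.

def T_true(x):   # stand-in for the unavailable ideal tool (assumed)
    return x**2 + 0.5 * x

def T(x):        # stand-in for the actual simulation tool (assumed)
    return x**2

def delta(x):    # bias error as a function of the tool input
    return T_true(x) - T(x)

x0 = 0.8
print(delta(x0))                     # bias at the nominal input, here 0.5*x0
print(delta(x0 + 1e-3) - delta(x0))  # change in bias over a small step, nearly zero
```

Here the bias happens to be known in closed form; in the paper's setting it is an estimate supplied by the tool developer (Assumption 6).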

Nomenclature:
CA: contributing analysis
SA: system analysis
x: design variable (vector)
∆x: precision error (vector) corresponding to the design variable
x^true: ideal actual true value of the design variable (vector), which is unknown
T: simulation tools (vector)
∆T: bias error (vector) associated with the tool T
T^true: ideal tool function (vector), which does not exist
δ: assumed bias error function (vector) of the tool
∂T/∂x, ∂T/∂y: local partial derivatives (matrices)
I: identity matrix
y: state variable (vector)
∆y: propagated uncertainty (vector) in the state variable
y^true: ideal true value of the state variable (vector), which is unknown
ya: state variable (vector) which is the output of contributing analysis "a"
∆ya: propagated uncertainty in the state variable (vector) ya
dya/dx, dyb/dx, dyc/dx: global sensitivities (matrices) whose elements are global partial derivatives

A multidisciplinary system analysis SA can be abstracted as a set of nonlinear simultaneous equations (1), where each contributing analysis tool Ti outputs a unique state vector estimate (i.e., ya, yb and yc) that depends on both the input vector x and the state outputs of the other contributing analyses.

$$\begin{cases} y_a = T_a(x, y_b, y_c) \\ y_b = T_b(x, y_a, y_c) \\ y_c = T_c(x, y_a, y_b) \end{cases} \tag{1}$$

Note that errors exist in both the design variable x inputs (precision error ∆x) and the contributing analysis tools Ta, Tb and Tc (bias errors ∆Ta, ∆Tb and ∆Tc, respectively). The terms x^true, Ta^true, Tb^true and Tc^true are defined in Equations (2) and (3) and represent the true value of the design vector and three theoretically perfect tools which have no bias error.

$$x^{true} = x + \Delta x \tag{2}$$

$$\begin{cases} T_a^{true}(x, y_b, y_c) = T_a(x, y_b, y_c) + \Delta T_a \\ T_b^{true}(x, y_a, y_c) = T_b(x, y_a, y_c) + \Delta T_b \\ T_c^{true}(x, y_a, y_b) = T_c(x, y_a, y_b) + \Delta T_c \end{cases} \tag{3}$$
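As a minimal sketch of how a coupled SA of the form of Equation (1) is executed, consider two hypothetical scalar contributing analyses iterated to a consistent state (the functions below are invented stand-ins, not the paper's AHC disciplines):

```python
# Fixed-point iteration of a toy two-discipline system analysis:
#   ya = Ta(x, yb),  yb = Tb(x, ya)
# The contributing analyses are illustrative stand-ins.

def Ta(x, yb):
    return x + 0.3 * yb

def Tb(x, ya):
    return 2.0 * x - 0.2 * ya

def system_analysis(x, tol=1e-10, max_iter=200):
    ya, yb = 0.0, 0.0
    for _ in range(max_iter):
        ya_new = Ta(x, yb)
        yb_new = Tb(x, ya_new)
        if abs(ya_new - ya) < tol and abs(yb_new - yb) < tol:
            return ya_new, yb_new
        ya, yb = ya_new, yb_new
    raise RuntimeError("system analysis did not converge")

ya, yb = system_analysis(1.0)
print(ya, yb)  # consistent states satisfying both disciplines
```

The iterative character of this loop is exactly why single-discipline error estimates are not enough: any error in one tool's output re-enters the other tool on the next pass.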

For discipline "a", Ta^true is the ideal operator of the ideal tool which can supply the user with the exact solution of the CA by operating on x, yb and yc. In reality Ta^true does not exist (because of our lack of knowledge) and we can only have an estimate of it. The difference between the output of the ideal operator Ta^true and that of the existent operator Ta, when they operate on the same input, is denoted by ∆Ta and is referred to as the "bias error". ∆Ta is unknown in reality and needs to be estimated; in this report, we assume that this estimate is provided by the producer of the existent tools.

The precision errors ∆x and bias errors ∆Ta, ∆Tb and ∆Tc will propagate through the coupled SA and result in the propagated uncertainty ∆yi associated with the system states yi.

$$\begin{cases} y_a^{true} = y_a + \Delta y_a \\ y_b^{true} = y_b + \Delta y_b \\ y_c^{true} = y_c + \Delta y_c \end{cases} \tag{4}$$

The term y_a^true represents the ideal state output of the SA, which can be obtained only if we know the actual value x^true of the design vector and if we have ideal tools Ta^true, Tb^true and Tc^true for all the disciplines in this system. The term y_a^true is unknown to us, since we do not know x^true, nor do we possess any of the tools Ta^true, Tb^true and Tc^true. The term ∆ya is the difference between the ideal state y_a^true and the nominal state ya, and our goal in this research is to develop a good estimate of this quantity. Note that if the ideal operators were available and the true design variables and states were known, then the true states would satisfy the set of simultaneous equations given in Equation (5) below.

$$\begin{cases} y_a^{true} = T_a^{true}(x^{true}, y_b^{true}, y_c^{true}) \\ y_b^{true} = T_b^{true}(x^{true}, y_a^{true}, y_c^{true}) \\ y_c^{true} = T_c^{true}(x^{true}, y_a^{true}, y_b^{true}) \end{cases} \tag{5}$$

According to Assumption 1, the bias error associated with the tool is a function of the tool inputs:

$$\begin{cases} \Delta T_a = \delta_a(x, y_b, y_c) \\ \Delta T_b = \delta_b(x, y_a, y_c) \\ \Delta T_c = \delta_c(x, y_a, y_b) \end{cases} \tag{6}$$

If we substitute Equation (6) into Equation (3), and if we were able to have the ideal operators T^true and the actual value x^true of the design vector, and let the operators T and δ operate on the true values, we would obtain the true values of the states, which must satisfy the set of simultaneous equations

$$\begin{cases} y_a^{true} = T_a(x^{true}, y_b^{true}, y_c^{true}) + \delta_a(x^{true}, y_b^{true}, y_c^{true}) \\ y_b^{true} = T_b(x^{true}, y_a^{true}, y_c^{true}) + \delta_b(x^{true}, y_a^{true}, y_c^{true}) \\ y_c^{true} = T_c(x^{true}, y_a^{true}, y_b^{true}) + \delta_c(x^{true}, y_a^{true}, y_b^{true}) \end{cases}$$

From Equation (1) we know that y_a = T_a(x, y_b, y_c), and from Equation (4) we know that y_a^true = y_a + ∆y_a; thus ∆y_a = y_a^true − y_a = y_a^true − T_a(x, y_b, y_c). Substituting the first row of the system above into this expression, we have

$$\Delta y_a = T_a(x^{true}, y_b^{true}, y_c^{true}) + \delta_a(x^{true}, y_b^{true}, y_c^{true}) - T_a(x, y_b, y_c). \tag{7}$$

The term T_a(x^true, y_b^true, y_c^true) is what we would obtain if the existent operator Ta operated on the true values x^true, y_b^true and y_c^true, and in general it is different from T_a(x, y_b, y_c), which corresponds to the same operator operating on the nominal values x, yb, yc. In other words, we have T_a(x^true, y_b^true, y_c^true) ≠ T_a(x, y_b, y_c); as a result, ∆y_a ≠ δ_a(x^true, y_b^true, y_c^true).

The first term on the right hand side of Equation (7) can be expanded using a Taylor series expansion about the current design (x, yb, yc):

$$T_a(x^{true}, y_b^{true}, y_c^{true}) = T_a(x, y_b, y_c) + \frac{\partial T_a}{\partial x}(x^{true} - x) + \frac{\partial T_a}{\partial y_b}(y_b^{true} - y_b) + \frac{\partial T_a}{\partial y_c}(y_c^{true} - y_c), \tag{8}$$

where ∂Ta/∂x, ∂Ta/∂yb and ∂Ta/∂yc are the local partial derivatives. If we substitute Equations (2) and (4) into Equation (8) we generate

$$T_a(x^{true}, y_b^{true}, y_c^{true}) = T_a(x, y_b, y_c) + \frac{\partial T_a}{\partial x}\Delta x + \frac{\partial T_a}{\partial y_b}\Delta y_b + \frac{\partial T_a}{\partial y_c}\Delta y_c. \tag{9}$$

According to Assumption 3, when the tool input varies over a small range (which it does in this derivation), the bias error can be assumed fixed (see Figure 3) and we have

$$\delta_a(x^{true}, y_b^{true}, y_c^{true}) \cong \delta_a(x, y_b, y_c). \tag{10}$$

Substituting Equations (1), (4), (9) and (10) into Equation (7), we obtain

$$y_a + \Delta y_a = y_a + \frac{\partial T_a}{\partial x}\Delta x + \frac{\partial T_a}{\partial y_b}\Delta y_b + \frac{\partial T_a}{\partial y_c}\Delta y_c + \delta_a(x, y_b, y_c). \tag{11}$$

Rearranging, we have

$$\Delta y_a - \frac{\partial T_a}{\partial y_b}\Delta y_b - \frac{\partial T_a}{\partial y_c}\Delta y_c = \frac{\partial T_a}{\partial x}\Delta x + \delta_a(x, y_b, y_c). \tag{12}$$

In matrix form we can write this as

$$
\begin{bmatrix}
I_a & -\dfrac{\partial T_a}{\partial y_b} & -\dfrac{\partial T_a}{\partial y_c} \\
-\dfrac{\partial T_b}{\partial y_a} & I_b & -\dfrac{\partial T_b}{\partial y_c} \\
-\dfrac{\partial T_c}{\partial y_a} & -\dfrac{\partial T_c}{\partial y_b} & I_c
\end{bmatrix}
\begin{bmatrix} \Delta y_a \\ \Delta y_b \\ \Delta y_c \end{bmatrix}
=
\begin{bmatrix}
\dfrac{\partial T_a}{\partial x}\Delta x + \delta_a(x, y_b, y_c) \\
\dfrac{\partial T_b}{\partial x}\Delta x + \delta_b(x, y_a, y_c) \\
\dfrac{\partial T_c}{\partial x}\Delta x + \delta_c(x, y_a, y_b)
\end{bmatrix}. \tag{13}
$$

Rearranging the right hand side we obtain Equation (14), and simplifying further, Equation (15):

$$
\begin{bmatrix}
I_a & -\dfrac{\partial T_a}{\partial y_b} & -\dfrac{\partial T_a}{\partial y_c} \\
-\dfrac{\partial T_b}{\partial y_a} & I_b & -\dfrac{\partial T_b}{\partial y_c} \\
-\dfrac{\partial T_c}{\partial y_a} & -\dfrac{\partial T_c}{\partial y_b} & I_c
\end{bmatrix}
\begin{bmatrix} \Delta y_a \\ \Delta y_b \\ \Delta y_c \end{bmatrix}
=
\begin{bmatrix} \dfrac{\partial T_a}{\partial x}\Delta x \\ \dfrac{\partial T_b}{\partial x}\Delta x \\ \dfrac{\partial T_c}{\partial x}\Delta x \end{bmatrix}
+
\begin{bmatrix} \delta_a(x, y_b, y_c) \\ \delta_b(x, y_a, y_c) \\ \delta_c(x, y_a, y_b) \end{bmatrix} \tag{14}
$$

$$
\begin{bmatrix}
I_a & -\dfrac{\partial T_a}{\partial y_b} & -\dfrac{\partial T_a}{\partial y_c} \\
-\dfrac{\partial T_b}{\partial y_a} & I_b & -\dfrac{\partial T_b}{\partial y_c} \\
-\dfrac{\partial T_c}{\partial y_a} & -\dfrac{\partial T_c}{\partial y_b} & I_c
\end{bmatrix}
\begin{bmatrix} \Delta y_a \\ \Delta y_b \\ \Delta y_c \end{bmatrix}
=
\begin{bmatrix} \dfrac{\partial T_a}{\partial x} \\ \dfrac{\partial T_b}{\partial x} \\ \dfrac{\partial T_c}{\partial x} \end{bmatrix}\Delta x
+
\begin{bmatrix} \delta_a(x, y_b, y_c) \\ \delta_b(x, y_a, y_c) \\ \delta_c(x, y_a, y_b) \end{bmatrix} \tag{15}
$$

Equation (15) is re-written to obtain Equation (16):

$$
\begin{bmatrix} \Delta y_a \\ \Delta y_b \\ \Delta y_c \end{bmatrix}
=
\begin{bmatrix}
I_a & -\dfrac{\partial T_a}{\partial y_b} & -\dfrac{\partial T_a}{\partial y_c} \\
-\dfrac{\partial T_b}{\partial y_a} & I_b & -\dfrac{\partial T_b}{\partial y_c} \\
-\dfrac{\partial T_c}{\partial y_a} & -\dfrac{\partial T_c}{\partial y_b} & I_c
\end{bmatrix}^{-1}
\left(
\begin{bmatrix} \dfrac{\partial T_a}{\partial x} \\ \dfrac{\partial T_b}{\partial x} \\ \dfrac{\partial T_c}{\partial x} \end{bmatrix}\Delta x
+
\begin{bmatrix} \delta_a(x, y_b, y_c) \\ \delta_b(x, y_a, y_c) \\ \delta_c(x, y_a, y_b) \end{bmatrix}
\right) \tag{16}
$$

Expanding the right side of Equation (16), we obtain

$$
\begin{bmatrix} \Delta y_a \\ \Delta y_b \\ \Delta y_c \end{bmatrix}
=
\begin{bmatrix}
I_a & -\dfrac{\partial T_a}{\partial y_b} & -\dfrac{\partial T_a}{\partial y_c} \\
-\dfrac{\partial T_b}{\partial y_a} & I_b & -\dfrac{\partial T_b}{\partial y_c} \\
-\dfrac{\partial T_c}{\partial y_a} & -\dfrac{\partial T_c}{\partial y_b} & I_c
\end{bmatrix}^{-1}
\begin{bmatrix} \dfrac{\partial T_a}{\partial x} \\ \dfrac{\partial T_b}{\partial x} \\ \dfrac{\partial T_c}{\partial x} \end{bmatrix}\Delta x
+
\begin{bmatrix}
I_a & -\dfrac{\partial T_a}{\partial y_b} & -\dfrac{\partial T_a}{\partial y_c} \\
-\dfrac{\partial T_b}{\partial y_a} & I_b & -\dfrac{\partial T_b}{\partial y_c} \\
-\dfrac{\partial T_c}{\partial y_a} & -\dfrac{\partial T_c}{\partial y_b} & I_c
\end{bmatrix}^{-1}
\begin{bmatrix} \delta_a(x, y_b, y_c) \\ \delta_b(x, y_a, y_c) \\ \delta_c(x, y_a, y_b) \end{bmatrix} \tag{17}
$$

Equation (18) details the global sensitivity equations (GSE) as derived by Sobieszczanski-Sobieski11:

$$
\begin{bmatrix}
I_a & -\dfrac{\partial T_a}{\partial y_b} & -\dfrac{\partial T_a}{\partial y_c} \\
-\dfrac{\partial T_b}{\partial y_a} & I_b & -\dfrac{\partial T_b}{\partial y_c} \\
-\dfrac{\partial T_c}{\partial y_a} & -\dfrac{\partial T_c}{\partial y_b} & I_c
\end{bmatrix}
\begin{bmatrix} \dfrac{dy_a}{dx} \\ \dfrac{dy_b}{dx} \\ \dfrac{dy_c}{dx} \end{bmatrix}
=
\begin{bmatrix} \dfrac{\partial T_a}{\partial x} \\ \dfrac{\partial T_b}{\partial x} \\ \dfrac{\partial T_c}{\partial x} \end{bmatrix} \tag{18}
$$

The terms dya/dx, dyb/dx and dyc/dx are the global sensitivities. The word "global" is used because we treat all the design variables as the components of the single design vector x, and the sensitivity of the states with respect to this single vector can be thought of as the "global sensitivity". Note that since in general both ya and x are vectors, the terms dya/dx, dyb/dx and dyc/dx are used to denote the matrices whose elements are global partial derivatives. In detail,

$$
\frac{dy_a}{dx} =
\begin{bmatrix}
\dfrac{\partial y_{a_1}}{\partial x_1} & \dfrac{\partial y_{a_1}}{\partial x_2} & \cdots & \dfrac{\partial y_{a_1}}{\partial x_n} \\
\dfrac{\partial y_{a_2}}{\partial x_1} & \dfrac{\partial y_{a_2}}{\partial x_2} & \cdots & \dfrac{\partial y_{a_2}}{\partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial y_{a_{n_a}}}{\partial x_1} & \dfrac{\partial y_{a_{n_a}}}{\partial x_2} & \cdots & \dfrac{\partial y_{a_{n_a}}}{\partial x_n}
\end{bmatrix},
$$

where n is the number of design variables and n_a is the number of states calculated in discipline "a" through CA1.

Equation (18) can be rewritten as Equation (19):

$$
\begin{bmatrix} \dfrac{dy_a}{dx} \\ \dfrac{dy_b}{dx} \\ \dfrac{dy_c}{dx} \end{bmatrix}
=
\begin{bmatrix}
I_a & -\dfrac{\partial T_a}{\partial y_b} & -\dfrac{\partial T_a}{\partial y_c} \\
-\dfrac{\partial T_b}{\partial y_a} & I_b & -\dfrac{\partial T_b}{\partial y_c} \\
-\dfrac{\partial T_c}{\partial y_a} & -\dfrac{\partial T_c}{\partial y_b} & I_c
\end{bmatrix}^{-1}
\begin{bmatrix} \dfrac{\partial T_a}{\partial x} \\ \dfrac{\partial T_b}{\partial x} \\ \dfrac{\partial T_c}{\partial x} \end{bmatrix} \tag{19}
$$

Substituting Equation (19) into Equation (17), we obtain Equation (20):

$$
\begin{bmatrix} \Delta y_a \\ \Delta y_b \\ \Delta y_c \end{bmatrix}
=
\begin{bmatrix} \dfrac{dy_a}{dx} \\ \dfrac{dy_b}{dx} \\ \dfrac{dy_c}{dx} \end{bmatrix}\Delta x
+
\begin{bmatrix}
I_a & -\dfrac{\partial T_a}{\partial y_b} & -\dfrac{\partial T_a}{\partial y_c} \\
-\dfrac{\partial T_b}{\partial y_a} & I_b & -\dfrac{\partial T_b}{\partial y_c} \\
-\dfrac{\partial T_c}{\partial y_a} & -\dfrac{\partial T_c}{\partial y_b} & I_c
\end{bmatrix}^{-1}
\begin{bmatrix} \delta_a(x, y_b, y_c) \\ \delta_b(x, y_a, y_c) \\ \delta_c(x, y_a, y_b) \end{bmatrix} \tag{20}
$$

If we let

$$
B =
\begin{bmatrix} B_{aa} & B_{ab} & B_{ac} \\ B_{ba} & B_{bb} & B_{bc} \\ B_{ca} & B_{cb} & B_{cc} \end{bmatrix}
=
\begin{bmatrix}
I_a & -\dfrac{\partial T_a}{\partial y_b} & -\dfrac{\partial T_a}{\partial y_c} \\
-\dfrac{\partial T_b}{\partial y_a} & I_b & -\dfrac{\partial T_b}{\partial y_c} \\
-\dfrac{\partial T_c}{\partial y_a} & -\dfrac{\partial T_c}{\partial y_b} & I_c
\end{bmatrix}^{-1} \tag{21}
$$

and substitute Equation (21) into Equation (20), we obtain

$$
\begin{bmatrix} \Delta y_a \\ \Delta y_b \\ \Delta y_c \end{bmatrix}
=
\begin{bmatrix} \dfrac{dy_a}{dx} \\ \dfrac{dy_b}{dx} \\ \dfrac{dy_c}{dx} \end{bmatrix}\Delta x
+
\begin{bmatrix} B_{aa} & B_{ab} & B_{ac} \\ B_{ba} & B_{bb} & B_{bc} \\ B_{ca} & B_{cb} & B_{cc} \end{bmatrix}
\begin{bmatrix} \delta_a(x, y_b, y_c) \\ \delta_b(x, y_a, y_c) \\ \delta_c(x, y_a, y_b) \end{bmatrix} \tag{22}
$$

In order to get the worst case estimate of uncertainty, the sum of absolute values is taken, with the absolute values applied element by element. The estimate of worst case uncertainty is then

$$
\begin{bmatrix} \Delta y_a \\ \Delta y_b \\ \Delta y_c \end{bmatrix}
=
\left|\begin{bmatrix} \dfrac{dy_a}{dx} \\ \dfrac{dy_b}{dx} \\ \dfrac{dy_c}{dx} \end{bmatrix}\Delta x\right|
+
\begin{bmatrix} |B_{aa}| & |B_{ab}| & |B_{ac}| \\ |B_{ba}| & |B_{bb}| & |B_{bc}| \\ |B_{ca}| & |B_{cb}| & |B_{cc}| \end{bmatrix}
\begin{bmatrix} |\delta_a(x, y_b, y_c)| \\ |\delta_b(x, y_a, y_c)| \\ |\delta_c(x, y_a, y_b)| \end{bmatrix} \tag{23}
$$

In order to show that Equation (23) is the worst case estimation of the uncertainty in the state vector y, we first need to explain what we mean by worst case estimation. We will use a simple function as follows:

$$z = f(x, y) \tag{24}$$

and we assume that x has a variation ∆x and y has a variation ∆y. When x changes to x ± ∆x, the change in z is correspondingly (neglecting higher order terms) ∆z_x = ±(∂z/∂x)∆x. Similarly, when y changes to y ± ∆y, the change in z is ∆z_y = ±(∂z/∂y)∆y. The worst case variation in z due to the variations in x and y can then be obtained as shown in Equation (25):

$$\Delta z = \Delta z_x + \Delta z_y = \pm\frac{\partial z}{\partial x}\Delta x \pm \frac{\partial z}{\partial y}\Delta y. \tag{25}$$

Note that when ∆z_x and ∆z_y have the same sign, we get the worst case variation. This is equivalent to taking the absolute values of ∆z_x and ∆z_y, since |∆z_x| and |∆z_y| are both positive. Therefore the worst case variation of z can be written as

$$\Delta z = |\Delta z_x| + |\Delta z_y| = \left|\frac{\partial z}{\partial x}\Delta x\right| + \left|\frac{\partial z}{\partial y}\Delta y\right|. \tag{26}$$

Now let us go back to Equation (22) and expand it. The first row is

$$\Delta y_a = \frac{dy_a}{dx}\Delta x + B_{aa}\,\delta_a(x, y_b, y_c) + B_{ab}\,\delta_b(x, y_a, y_c) + B_{ac}\,\delta_c(x, y_a, y_b). \tag{27}$$

Clearly, if there is no bias error, the variation in ya due to the variation of x is

$$\Delta y_{a;\Delta x} = \frac{dy_a}{dx}\Delta x. \tag{28}$$

The variation in ya due to the bias error δa(x, yb, yc) of Tool A alone is

$$\Delta y_{a;\delta_a} = B_{aa}\,\delta_a(x, y_b, y_c). \tag{29}$$

Similarly, the variation in ya due to the bias error δb(x, ya, yc) of Tool B alone is

$$\Delta y_{a;\delta_b} = B_{ab}\,\delta_b(x, y_a, y_c). \tag{30}$$

The variation in ya due to the bias error δc(x, ya, yb) of Tool C alone is

$$\Delta y_{a;\delta_c} = B_{ac}\,\delta_c(x, y_a, y_b). \tag{31}$$

Since ∆x, δa(x, yb, yc), δb(x, ya, yc) and δc(x, ya, yb) could each be either positive or negative, the worst combination of variations in ya is formed when ∆y_{a;∆x}, ∆y_{a;δa}, ∆y_{a;δb} and ∆y_{a;δc} all have the same sign. This is equivalent to taking the absolute value of each of them, as shown in Equation (32):

$$\Delta y_a = |\Delta y_{a;\Delta x}| + |\Delta y_{a;\delta_a}| + |\Delta y_{a;\delta_b}| + |\Delta y_{a;\delta_c}| = \left|\frac{dy_a}{dx}\Delta x\right| + |B_{aa}\,\delta_a(x, y_b, y_c)| + |B_{ab}\,\delta_b(x, y_a, y_c)| + |B_{ac}\,\delta_c(x, y_a, y_b)|. \tag{32}$$

Equation (32) can be written in matrix form, with the absolute values applied element by element, as

$$\Delta y_a = \left|\frac{dy_a}{dx}\Delta x\right| + \begin{bmatrix} |B_{aa}| & |B_{ab}| & |B_{ac}| \end{bmatrix} \begin{bmatrix} |\delta_a(x, y_b, y_c)| \\ |\delta_b(x, y_a, y_c)| \\ |\delta_c(x, y_a, y_b)| \end{bmatrix}. \tag{33}$$

Comparing Equation (33) with Equation (23), we find that Equation (33) is in fact the first row of Equation (23); therefore we can conclude that Equation (23) is the worst case estimation of the uncertainty in the state vector y.

Verification-Monte Carlo Simulation: The above worst case estimation requires verification before it can be applied to multidisciplinary design. Monte Carlo simulation is utilized to simulate the real physical system in the verification. The scheme of verification is outlined as follows. When the design variables are at their nominal values, the consistent system performances (i.e., states) predicted by the system analysis are commonly referred to as nominal states. These nominal states are biased due to the uncertainties associated with each discipline tool, and will be termed "biased nominal states" hereinafter. Therefore the system analysis performed by the discipline tools is biased. As discussed above, Equation (23) provides a worst

case estimation of the variations in the consistent states when the worst case estimation of the precision error ∆x in design variables x and the worst case estimation of the bias error δ associated with each discipline tools are given. For a two dimensional case, where state y1 is plotted on the abscissa and state y2 is plotted on the ordinate, the biased nominal state is plotted as a cross and the worst case estimation of the variation in the states obtained from Equation (23) can be plotted as the bounding box composed of the four dashed lines in Figure 4(a). Note that the worst case estimation is based on the biased nominal design because that would be the only information available after the system analysis.
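The bound of Equation (23) can be assembled directly once the local partials are known. In the sketch below every input (the 3x3 coefficient matrix, the partials with respect to x, ∆x and the bias errors) is invented, and the states are assumed scalar so each block of B is a number:

```python
import numpy as np

# Worst case estimate of Eq. (23) for a toy three-discipline system
# with scalar states; partials, bias errors and dx are illustrative.

M = np.array([[ 1.0, -0.2, -0.1],      # coefficient matrix of Eq. (13)
              [-0.3,  1.0, -0.2],
              [-0.1, -0.4,  1.0]])
dT_dx = np.array([1.0, 0.5, 2.0])      # stacked local partials w.r.t. x
delta = np.array([0.01, 0.02, 0.015])  # assumed bias errors
dx = 0.05                              # precision error in x

B = np.linalg.inv(M)                   # blocks B_aa ... B_cc, Eq. (21)
dy_dx = np.linalg.solve(M, dT_dx)      # global sensitivities, Eq. (18)

# Eq. (23): |dy/dx * dx| + |B| |delta|, absolute values elementwise
worst = np.abs(dy_dx * dx) + np.abs(B) @ np.abs(delta)
print(worst)  # worst case bounds on [dya, dyb, dyc]

# Sanity check: the bound dominates the signed solution of Eq. (13).
signed = np.linalg.solve(M, dT_dx * dx + delta)
assert np.all(worst >= np.abs(signed) - 1e-12)
```

By the triangle inequality the bound always dominates the signed propagated error, which is exactly what the bounding-box verification below checks by sampling.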

One issue that needs to be addressed is how to approximate the true output of the physical system as part of the Monte Carlo simulation. MC simulation of the true output can be performed only if certain information about the variability ∆x and the bias error δ is available. A flowchart of the Monte Carlo (MC) simulation performed in this research is shown in Figure 5. In practice the bias error δ is unknown, otherwise the discipline tools would be calibrated to produce output without error. In order to simulate true output in the MC simulation, it is assumed bias error is known. This assumed bias error δ˜ can be “appended” to the output of the discipline tool to approximate the no-error output y˜ t of the ideal tools and consequently approach an approximately “perfect system analysis” without bias error (see Equation (34)).

The actual output of the real physical system will vary due to the variability in design variables x and will deviate from the biased nominal states due to the bias error associated with the discipline tools. Given certain information about the characteristics of the variability or precision error ∆x and bias error δ, Monte Carlo (MC) simulation can be employed to approximate the true output. Each run of random sampling in the Monte Carlo simulation produces a set of possible true outputs which are plotted as dots in Figure 4(b). Repeated random sampling and repeated plotting generates the population of dots as demonstrated in Figure 4(b). If the whole population lies within the bounding box drawn according to Equation (23) as shown in Figure 4(c), it indicates that Equation (23) does provide a good worst case estimation of the propagated uncertainty.
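The verification scheme described above can be sketched numerically. The two-state "perfect" system analysis, the nominal point, the perturbation ranges and the bounds below are illustrative assumptions, not the paper's model:

```python
import random

# Hypothetical toy "perfect" system analysis: two states as functions of x.
def perfect_sa(x):
    y1 = 10.0 + 2.0 * x[0] + x[1]
    y2 = 1.0 + 0.5 * y1
    return (y1, y2)

def inside_box(y, lo, hi):
    """True if every state lies within its worst case bound [lo_k, hi_k]."""
    return all(lo_k <= y_k <= hi_k for y_k, lo_k, hi_k in zip(y, lo, hi))

def verify_bounds(x_nom, dx, lo, hi, n_runs=5000, seed=0):
    """Monte Carlo verification: perturb x uniformly within +/-dx and count
    how many resulting state vectors fall outside the predicted box."""
    rng = random.Random(seed)
    violations = 0
    for _ in range(n_runs):
        x_pert = [xi + rng.uniform(-d, d) for xi, d in zip(x_nom, dx)]
        if not inside_box(perfect_sa(x_pert), lo, hi):
            violations += 1
    return violations

# For x_nom=[1,2] and dx=[0.1,0.2] this linear toy model gives
# y1 in [13.6, 14.4] and y2 in [7.8, 8.2], so these bounds contain everything.
viol = verify_bounds([1.0, 2.0], [0.1, 0.2], lo=[13.6, 7.8], hi=[14.4, 8.2])
```

A zero violation count corresponds to the whole population lying inside the bounding box, as in Figure 4(c).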

[Fig. 5 Monte Carlo Simulation Flowchart: input nominal design x; perturb x˜; compensate tool uncertainty; perform approx. perfect SA; output possible true states; save in database and plot; repeat N times.]

\tilde{y}^t_a = T_a(\tilde{x}, \tilde{y}^t_b, \tilde{y}^t_c) + \tilde{\delta}_a(\tilde{x}, \tilde{y}^t_b, \tilde{y}^t_c)
\tilde{y}^t_b = T_b(\tilde{x}, \tilde{y}^t_a, \tilde{y}^t_c) + \tilde{\delta}_b(\tilde{x}, \tilde{y}^t_a, \tilde{y}^t_c)    (34)
\tilde{y}^t_c = T_c(\tilde{x}, \tilde{y}^t_a, \tilde{y}^t_b) + \tilde{\delta}_c(\tilde{x}, \tilde{y}^t_a, \tilde{y}^t_b)

[Fig. 4 Illustration of Verification-Monte Carlo Simulation: (a) worst case estimation, (b) MC simulation, (c) verification; state y1 on the abscissa versus state y2 on the ordinate.]

The term x˜ in Equation (34) represents the perturbed design variables, which are utilized to simulate the effect of the variability ∆x on the true output of the multidisciplinary system. The simulation is conducted by perturbing the design variables x based on the distribution of ∆x, then feeding the perturbed variables x˜ into the approximated "perfect system analysis" to get a set of possible true outputs. This perturbation needs to be repeated N times, where N is a number large enough that a confident conclusion about the underlying population of true outputs can be drawn from the sample. In this study the distribution of the variability ∆x is assumed and input by the user to randomly sample the design variables in the MC simulation. A uniform distribution of the variability ∆x centered at the nominal value of the design variable x is assumed for the verification in this study. A uniform distribution is preferred over a normal distribution because the primary interest here is in validating the worst case estimation of the variation in the states y˜t. If the design variables x are perturbed based on a normal distribution, x˜ has a high probability of lying close to the nominal value and a low probability of lying away from it, even though the latter can be more likely to lead to the worst case value of y˜t. In contrast, x˜ has equal probability of lying anywhere within the range of ∆x if the design variables x are perturbed based on a uniform distribution. In other words, a Monte Carlo simulation based on a normal distribution of ∆x might miss the actual possible worst case value of y˜t. Therefore a uniform distribution is more appropriate in the study of worst case variation.

The assumption that the bias error δ˜ is known, or could be specified by a designer for the MC simulation of true outputs, is treated using two different strategies.
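The preference for uniform sampling can be illustrated numerically: for a monotone response, the worst case value occurs at the edge of the ∆x range, and uniform sampling visits that edge far more often than normal sampling does. The edge threshold, standard deviation and sample count below are arbitrary choices for illustration:

```python
import random

rng = random.Random(1)
dx = 1.0
n = 10000

# A sample "covers the worst case" of a monotone y(x) when it lands in the
# outer 10% of the perturbation range on either side.
def at_edge(u):
    return abs(u) > 0.9 * dx

# Uniform sampling over [-dx, dx]: edge probability is exactly 10%.
uniform_hits = sum(at_edge(rng.uniform(-dx, dx)) for _ in range(n))

# Normal sampling (sigma = dx/3, clipped to the range): the edge lies beyond
# 2.7 sigma, so it is visited far less than 1% of the time.
normal_hits = sum(
    at_edge(min(max(rng.gauss(0.0, dx / 3.0), -dx), dx)) for _ in range(n)
)
```

With these settings roughly a thousand uniform samples reach the edge region, versus only a handful of the normal samples, which is why a normal-based MC run can miss the actual worst case value of y˜t.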
The first strategy (Case I) is simple and straightforward: the estimated bias error δ˜ is assumed to be always p percent of the current output of the discipline tool, where p is deterministic (Equation (35)).

\tilde{\delta}_a(\tilde{x}, \tilde{y}^t_b, \tilde{y}^t_c) = p \cdot T_a(\tilde{x}, \tilde{y}^t_b, \tilde{y}^t_c)
\tilde{\delta}_b(\tilde{x}, \tilde{y}^t_a, \tilde{y}^t_c) = p \cdot T_b(\tilde{x}, \tilde{y}^t_a, \tilde{y}^t_c)    (35)
\tilde{\delta}_c(\tilde{x}, \tilde{y}^t_a, \tilde{y}^t_b) = p \cdot T_c(\tilde{x}, \tilde{y}^t_a, \tilde{y}^t_b)

Figure 6 illustrates how the approximate true output is obtained in this case when the design variables x are at their nominal values and the variability ∆x is not taken into consideration.

[Fig. 6 Simulation of Bias Error -- Case I: in the y1-y2 plane, the biased CAs (CA1: y1 = T1(x, y2); CA2: y2 = T2(x, y1)) intersect at the biased nominal states, while the approx. perfect CAs intersect at the approx. true output.]

The system analysis in this example consists of only two coupled contributing analyses, CA1 and CA2, and produces only two output states, y1 and y2. The estimated bias errors δ˜ of CA1 and CA2 are p1 and p2 percent respectively. In the y1-y2 plane (Figure 6), the two solid curves represent the CAs with bias error, and their intersection point is the set of biased nominal states. The two dashed curves are obtained by compensating for the estimated bias error δ˜ according to Equation (34), and their intersection point is the approximate true output. A mathematical representation of this example is shown in Equations (36).

biased SA:
  CA1: y_1 = T_1(x, y_2)
  CA2: y_2 = T_2(x, y_1)

estimated bias error:
  \tilde{\delta}_1(x, y_2) = p_1 \cdot T_1(x, y_2)
  \tilde{\delta}_2(x, y_1) = p_2 \cdot T_2(x, y_1)

approx. perfect SA:
  \tilde{y}^t_1 = T_1(x, \tilde{y}^t_2) + p_1 \cdot T_1(x, \tilde{y}^t_2)    (36)
  \tilde{y}^t_2 = T_2(x, \tilde{y}^t_1) + p_2 \cdot T_2(x, \tilde{y}^t_1)

The estimated bias error δ˜ in discipline tools is sometimes given in a non-deterministic form. It is recognized that during the process of engineering design the physical system has not yet been built or realized, so its true performance is unknown beforehand. While the outputs of the simulation tools serve as predictions of the unknown performance, the corresponding bias error δ, which is also referred to as prediction error in some of the literature, is therefore unknown. Hence in some cases a probabilistic estimation δ˜ is developed to assess the unknown bias error δ. The estimation, for example, can be stated as "in the whole design space the estimated bias error δ˜ is normally distributed with 0 mean and 0.1 variance." To account for this situation, another simulation strategy (Case II) is developed based on the relationship between the bias error δ and its non-deterministic estimation δ˜. Note that a non-deterministic estimation δ˜ of the bias error δ does not imply that the bias error itself is necessarily non-deterministic. Although virtually all real-world processes and systems exhibit variability or randomness, in some cases the variations are small enough, relative to the interest of the study, that they can be ignored and the systems can be thought of as deterministic. In this study the following objects are assumed to be deterministic: the entire physical system or engineering artifact being designed, the subsystems in each discipline, and the simulation tools used in each discipline. By deterministic it is meant that the system (or simulation tool) produces identical outputs every time the same setting of inputs is realized. In other words, the same inputs to the entire system (or subsystems, CAs, SA) should result in the same outputs. This assumption is in keeping with the current status of SA in the context of multidisciplinary design, where consistent state solutions are obtained through iterative executions of coupled CAs. According to this assumption the bias error δ associated with the simulation tools, which by definition is the difference between the output of the subsystem in each discipline and the output of the corresponding simulation tool, is deterministic, since the same amount of bias error occurs when an essentially identical setting of inputs is assigned to the subsystem.

In simulation Case II, we seek to find out how the actual whole system behaves under bias error δ by "numerically" realizing all the subsystems a large number of times (M times) according to the estimated bias error δ˜. In each realization of a subsystem, for instance the mth realization, a deterministic bias error δ˜m is appended to the discipline tool outputs. δ˜m is randomly generated according to the statistics of δ˜. The distinction between the bias error δ and the estimated bias error δ˜ discussed above explains why the probabilistic bias estimation δ˜ itself should not be employed in each realization. In fact, if the probabilistic δ˜ were used, there would be very few chances, if not zero, of achieving convergence of the states at each realization or random sampling. Obviously simulation Case I is a special instance of Case II, where the estimated bias error takes the value δ˜ with a probability of 1. The paragraph below gives a detailed description of the implementation of Case II bias error simulation in this study.

The estimated bias error δ˜ is assumed to be P percent of the current output of the discipline tool, where P is random with a certain distribution. A Monte Carlo simulation is executed to generate M sets of possible values of P according to its distribution. For each set of possible P values, denoted pm, the compensation for bias error is performed in the same fashion as in Case I and a set of approximately possible true outputs is obtained accordingly. Using the same illustrative example as for Case I, Figure 7 demonstrates how the bias error is simulated. Subplots (a), (b) and (c) correspond to 3 different sets of pm respectively. The values of the design variables are kept unchanged at their nominal values during the simulation. Note that different sets of pm lead to different sets of approximately possible true outputs, which indicates that by accounting for the probability of the estimated bias error δ˜, i.e., in Case II, the simulated true outputs will vary around the biased nominal states even if there is no simulation of variability in design variables.
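The Case I and Case II bias compensations on a two-CA system can be sketched as follows. The tools T1 and T2, the design point and the error fractions are hypothetical illustrations, not the paper's disciplines:

```python
import random

def T1(x, y2):  # hypothetical biased discipline tool 1
    return x + 0.1 * y2

def T2(x, y1):  # hypothetical biased discipline tool 2
    return 2.0 * x + 0.2 * y1

def approx_perfect_sa(x, p1, p2, iters=100):
    """Fixed point iteration on y_t = T(x, y_t) + p * T(x, y_t), i.e. each
    tool output is scaled by (1 + p) to compensate the estimated bias."""
    y1, y2 = 0.0, 0.0
    for _ in range(iters):
        y1 = (1.0 + p1) * T1(x, y2)
        y2 = (1.0 + p2) * T2(x, y1)
    return y1, y2

# Case I: deterministic p1, p2 give a single approx. true output.
case1 = approx_perfect_sa(1.0, p1=0.04, p2=0.08)

# Case II: p1, p2 are drawn anew for each of M realizations, so the approx.
# true outputs vary even with the design variables held at nominal values.
rng = random.Random(0)
case2 = [
    approx_perfect_sa(1.0, rng.uniform(-0.04, 0.04), rng.uniform(-0.08, 0.08))
    for _ in range(50)
]
```

Because each realization fixes a deterministic pm before iterating, the coupled states converge in every realization, which is exactly why the probabilistic δ˜ itself is not used inside the iteration.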

[Fig. 8 Monte Carlo Simulation for Case II bias simulation: input design point x; perturb x (do N times); compensate tool uncertainty; perform SA (do M times); output states; save in database.]

In effect the simulation method proposed as Case II accounts for the type of bias error shown in Figure 9.

[Fig. 7 Simulation of Bias Error -- Case II: subplots (a), (b) and (c) show, for three different realizations pm of the estimated bias error, the biased CAs, the approx. perfect CAs, the biased nominal states and the resulting approx. true outputs in the y1-y2 plane.]

[Fig. 9 Simulation of Bias Error -- Case II-i: a family of approx. true outputs scattered about the biased nominal states in the y1-y2 plane.]

A uniform distribution of the estimated bias error δ˜ is assumed during the verification study in this report, for the same reason that a uniform distribution of the variability ∆x is used.

When Case II bias simulation is performed, the whole process of Monte Carlo simulation, including the simulation of variability ∆x, is very time consuming. The flowchart of the simulation is shown in Figure 8. Since random sampling is required in both the inner loop (to account for the estimated bias error) and the outer loop (to account for the variability), the total number of runs of the system analysis is extremely large.

Little Problem: In order to validate the derivation of the worst case estimate of uncertainty, Monte Carlo simulation (Case II) is used in application to a small multidisciplinary design test problem. The test problem used is an MDO demonstration problem referred to as the "Little Problem". The dependency diagram for the coupled problem is shown in Figure 10. This problem operates on three design variables and calculates two states which are coupled. The simultaneous equations describing the problem are given below.

[Fig. 10 Dependency Diagram for Little Problem: the design variables X feed CA1 (Tool1) and CA2 (Tool2), which exchange the coupled states y1 and y2 to form the state vector y.]

y_1 = T_1(x, y_2) = x_1^2 + x_2 + x_3 - 0.2 y_2
y_2 = T_2(x, y_1) = x_1 + x_3 + \sqrt{y_1}    (37)

[Fig. 11 Monte Carlo Simulation-Verification Results for Little Problem: two panels, X = [-3 3 3] and X = [3 3 3], plotting y1 versus y2; legend: predicted bounds (dashed), states obtained from approx. CA tools, states obtained by Monte Carlo simulation.]
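Assuming the Little Problem takes the Sellar form y1 = x1^2 + x2 + x3 - 0.2 y2 and y2 = x1 + x3 + sqrt(y1), the Case II Monte Carlo procedure with this study's error levels (plus or minus [8%, 3%, 7%] precision on x; plus or minus 4% and 8% worst case bias on T1 and T2) can be sketched as:

```python
import math
import random

def solve_states(x, p1=0.0, p2=0.0, iters=50):
    """Fixed point SA with bias fractions p1, p2 appended to the tool outputs
    (p1 = p2 = 0 gives the unbiased solution of the assumed equations)."""
    y1, y2 = 1.0, 0.0
    for _ in range(iters):
        y1 = (1.0 + p1) * (x[0] ** 2 + x[1] + x[2] - 0.2 * y2)
        y2 = (1.0 + p2) * (x[0] + x[2] + math.sqrt(max(y1, 1e-12)))
    return y1, y2

def little_problem_mc(x_nom, n=2000, m=5, seed=0):
    rng = random.Random(seed)
    dx = [0.08, 0.03, 0.07]   # worst case precision error fractions on x
    db = [0.04, 0.08]         # worst case bias error fractions on T1, T2
    samples = []
    for _ in range(n):                        # outer loop: variability in x
        x = [xi * (1.0 + rng.uniform(-d, d)) for xi, d in zip(x_nom, dx)]
        for _ in range(m):                    # inner loop: realized bias
            p1 = rng.uniform(-db[0], db[0])
            p2 = rng.uniform(-db[1], db[1])
            samples.append(solve_states(x, p1, p2))
    return samples

pts = little_problem_mc([-3.0, 3.0, 3.0])
```

Each (y1, y2) pair in `pts` is one dot of the verification cloud; checking them against the Equation (23) bounding box completes the verification.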

Case II simulation is performed for the Little Problem, since the time required to complete one system analysis is relatively small due to the size of this problem. A uniform distribution of the estimated bias and precision errors is assumed during the simulation. The worst case precision error in the design vector x is assumed to be plus or minus [8%, 3%, 7%] for the variables x1, x2 and x3 respectively. The worst case bias error associated with the two tools used is assumed to be plus or minus 4% and 8% for tools T1 and T2 respectively. Two different design points are selected to perform the simulations. The results of these simulations are

plotted in Figure 11. In Figure 11, state y1 is plotted on the horizontal axis and state y2 is plotted on the vertical axis. The nominal states [y1, y2] for the design point x at which the simulation is performed are shown as a cross centered in the plots. The resulting states for each of the Monte Carlo simulations are also shown. The dashed lines are the bounds predicted by the worst case estimate of state uncertainty derived earlier and given in Equation (23). From the plots we see that the Monte Carlo simulation results lie within the predicted bounds, which indicates that the worst case estimate of variation developed (see Equation (23)) is very good. Autonomous Hovercraft (AHC) problem: The worst case estimation of propagated uncertainty in multidisciplinary systems is also implemented on another MDO problem called the AHC problem. This test problem focuses on the design of an autonomous hovercraft (AHC) as illustrated in Figure 12. The AHC problem was first presented by Sellar10 and a thorough description can be found in his doctoral dissertation9. A brief discussion of the AHC problem is given below. The physical system of the AHC consists of an engine, a rotor (two rectangular lifting surfaces attached to the ends of a hollow circular shaft),

and a payload. The system is to operate such that the motor speed (RPM) provides a thrust-to-weight ratio of one, imposing a hover condition. The system analysis is comprised of four contributing analyses, three of which are complexly coupled as illustrated by the dependency diagram in Figure 12. The aerodynamics CA (CAa) calculates the aerodynamic loads on the lifting surfaces and approximates the distributed drag force along the rod, estimating the induced velocity at the lifting surfaces as a function of the thrust. CAa requires the torsional deformation of the shaft θd, the motor RPM and the thrust as inputs. The shaft deformation is supplied from a structures CA (CAs), which also calculates the axial and shear stresses of the rod at the hub and the deflection of the lifting surfaces. All of these quantities are functions of the loads computed by CAa, thus these CAs are subject to static aeroelastic coupling. A propulsion/performance CA (CAp) calculates the thrust and torque necessary to spin the rotor based on the loads supplied from CAa. The motor RPM is also updated in this CA, which instigates the coupling of CAp with CAa. It is in this CA that the hover condition is imposed by driving the design to a thrust-to-weight ratio of one. A fourth CA, structural dynamics (CAd), calculates the first natural frequencies of the rotor in bending and torsion. CAd is completely uncoupled from the other CAs as it requires no states as inputs.

Wujek15 applied a reformulated version of the AHC problem to test his Trust Region Approximation Method (TRAM) move-limit strategy, in which case eleven design variables were used to describe the geometry of the rod and lifting surfaces and the amount of fuel. The CAs described above calculate a total of thirty-five states as defined in this formulation. Newton's method is employed to achieve convergence of this system analysis. The design variables and states, as well as the constant parameters used in the analyses, can be found in Wujek15. The same version of the AHC problem is used in this study. The size of the AHC problem and the complexity of its system analysis simulate the difficulties encountered in MDO. Due to the large size of the AHC problem, Case I Monte Carlo simulation is executed for the verification. The nominal design x is selected to be x=[.325, 1.6, 48, 12, 33, 11.25, .125, .5, .025, .15, 30]. The variability ∆x in design variables is assumed to be uniformly distributed with a range of [-5%, 5%] of the nominal value. Two Monte Carlo simulations are performed, one with the bias error assumed to be -7%, the other with the bias error assumed to be 7%. The simulation results are shown in Figure 13. The states are plotted in 2D as a simplification of the 35 dimensional hyperspace. Since 35 states are present, if the states are plotted one versus the other, then altogether more than 500 plots are available. At the end of the Monte Carlo simulation a "worst case" check is executed for

[Fig. 12 Autonomous HoverCraft System: geometry variables (t_skin, t, c, r, x_lea, l_rod, θ0, b) and the dependency diagram, in which CAa (Aerodynamics) exchanges Loads with CAs (Structures), receives the torsional deformation θd, and exchanges Thrust and RPM with CAp (Propulsion/Performance); CAd (Dynamics) is uncoupled.]

35 states to find out whether the worst case value of each state of the simulation lies within the correspondingly calculated bound, and if not, by how much the bound is underestimated. The plots in Figure 13 are selected so that they represent the worst bound predictions among the total of 35 states. The effect of the bias error is clear, in that for most plots the center of the population of the simulated true outputs does not coincide with the cross (i.e., the biased nominal states). The worst case estimation of Equation (23) did a good job for this problem, because the majority of the population in all of the plots lies within the calculated bounding box. Very few dots lie outside the bounding box predicted by Equation (23). Equation (23) is based on a linear approximation of the system analysis (1). It is the nonlinearity of the AHC problem that causes a few outputs to be outside the predicted worst case bounds. Note that the same bounding box works for bias errors of either 7% or -7%, since the worst case estimation in Equation (23) assumes the bias error to be within the range [-δ˜, δ˜], which is [-7%, 7%] in this problem.

[Fig. 13 Monte Carlo Simulation-Verification Results of AHC problem: (a) bias error assumed = 7%, (b) bias error assumed = -7%; selected state pairs (e.g., y5 versus y6, y7, y8 and y40) with the predicted bounds shown as dashed lines.]

Worst Case Uncertainty in Robust Design Optimization: The worst case estimate of propagated uncertainty provides the design decision maker with a range of variation but not a distribution. This estimation can be integrated into a non-deterministic optimization framework such as robust optimization (see Su and Renaud9). The fundamental principle of robust design is to improve the quality of a product by minimizing the adverse effects of variation without necessarily eliminating the causes of variation. Incorporation of the mathematical results and numerical techniques of optimization with the concepts of robust design leads to the idea of robust optimization.

The objective in conventional optimization problems is to minimize/maximize a linear or non-linear function of many variables subject to a set of constraints. In robust optimization, we are interested in finding the feasible combination of design variables which not only optimizes the function value but also minimizes its sensitivity to the variation of the design variables and parameters. Therefore, robust optimization strategies consider the objective function value as well as its variation when the design variables and parameters have fluctuations or variability.

Robust optimum design procedures have been developed and implemented in several studies. Many of them do not require a priori knowledge of probability functions; instead, they make use of different techniques to estimate the objective function variation. Two robust optimization extensions are developed by Su and Renaud13, where the estimations of function variations were obtained from a sensitivity analysis or an experimental design of the optimization problem. These two extensions not only focus on robust optimization of the objective function and its variation, but also consider constraint variations due to variability ∆x in design variables. Adding variation to constraints has the effect of reducing the size of the feasible region. The optimum design is moved into the reduced feasible region such that the design will stay feasible when subjected to variation in design variables or parameters. Similarly, to account for variations which lead to violations of the upper and lower bounds of the original optimization problem, the upper and lower bounds of the design variables are shifted by the amount of the worst case variation. This ensures that all variations of the design variables are within the original problem bounds. In this study a modified version of the sensitivity based robust optimization formulation by Su and Renaud13 is constructed. The worst case estimation of propagated uncertainty developed earlier is applied as an alternative means to estimate function variations. The objective function and constraints are usually formed as functions of both the design variables x and/or the system states y, which does not conflict with the concept that they are essentially implicit functions of the design variables, since the system states are themselves functions of the design variables. For instance, the objective f can be represented by a function f(x, y). In view of the condition y = CA(x), we have f = f(x, y) = f(x). The variation ∆f of the objective function f is then approximated as the sum of the absolute values of the first order Taylor series terms as follows:

f = f(x, y)
\Delta f = \sum_i \left| \frac{\partial f(x, y)}{\partial x_i} \right| \Delta x_i + \sum_k \left| \frac{\partial f(x, y)}{\partial y_k} \right| \Delta y_k    (38)

where ∆xi is the worst case estimation of the variability in the ith design variable xi, and ∆yk is the worst case estimation of the uncertainty in the kth system state yk obtained from Equation (23). Similarly, for the jth constraint gj we have Equation (39), where ∆gj is the estimated variation of the jth constraint gj. Since the estimations of function variation as in Equations (38) and (39) are based on a linear approximation, they will only be valid when the variability ∆x and the uncertainty ∆y are small.

g_j = g_j(x, y)
\Delta g_j = \sum_i \left| \frac{\partial g_j(x, y)}{\partial x_i} \right| \Delta x_i + \sum_k \left| \frac{\partial g_j(x, y)}{\partial y_k} \right| \Delta y_k    (39)
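The variation estimates of Equations (38) and (39) can be sketched with finite difference sensitivities; the objective function and all numeric values below are illustrative assumptions:

```python
def worst_case_variation(f_xy, x, y, dx, dy, h=1e-6):
    """Delta_f = sum_i |df/dx_i| * dx_i + sum_k |df/dy_k| * dy_k, with the
    partial derivatives approximated by forward finite differences."""
    base = f_xy(x, y)
    total = 0.0
    for i, dxi in enumerate(dx):
        xp = list(x)
        xp[i] += h
        total += abs((f_xy(xp, y) - base) / h) * dxi
    for k, dyk in enumerate(dy):
        yp = list(y)
        yp[k] += h
        total += abs((f_xy(x, yp) - base) / h) * dyk
    return total

# Toy linear objective: f = 3*x1 - 2*x2 + 0.5*y1, with worst case errors
# dx = [0.1, 0.2] and dy = [1.0], so Delta_f = 0.3 + 0.4 + 0.5 = 1.2.
f = lambda x, y: 3.0 * x[0] - 2.0 * x[1] + 0.5 * y[0]
df = worst_case_variation(f, [1.0, 1.0], [10.0], dx=[0.1, 0.2], dy=[1.0])
```

The absolute values make the estimate a worst case: each term contributes as if its error acted in the unfavorable direction, regardless of sign.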

Note that in the variation estimates in Equations (38) and (39), it is assumed that the objective function and constraint functions themselves can be considered "error-free". The variations in the objective function and constraints come solely from the variability ∆x in the design variables x and from the bias error δ associated with the discipline simulation tools. The general formulation for a conventional optimization problem, in which there are n design variables, K equality constraints and J inequality constraints, is as follows:

Minimize:    f(x),    x = [x_1, x_2, \ldots, x_n]^T
Subject to:  h_k(x) = 0,    k = 1, \ldots, K
             g_j(x) \geq 0,    j = 1, \ldots, J
             x_i^L \leq x_i \leq x_i^U,    i = 1, \ldots, n    (40)

Since it is almost impossible for an optimal point to be located where an equality constraint is insensitive to variations, equality constraints should be relaxed in the process of robust optimization. The modified sensitivity based robust optimization is given in Equation (41), where K is now the number of system states in this formulation. In this formulation, both the robust objective function f^R and the robust constraints g^R_j consist of two parts: the original or conventional function (f or gj) and an estimate of the variation of the function (∆f or ∆gj), which is obtained from the worst case estimation developed earlier. There is a trade-off between reducing the variation of the function and optimizing the function value itself. Robust optima are less sensitive to the variation of the design variables, but the function value tends to be larger than that obtained from conventional optimization (i.e., a small decrease in the performance is traded off for a decrease in the performance variation).

Minimize:    f^R(x) = \alpha \cdot f + (1 - \alpha) \cdot \Delta f
Subject to:  g^R_j = g_j - \Delta g_j \geq 0,    j = 1, \ldots, J
             x_i^{L_R} \leq x_i \leq x_i^{U_R},    i = 1, \ldots, n

where
f = f(x) = f(x, y)
\Delta f = \sum_{i=1}^{n} \left| \frac{\partial f(x, y)}{\partial x_i} \right| \Delta x_i + \sum_{k=1}^{K} \left| \frac{\partial f(x, y)}{\partial y_k} \right| \Delta y_k
g_j = g_j(x) = g_j(x, y)
\Delta g_j = \sum_{i=1}^{n} \left| \frac{\partial g_j(x, y)}{\partial x_i} \right| \Delta x_i + \sum_{k=1}^{K} \left| \frac{\partial g_j(x, y)}{\partial y_k} \right| \Delta y_k
0 \leq \alpha \leq 1    (41)
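Evaluating the robust objective and constraints of Equation (41) for a given weighting α can be sketched as follows; the numeric values are illustrative assumptions:

```python
def robust_objective(f_val, df_val, alpha):
    """f_R = alpha * f + (1 - alpha) * Delta_f, with 0 <= alpha <= 1."""
    assert 0.0 <= alpha <= 1.0
    return alpha * f_val + (1.0 - alpha) * df_val

def robust_constraint(g_val, dg_val):
    """g_R = g - Delta_g >= 0: subtracting the variation shrinks the
    feasible region so that perturbed designs remain feasible."""
    return g_val - dg_val

# alpha = 1 weights the conventional objective only; alpha = 0 weights
# its variation only (illustrative f and Delta_f values).
fr = robust_objective(86.54, 20.05, alpha=1.0)

# A toy constraint g = 0.5 with estimated variation Delta_g = 0.2 keeps
# a robust margin of 0.3.
gr = robust_constraint(0.5, 0.2)
```

A robust constraint is "active" when g_R = 0, i.e. when the worst case lower excursion of g just touches the feasibility boundary.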

In the light of this recognition, robust optimization can be dealt with as a multi-objective optimization, where the conventional objective function and its variation are the two objectives under consideration (see Chen et al5). The parameter α serves as a weighting factor in this bi-objective optimization. The larger the value of α, the more significance the decision maker puts on the conventional objective over its variation, and vice versa. The upper bounds x^U and lower bounds x^L of the design variables x are shifted as in Su and Renaud13 for the robust optimization to ensure that all variations in the design variables remain within the original design space. The resulting robust upper bounds and robust lower bounds are denoted x^{U_R} and x^{L_R} respectively. If the worst case variability ∆xi of design variable xi is constant within the region [x_i^L, x_i^U], the robust upper and lower bounds are simply

x_i^{L_R} = x_i^L + \Delta x_i
x_i^{U_R} = x_i^U - \Delta x_i    (42)

When the worst case variability ∆xi of design variable xi is assumed to be pi percent of the current value (pi > 0, expressed as a fraction), the robust bounds can be derived as:

x_i^{L_R} = x_i^L / (1 - p_i)  if  x_i^L \geq 0;    x_i^{L_R} = x_i^L / (1 + p_i)  if  x_i^L < 0
x_i^{U_R} = x_i^U / (1 + p_i)  if  x_i^U \geq 0;    x_i^{U_R} = x_i^U / (1 - p_i)  if  x_i^U < 0    (43)
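The bound shifting of Equations (42) and (43) can be sketched as follows, assuming a 5% fractional variability; the example range mirrors the x3 bounds used for the AHC problem:

```python
def robust_bounds_constant(lo, hi, dx):
    """Eq. (42): constant worst case variability shifts both bounds inward."""
    return lo + dx, hi - dx

def robust_bounds_fractional(lo, hi, p):
    """Eq. (43): fractional variability p divides the bounds so that
    x * (1 +/- p) always stays within the original [lo, hi]."""
    lo_r = lo / (1.0 - p) if lo >= 0.0 else lo / (1.0 + p)
    hi_r = hi / (1.0 + p) if hi >= 0.0 else hi / (1.0 - p)
    return lo_r, hi_r

# x3 in [36.0, 60.0] with p = 5% tightens to roughly [37.9, 57.1].
lo_r, hi_r = robust_bounds_fractional(36.0, 60.0, 0.05)
```

The sign tests matter because for a negative variable the worst case excursion toward a bound comes from the opposite factor (1 + p or 1 - p).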

This robust design optimization approach is implemented for the optimal design of the Autonomous Hovercraft (AHC) problem used before, and the corresponding results are discussed below. The variability ∆x in the design variables is assumed to be uniformly distributed with a range of [-5%, 5%] of the nominal value. The bias error associated with the discipline tools is assumed to be within the range of [-3%, 3%] of the current tool output. A simple global optimization is performed in which ten different starting points are randomly selected within the design space. The final optimum is obtained as the best result of the corresponding ten optimizations. A Generalized Reduced Gradient (GRG) optimizer is employed to solve the optimization problem. The goal of the optimization of the AHC system is to minimize the empty weight of the hovercraft subject to constraints on the axial and shear Von Mises stresses in the rod, the first natural frequencies of the rod (relative to the motor RPM), the Mach number at the tip, and the endurance of the hovercraft. Equation (44) outlines the traditional formulation of this optimization problem. The global optimum for this problem was reported in Sellar9 to be Wempty = 67.9 lbs, for which 7 of the 11 design variables were at global bounds and the endurance constraint (g6) was active. It was also reported in Sellar9 that at the optimum the objective was very insensitive to two of the variables (camber, or x9, and x/c, or x8), so variations in these values could be expected of the optimal design obtained. For comparison a traditional optimization of the AHC problem is performed in this study and the same optimal Wempty = 67.9 lbs is found, but only 6 of the 11 design variables are at the original global bounds.

Table 1 details the results obtained for the robust optimization procedure (Equation 41). The values of the traditional objective f, the robust objective fR and the objective variation ∆f at the optima with respect to different settings of the parameter α are listed in Table 1. Also listed are the values of the endurance constraint (g6), which was reported to be active at the traditional optimum, and the corresponding robust constraint (gR6), which is found to be active at the robust optima in this study. Additionally the design variable values at the optima are given in Table 1. The original lower bounds and robust lower bounds of the design variables are denoted by "L" and "LR" respectively. Similarly the upper bounds of the design variables are denoted by "U" or "UR". The letter "L" is also used to indicate that a constraint is active. The values of these original and robust upper and lower bounds of the design variables are listed in Table 2.

We see from Table 1 that in the reduced design space the original objective Wempty reaches its minimum value of 86.54 lbs, with a worst case variation of 20.05 lbs, when α is set to 1. When α is set to 0, a minimal variation of 19.99 lbs is found, with Wempty equal to 86.96 lbs. The trade-off between the original objective and its variation is made clear in that, in order to decrease the weight variation from 20.05 to 19.99 lbs, the original objective Wempty is increased from 86.54 to 86.96 lbs. The magnitude of this trade-off is so insignificant that it has only mathematical importance in this case. Practically, the original objective Wempty and its variation reach their optima in the reduced design space simultaneously and no apparent trade-off is found. This "no trade-off" phenomenon results from the characteristics of the constrained AHC problem itself. Note that although no trade-off between Wempty and its variation is found in the robust optimizations performed within the reduced design space, a trade-off does exist between the results of conventional optimization, which is performed in the original design space, and robust optimization, where variations of the constraints are taken into consideration. A much smaller objective Wempty was obtained using conventional optimization (67.91 lbs) than for both cases of

Minimize:
f(x) = W_{empty} = W_{wing} + W_{rod} + W_{fuel} + W_{motor} = y_{10} - W_{payload} = f(y_{10})

Subject to:
g_1 = 1 - \sigma_N / \sigma_{all} = 1 - y_8 / \sigma_{all} = g_1(y_8) \geq 0
g_2 = 1 - \sigma_T / \sigma_{all} = 1 - y_9 / \sigma_{all} = g_2(y_9) \geq 0
g_3 = \omega_b / (k \cdot RPM) - 1 = y_{14} / (k \cdot y_3) - 1 = g_3(y_3, y_{14}) \geq 0
g_4 = \omega_t / (k \cdot RPM) - 1 = y_{15} / (k \cdot y_3) - 1 = g_4(y_3, y_{15}) \geq 0
g_5 = 1 - M_{tip} / M_{tip,all} = 1 - y_{19} / M_{tip,all} = g_5(y_{19}) \geq 0
g_6 = E / E_{req} - 1 = y_{50} / E_{req} - 1 = g_6(y_{50}) \geq 0
x_i^{(l)} \leq x_i \leq x_i^{(u)}

Where: \sigma_{all} = 14{,}000 psi, k = 1.5, M_{tip,all} = 0.8, E_{req} = 2 hrs    (44)

robust optimization (86.54 or 86.96 lbs), but the negative value for the robust constraint gR6 (-0.28) at conventional optima implies the possibility of constraint violation when subjected to variabilities in design variables and bias errors in discipline tools. Evidently the original objective has to be sacrificed in order to reach a robust feasible design.

Note that the magnitudes of ∆f in Table 1 are rather large in comparison to the value of f. This is due in part to the fact that the AHC problem is very sensitive to the propagation of bias errors. Tables 3, 4 and 5 provide a look at the factor effect contributions to the resulting worst case function variation ∆f for different levels of errors in ∆x and bias. The variations in Tables 3, 4 and 5 are evaluated at the designs obtained in Table 1. Note that robust optimization is not performed using these different errors in ∆x and bias. The propagated bias errors have a much greater effect on ∆f than does the variability ∆x. Another way to compare the robust optimization solution with the conventional optimization solution is to perform Monte Carlo simulations at each of the optimal points listed in Table 1. As in the previous section, the variability ∆x in the design variables is assumed to be uniformly distributed with a range of [-5%, 5%] of the optimal value found. The bias error is assumed to be -3% or 3%. The simulation results are shown in Figures 14, 15 and 16 and focus on the original objective Wempty and the endurance constraint (g6), which is active at the optimum. The frequency histograms of Wempty and g6 are presented in these figures. Figure 14 is constructed from data obtained from a Monte Carlo simulation performed at the conventional optimum. Figures 15 and 16 are the results of Monte Carlo simulations performed at the robust optima when α is set to 1 and 0, respectively. Each figure consists of frequency histograms of Wempty and g6 at bias error levels of 3% (Figures 14a, 15a, 16a) and -3% (Figures 14b, 15b, 16b). Three dashed vertical lines are drawn in each histogram plot. The middle line indicates the nominal, yet biased, value of Wempty or g6 determined by a biased system analysis of the AHC problem. The other two lines mark the worst case bounds of the corresponding variation calculated from the worst case estimation. One can clearly see that the distribution of g6 at the conventional optimum (Figure 14) includes many infeasible designs. Note also that we are plotting distributions for a single common bias error for all states, which is not a worst case distribution. At the robust optima, the distributions of g6 are all in the fea-

Table 1. Robust Optimization Results

                 α    f      ∆f      g6      gR6      x1  x2  x3    x4  x5  x6    x7  x8     x9     x10  x11
Convent. Opt.    -    67.91  (17.3)  0.      (-0.28)  L   L   41.3  L   U   10.7  L   .1817  .0189  L    19.9
Robust Opt.      1.   86.54  20.05   .3894   0.       LR  LR  44.9  LR  UR  10.8  LR  .8168  .0005  LR   31.5
                 0.   86.96  19.99   .3890   0.       LR  LR  38.3  LR  UR  10.7  LR  .7209  .0076  LR   32.5

(L/U: design variable at its lower/upper bound; LR/UR: at its robust lower/upper bound, as given in Table 2.)

Table 2. Upper and Lower Bounds of Design Variables

      x1    x2    x3    x4    x5    x6    x7    x8    x9    x10   x11
L     .150  1.20  36.0  6.00  24.0  7.50  .050  0.    0.    .050  10.0
LR    .158  1.26  37.9  6.32  25.3  7.89  .053  0.    0.    .053  10.5
UR    .476  1.90  57.1  17.1  40.0  14.3  .190  .952  .048  .238  47.6
U     .500  2.00  60.0  18.0  42.0  15.0  .200  1.00  .050  .250  50.0

sible region. Note that the robust constraint gR6 is defined as being active (gR6 = 0) at the worst case lower bound of g6. Therefore the left dashed line in the constraint histograms of Figures 15 and 16 depicts the value gR6 = 0. Figures 14, 15 and 16 may lead to the impression that the worst case estimation is too conservative. Note that in those cases the bias errors associated with all 35 states are assumed to be the same, 3% (or -3%) for each state: when the bias error associated with state y1 is assumed to be 3%, the bias errors associated with y2, y3, ..., y35 are assumed to be 3% as well. It needs to be pointed out that the worst case estimation we developed assumes that the bias error can vary anywhere within the range [-3%, 3%]. So there are cases in which the bias error associated with state y1 is 3% while the bias errors associated with y2 and y3 may be 2.5%. For 35 states, countless combinations of bias error can be assumed. The worst case estimation we developed is intended to capture the largest variation over the many combinations that could occur. In Figures 17, 18 and 19, we present histograms of the objective function and endurance constraint g6 under six different bias error combinations at the optimum designs corresponding to Figures 14, 15 and 16. Clearly the worst case bounds calculated from our derivation are not too conservative. The bias error combinations in each of the plots in Figures 17, 18 and 19 are labelled (1) +sign(g6), (2) +sign(f), (3) +3%, (4) -sign(g6), (5) -sign(f) and (6) -3%. Combination (1) +sign(g6) is selected to produce a variation in g6 that will be very close to the worst case predicted bound of g6 (i.e., the right dashed line in the histogram plot of g6). Combination (2) +sign(f) is selected to generate a variation in f that will be close to the worst case predicted

Table 3. Factor Effects for ∆f where ∆x = [±5%, 0] and Bias = [±3%, 0]

                      ∆f/f
                 f      ∆x=±5%, Bias=±3%   ∆x=±5%, Bias=0   ∆x=0, Bias=±3%
Convent. Opt.    67.91  ±25.4%             ±7.2%            ±18.22%
Robust Opt. α=1  86.54  ±23.2%             ±7.4%            ±15.8%
Robust Opt. α=0  86.96  ±23.0%             ±7.1%            ±15.9%

Table 4. Factor Effects for ∆f where ∆x = [±1%, 0] and Bias = [±1%, 0]

                      ∆f/f
                 f      ∆x=±1%, Bias=±1%   ∆x=±1%, Bias=0   ∆x=0, Bias=±1%
Convent. Opt.    67.91  ±7.51%             ±1.44%           ±6.07%
Robust Opt. α=1  86.54  ±6.74%             ±1.48%           ±5.26%
Robust Opt. α=0  86.96  ±6.72%             ±1.42%           ±5.30%

Table 5. Factor Effects for ∆f where ∆x = [±0.5%, 0] and Bias = [±0.1%, 0]

                      ∆f/f
                 f      ∆x=±0.5%, Bias=±0.1%   ∆x=±0.5%, Bias=0   ∆x=0, Bias=±0.1%
Convent. Opt.    67.91  ±1.33%                 ±0.72%             ±0.61%
Robust Opt. α=1  86.54  ±1.28%                 ±0.75%             ±0.53%
Robust Opt. α=0  86.96  ±1.24%                 ±0.71%             ±0.53%
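The factor-effect decomposition in Tables 3-5 can be mimicked with a first-order worst case estimate: setting the bias ranges to zero isolates the ∆x contribution, and setting ∆x to zero isolates the propagated bias contribution. This is our sketch of the idea only; the paper's estimate propagates bias through the coupled system sensitivities, and the sensitivity values below are hypothetical.

```python
def worst_case_df(df_dx, dx, df_db, db):
    """First-order worst case variation: sum of |sensitivity| * half-range."""
    return (sum(abs(s) * d for s, d in zip(df_dx, dx)) +
            sum(abs(s) * b for s, b in zip(df_db, db)))

df_dx = [2.0, -1.5]     # hypothetical df/dx_i sensitivities
df_db = [40.0, -25.0]   # hypothetical df/d(bias_j), via system sensitivities
dx = [0.05, 0.05]       # +/-5% design variable variability
db = [0.03, 0.03]       # +/-3% tool bias

total = worst_case_df(df_dx, dx, df_db, db)              # both factors
dx_only = worst_case_df(df_dx, dx, df_db, [0.0, 0.0])    # "Bias = 0" column
bias_only = worst_case_df(df_dx, [0.0, 0.0], df_db, db)  # "dx = 0" column
```

In this first-order form the two contributions add exactly, which is consistent with the near-additive percentages in Tables 3-5 (e.g., ±7.2% + ±18.22% ≈ ±25.4% in Table 3).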

bound of f (i.e., the right dashed line in the histogram plot of f). Combination (3) +3% assumes the bias error in all 35 states will be 3%. Combinations (4), (5) and (6) are of the opposite sign of combinations (1), (2) and (3), respectively. Table 6 lists combinations (1) +sign(g6) and (2) +sign(f) used in Figures 17, 18 and 19 for the fifty states of the AHC problem.

Conclusions: In this research an investigation of how uncertainty propagates through a multidisciplinary system is undertaken. Two sources of error are considered, the bias errors


associated with the disciplinary design tools and the precision errors (i.e., variability) in the design variables. A derivation for estimating the worst case propagated uncertainty in multidisciplinary systems is developed. The method is verified using two strategies for Monte Carlo simulation in application to two multidisciplinary test problems. This research provides detailed information on the two different strategies used for Monte Carlo simulation. The strategies differ in how they treat bias errors. The worst case estimates of propagated uncertainty are integrated into a robust optimization framework. The robust


Fig. 14 Frequency Histograms of Conventional Objective Function f and Active Constraint g6 at Conventional Optima: (a) bias error = 3%; (b) bias error = -3%
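Histograms like those in Figure 14 come from a simple Monte Carlo loop: sample the design variables uniformly within ±5% of the optimum, run the biased system analysis, and record f and g6. The sketch below substitutes a toy closed-form `system_analysis` for the actual AHC multidisciplinary analysis, which is not reproduced here.

```python
import random

def system_analysis(x, bias):
    # Toy stand-in for the biased AHC system analysis.
    f = sum(xi ** 2 for xi in x) * (1.0 + bias)  # placeholder for Wempty
    g6 = 1.0 - f / 100.0                         # placeholder constraint
    return f, g6

def monte_carlo(x_star, bias, n=1000, dx=0.05, seed=0):
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        # design variables uniformly distributed within +/- dx of the optimum
        x = [xi * (1.0 + rng.uniform(-dx, dx)) for xi in x_star]
        samples.append(system_analysis(x, bias))
    return samples

samples = monte_carlo([3.0, 4.0], bias=0.03)
n_infeasible = sum(1 for _, g6 in samples if g6 < 0)  # counts constraint violations
```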


Fig. 15 Frequency Histograms of Conventional Objective Function f and Active Constraint g6 at Robust Optima when α=1: (a) bias error = 3%; (b) bias error = -3%


Fig. 16 Frequency Histograms of Conventional Objective Function f and Active Constraint g6 at Robust Optima when α=0: (a) bias error = 3%; (b) bias error = -3%


optimization solutions are compared to conventional optimization solutions in application to the autonomous hovercraft test problem. The robust solution is constraint driven, where an increase in f is traded off for an increase in feasibility under uncertainty. Acknowledgments: This multidisciplinary research effort was supported in part by the

following grants and contracts: NSF grants DMI94-57179 and DMI98-12857, and support from the General Motors Corporation.

References
[1] Batill, S. M. (1994), "Experimental Uncertainty and Drag Measurements in the National Transonic Facility", NASA Contractor Report 4600.
[2] Bevington, P. R. (1969), Data Reduction and Error Analysis for the Physical Sciences, McGraw-Hill, New York.


Fig. 17 Frequency Histograms of Conventional Objective Function f and Active Constraint g6 at Conventional Optima for six settings of estimated bias error: +sign(g6), +sign(f), +3%, -sign(g6), -sign(f), -3%
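The ±sign(g6) and ±sign(f) settings of Figures 17-19 can be generated mechanically: each state's bias is set to +3% or -3% according to the sign of the output's sensitivity to that state, so every state pushes the output in the same direction, toward its worst case bound. A sketch with hypothetical sensitivity values:

```python
def sign_combination(sensitivities, level=0.03):
    """Bias for each state: +level where the output sensitivity is
    non-negative, -level where it is negative (the +sign(.) setting);
    negate the result for the corresponding -sign(.) setting."""
    return [level if s >= 0 else -level for s in sensitivities]

dg6_dy = [-0.4, 1.2, 0.0, -0.1]   # hypothetical dg6/dy_i sensitivities
plus_sign_g6 = sign_combination(dg6_dy)
minus_sign_g6 = [-b for b in plus_sign_g6]
print(plus_sign_g6)   # [-0.03, 0.03, 0.03, -0.03]
```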


Fig. 18 Frequency Histograms of Conventional Objective Function f and Active Constraint g6 at Robust Optima when α=1 for six settings of estimated bias error: +sign(g6), +sign(f), +3%, -sign(g6), -sign(f), -3%


Fig. 19 Frequency Histograms of Conventional Objective Function f and Active Constraint g6 at Robust Optima when α=0 for six settings of estimated bias error: +sign(g6), +sign(f), +3%, -sign(g6), -sign(f), -3%

Table 6. Bias error combinations for the histograms in Figures 17, 18 and 19

        Figure 17             Figure 18             Figure 19
y       +sign(g6)  +sign(f)   +sign(g6)  +sign(f)   +sign(g6)  +sign(f)
1       -3%        3%         -3%        3%         -3%        3%
2       3%         3%         3%         3%         3%         3%
3       3%         -3%        3%         -3%        3%         -3%
4       3%         3%         3%         3%         3%         3%
5       -3%        3%         3%         -3%        -3%        3%
6       3%         3%         3%         3%         3%         3%
7-15    3%         3%         3%         3%         3%         3%
16      3%         -3%        3%         -3%        3%         -3%
17      -3%        3%         -3%        3%         -3%        3%
18      -3%        3%         3%         -3%        -3%        3%
19      3%         3%         3%         3%         3%         3%
20-24   -3%        3%         -3%        3%         -3%        3%
30-34   -3%        3%         -3%        3%         -3%        3%
40-44   -3%        3%         -3%        3%         -3%        3%
50      3%         3%         3%         3%         3%         3%

[3] Bevington, P. R. and Robinson, D. K. (1992), Data Reduction and Error Analysis for the Physical Sciences, 2nd Edition, McGraw-Hill, New York.
[4] Chen, W., Tsui, K., Allen, J. K., and Mistree, F. (1995), "Integration of the Response Surface Methodology with the Compromise Decision Support Problem in Developing a General Robust Design Procedure", Advances in Design Automation, DE-Vol. 82, ASME, New York.
[5] Chen, W., Wiecek, M., and Zhang, J. (1999), "Quality Utility: A Compromise Programming Approach to Robust Design", ASME Journal of Mechanical Design, Vol. 121, No. 2, p. 179.
[6] Coleman, H. W. and Steele, W. G., Jr. (1989), Experimentation and Uncertainty Analysis for Engineers, John Wiley and Sons.
[7] Gu, X., Renaud, J. E. and Batill, S. M. (1998), "An Investigation of Multidisciplinary Design Subject to Uncertainty", AIAA-98-4747, 7th AIAA/USAF/NASA/ISSMO Symposium on Multidisciplinary Analysis and Optimization, St. Louis, MO, September.
[8] Kline, S. J. and McClintock, F. A. (1953), "Describing Uncertainties in Single-Sample Experiments", Mechanical Engineering, Vol. 75, No. 1, pp. 3-9, January.
[9] Sellar, R. S. (1997), Multidisciplinary Design Using Artificial Neural Networks for Discipline Coordination and System Optimization, Doctoral Dissertation, University of Notre Dame, April.
[10] Sellar, R., Stelmack, M., Batill, S., and Renaud, J. (1996), "Response Surface Approximations for Discipline Coordination in Multidisciplinary Design Optimization", AIAA-96-1383, 37th AIAA Structures, Structural Dynamics, and Materials Conference, Salt Lake City, UT, April.
[11] Sobieszczanski-Sobieski, J. (1988), "On the Sensitivity of Complex, Internally Coupled Systems", Proceedings of the 29th AIAA/ASME/ASCE/AHS Structures, Structural Dynamics and Materials Conference, Williamsburg, VA, April (originally NASA TM-100537, Jan. 1988; revision published in AIAA Journal, 1990).
[12] Sobieszczanski-Sobieski, J. and Haftka, R. T. (1997), "Multidisciplinary Aerospace Design Optimization: Survey of Recent Developments", Structural Optimization, Vol. 14, No. 1, pp. 1-23, August.
[13] Su, J. and Renaud, J. E. (1997), "Automatic Differentiation in Robust Optimization", AIAA Journal, Vol. 35, No. 6, June, pp. 1072-1079.
[14] Taylor, J. R. (1982), An Introduction to Error Analysis: The Study of Uncertainty in Physical Measurements, University Science Books, Mill Valley, CA.
[15] Wujek, B. A. (1997), Automation Enhancements in Multidisciplinary Design Optimization, Doctoral Dissertation, University of Notre Dame, July.

Hetényi, M. (1950), Handbook of Experimental Stress Analysis, Wiley, New York.
