Experimental Validation of Polynomial Chaos Theory on an Aircraft T-Tail

Prasad Cheema∗, Gareth Vio†, and Nicholas F. Giannelis‡
The University of Sydney, New South Wales, 2006, Australia

Uncertainty quantification (UQ) is a notion which has received much interest over the past decade. It involves the extraction of statistical information from a problem with inherent variability, where this variability may stem from a lack of model knowledge or from observational uncertainty. Traditionally, UQ has been a challenging pursuit owing to the lack of efficient methods available. The archetypal UQ method is Monte Carlo theory; however, this method possesses a slow convergence rate and is therefore a computational burden. In contrast to Monte Carlo theory, polynomial chaos theory aims to spectrally expand the modelled uncertainty via polynomials of random variables which have deterministic coefficients. Once the spectral expansion has been fully defined, it is possible to obtain statistical properties using simple integration procedures. Although the literature has shown polynomial chaos theory to be more efficient than Monte Carlo theory in several contexts, there has been very little effort to experimentally validate polynomial chaos theory. Hence, it is the aim of this paper to perform an experimental validation on an in-house physical T-Tail structure by analysing the first six vibrational modes of this structure and comparing these against the predicted uncertainty bounds of polynomial chaos theory.

I. Introduction

Uncertainty quantification (UQ) involves the extraction of statistical information from a problem with some type of uncertainty, where the statistical information in question is often the mean and standard deviation. Over the past decade, a wealth of literature has emerged in the field of UQ with various applications. The main reasons for this sudden surge of interest in UQ techniques stem from the notions of cost-effectiveness, time efficiency in engineering design stages, and a pressing need to increase the efficiency of traditional UQ models. In addition, aside from the usually cited benefit of supporting rational decision making, UQ techniques provide a means by which older factors of safety can be improved upon.1 An example of this may be seen in the 15% flutter margin requirement for US military aircraft. This requirement was developed in the 1960s, but numerous significant advances in computation and aeroelastic modelling suggest this bound is too conservative.2 Thus, instead of following conservatism blindly, it is better to perform computational analyses aided by UQ methods, which will result in tighter bound estimates, and therefore time and monetary savings in engineering projects in the long term.
Monte Carlo theory is the quintessential UQ method, having been in formal use since 1953.3 As such, it has seen a gamut of applications, including modelling the non-linear aspects of quantum theory, kinetic theory, material phase equilibria transitions, and, more infamously, the physics equations for the atomic bomb.3,4 In the aerospace field it has been used to study the aeroelastic failure of composite structures with multi-dimensional inputs,5 to perform a robust design of a wing under material and geometric uncertainty,6 and in the study of thermal protection systems for hypersonic vehicles.7 However, the Monte Carlo method suffers in computational efficiency, owing to its need to generate a large number of samples for the answer to converge to within acceptable bounds.3 As such, any slight algorithmic improvements in Monte Carlo

∗Undergraduate Student, School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW, 2006, AIAA Student Member.
†Senior Lecturer, School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW, 2006, AIAA Professional Member.
‡PhD, School of Aerospace, Mechanical and Mechatronic Engineering, The University of Sydney, NSW, 2006, AIAA Professional Member.

1 of 14 American Institute of Aeronautics and Astronautics

theory have been unable to keep pace with the growth and complexity of modern engineering models, especially in the context of finite element (FE) analysis and computational fluid dynamics (CFD) software for model calculation. In addition to Monte Carlo methods, there have been explorations into non-probabilistic theories for UQ. Interval analysis is one such method, and it can be used to find a bounding solution either through elementary interval mathematics,8,9 or geometric intuition.10 The use of interval analysis for problems in linear static FE modelling,11,12 as well as solutions to the 'Sandia non-linear challenge problem',13,14 suggests that such approaches rarely find tight bounds, and as such suffer from over-conservatism. In an effort to circumvent these issues, an additional non-probabilistic approach in the form of fuzzy set theory has been used as an extension to interval analysis via the addition of a membership function.15,16 Although fuzzy analysis has been successfully used to model uncertainty in structural linear aeroelastic stability,17 and for the analysis of thin-walled composite beams,16 it can still be difficult to obtain tight bounds, and it is difficult to perform probabilistic self-checks and thus validate the calculations as they are formed. Most importantly, however, these methods are intrusive and necessitate re-arranging model equations, which in many contexts is infeasible. Additional efforts towards efficient and robust UQ have been sought in the form of Evidence theory15 and µ-analysis.18 However, these methods are unusual, and as such have not been explored as extensively as the prior methods, and so suffer from a lack of convincing theoretical verification and experimental validation. Variable success of these methods has been noted in the areas of aeroservoelastic stability margins18 and structural damage detection,19 but they have not been adopted in the wider engineering context, owing to their niche uses.
Therefore, in an attempt to circumvent the aforementioned issues of these UQ methods, an efficient probabilistic framework for UQ was formalised in the form of polynomial chaos methods. The origins of polynomial chaos theory can be traced back to the work of Norbert Wiener in his paradigm-shifting paper: The Homogeneous Chaos.20 In this work, Wiener models stochastic systems through Gaussian distributions expressed through a Hermite polynomial series expansion. Although Wiener's work has its roots in statistical mechanics, it has been adapted for use in the general UQ context. A critical limitation of the initial formulation of homogeneous chaos envisioned by Wiener is its reliance on the Gaussian assumption for input parameters. To overcome this shortcoming, the notion of a unified polynomial chaos theory (commonly referred to as generalised polynomial chaos theory) was developed with the aim of improving Wiener's model through the inclusion of a variety of input probability distributions, which are summarised in Table 1. Due to this increased model flexibility, researchers have been able to model uncertainty in an even larger scope of problems, including propagating parametric uncertainty for ground vehicle and quadcopter dynamics,21,22 estimating transient responses in structural dynamics,23 and modelling uncertainty in electrical circuits.24 Traditional implementations of polynomial chaos theory have worked intrusively: a series expansion is substituted directly into the model and rearranged into some explicit form. Although this algebraic re-arrangement is manageable for simple models, it struggles for more complicated and realistic problems. In recent times, non-intrusive formulations of polynomial chaos theory have been developed to cope with the inability of the intrusive polynomial chaos solution to be adapted to arbitrarily complex differential equations.
This has seen the rise of two alternate formulations of polynomial chaos theory: the probabilistic collocation method and the non-intrusive polynomial chaos (NIPC) regressive method. The main aim of these methods is to treat the problem as a black box, around which the outputs have polynomial fits applied. The probabilistic collocation methodology works by finding an unknown chaos polynomial to fit against some known chaos coefficients.25,26 This works differently to NIPC regressive methods, which assume that the polynomial forms are known beforehand, but that the coefficients are unknown and need to be found via a regressive fit.21 There has therefore emerged a gamut of methods for UQ using probability as their basis. Traditional direct Monte Carlo methods have slowly been superseded by novel and more efficient methods based on polynomial chaos theory.27 However, unlike these modern methods, which require further testing and validation, the traditional Monte Carlo-based methods enjoy the full confidence of engineers and scientists to deliver correct UQ answers under a broad range of scenarios. Hence, it is the purpose of this paper to show an example context in which both Monte Carlo and polynomial chaos methods are equally successful at UQ for a physical T-Tail model, but where the polynomial chaos method is clearly the more efficient alternative, thus validating it as a suitable replacement for Monte Carlo modelling in a 'real-world' context.


II. Background Theory

Norbert Wiener introduced a spectral-based uncertainty quantification (UQ) method for Gaussian random variables in his seminal paper of 1938 entitled: The Homogeneous Chaos.20 Through his theory of 'polynomial chaos', Wiener originally used a Hermite polynomial basis to spectrally expand uncertain parameters in differential equations which were assumed to be Gaussian distributed. In order to overcome this limitation (the pure Gaussian assumption), polynomial chaos theory has been enhanced to include a variety of input probability distributions, which are summarised in Table 1. The polynomial chaos formulation is known to possess an exponential convergence property,28 in contrast to the comparably slower 1/√N convergence rate of the Monte Carlo method.3

Table 1. Relationship between the modelled random variable and the corresponding polynomial basis and support range. Table adapted from reference 29.

             Random Variable (ξ)     Wiener-Chaos (Ψ(ξ))     Support
Continuous   Gaussian                Hermite                 (−∞, ∞)
             Gamma                   Laguerre                [0, ∞)
             Beta                    Jacobi                  [a, b]
             Uniform                 Legendre                [a, b]
Discrete     Poisson                 Charlier                {0, 1, 2, ...}
             Binomial                Krawtchouk              {0, 1, ..., N}
             Negative Binomial       Meixner                 {0, 1, 2, ...}
             Hypergeometric          Hahn                    {0, 1, ..., N}

Mathematically speaking, polynomial chaos theory aims to expand all forms of uncertainty in an equation via a series expansion. This series expansion consists of a basis of orthogonal polynomials which are expanded in terms of random variables. The choice of ensuring that the polynomials are orthogonal is important, as it simplifies the derivation of statistical properties when integration is employed. The polynomial chaos formulation classically takes the form shown in Equation 1.

X(ξ) = Σ_{j=0}^{∞} α_j Ψ_j(ξ)    (1)

In Equation 1, the variables ξ are random variables. They will possess a distribution depending on the treatment of the polynomial functional term, Ψ_j(ξ). For example, if the series is expanded in Hermite polynomials then ξ will be normally distributed, that is, ξ ∼ N(0, 1). The correspondence between modelled random variables and polynomial type is outlined in Table 1. Notice that Equation 1 is summed over an infinite series, which is not practical for many problems. Therefore it is necessary to truncate the polynomial series to some finite term. This is shown in Equation 2.

X(ξ) = Σ_{j=0}^{N} α_j Ψ_j(ξ)    (2)

When the polynomial chaos expression is truncated, there exist a finite number of terms in the approximation which must be considered in model design, as truncation will invariably induce additional error into the results. Ultimately, polynomial chaos theory aims to solve a stochastic process by applying the series form shown in Equation 2 to a generic model involving a state space variable, x, and an independent variable, t. An example of this is made clear in Equation 3, which essentially adapts Equation 2 to account for the state space parameters x and t.

u(x, t; ξ) = Σ_{i=0}^{N} u_i(x, t) Ψ_i(ξ)    (3)

Equation 3 can be seen to possess two distinct parts: a deterministic and a random component (u_i(x, t) and Ψ_i(ξ) respectively). The typical polynomial chaos expression requires the selection of uncertainty for the expanding polynomials in accordance with Table 1. Using the expansion represented in Equation 3, the mean value may be found by applying the expectation operator to it. Doing so results in Equation 4, where the sample space is defined as Ω.

E[u(x, t; ξ)] = ∫_Ω u(x, t; ξ) w(ξ) dξ
             = ∫_Ω Σ_{i=0}^{P} u_i(x, t) Ψ_i(ξ) w(ξ) dξ
             = u_0(x, t)    (4)

An interesting conclusion of Equation 4 is that the determination of the mean value reduces to the evaluation of a single constant term in the polynomial chaos expansion. Such a property is only possible due to the use of orthogonal polynomials and their associated properties. A similar expression may be arrived at for the variance through Equation 5.

Var[u(x, t; ξ)] = E[u(x, t; ξ)²] − E[u(x, t; ξ)]²
               = ∫_Ω Σ_{i=0}^{P} u_i(x, t)² Ψ_i(ξ)² w(ξ) dξ − u_0(x, t)²
               = Σ_{i=1}^{P} u_i(x, t)² ⟨Ψ_i(ξ)²⟩    (5)

In Equation 5, an angular bracket notation is used for the sake of mathematical expedience. The definition behind this notation is shown in Equation 6. This notation is common practice whenever orthogonal polynomials are used.

⟨P_n(x), P_m(x)⟩ = ∫_Ω P_n(x) P_m(x) w(x) dx    (6)
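As a concrete illustration of Equations 4 and 5, the following sketch computes the mean and variance directly from a set of expansion coefficients. It assumes probabilists' Hermite polynomials He_i, for which the orthogonality norm under the standard normal weight is ⟨He_i²⟩ = i!; the coefficient values are purely hypothetical.

```python
import math

def hermite_e(n, x):
    """Probabilists' Hermite polynomial He_n(x) via the recurrence
    He_{n+1}(x) = x*He_n(x) - n*He_{n-1}(x)."""
    h0, h1 = 1.0, x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def pce_mean(coeffs):
    # Equation 4: the mean is simply the zeroth expansion coefficient.
    return coeffs[0]

def pce_variance(coeffs):
    # Equation 5: Var = sum_{i>=1} u_i^2 <Psi_i^2>; for He_i, <He_i^2> = i!.
    return sum(c**2 * math.factorial(i) for i, c in enumerate(coeffs) if i >= 1)

coeffs = [2.0, 0.5, 0.25]   # hypothetical expansion X = 2 + 0.5*He1 + 0.25*He2
print(pce_mean(coeffs))     # 2.0
print(pce_variance(coeffs)) # 0.5^2 * 1! + 0.25^2 * 2! = 0.375
```

Note that no sampling is required at all: once the coefficients are known, the statistics follow from simple arithmetic, which is the efficiency gain the paper exploits.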

Thus, as Equations 4 and 5 summarise, the coefficient terms of a polynomial chaos expansion can be used to directly calculate system statistics; hence the overarching goal of polynomial chaos theory is to have a complete system of coefficients and polynomials from which statistics can be gathered. There are two prominent non-intrusive methodologies for doing so: the non-intrusive polynomial chaos (NIPC) regression method and the probabilistic collocation method, which both aim to model the inherently uncertain problem as a black box. These methods are briefly explored in the following two subsections.

A. NIPC Regression

The aim of NIPC regression is to fit a linear least squares solution to the model black box output evaluations. The linear least squares solution is based on the standard linear algebra system defined in Equation 7.

Ψβ = R    (7)

In this equation, Ψ refers to an m × n (m ≥ n) matrix of polynomial evaluations, β is an n × 1 column vector of coefficients, and R is an m × 1 column vector of outputs (response values). This is clarified for the reader in Equation 8.

⎡ Ψ_0(ξ_0)  Ψ_1(ξ_0)  Ψ_2(ξ_0)  ···  Ψ_n(ξ_0) ⎤ ⎡ β_0 ⎤   ⎡ R(ξ_0) ⎤
⎢ Ψ_0(ξ_1)  Ψ_1(ξ_1)  Ψ_2(ξ_1)  ···  Ψ_n(ξ_1) ⎥ ⎢ β_1 ⎥ = ⎢ R(ξ_1) ⎥    (8)
⎢     ⋮         ⋮         ⋮              ⋮    ⎥ ⎢  ⋮  ⎥   ⎢    ⋮   ⎥
⎣ Ψ_0(ξ_m)  Ψ_1(ξ_m)  Ψ_2(ξ_m)  ···  Ψ_n(ξ_m) ⎦ ⎣ β_n ⎦   ⎣ R(ξ_m) ⎦

Hence the Ψ matrix is formulated by evaluating the polynomial chaos polynomials at a set of sampled input points ξ, and the corresponding output vector R is formed by passing this set of values through the black box equation. Note that the Ψ matrix is not necessarily square: it has dimensions of m × n with m ≥ n, which occurs if there is an over-sampling of the input points ξ. An over-sampling of input points requires a least squares regression solution for an over-determined system (hence the name: NIPC regression). For polynomial chaos regression it is generally encouraged to over-sample the number of input points by a factor of 2.30 As mentioned previously, in order to build the values of ξ, sampling is necessary. However, unlike Monte Carlo methods, the sampling performed in NIPC regression tends to be more 'intelligent'. That is to say, in the general UQ research field, NIPC regressive-type sampling is performed using a method known as Latin hypercube sampling. This type of sampling is demonstrated in Figure 1.
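The regression step can be sketched as follows. The black box here is a hypothetical quadratic, f(ξ) = 1 + 2ξ + ξ², chosen because its exact Hermite expansion is known to be 2 + 2He₁(ξ) + He₂(ξ); plain random sampling stands in for Latin hypercube sampling for brevity, and NumPy's `lstsq` performs the over-determined least squares solve.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def hermite_e(n, x):
    """Probabilists' Hermite polynomial He_n evaluated elementwise on x."""
    h0, h1 = np.ones_like(x), x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def black_box(xi):
    """Hypothetical stand-in for an expensive FE evaluation."""
    return 1.0 + 2.0 * xi + xi**2

n_terms = 3                   # polynomial orders 0..2
m = 2 * n_terms               # over-sample by a factor of 2, as recommended
xi = rng.standard_normal(m)   # sampled input points

# Equation 8: m x n matrix of polynomial evaluations, and the response vector.
Psi = np.column_stack([hermite_e(j, xi) for j in range(n_terms)])
R = black_box(xi)

# Least-squares solution of the over-determined system Psi @ beta = R (Eq. 7).
beta, *_ = np.linalg.lstsq(Psi, R, rcond=None)

mean = beta[0]                                                        # Equation 4
var = sum(beta[j]**2 * math.factorial(j) for j in range(1, n_terms))  # Equation 5
```

Because the black box is itself a quadratic, the fit recovers the exact coefficients (2, 2, 1), giving a mean of 2 and a variance of 2²·1! + 1²·2! = 6.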

Figure 1. Demonstration of Latin hypercube sampling (Gaussian CDF, F(x), above; Gaussian PDF, f(x), below). Image adapted from reference 31.

In Figure 1 it can be seen that the probability space (here a Gaussian distribution) is split into equiprobable areas. These splits are driven by the equidistant sectioning of the cumulative distribution function (CDF). Points are then randomly chosen within the stratified sections of this Gaussian probability density function (PDF). This technique is preferable to completely random sampling, since all sections of the PDF now receive equal representation in the uncertainty analysis. Increasing the number of samples in a Latin hypercube-driven technique involves increasing the amount of sectioning shown in Figure 1.
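A minimal one-dimensional sketch of this stratified sampling, using only the Python standard library (`statistics.NormalDist` supplies the inverse CDF used to map equiprobable strata back to the x axis):

```python
import random
from statistics import NormalDist

def latin_hypercube_normal(n, seed=0):
    """One-dimensional Latin hypercube sample of a standard normal variable:
    split the CDF into n equiprobable strata, draw one point uniformly within
    each stratum, and map back through the inverse CDF."""
    rng = random.Random(seed)
    norm = NormalDist()
    samples = [norm.inv_cdf((i + rng.random()) / n) for i in range(n)]
    rng.shuffle(samples)  # random ordering so strata are not visited in sequence
    return samples

samples = latin_hypercube_normal(10)
```

By construction, exactly one sample falls in each of the ten equiprobable CDF sections, which is precisely the equal-representation property described above.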

B. Probabilistic Collocation

Probabilistic collocation is another non-intrusive method used to solve a polynomial chaos system. However, it works differently from the aforementioned regressive-type approach. Namely, instead of finding the correct coefficients to fit to a known orthogonal polynomial basis, it assumes the coefficients are known and instead tries to find an unknown polynomial to fit these coefficients. The coefficient values for the probabilistic collocation method are chosen to be the output values from a black box function evaluation at very specific ('optimal') input points. As opposed to traditional polynomial chaos methods which use an orthogonal polynomial basis, the chaos expansion in probabilistic collocation forms a Lagrangian basis, which is not generally orthogonal. Lagrangian polynomials aim to define a function completely at a single point, and 'interpolate' a polynomial between these points. Hence, a Lagrange interpolating polynomial is a polynomial that is specifically designed to pass through N + 1 distinct points, (ξ_0, R(ξ_0)), (ξ_1, R(ξ_1)), ..., (ξ_N, R(ξ_N)). A Lagrange interpolating polynomial is formally given in Equation 9.

L(ξ) = Σ_{i=0}^{N} ℓ_i(ξ)    (9)

The main step in formulating a Lagrange polynomial is to find all the smaller elements of the summation given in Equation 9, that is, the ℓ_i components, which are each of order N. The expression for each of these 'smaller' polynomials is shown in Equation 10,

ℓ_i(ξ) = R(ξ_i) Π_{k=0, k≠i}^{N} (ξ − ξ_k)/(ξ_i − ξ_k)    (10)

such that

ℓ_i(ξ_j) = R(ξ_j) δ_ij    (11)
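A short sketch of Equations 9 through 11, with hypothetical collocation points and black-box responses, illustrates the cardinal property ℓ_i(ξ_j) = R(ξ_j)δ_ij: the interpolant reproduces every response exactly at its own collocation point.

```python
def lagrange_interpolant(xs, ys):
    """Construct L(x) = sum_i l_i(x), where each term l_i equals y_i at x_i
    and vanishes at every other node, so L passes through all (x_i, y_i)."""
    def L(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            term = yi
            for k, xk in enumerate(xs):
                if k != i:
                    term *= (x - xk) / (xi - xk)  # Equation 10 product
            total += term                         # Equation 9 summation
        return total
    return L

# Hypothetical collocation points and black-box responses:
xs = [-1.0, 0.0, 1.0]
ys = [2.0, 1.0, 3.0]
L = lagrange_interpolant(xs, ys)
print(L(0.0))   # 1.0, recovering the response at the second node
```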

Further examples of Lagrangian interpolating polynomials are available for the reader in reference 32. The important point in this discussion is that the probabilistic collocation method no longer uses an orthogonal polynomial basis to represent the uncertainty (as did the intrusive and non-intrusive regressive methods). The lack of an orthogonal polynomial basis means that equations such as 4 and 5 for finding the integral statistics of the distribution are no longer valid (as these equations exploited orthogonality rules to arrive at their conclusions). Therefore probabilistic collocation methods require different formulations to efficiently obtain the mean and standard deviation. As it is difficult to obtain an analytic expression for the mean and standard deviation via probabilistic collocation, numerical procedures are generally used. Although this may seem detrimental to accuracy, Gaussian quadrature techniques can be used, which require only n collocation points in order to exactly integrate polynomials up to an order of 2n − 1.33 The equation for a Gaussian quadrature is clarified in Equation 12.

∫_{I_x} g(x) w(x) dx = Σ_{i=1}^{n} w_i g(x_i)    (12)

In Equation 12, I_x is some real interval in the x domain, and g(x) is some function with w(x) being its corresponding weighting function. For example, g(x) may be a Hermite polynomial, and w(x) would then be the weighting function of the form e^(−x²/2). Hence, even though orthogonal polynomial bases are not used for probabilistic collocation, the theory of orthogonal polynomials is still fundamentally important, as Gaussian quadrature necessitates integrating via the use of weighting functions which are themselves modelled around orthogonal polynomial theory. In order to carry through with the Gaussian quadrature integration method it is necessary to select the optimal collocation points and weighting function nodes (these are the x_i and w_i values specified in Equation 12). The simplest method to do so uses the Golub-Welsch algorithm, which involves the construction of the tridiagonal symmetric Jacobi matrix shown in Equation 13.34

      ⎡ α_0    √β_1                               ⎤
      ⎢ √β_1   α_1    √β_2                        ⎥
J_n = ⎢         ⋱      ⋱       ⋱                 ⎥    (13)
      ⎢            √β_{n−2}   α_{n−2}   √β_{n−1}  ⎥
      ⎣                       √β_{n−1}  α_{n−1}   ⎦

In order to build this matrix for a specific polynomial it is necessary to re-write the required polynomial (from Table 1) in the recursive form shown in Equation 14,

ψ_{n+1}(x) = (x − α_n) ψ_n(x) − β_n ψ_{n−1}(x),    n ≥ 1    (14)

where

α_n = ⟨xψ_n(x), ψ_n(x)⟩ / ⟨ψ_n(x), ψ_n(x)⟩
β_n = ⟨ψ_n(x), ψ_n(x)⟩ / ⟨ψ_{n−1}(x), ψ_{n−1}(x)⟩

and the α and β terms in the recurrence relationship correspond directly to those in the Jacobi matrix. Once the Jacobi matrix has been constructed, it is eigen-decomposed into its eigenvalues and eigenvectors. Each of the eigenvalues in this decomposition represents a collocation value x_i, and the first elements of the corresponding eigenvectors are used to calculate the weight values. This is made clear in Equations 15 and 16. That is, the collocation points are obtained by

x_i = λ_i    (15)

where λ_i are the corresponding eigenvalues of the Jacobi matrix in Equation 13. The corresponding optimal weights to be used in the Gaussian quadrature are obtained through

w_i = C v_{i,0}²    (16)

where v_{i,0} is the first element of the i-th eigenvector. The constant C is the same factor used to normalise the chosen orthogonal polynomials so that they integrate to unity in their standard forms (the probability space is not allowed to integrate to greater than unity). Therefore for the Hermite polynomials the value of C is taken to be √(2π), and for the Legendre polynomials this value is taken to be 2.34,35
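The Golub-Welsch procedure above can be sketched for the probabilists' Hermite case, where the recurrence of Equation 14 has α_k = 0 and β_k = k, so the Jacobi matrix of Equation 13 has a zero diagonal and off-diagonal entries √1, √2, ..., √(n−1); NumPy's symmetric eigensolver provides the decomposition.

```python
import math
import numpy as np

def golub_welsch_hermite(n):
    """Gauss-Hermite (probabilists') nodes and weights via the Golub-Welsch
    algorithm: eigenvalues of the Jacobi matrix give the collocation points
    (Eq. 15), and scaled squared first eigenvector entries give the weights
    (Eq. 16)."""
    off = np.sqrt(np.arange(1, n))        # sqrt(beta_k) = sqrt(k), k = 1..n-1
    J = np.diag(off, 1) + np.diag(off, -1)
    eigvals, eigvecs = np.linalg.eigh(J)  # eigenvalues = collocation points
    C = math.sqrt(2.0 * math.pi)          # normalisation for weight exp(-x^2/2)
    weights = C * eigvecs[0, :] ** 2
    return eigvals, weights

nodes, weights = golub_welsch_hermite(5)
```

With only five nodes this rule integrates polynomials up to degree nine exactly against the Gaussian weight, so, for instance, it reproduces the standard normal moments E[ξ²] = 1 and E[ξ⁴] = 3 to machine precision.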

III. Experimental Methodology

The Nastran T-Tail model on which robust uncertainty bounds shall be developed from a modal analysis is shown in Figure 2. This is the same T-Tail configuration described in an earlier work on this subject by Cheema & Vio.36 The finite element (FE) model is defined via a set of global and local axes. The local axes are shown in orange and are positioned along the 1/4 chord point.

Figure 2. The ‘mesh’ view of the Nastran T-Tail from an isometric perspective.

This T-Tail was modelled so as to be dimensionally consistent with one built in-house for validation purposes. However, this dimensional consistency assumes that the construction methodology supplied by the blueprints was followed precisely. Realistically, minor errors would be introduced in the construction procedure, and it is the purpose of this paper to demonstrate how such errors can be propagated efficiently into a final computational model through polynomial chaos. Upon discussion with the in-faculty qualified aircraft technicians who oversaw the T-Tail construction, it was deemed that 'at most' the length and width would vary by 5 millimetres. This is primarily due to the aluminium skin not being tensioned properly over the construction jig (Figure 3) in some cases. Since there


is no reason to suggest that this 5 millimetres of dimensional uncertainty is biased toward particular values, the uncertainty was determined to be uniform in nature. Hence the length and width uncertainties are defined as in Equation 17,

ℓ, w ∼ U(−5, 5)    (17)

where all dimensions are in millimetres, and the variables ℓ and w refer to length and width respectively.

Figure 3. The jig on which the in-house T-Tail structures are developed.

Furthermore, there is possible variability in the Young's modulus of the aluminium skin used. It is difficult to gauge a good value of the coefficient of variation (COV) from literature purely for aluminium. However, research has been done for steel structures (owing to their common use in civil engineering), and so it is possible to extrapolate a COV value from steel to aluminium. Thus, from literature on steel reliability, a COV of 5% will be used to represent the Young's modulus uncertainty in aluminium.37 This notion is summarised as a formal statistic in Equation 18,

E ∼ N(69, 3.45)    (18)

where the mean value of 69 GPa is the Young's modulus for aluminium 5050, and the standard deviation of 3.45 GPa results directly from the 5% COV value. The distribution is assumed to be normal since it is assumed that industry would attempt to tailor the build quality of aluminium as close as possible to theoretical means and standards. The value of Young's modulus for this 5000 series of aluminium was taken from the "Military Handbook for Metallic Materials and Elements for Aerospace Vehicle Structures", with an assumed material production shape of "flat/sheet".38 Hence, in total, the polynomial chaos expansion will contain three dimensions of uncertainty (length, width, and Young's modulus), which all have the potential to cause significant deviation from the true modal analysis values. Once the Nastran FE model is generated, and the uncertainty analysis is performed with Monte Carlo and polynomial chaos theory, it becomes necessary to perform a validation. As the FE model was generated in the image of a physical T-Tail built in-house, the validation procedure necessitates performing a modal analysis on this physical T-Tail. The experimental set-up is shown in Figure 4.
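The three input distributions of Equations 17 and 18 can be drawn as follows; this is a generic sampling sketch (the paper's own analyses use Latin hypercube and collocation-point inputs rather than plain random draws).

```python
import random

def sample_inputs(rng):
    """Draw one realisation of the three uncertain inputs (Eqs. 17 and 18):
    length and width perturbations in millimetres, Young's modulus in GPa."""
    length = rng.uniform(-5.0, 5.0)   # l ~ U(-5, 5) mm
    width = rng.uniform(-5.0, 5.0)    # w ~ U(-5, 5) mm
    E = rng.gauss(69.0, 3.45)         # E ~ N(69, 3.45) GPa, i.e. a 5% COV
    return length, width, E

rng = random.Random(0)
draws = [sample_inputs(rng) for _ in range(10000)]
```

Each draw would then perturb the FE model's dimensions and material card before a Nastran run, which is how the three-dimensional uncertainty enters the modal analysis.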


Figure 4. Complete experimental set-up showing the relative locations of the T-Tail, NI chassis, and accelerometer.

This set-up involved attaching an accelerometer (IMI 603C01 ICP) to the T-Tail via a standard epoxy resin, and applying an impulse force at twenty-four uniformly distributed points atop the T-Tail wing surface. The impulse was applied at these distinct points via an impulse force hammer (086C03). The results of the modal analysis were analysed and displayed on a computer using an NI PXIe-4497 dynamic signal analyser housed in an NI PXIe-1071 chassis. The software overlay was chosen to be a third-party NI extension, "ABSignal ModalView".

IV. Results

The results of performing a single Nastran modal analysis (SOL103) on the FE T-Tail are summarised in Table 2. Since these results do not consider uncertainty, they are representative of results which would generally be obtained in an industry setting. Since the FE model employed in this paper matches that used by Cheema & Vio, the results are found to coincide.36

Table 2. A descriptive summary of the first six bending modes as they appeared via the FE Nastran analysis.36

Mode Number   Frequency (Hz)   Bending Type
1             10.88            Wing bend about x-axis
2             15.26            Wing twist around z-axis
3             31.32            Wing bend about x-axis with minor elevator twist around y-axis
4             42.32            Strong elevator twist around y-axis
5             55.18            Symmetric wing bend about x-axis
6             82.66            Rudder bend about x-axis

Figures 5(a) and 5(b) show some of the visual results obtainable from this single deterministic run of the FE modal analysis in Nastran (SOL103).


(a) The single bending mode which occurs at 10.88 Hz.36

(b) The symmetric bending mode at 51.65 Hz.36

Figure 5. Images showing some of the modal analysis results of the Nastran analysis.


Using the work of Cheema & Vio36 as a basis, the Monte Carlo and NIPC regressive methods were applied to the FE model, and the uncertainties of each mode were captured. As it is difficult to summarise all the results of Cheema & Vio in a concise and tabulated manner, the same graphs are reproduced here for reference. As the purpose of this paper is experimental validation rather than theoretical discussion, the reader is urged to examine the material presented in Cheema & Vio for further theoretical clarification on the interpretation of these graphs. The results of applying the Monte Carlo simulation are shown in Figures 6(a) and 6(b).

(a) Convergence of mean value against simulation number using Monte Carlo.36 (b) Convergence of standard deviation value against simulation number using Monte Carlo.36

Figure 6. Monte Carlo convergences against simulation number.

Similarly, the results of Cheema & Vio in regard to the NIPC regressive model on the FE model are displayed in Figures 7(a) and 7(b).


(a) Convergence of mean value against simulation number using NIPC.36 (b) Convergence of standard deviation value against simulation number using NIPC.36

Figure 7. NIPC convergences against simulation number.

However, unlike Cheema & Vio, this paper also presents results for an additional polynomial chaos expansion approach: probabilistic collocation. The graphical results of this are shown below.


(a) Convergence of mean value against simulation number using Probabilistic collocation.

(b) Convergence of standard deviation value against simulation number using Probabilistic collocation.

Figure 8. Probabilistic collocation convergences against simulation number.

From the stagnation occurring in Figure 8(a) it would appear as if the collocation method has converged to a value slightly different from the 'theoretical' one predicted by Monte Carlo (Figure 6(a)). This apparent anomaly may be explained if we consider that the Monte Carlo convergence is indeed very slow (Monte Carlo has 1/√N convergence3). Therefore it is entirely possible that the values of mean and standard deviation for the six modes of motion obtained after 10,000 Monte Carlo simulations are far from fully converged (and therefore should perhaps not be taken as the 'theoretical' solutions), and that the probabilistic collocation method has instead converged to the precise solution for the mean values with just over 8 simulation runs. This is a difficult point to prove unless more Monte Carlo simulations are run, but doing so would be infeasible given the time constraints of this study. Considering that it takes approximately eight seconds to perform a single Nastran simulation on the School of Aerospace, Mechanical and Mechatronic Engineering (AMME) computers, running enough simulations to ensure convergence to at least four significant figures would necessitate approximately 1.56 × 10^6 simulations (given the Monte Carlo convergence rate is O(1/√N)). Running this many simulations would take 1.56 × 10^6 × 8 seconds, which is equivalent to 144.68 days, longer than a university semester! Arguably, the probabilistic collocation method has converged to the correct mean values in far fewer simulations, but unless further Monte Carlo simulations are run for approximately 150 days, this would be difficult to prove definitively. Now that three sets of UQ data are obtained for the first six bending modes of motion, based on a theoretically 'blue-print' accurate FE model, the experimental validation becomes necessary.
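The runtime arithmetic above can be checked directly; the simulation count used here (about 1.56 million runs) is the one implied by the quoted eight seconds per run and 144.68-day total, rather than a figure computed independently.

```python
# Back-of-the-envelope Monte Carlo cost at roughly eight seconds per
# Nastran run; n_runs is the count implied by the 144.68-day estimate.
seconds_per_run = 8.0
n_runs = 1.5625e6
total_days = n_runs * seconds_per_run / 86400.0   # 86400 seconds per day
print(round(total_days, 2))   # 144.68
```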
As mentioned in the Methodology section, the experiment involved observing the frequency response function (FRF) outputs when an impulsive force is applied at twenty-four uniformly distributed points on top of the T-Tail horizontal stabiliser. The results of the T-Tail experiment are shown in Figure 9, which is annotated with information from the theoretical UQ analyses involving Monte Carlo theory, NIPC regression, and probabilistic collocation.
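As a minimal sketch of how such an FRF is formed from tap-test data: the sampling rate, record length, and single-mode decaying-sinusoid response below are all illustrative stand-ins, not the actual T-Tail measurements.

```python
import numpy as np

fs = 1024                       # sampling rate [Hz], assumed
t = np.arange(0, 4, 1 / fs)     # 4 s record (4096 samples)
x = np.zeros_like(t)
x[0] = 1.0                      # idealised impulsive input
# Synthetic single-mode response standing in for the measured acceleration
y = np.exp(-2.0 * t) * np.sin(2 * np.pi * 30 * t)

# FRF as the ratio of output to input spectra; with a unit impulse, X is flat
X, Y = np.fft.rfft(x), np.fft.rfft(y)
f = np.fft.rfftfreq(len(t), 1 / fs)
H = Y / X
peak_hz = f[np.argmax(np.abs(H))]
print(peak_hz)                  # peak sits near the 30 Hz synthetic mode
```

In the experiment each of the twenty-four tap points yields one such FRF, whose peaks are then read off as candidate modes.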

Figure 9. The experimentally obtained FRF, annotated with mean behaviours and standard deviation ranges.

Ultimately this T-Tail experiment was performed for two reasons. Firstly, it serves to validate the T-Tail finite element (FE) model, as it compares the modal output of the physical structure against the Nastran FE model. Secondly, it presents a viable scenario in which uncertainty quantification (UQ) methods may be used to gain important information about a non-linear system.

11 of 14 American Institute of Aeronautics and Astronautics

The experimentally obtained frequency response function (FRF) shown in Figure 9 was taken at a single point on the wing surface. An overall FRF across the twenty-four points on the T-Tail structure could not be obtained, since there is a strong correlation between the vibrational information contained in the twenty-four points; it would therefore not be valid to form a single statistical average of all the FRFs without a full correlation matrix. Instead, each FRF was analysed individually and trends were identified between the points. Note that, roughly speaking, each peak in the FRF of Figure 9 corresponds to a mode of vibrational motion of the T-Tail.

In Figure 9 the red lines act as horizontal error bars which capture the 4σ standard deviation information from the FE uncertainty analysis. The 4σ value was chosen in accordance with Chebyshev’s inequality, which states that no more than 1/k² of a distribution’s values may lie more than k standard deviations from the mean; a k value of 4 therefore ensures that at least 93.75% of the distribution is captured.39 The mean and standard deviation values (the standard deviation being represented by the error bars) used for each mode in Figure 9 are extracted from the Monte Carlo graphs of Figures 6(a) and 6(b). Even though these error bars are built from Monte Carlo values, they are equally representative of the NIPC or collocation values, since all methods have been shown to converge to within 1% relative error of one another. The main point, however, is that the NIPC and collocation methods were able to obtain these same error bounds at least an order of magnitude faster by exploiting fast-converging polynomial chaos expansions.
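The 4σ coverage argument can be checked numerically. A small sketch, using a standard normal sample purely for illustration (Chebyshev’s bound itself is distribution-free):

```python
import random

# Chebyshev's inequality: at least 1 - 1/k**2 of ANY distribution lies
# within k standard deviations of the mean. For k = 4 the bound is 93.75%.
k = 4
chebyshev_bound = 1 - 1 / k**2          # 0.9375

# Empirical check on a Gaussian sample (illustrative only): the normal
# distribution is far better behaved, so coverage should exceed the bound.
random.seed(0)
samples = [random.gauss(0.0, 1.0) for _ in range(100_000)]
coverage = sum(abs(s) <= k for s in samples) / len(samples)
print(chebyshev_bound, coverage)
```

Because the bound holds for any distribution, the empirical Gaussian coverage comfortably exceeds the guaranteed 93.75%; the bound is what justifies the 4σ error bars even without assuming normality of the modal frequencies.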
As can be seen in Figure 9, the uncertainty analysis has merit in that it extends a range about the mean value and captures additional modes and non-linear behaviour that are not otherwise obtainable from a single deterministic FE analysis. The uncertainty analysis was able to find an additional mode at approximately 29 Hz, as well as one at approximately 45 Hz. Moreover, some residual non-linear behaviour around the 5th and 6th vibrational modes could be located. Locating these is important, as one of the objectives of this paper is to demonstrate the ability of UQ methods to allow engineers to design for non-anticipated behaviour that cannot be seen in a single deterministic FE analysis. Evidently, if engineers had designed this T-Tail structure from a single deterministic FE model, functional design requirements may not have been met, which would necessitate project re-design. Had efficient polynomial chaos methods been used for early UQ, there would be time and monetary savings at the prototype re-design stage. However, even though this UQ analysis has helped determine locations of design interest, certain modes (or ‘non-linear behaviours’) are completely missed. There is a strong mode at approximately 72 Hz that is not found via the FE analysis, and there is some non-linear FRF behaviour ranging from 93-98 Hz. This is a testament to the uncertainty inherent in the black-box model: there exists an inherent lack of ‘perfect’ knowledge in the FE T-Tail modelling stage, and hence correlation with experiment is not expected to be perfect. Although UQ techniques, and polynomial chaos theory in particular, have helped to efficiently capture many of these non-anticipated behaviours, they are not able to capture all of the epistemic uncertainty stemming from an incomplete model.
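One way candidate modes can be screened from an FRF is simple local-maximum picking above a noise threshold. The routine and the synthetic three-peak FRF below are hypothetical illustrations (not the processing actually applied to the experimental data), showing how peaks such as the unmodelled ~72 Hz mode would be flagged.

```python
import numpy as np

def find_modes(freq, mag, threshold):
    """Return frequencies of local maxima in `mag` that exceed `threshold`."""
    peaks = []
    for i in range(1, len(mag) - 1):
        if mag[i] > mag[i - 1] and mag[i] > mag[i + 1] and mag[i] > threshold:
            peaks.append(freq[i])
    return peaks

# Synthetic FRF magnitude with resonant bumps near 30, 45 and 72 Hz
# (Lorentzian shapes used purely as illustrative data)
f = np.linspace(0, 100, 2001)
mag = sum(1.0 / (1 + ((f - fc) / 0.5) ** 2) for fc in (30, 45, 72))
modes = find_modes(f, mag, threshold=0.5)
print(modes)   # the three seeded resonances
```

In practice the threshold would be set from the measured noise floor, and peaks found in the experimental FRF but absent from the FE-based uncertainty bounds (such as the 72 Hz mode here) point to epistemic model error.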

V. Conclusion

Polynomial chaos theory is an exciting development to have been formally conceived and generalised over the last decade, and it represents a major step forward in the field of UQ. This is important because traditional Monte Carlo methods, whilst proven to be correct, are an inefficient means of UQ that has effectively ‘stagnated’ much research in the field. Through polynomial chaos theory, however, we have access to more efficient and intuitive insight into UQ for a variety of problems when compared against Monte Carlo. The ability to apply efficient UQ via polynomial chaos theory will assist the overall cost-effectiveness and time efficiency of the engineering design cycle, because an uncertainty model may be developed in the earlier, more flexible stages of design and then efficiently propagated through the model. This ultimately allows engineers to understand physical system behaviour without the need for multiple prototype stages. This has been demonstrated on the physical T-Tail model, for which additional vibrational modes were located and non-linear behaviour anticipated that would not otherwise be obtainable from a single deterministic FE simulation. Hence the propagation of a polynomial chaos expansion through the FE model ensured that non-anticipated system behaviour could be found.


References

1 Pettit, C. L. and Veley, D. E., “Risk Allocation Issues for System Engineering of Airframes,” International Symposium on Uncertainty Modeling and Analysis, 2008.
2 Pettit, C. L., “Uncertainty Quantification in Aeroelasticity: Recent Results and Research Challenges,” Journal of Aircraft, Vol. 41, No. 5, 2004, pp. 1217–1229.
3 Amar, J. G., “The Monte Carlo method in science and engineering,” Computing in Science & Engineering, Vol. 8, No. 2, 2006, pp. 9–19.
4 Wu, Y.-F. and Lewins, J. D., “Monte Carlo studies of engineering system reliability,” Annals of Nuclear Energy, Vol. 19, No. 10, 1992, pp. 825–859.
5 Styuart, A., Livne, E., Demasi, L., and Mor, M., “Flutter Failure Risk Assessment for Damage-Tolerant Composite Aircraft Structures,” AIAA Journal, Vol. 49, No. 3, 2011, pp. 655–669.
6 Kurdi, M., Lindsley, N., and Beran, P., “Uncertainty Quantification of the Goland Wing’s Flutter Boundary,” Proceedings of the AIAA Atmospheric Flight Mechanics Conference and Exhibit, 2007, pp. 2007–6309.
7 Wright, M., Bose, D., and Chen, Y., “Probabilistic modeling of aerothermal and thermal protection material response uncertainties,” AIAA Journal, Vol. 45, No. 2, 2007, pp. 399–410.
8 Moore, R., Interval Analysis, Prentice Hall, 1966.
9 Alefeld, G. and Herzberger, J., Introduction to Interval Computations, Academic Press, New York, 1983.
10 Zhang, H., “Nondeterministic linear static finite element analysis: An interval approach,” 2005.
11 Qiu, Z., Wang, X., and Friswell, M. I., “Eigenvalue bounds of structures with uncertain-but-bounded parameters,” Journal of Sound and Vibration, Vol. 282, No. 1, 2005, pp. 297–312.
12 Chen, S. H., Lian, H. D., and Yang, X. W., “Interval eigenvalue analysis for structures with interval parameters,” Finite Elements in Analysis & Design, Vol. 39, No. 5, 2003, pp. 419–431.
13 Oberkampf, W. L., Helton, J., Joslyn, C., Wojtkiewicz, S., and Ferson, S., “Challenge Problems: Uncertainty in System Response Given Uncertain Parameters,” Reliability Engineering and System Safety, Vol. 85, 2004, pp. 11–19.
14 Ferson, S. and Hajagos, J. G., “Arithmetic with uncertain numbers: rigorous and (often) best possible answers,” Reliability Engineering and System Safety, Vol. 85, No. 1, 2004, pp. 135–152.
15 Möller, B. and Beer, M., “Engineering computation under uncertainty: Capabilities of nontraditional models,” Computers and Structures, Vol. 86, No. 10, 2008, pp. 1024–1041.
16 Pawar, P. M., Nam Jung, S., and Ronge, B. P., “Fuzzy approach for uncertainty analysis of thin walled composite beams,” Aircraft Engineering and Aerospace Technology, Vol. 84, No. 1, 2012, pp. 13–22.
17 Mottershead, J. E., Kenneth, B. J., and Khodaparast, H. H., “Propagation of structural uncertainty to linear aeroelastic stability,” Computers & Structures, Vol. 88, 2009, pp. 223–236.
18 Dai, Y., Wu, Z., and Yang, C., “Robust aeroservoelastic stability margin analysis using the structured singular value,” IEEE, 2010, pp. 643–648.
19 Bao, Y., Li, H., An, Y., and Ou, J., “Dempster-Shafer evidence theory approach to structural damage detection,” Structural Health Monitoring, Vol. 11, No. 1, 2012, pp. 13–26.
20 Wiener, N., “The Homogeneous Chaos,” American Journal of Mathematics, Vol. 60, No. 4, 1938, pp. 897–936.
21 Iagnemma, K., Crawford, J., and Kewlani, G., “A polynomial chaos approach to the analysis of vehicle dynamics under uncertainty,” Vehicle System Dynamics, Vol. 50, No. 5, 2012, pp. 749.
22 Madankan, R., Polynomial Chaos Based Method for State and Parameter Estimation, Master’s thesis, State University of New York, 2012.
23 Adhikari, S. and Kundu, A., “Transient Response of Structural Dynamic Systems with Parametric Uncertainty,” Journal of Engineering Mechanics, Vol. 140, No. 2, 2014, pp. 315–331.
24 Fagiano, L. and Khammash, M., “Simulation of stochastic systems via polynomial chaos expansions and convex optimization,” Physical Review E, Vol. 86, No. 3, 2012, pp. 036702.
25 Eldred, M. and Burkardt, J., “Comparison of Non-Intrusive Polynomial Chaos and Stochastic Collocation Methods for Uncertainty Quantification,” AIAA Journal, 2009.
26 Eldred, M., Webster, C., and Constantine, P., “Design Under Uncertainty Employing Stochastic Expansion Methods,” 2008.
27 Xiu, D., Lucor, D., Su, C., and Karniadakis, G., “Performance Evaluation of Generalized Polynomial Chaos,” 2003.
28 Xiu, D., Numerical Methods for Stochastic Computations: A Spectral Method Approach, Princeton University Press, New York, 2010.
29 Xiu, D. and Karniadakis, G. E., “The Wiener-Askey Polynomial Chaos for Stochastic Differential Equations,” SIAM Journal on Scientific Computing, Vol. 24, No. 2, 2002, pp. 619–644.
30 Hosder, S., Walters, R., and Balch, M., “Efficient Sampling for Non-Intrusive Polynomial Chaos Applications with Multiple Uncertain Inputs,” Proceedings of the 48th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, 2007.
31 Ishigami, G., Kewlani, G., and Iagnemma, K., “Predictable Mobility: A Statistical Approach for Planetary Surface Exploration Rovers in Uncertain Terrain,” IEEE Robotics & Automation Magazine, 2009, pp. 61–70.
32 Levy, D., Introduction to Numerical Analysis, University of Maryland, 2010.
33 Press, W., Teukolsky, S., Vetterling, W., and Flannery, B. P., Numerical Recipes: The Art of Scientific Computing, Cambridge University Press, 2007.
34 Gubner, J., “Gaussian Quadrature and the Eigenvalue Problem,” Tech. rep., University of Wisconsin, 2009.
35 Kærgaard, E. B., Spectral Methods for Uncertainty Quantification, Master’s thesis, Technical University of Denmark, 2013.


36 Cheema, P. and Vio, G., “A Non-Intrusive Polynomial Chaos Method to Efficiently Quantify Uncertainty in an Aircraft T-Tail,” Australasian Conference on Computational Mechanics, 2015.
37 Galambos, T. V. and Ravindra, M. K., “The basis for load and resistance factor design criteria of steel building structures,” Canadian Journal of Civil Engineering, Vol. 4, No. 2, 1977, pp. 178–189.
38 Department of Defense, “Military Handbook: Metallic Materials and Elements for Aerospace Vehicle Structures,” Tech. rep., United States of America, 1998.
39 Freedman, D., Pisani, R., and Purves, R., Statistics, W.W. Norton & Company, New York City, 3rd ed., 1998.
