23rd IMAC, Orlando, Florida, USA, February 2005, paper 200
Inverse Propagation and Identification of Random Parameters in Model Updating
C. Mares, J. E. Mottershead
Department of Engineering, University of Liverpool, Brownlow Hill, Liverpool L69 3GH, UK

M. I. Friswell
University of Bristol, Department of Aerospace Engineering, Queen's Building, University Walk, Bristol BS8 1TR, UK

ABSTRACT

The usual model updating method may be considered deterministic, since it uses measurements from a single test system to correct a nominal finite element model. There may, however, be variability in seemingly identical test structures and uncertainties in the finite element model. Variability in test structures may arise from many sources, including geometric tolerances and the manufacturing process, while modelling uncertainties may result from the use of nominal material properties, ill-defined joint stiffnesses and rigid boundary conditions. In this paper the theory of stochastic model updating, using a Monte-Carlo inverse procedure with multiple sets of experimental results, is briefly explained and applied to the propagation and identification of randomised parameters.

1. Introduction

In the usual model updating method a single finite element model is optimised by minimizing the error between predicted results and test data from a single physical structure [1, 2]. The choice of updating parameters is an important aspect of the process and should always be justified physically. Model uncertainties should be located, and parameterised so that the predictions are sensitive to the chosen parameters. Finally, the model should be validated by assessing its quality within its range of operation and its robustness to modifications in the loading configuration, design changes, coupled-structure analysis and different boundary conditions. But predictions based on a single calibration of the model parameters cannot give a measure of confidence in the capability of numerical simulations to represent the actual structure.

Uncertainty and error quantification is a two-step process. The first step is the identification of all uncertainty and error sources, whether they originate from the modelling assumptions, the numerical computations or the physical experiments. The second step is the assessment and propagation of the most significant uncertainties and errors through the modelling and simulation process to the predicted response quantities [3]. It is conceivable that, as more complex models are developed, designs may involve hundreds of random variables, and a large number of responses at many structural locations may then need to be determined in a stochastic sense [4]. Multiple realizations of an experiment (numerical or physical) lead to the concept of the meta-model [5] and to the possibility of expressing the distance between models, and of making design modifications, on the basis of statistical concepts rather than by comparison between deterministic models with nominal variables. The main methodologies for uncertainty propagation are Monte-Carlo sampling for the forward calculations and Bayesian calibration for backward
inference. Monte-Carlo simulation appears to be the only universal method that can provide accurate solutions for problems in stochastic mechanics. The meta-model is then a source for a statistical description of the problem, confidence measures, correlation with experimental data, global dependencies and the selection of dominant design variables; the only disadvantage is the large amount of computation needed for the simulations [6-10]. Possibilities for reanalysis and model reduction for large parameterised models are discussed by Balmes [11].

In this paper a stochastic model updating method is briefly described; a full description can be found in [12]. It is supposed that multiple sets of test data are available from many structures built in the same way from the same materials, but with manufacturing and material variability. A finite element model is also available, containing modelling uncertainties. Parameters are selected, together with Gaussian distributions, and are propagated through the model using Monte-Carlo analysis to provide multiple sets of predicted results. The sets of predicted results are generally nonlinear functions of the randomised parameters, which must be chosen together with their distributions using engineering judgement. It is assumed that the randomised parameters are able to account for both the variability in the physical test pieces and the uncertainty in the model.

In 'deterministic' model updating the sensitivities of a single set of test data to the chosen parameters are determined, and the same parameters are corrected iteratively using an objective function based on a truncated first-order Taylor series expansion. In the stochastic approach a linear model is fitted to the multiple sets of predicted results using multivariate multiple regression, thereby producing a parameter sensitivity matrix that takes into account the complete population of randomised values for all of the parameters together. The distance between the mean values of the experimental data and the predictions is then minimized using the gradient-and-regression approach, to obtain improved estimates of the mean values of the randomised parameters at each step. The knowledge of the randomised parameters is increased iteratively, and this allows an improved a-posteriori estimate of the covariance matrix. Hypothesis tests may be applied to determine whether or not the population of updated models has converged to the population of test structures with the same statistics, within a specified confidence interval. The theory is applied to a simulated three degree-of-freedom mass-spring system.

2. Theory

The general representation of a mechanical system includes a set of physical parameters $\mathbf{x}$ whose possible variations $\Delta\mathbf{x}$ around a nominal value $\mathbf{x}_0$ create a model of uncertainty:

$\mathbf{x} = \mathbf{x}_0 + \Delta\mathbf{x}$    (1)
These variables can be different structural parameters, such as material and geometrical properties, or aspects of the modelling related to the boundary conditions etc., and their variations may be correlated or independent. For a finite element model the global matrices are expressed as linear combinations of constant element or substructure matrices multiplied by the variable parameters, each parameter affecting all the terms in the corresponding substructure stiffness or mass matrix. More sensitive geometric parameters have been studied for the case of complex joints [13-15], or obtained by eigenvalue decomposition of the stiffness matrix and modification of its eigenvalues and eigenvectors [16, 17]. For example, a general parameterisation of the stiffness matrix may be written as

$\mathbf{K}(\mathbf{x}) = \sum_j x_j \mathbf{K}_{ej}$    (2)

with similar decompositions for the mass and damping matrices.
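As an illustration of equation (2), the following sketch (not from the paper; the element matrices are made up for a two degree-of-freedom example) assembles a global stiffness matrix as a linear combination of constant substructure matrices scaled by the updating parameters:

```python
import numpy as np

def assemble_stiffness(x, K_elem):
    """Equation (2): K(x) = sum_j x_j * K_ej, where the K_ej are constant
    element or substructure matrices and the x_j are updating parameters."""
    return sum(x_j * K_j for x_j, K_j in zip(x, K_elem))

# Hypothetical substructures on 2 DOFs: a coupling spring and a grounding spring.
K_e1 = np.array([[1.0, -1.0], [-1.0, 1.0]])  # unit spring between DOFs 1 and 2
K_e2 = np.array([[0.0, 0.0], [0.0, 1.0]])    # unit grounding spring at DOF 2
K = assemble_stiffness([2.0, 3.0], [K_e1, K_e2])
```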
The choice of updating parameters, and of variation bounds for them that can be justified physically, is one of the most important aspects of an analysis. Describing the variation of the updating parameters by a probability distribution requires considerable amounts of experimental data, which are not always available. In the case of equivalent models where the parameters are model dependent (such as generic-element eigenvalues), a possible assumption is that of a uniform probability distribution over an interval justified by design considerations.

In the Monte-Carlo process a random parameter vector obtained from the parameter distribution can be used to characterize the uncertainty in output variables of interest. The random output variables $\mathbf{y}$ can be physical quantities observed at a single location, a single physical quantity observed at $p$ locations or at $p$ time instants, or combinations of these. This process produces a posterior predictive distribution and can be used for an inverse estimate when compared with the experimental data obtained from measurements on the actual system. For the $n$ random observations of each $i$th variable the mean value is obtained as

$\bar{y}_i = \frac{1}{n}\sum_{k=1}^{n} y_{ki} = \frac{1}{n}\mathbf{y}_i^T\mathbf{e}_n; \quad \mathbf{e}_n = [1\ 1\ \cdots\ 1]^T; \quad \mathbf{e}_n^T\mathbf{e}_n = n; \quad \mathbf{y}_i = [y_{1i}\ y_{2i}\ \cdots\ y_{ni}]^T$    (3)
and collectively, in terms of the data vector,

$\bar{\mathbf{y}} = \frac{1}{n}\sum_{i=1}^{n}\mathbf{y}_i = \frac{1}{n}\,[\mathbf{y}_1\ \mathbf{y}_2\ \cdots\ \mathbf{y}_n]\,\mathbf{e}_n = \frac{1}{n}\mathbf{Y}^T\mathbf{e}_n$    (4)
The sample mean vector $\bar{\mathbf{y}}$ is an unbiased estimate of the population mean vector $\boldsymbol{\mu}$ (over all possible values), though in general not equal to it. For the observed $p$ variables the covariance matrix comprising the sample variances and covariances is defined by

$\mathbf{S} = \frac{1}{n-1}\,[\mathbf{y}_1\ \mathbf{y}_2\ \cdots\ \mathbf{y}_p]^T\left(\mathbf{I} - \frac{1}{n}\mathbf{e}_n\mathbf{e}_n^T\right)[\mathbf{y}_1\ \mathbf{y}_2\ \cdots\ \mathbf{y}_p] = \frac{1}{n-1}\left[\mathbf{Y}^T\mathbf{Y} - n\,\bar{\mathbf{y}}\bar{\mathbf{y}}^T\right]$    (5)
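Equations (3)-(5) map directly to code. The sketch below (with illustrative data, not from the paper) computes the sample mean vector of equation (4) and the covariance matrix in the second form of equation (5), and checks the result against a library estimator:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 1000, 3                       # n observations of p output variables
Y = rng.normal(size=(n, p))          # rows of Y are the observation vectors

e_n = np.ones((n, 1))
y_bar = (Y.T @ e_n / n).ravel()      # equation (4): sample mean vector

# Equation (5): S = 1/(n-1) * (Y^T Y - n * y_bar y_bar^T)
S = (Y.T @ Y - n * np.outer(y_bar, y_bar)) / (n - 1)

assert np.allclose(S, np.cov(Y, rowvar=False))  # agrees with the usual estimator
```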
The sample covariance matrix $\mathbf{S}$ is an unbiased estimator of the population covariance $\boldsymbol{\Sigma}$, though again in general not equal to it. Considering the system described by the relationship between the random input and output variables,

$\mathbf{y} = f(\mathbf{x})$    (6)
the inverse propagation of the error can be achieved by comparing the means of the output vector with the desired output. Among the various stochastic optimisation techniques, the gradient method yields a linearised version of equation (6) by using a Taylor expansion about the mean vector $\bar{\mathbf{x}}$ of the sample at an iteration $k$:

$\mathbf{y}_k \cong f(\bar{\mathbf{x}}) + \frac{df}{d\mathbf{x}}(\mathbf{x}_k - \bar{\mathbf{x}}) \cong \bar{\mathbf{y}} + \mathbf{G}(\mathbf{x}_k - \bar{\mathbf{x}}); \quad \mathbf{G} = \frac{df}{d\mathbf{x}}$    (7)

where the second equality is valid for a linearised model, when $\bar{\mathbf{y}} = f(\bar{\mathbf{x}})$. The matrix $\mathbf{G}$ is determined by a linear regression, as detailed below. If the target mean vector of the output variables is $\mathbf{y}_0$, and at an iteration $k$ the population mean $\boldsymbol{\mu}_{x,k}$ leads to a mean vector $\bar{\mathbf{y}} \neq \mathbf{y}_0$, an improved estimate $\boldsymbol{\mu}_{x,k+1}$ of the population mean of the input can be obtained by using equation (7),

$\bar{\mathbf{y}}(\boldsymbol{\mu}_{x,k+1}) \cong \bar{\mathbf{y}}(\boldsymbol{\mu}_{x,k}) + \left.\frac{df}{d\mathbf{x}}\right|_k(\boldsymbol{\mu}_{x,k+1} - \boldsymbol{\mu}_{x,k}) = \mathbf{y}_0$    (8)

$\boldsymbol{\mu}_{x,k+1} = \boldsymbol{\mu}_{x,k} + \mathbf{G}_k^{-1}\left(\mathbf{y}_0 - \bar{\mathbf{y}}(\boldsymbol{\mu}_{x,k})\right)$    (9)

with convergence of the iterative process when $\boldsymbol{\mu}_{x,k+1} \cong \boldsymbol{\mu}_{x,k}$ and $\bar{\mathbf{y}}(\boldsymbol{\mu}_{x,k}) \cong \mathbf{y}_0$. Usually the distance between the points in the output population and the target mean vector is measured by the Euclidian distance $D_E$ or the Mahalanobis distance $D_M$, defined as

$D_E^2 = (\mathbf{y}_k - \mathbf{y}_0)^T(\mathbf{y}_k - \mathbf{y}_0)$    (10)

$D_M^2 = (\mathbf{y}_k - \mathbf{y}_0)^T\mathbf{S}_y^{-1}(\mathbf{y}_k - \mathbf{y}_0)$    (11)
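A minimal sketch of the update of equation (9), iterated to convergence and monitored by the Euclidean distance of equation (10), is given below. It is illustrative only: f is a made-up map, a finite-difference gradient stands in for the regression-based sensitivity matrix described later, and there are as many outputs as parameters so that G is square (a pseudo-inverse would replace the inverse otherwise):

```python
import numpy as np

def f(x):                              # made-up nonlinear input-output map
    return np.array([x[0]**2 + x[1], x[0] * x[1]])

def sensitivity(f, x, h=1e-6):
    """Central finite differences, standing in for the regression estimate of G."""
    m, q = len(f(x)), len(x)
    G = np.zeros((m, q))
    for j in range(q):
        dx = np.zeros(q); dx[j] = h
        G[:, j] = (f(x + dx) - f(x - dx)) / (2 * h)
    return G

y0 = np.array([3.0, 2.0])              # target mean output
mu = np.array([2.5, 0.5])              # initial estimate of the parameter mean
for k in range(50):
    y_bar = f(mu)
    if np.linalg.norm(y_bar - y0) < 1e-10:        # D_E of equation (10)
        break
    G = sensitivity(f, mu)
    mu = mu + np.linalg.solve(G, y0 - y_bar)      # equation (9)
```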
The Mahalanobis distance can be considered to be a normalized Euclidian distance. The matrix $\mathbf{S}_y^{-1}$ is usually taken to be diagonal, by eliminating the covariance terms. $D_M$ is unitless, owing to the scaling by the covariance matrix, and therefore expresses distance as a number of standard deviations.

The variances and covariances of the parameters, whether directly measurable or not, should be updated together with the mean values. If the a-priori knowledge about a parameter is sparse, the corresponding variance will be large; during the updating process the resulting a-posteriori variance should decrease as the amount of knowledge about the parameter increases. If the prior and posterior variances are identical, the parameter is unresolved. The predicted values cannot be identical to the true 'observed' values, because of experimental variability and modelling uncertainties. If the model uncertainties and observed test variabilities have Gaussian distributions, then the modelling and experimental-error covariances for the forward problem combine by addition, even in the case of nonlinearity. Then, at each iteration of the updating process, the a-posteriori covariance matrix can be computed as [18]

$\mathbf{S}_{xx} = \mathbf{S}_{xx0} - \mathbf{S}_{xx0}\mathbf{G}^T\left(\mathbf{S}_M + \mathbf{G}\,\mathbf{S}_{xx0}\mathbf{G}^T\right)^{-1}\mathbf{G}\,\mathbf{S}_{xx0}$    (12)

where $\mathbf{S}_M$ is the error covariance matrix describing both model uncertainty and test variability, $\mathbf{S}_{xx0}$ is the a-priori covariance matrix of the updating parameters and $\mathbf{G}$ is the sensitivity matrix at each iteration. It is of interest that the above covariance correction was also used in [19], where a minimum-variance updating method was developed, but in that paper $\mathbf{S}_M$ was estimated only from the test measurement errors. Covariance operators (prior and posterior) are generally not diagonal but 'band diagonal'; the covariances between neighbouring parameters are not null and the parameters are correlated [20].
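Equation (12) is a direct matrix computation; the sketch below assumes conformable NumPy arrays, with names chosen for illustration:

```python
import numpy as np

def posterior_covariance(S_xx0, G, S_M):
    """Equation (12): a-posteriori parameter covariance from the prior
    covariance S_xx0, the sensitivity matrix G and the error covariance S_M."""
    A = S_M + G @ S_xx0 @ G.T
    return S_xx0 - S_xx0 @ G.T @ np.linalg.solve(A, G @ S_xx0)
```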
A multivariate multiple regression analysis is carried out at each iteration in order to obtain a linearised relationship between the input, or 'predictive', variables $\mathbf{X} = [\mathbf{x}_1\ \mathbf{x}_2\ \cdots\ \mathbf{x}_q]$ and the output, or 'criterion', variables $\mathbf{Y} = [\mathbf{y}_1\ \mathbf{y}_2\ \cdots\ \mathbf{y}_p]$ as

$\mathbf{Y} = \mathbf{e}_n\mathbf{a}^T + \mathbf{X}\mathbf{B} + \boldsymbol{\Xi}; \quad \mathbf{Y} \in \Re^{n \times p},\ \mathbf{B} \in \Re^{q \times p},\ \mathbf{X} \in \Re^{n \times q},\ \mathbf{a} \in \Re^{p}$    (13)

$\mathbf{a}^T = [a_1\ a_2\ \cdots\ a_p], \quad \mathbf{B} = [\mathbf{b}_1\ \mathbf{b}_2\ \cdots\ \mathbf{b}_p], \quad \boldsymbol{\Xi} = [\boldsymbol{\varepsilon}_1\ \boldsymbol{\varepsilon}_2\ \cdots\ \boldsymbol{\varepsilon}_p]$    (14)

$\mathbf{e}_n = [1\ 1\ \cdots\ 1]^T$    (15)
where the iteration index is omitted for clarity. Each row in the model describes an output vector as a linear function of the corresponding input vector, corrected by a random deviation. The following assumptions are made in order to obtain the coefficients:

$E(\boldsymbol{\Xi}) = \mathbf{0}, \quad \mathrm{cov}(\mathbf{y}_i) = \boldsymbol{\Sigma}, \quad \mathrm{cov}(\mathbf{y}_i, \mathbf{y}_j) = \mathbf{0}$    (16)

where $\mathbf{y}_i^T$ is the $i$th row of $\mathbf{Y}$. Thus the observation vectors are independent and have the same covariance matrix; that is, the $y$'s within an observation vector are assumed to be correlated with each other but independent of the $y$'s in any other observation vector [21, 22]. Application of a least-squares technique for each variable $\mathbf{y}_i$ individually leads to the estimates $\hat{\mathbf{a}}$, $\hat{\mathbf{B}}$ of the linear model, which have the following properties: (a) the estimator $\hat{\mathbf{B}}$ is unbiased, so that repeated sampling of the same population would give $\mathbf{B}$ as the average value; (b) the least-squares estimates $\hat{\beta}_{ij}$ in $\hat{\mathbf{B}}$ have minimum variance among all possible linear unbiased estimators, without requiring normality of the $y$'s (the Gauss-Markov theorem); and (c) all the $\hat{\beta}_{ij}$ are intercorrelated, since the relationship of the $x$'s with each other affects the relationship of the $\beta$'s with each other and, because the $y$'s are correlated, the $\beta$'s in different columns are correlated. A model corrected for means is obtained [21, 22] by

$\hat{\mathbf{B}} = \mathbf{S}_{xx}^{-1}\mathbf{S}_{xy}, \quad \hat{\mathbf{a}}^T = \frac{1}{n}\,\mathbf{e}_n^T\left[\mathbf{Y} - \mathbf{X}\hat{\mathbf{B}}\right]$    (17)
The final model resulting from the multivariate multiple regression may then be written in the form

$\hat{\mathbf{Y}} = \mathbf{e}_n\hat{\mathbf{a}}^T + \mathbf{X}\hat{\mathbf{B}}; \quad \hat{\mathbf{Y}} \in \Re^{n \times p},\ \hat{\mathbf{B}} \in \Re^{q \times p},\ \mathbf{X} \in \Re^{n \times q}$    (18)
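The estimates of equations (17)-(18) reduce to a few lines of code. The sketch below is illustrative (the names are not from the paper) and forms the slope estimate from the sample covariance matrices of the centred data:

```python
import numpy as np

def multivariate_regression(X, Y):
    """Fit Y ~ e_n a^T + X B (equation 13) by least squares and return the
    mean-corrected estimates of equation (17)."""
    n = X.shape[0]
    Xc = X - X.mean(axis=0)               # centred predictors
    Yc = Y - Y.mean(axis=0)               # centred criteria
    S_xx = Xc.T @ Xc / (n - 1)            # sample covariance of the inputs
    S_xy = Xc.T @ Yc / (n - 1)            # input-output cross-covariance
    B_hat = np.linalg.solve(S_xx, S_xy)   # B_hat = S_xx^{-1} S_xy
    a_hat = (Y - X @ B_hat).mean(axis=0)  # a_hat^T = (1/n) e_n^T (Y - X B_hat)
    return a_hat, B_hat

# Fitted meta-model, equation (18):
# Y_hat = np.outer(np.ones(X.shape[0]), a_hat) + X @ B_hat
```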
A more complete description of the theory is given in [12], which includes sections on hypothesis testing to determine whether the meta-model has converged upon the collection of test structures with the same statistics, within a specified confidence limit. It also describes data reduction methods that become essential for numerical stability when large numbers of variables are considered.

3. Three Degree-of-Freedom Mass-Spring Example

The three degree-of-freedom mass-spring system shown in Figure 1 is used to illustrate the application of the method. The nominal values of the parameters for the experimental system are $m_i = 1.0$ kg ($i = 1, 2, 3$), $k_i = 1.0$ N/m ($i = 1, \ldots, 5$) and $k_6 = 3.0$ N/m. The erroneous random parameters have Gaussian distributions with mean values $k_1 = k_2 = k_5 = 2.0$ N/m and standard deviations $\sigma_1 = \sigma_2 = \sigma_5 = 0.20$ N/m; the values of the standard deviations assumed for the analysis are $\sigma_1 = \sigma_2 = \sigma_5 = 0.30$ N/m. It is considered that 10 experimental systems are measured and used for reference, while the analytical set consists of a 'cloud' of 1000 samples determined by convergence of a $T^2$ statistic.
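Figure 1 is not reproduced in this text version. A layout in which k1, k2 and k3 ground the three masses, k4 and k5 couple m1-m2 and m2-m3, and k6 couples m1-m3 reproduces the 'Test' natural frequencies of Table 1, so the sketch below assumes that connectivity; the Monte-Carlo 'cloud' then follows by sampling the erroneous priors:

```python
import numpy as np

def natural_frequencies(k, m=(1.0, 1.0, 1.0)):
    """Undamped natural frequencies (Hz) of the assumed 3-DOF layout:
    k1, k2, k3 ground m1, m2, m3; k4 joins m1-m2; k5 joins m2-m3; k6 joins m1-m3."""
    k1, k2, k3, k4, k5, k6 = k
    K = np.array([[k1 + k4 + k6, -k4,           -k6          ],
                  [-k4,           k2 + k4 + k5, -k5          ],
                  [-k6,          -k5,            k3 + k5 + k6]])
    L = np.diag(1.0 / np.sqrt(m))               # M^(-1/2) for the diagonal mass matrix
    lam = np.linalg.eigvalsh(L @ K @ L)         # generalised eigenvalues of (K, M)
    return np.sqrt(lam) / (2.0 * np.pi)

print(natural_frequencies([1, 1, 1, 1, 1, 3]))  # ~ [0.16, 0.32, 0.45] Hz, as in Table 1

# Analytical 'cloud': k1, k2 and k5 drawn from the erroneous Gaussian priors.
rng = np.random.default_rng(0)
cloud = np.array([natural_frequencies([rng.normal(2.0, 0.3), rng.normal(2.0, 0.3),
                                       1.0, 1.0, rng.normal(2.0, 0.3), 3.0])
                  for _ in range(1000)])
```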
Figure 1. Three degree-of-freedom mass-spring example

The Euclidian distance and the natural frequency errors are shown in Figure 2 and Table 1. Convergence of the mean and standard deviation of the updated random parameters is shown in Figure 3, with initial and final values, after twelve iterations, given in Tables 2 and 3. It can be seen that, as expected, the estimated mean and standard deviation of the experimental sample are not exactly the same as those of the distribution used to produce it. The updated parameters therefore converge upon the statistics of the experimental sample rather than upon the underlying distribution. The size of the experimental sample is small, but this is likely to represent the availability of physical systems for testing quite faithfully. Of course, if more experimental samples were available, a better approximation of the means and standard deviations of the natural frequencies would be obtained.
Table 1. Mean Natural Frequency in Hz (% errors in parentheses)

Mode    Test    Initial          Final
1       0.16    0.19 (-20.03)    0.16 (-0.2)
2       0.32    0.38 (-20.63)    0.32 (0.0)
3       0.45    0.48 (-6.70)     0.45 (0.1)

Table 2. Parameter Mean Estimates in N/m (% errors in parentheses)

Parameter    Test    Initial         Final
k1           1.08    2.0 (85.18)     1.09 (1.14)
k2           1.09    2.0 (83.48)     1.07 (-1.53)
k5           0.96    2.0 (108.33)    0.95 (-0.89)

Table 3. Standard Deviation Estimates in N/m (% errors in parentheses)

Parameter    Test     Initial      Final
k1           0.288    0.30 (50)    0.204 (2.03)
k2           0.130    0.30 (50)    0.226 (13.31)
k5           0.241    0.30 (50)    0.181 (-9.03)
[Figure 2: two panels, 'Euclidian Distance' and 'Frequency variation', plotted against iteration number 0-12.]

Figure 2. Euclidian Distance and Mean Natural Frequency Errors

[Figure 3: two panels, 'Parameter variation' and 'STD variation', showing k1, k2 and k5 against iteration number 0-12.]

Figure 3. Parameter Mean and Standard Deviation
[Figure 4: scatter clouds of analytic and experimental samples in the (f1, f2) plane, with their scatter ellipses, at iteration 1 and at iteration 12.]

Figure 4. Initial and Final Scatter Ellipses
Figure 4 shows how the scatter ellipse of predicted results converges upon the experimental scatter ellipse in the plane of the first two natural frequencies. The cloud of yellow points gives the predicted natural frequencies, with the corresponding ellipse shown in green; the experimental points and ellipse are shown in red. It can be seen that after 12 iterations the two scatter ellipses overlay each other quite closely. They have similar orientations, but there is a noticeable difference in their sizes. There are a number of possible explanations. (i) The experimental ellipse is estimated from only ten samples and, as can be seen from Table 3, it differs from the underlying distribution from which the experimental samples were drawn (with $\sigma_i = 0.2$); in fact the cloud of analytical samples converges better onto the underlying distribution than onto the ellipse from the ten experiments. (ii) The covariances are determined a-posteriori and do not appear in the objective function. (iii) The covariance correction is based on a linearised gradient at each step in the design space and depends upon the quality of the estimates $\mathbf{S}_{xx}$ and $\mathbf{S}_{xy}$. (iv) The gradient method may converge to a local minimum. (v) The number of samples in the analytical population is determined according to convergence of the $T^2$ statistic; a population of 1000 was taken, but convergence had still not been completely achieved.

4. Conclusions

A stochastic model updating method is described, using inverse Monte-Carlo propagation of physical-structure variability and model uncertainty, together with multivariate multiple regression for optimisation by the gradient method. A simulated example is used to demonstrate how the method may be applied.

Acknowledgements

The research reported in this paper is supported by EPSRC grants GR/R26818 and GR/R34936.

References
1. Mottershead, J. E. and Friswell, M. I., 1993. Model updating in structural dynamics: a survey. Journal of Sound and Vibration, 162(2), 347-375.
2. Friswell, M. I. and Mottershead, J. E., 1995. Finite Element Model Updating in Structural Dynamics. Dordrecht, Kluwer Academic Press.
3. Hemez, F. M., Doebling, S. W. and Anderson, M. C., 2004. A brief tutorial on verification and validation. International Modal Analysis Conference, Dearborn, USA.
4. Alvin, K. F., Oberkampf, W. L., Diegert, K. V. and Rutherford, B. M., 1998. Uncertainty quantification in structural dynamics: a new paradigm for model validation. International Modal Analysis Conference, USA.
5. Marczyk, J., 1997. Meta-computing and computational stochastic mechanics. In Computational Stochastic Mechanics in a Meta-Computing Perspective (ed. J. Marczyk), CIMNE, Barcelona.
6. Schueller, G. I. (ed.), 1997. A state-of-the-art report on computational stochastic mechanics. Probabilistic Engineering Mechanics, 12(4), 197-321.
7. Doltsinis, I., Rau, F. and Werner, M., 1999. Analysis of random systems. In Stochastic Analysis of Multivariate Systems in Computational Mechanics and Engineering (ed. I. Doltsinis), CIMNE, Barcelona, 9-159.
8. Kleiber, M. (ed.), 1999. Computational stochastic mechanics. Computer Methods in Applied Mechanics and Engineering, 168(1-4), 1-353.
9. Marczyk, J., 1999. Principles of Simulation-Based Computer-Aided Engineering. FIM Publications, Barcelona.
10. Shinozuka, M., 1987. Structural response variability. Journal of Engineering Mechanics, ASCE, 113(6), 825-842.
11. Balmes, E., 2004. Uncertainty error propagation. International Modal Analysis Conference, Dearborn, USA.
12. Mares, C., Mottershead, J. E. and Friswell, M. I. Stochastic model updating: part 1 - theory and simulated examples. Mechanical Systems and Signal Processing, submitted.
13. Mottershead, J. E., Friswell, M. I., Ng, G. H. and Brandon, J. A., 1996. Geometric parameters for finite element updating of joints and constraints. Mechanical Systems and Signal Processing, 10, 171-182.
14. Mottershead, J. E., Mares, C., Friswell, M. I. and James, S., 2000. Selection and updating of parameters for an aluminium space-frame model. Mechanical Systems and Signal Processing, 14(6), 923-944.
15. Mares, C., Mottershead, J. E. and Friswell, M. I., 2003. Results obtained by minimizing natural-frequency errors and using physical reasoning. Mechanical Systems and Signal Processing, 17(1), 39-46.
16. Gladwell, G. M. and Ahmadian, H., 1996. Generic elements for finite element updating. Mechanical Systems and Signal Processing, 9, 601-614.
17. Ahmadian, H., Gladwell, G. M. and Ismail, F., 1997. Parameter strategies in finite element updating. Journal of Vibration and Acoustics, 119, 37-45.
18. Tarantola, A. and Valette, B., 1982. Generalized nonlinear inverse problems solved using the least squares criterion. Reviews of Geophysics and Space Physics, 20(2), 219-232.
19. Collins, J. D., Hart, G. C., Hasselman, T. K. and Kennedy, B., 1974. Statistical identification of structures. AIAA Journal, 12(2), 185-190.
20. Backus, G. and Gilbert, F., 1970. Uniqueness in the inversion of inaccurate gross earth data. Philosophical Transactions of the Royal Society of London, Series A, 266, 123-192.
21. Rencher, A., 1995. Methods of Multivariate Analysis. John Wiley and Sons, Inc.
22. Rencher, A., 1998. Methods of Statistical Inference and Applications. John Wiley and Sons, Inc.