A Case Study in Model Improvement For Vehicle Crashworthiness Simulation

Timothy Hasselman & Keng Yap

Chin-Hsu Lin & John Cafeo

ACTA Incorporated 2790 Skypark Drive, Suite 310 Torrance, CA 90505, USA

General Motors R&D Center 30500 Mound Road Warren, MI 48090, USA

Abstract
This paper presents a case study of experimentally-based “model improvement” (also known as “model updating”) for numerical simulations of vehicle crashworthiness testing. The paper details the steps involved in the model updating process, including the detection, identification and correction of systematic modeling and/or experimental errors, parameter selection and subsequent Bayesian parameter estimation, and the statistical qualification of those estimates. While parameter estimation and qualification are straightforward computational procedures, the detection, identification and correction of systematic errors is not; it is fundamentally intuitive and requires the expertise and experience of both the modeler and the experimentalist. This model updating effort was successful in identifying two systematic modeling deficiencies. The identification and correction of these deficiencies is expected to benefit future crashworthiness simulation efforts at GM.

Nomenclature

X               Response matrix
exp             Subscript denoting experimental results
mod             Subscript denoting analytical results
sur             Subscript denoting surrogate model
U               Left singular vectors
$\Sigma$        Singular values
V               Right singular vectors
$S_{XX}$        Covariance matrix of response
$F_{XX}$        Fisher information matrix of response
$S_{\theta\theta}$   Covariance matrix of parameters
$F_{\theta\theta}$   Fisher information matrix of parameters
$\vec{X}$       Vectorized response

1. Introduction
General Motors (GM) has a large quantity of time-history data from vehicle crash tests and corresponding numerical simulations for a midsize sedan. Earlier work between GM and ACTA led to generic uncertainty quantification and predictive accuracy assessment for this model for a range of impact conditions including frontal, left and right-angle impacts at speeds ranging from less than 20 to nearly 60 km/hr (approximately 12 to 37 mph) [1]. The initial model was in reasonably good agreement with experimental data and the methods used to construct it represented GM’s best modeling practices [2]. Figure 1 illustrates the nonlinear finite element model used by GM in these simulations.

The goal of the present effort was to learn whether the model could be further improved, particularly during the early response times that are critical for the design of the safety restraint system, by the application of formal model updating methods and tools [3]. The project was successful in identifying and improving two critical modeling assumptions, both of which should lead to significant improvement in the accuracy of future crash simulation models. Model improvement, or experimentally-based “model updating,” generally involves several distinct and important steps including (a) an initial assessment of the predictive accuracy of the model relative to accuracy requirements and past experience for generically similar models, (b) an assessment of experimental uncertainty for the measured response features of interest, (c) the identification and correction of any systematic errors, either in the model or the experimental data being used as the reference for assessing predictive accuracy, and (d) the identification and estimation of those model parameters whose values are not known with a high degree of certainty, and to which the predicted response features of interest are sensitive. As a practical matter, steps (c) and (d) are often performed iteratively: both the model and experimental data are initially screened for any identifiable systematic errors which are corrected at the outset. Sensitive parameters are then identified and estimated using the experimental data. If these estimates do not improve the model and/or are judged to be unacceptable, the model and test data are further examined for possible systematic errors, which can lead to another round of parameter estimation if the predictive accuracy of the updated model is not satisfactory. 
Large finite element or finite difference models, especially nonlinear models, present an additional complication in that they are often too costly to run in-line with parameter estimation and must be replaced with some kind of model approximation, i.e. a surrogate model. This adds an inner loop to the process where the model approximation must be periodically updated until parameter estimates have converged. The remainder of the paper documents a joint effort by GM and ACTA to improve the GM midsize vehicle model in the initial 25 milliseconds of response after impact. Structural response during this time is critical for the design of the safety restraint system.
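The nested iteration described above, with an outer loop over parameter estimates and an inner loop in which the surrogate is periodically refit, can be sketched as follows. This is a schematic only: the callables `run_full_model`, `build_surrogate`, and `estimate_parameters` are placeholders standing in for the LS-DYNA runs, surrogate construction, and Bayesian estimation described later in the paper.

```python
import numpy as np

def update_model(run_full_model, build_surrogate, estimate_parameters,
                 theta0, x_exp, tol=1e-3, max_outer=10):
    """Sketch of the iterative updating loop: refit the surrogate around the
    current estimate, re-estimate the parameters against the test data, and
    repeat until the estimates converge.  All callables are placeholders."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(max_outer):
        # Inner loop: rebuild the fast-running approximation (costly FE runs).
        surrogate = build_surrogate(run_full_model, theta)
        # Estimate parameters by minimizing the cost function on the surrogate.
        theta_new = estimate_parameters(surrogate, x_exp, theta)
        if np.linalg.norm(theta_new - theta) < tol * (1.0 + np.linalg.norm(theta)):
            return theta_new
        theta = theta_new
    return theta
```

If the estimates fail to converge or are judged unacceptable, the loop is abandoned and the model and data are re-examined for systematic errors, as the text describes.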

Figure 1. LS-DYNA Crash Simulation

2. Removal of Systematic Modeling Deficiencies
Parameter estimation is only meaningful to the extent that the physical system being modeled has been correctly parameterized (i.e. that the model is correct) and that the experimental data used for parameter estimation are also correct. It is therefore important that both the model and the test data be scrutinized for possible systematic errors before beginning parameter estimation. In this case, six replicate tests were conducted for the 30 mph straight frontal impact condition. Acceleration at the center of the radiator tie-bar (integrated to obtain velocity) during the first 25 ms after impact was selected as a critical measure of response for input to the airbag triggering algorithm. As shown in Figure 2, all six velocity measurements were in close agreement during this period, making systematic error in the test data unlikely. The model revealed some distinctly different behavior, however. In particular, the dip at 10 ms consistently shown by the data was not present in the simulation, and the model lagged the data in the steep velocity drop between 15 and 20 ms.

2.1 Energy Absorption
The build-up of internal strain energy during a 30 mph full frontal impact was evaluated as a basis for selecting a set of candidate parameters for model updating. Structural components including the mid-rail, bumper beam, upper rail, cradle, toe pan, and rear mid-rail were considered. The time history of the internal energy build-up in each of these components is plotted in Figure 3 for the 30 mph frontal impact simulation. These results show that the bumper beam and mid-rail are the only components that absorb significant energy during the first 25 ms.


Figure 2. Comparing Nominal Responses to Replicate Tests

Figure 3. Energy Absorption of Individual Components

2.2 Bumper Beam Material Model
The velocity dip at 10 ms was examined first. Earlier sensitivity studies by GM, in which the bumper beam and mid-rail thicknesses and stress-strain curve amplitude scaling were varied, showed no appreciable velocity change in the 10 ms region. Thicknesses were varied by ±5%, while stress-strain amplitude scaling was varied by ±10%. It was reasoned that the bumper beam was most likely to affect velocity response at 10 ms, since the energy absorbed by the bumper is nearly twice that of the mid-rail at this time. This sedan bumper is made of a thermoplastic material that exhibits a considerable drop in stress level after passing the yield stress and reaching the ultimate stress. This stress hump characteristic was not captured in the original LS-DYNA model. In modeling the stress-strain curve, an artificial drop after reaching ultimate stress was imposed in the simulation, acknowledging that the coupon test results and the strain rate effect had not been considered. The simulation in Figure 4 shows marked improvement of the upper radiator center (RADC) velocity trace in the first 20 ms, as both the drop and rise tend to agree with the test velocity curve. Response at the sensor diagnostic module (SDM), also used by the safety restraint system, did not change appreciably. To model the bumper beam material more accurately, the actual manufacturer’s stress-strain properties were incorporated into the LS-DYNA model. The resulting “dip” at 10 ms was not quite as prominent as the one shown in Figure 4, but nevertheless showed significant improvement. This was the only “systematic error” (in this case a modeling deficiency) identified at the outset of the model updating effort. The hope was that the velocity lag in the model simulation in the 15-20 ms time interval would be resolved by parameter estimation.


Figure 4. Velocity Improvement after Modifying the Bumper Material Model

3. Parameter Selection and Screening

Parameter sensitivity and effects analyses were then performed to determine which parameters, when varied over their respective ranges of plausible values, had the greatest effect on the response quantities of interest.

3.1 Parameter Sensitivity Analysis
Parameter sensitivity analysis was conducted to study the sensitivity of the Bayesian cost function to parameter perturbations, since minimizing this scalar function is the goal of Bayesian parameter estimation. The ratio of peak (or ultimate) stress to yield stress was selected as the first parameter and varied from 9% to 18%. Other parameter variations included the thickness and stress-strain curve of the bumper and mid-rail. As before, thicknesses were varied by ±5%, while the stress-strain curves were scaled up and down by ±10%. The normalized effects of one-at-a-time parameter variations on the Bayesian cost function were computed as the ratio of the perturbed cost function to that corresponding to the nominal parameter values, where parameters numbered 1 through 5 are:

1. Bumper stress-strain peak variation
2. Bumper thickness
3. Bumper stress-strain curve times a variable coefficient
4. Mid-rail thickness
5. Mid-rail stress-strain curve times a variable coefficient

The Bayesian cost function is defined in Section 4. For sensitivity analysis it was normalized to a cost function of unity at the nominal parameter values, as illustrated by the bar chart shown in Figure 5, where the sensitivity of the velocity-based Bayesian cost function to bumper stress-strain peak variations is seen to be negligible. Varying the other four parameters (except for varying the mid-rail stress-strain curve up) has the undesirable effect of increasing the Bayesian cost function, suggesting that the minimum may lie somewhere near the nominal model. The bumper stress-strain peak was ignored in the subsequent parameter effects analysis.
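The one-at-a-time sensitivity ratios plotted in Figure 5 can be sketched as follows. This is illustrative only: `cost` stands in for the Bayesian cost function (defined in Section 4), and the fractional perturbations correspond to variations such as the ±5% thickness and ±10% stress-strain scalings described above.

```python
import numpy as np

def normalized_sensitivities(cost, theta_nom, perturbations):
    """One-at-a-time sensitivity: perturb each parameter down/up while the
    others stay nominal, and report the cost as a ratio to the nominal cost
    (so the nominal model scores exactly 1).  `cost` is a placeholder for
    the (surrogate-based) Bayesian cost function."""
    theta_nom = np.asarray(theta_nom, dtype=float)
    j_nom = cost(theta_nom)
    table = {}
    for i, (lo, hi) in enumerate(perturbations):  # e.g. (-0.05, +0.05) for ±5%
        row = []
        for frac in (lo, hi):
            theta = theta_nom.copy()
            theta[i] *= (1.0 + frac)              # perturb parameter i only
            row.append(cost(theta) / j_nom)
        table[i] = {"down": row[0], "up": row[1]}
    return table
```

Ratios above 1 correspond to the “undesirable effect of increasing the Bayesian cost function” noted in the text; parameters whose ratios stay near 1 in both directions (like the bumper stress-strain peak) can be screened out.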

3.2 Primary and Higher Order Effects Analysis
Sensitivity analysis reduced the set of candidate parameters to four: bumper thickness, bumper stress-strain, mid-rail thickness, and mid-rail stress-strain, renumbered 1 through 4. The primary effects of these four parameters are represented by the capital letters A, B, C, and D, where for example the primary effect, A, is calculated by evaluating and summing the Bayesian cost functions for all parameter combinations involving the upper limit of the parameter, a, less the sum of Bayesian cost functions for all combinations of parameters involving the lower limit of a [4]. Parameter effects analysis was performed based on a full factorial design, i.e. all possible combinations of low and high parameter values normalized to -1 and +1, respectively, leading to a four-dimensional hypercube. Figure 6 shows the primary and higher order parameter interaction effects plotted on a normal probability scale against their corresponding ranks. The rationale behind this analysis is that effects which fall close to a normal distribution may be considered random, whereas those departing significantly from the normal distribution reveal a significant bias, i.e. sensitivity to the parameter variation representing a particular primary or interaction effect. Figure 6 shows both primary and interaction effects of parameters a and b to be significant. The CD interaction effect is shown to be marginally significant.
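The primary-effect computation from the two-level full factorial design can be sketched as below. The effect definition follows the text (sum of costs over corner points at the upper parameter level minus the sum at the lower level); some references normalize by half the number of runs, which would not change the ranking. `cost` is a placeholder for the Bayesian cost evaluated at a corner of the normalized hypercube.

```python
import itertools
import numpy as np

def primary_effects(cost, n_params):
    """Primary effects from a two-level full factorial design: for each
    parameter, the sum of costs over all corners of the normalized
    [-1, +1]^n hypercube with that parameter at +1, minus the sum with
    it at -1.  `cost` is a placeholder for the Bayesian cost function."""
    corners = np.array(list(itertools.product((-1.0, 1.0), repeat=n_params)))
    costs = np.array([cost(c) for c in corners])
    effects = {}
    for i in range(n_params):
        effects[i] = costs[corners[:, i] > 0].sum() - costs[corners[:, i] < 0].sum()
    return effects
```

Interaction effects (AB, CD, etc.) follow the same pattern with the product of the corresponding corner coordinates used as the sign; effects that fall on the straight line of a normal probability plot are then treated as noise.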

Figure 5. Normalized Effect of Parameter Variations on Velocity-based Bayesian Cost Function

Figure 6. Parameter Effects on Velocity-based Bayesian Cost Function

4. Bayesian Parameter Estimation

ACTA’s Bayesian parameter estimation approach involves minimizing a cost function to obtain an optimal set of model parameters based on the available data. A principal components (PC) based fast-running surrogate model is then developed using polynomial functions. Model updating is performed by minimizing the Bayesian cost function based on the surrogate model, and verifying the results by rerunning LS-DYNA for the estimated parameter values.

4.1 Bayesian Cost Function
Let $X_{\exp_i}(t)$ and $X_{\mathrm{mod}_i}(t)$ denote the $i$th dependent variables (such as velocity or acceleration) derived from experimental testing and analytical model simulation as functions of the independent variable, $t$. By discretizing the independent variable $t$, we can put the dependent variables in the following matrix form:

$$
X_{\exp} = \begin{bmatrix} X_{\exp_1}(t_1) & \cdots & X_{\exp_n}(t_1) \\ \vdots & \ddots & \vdots \\ X_{\exp_1}(t_m) & \cdots & X_{\exp_n}(t_m) \end{bmatrix}, \qquad (1)
$$

$$
X_{\mathrm{mod}} = \begin{bmatrix} X_{\mathrm{mod}_1}(t_1) & \cdots & X_{\mathrm{mod}_n}(t_1) \\ \vdots & \ddots & \vdots \\ X_{\mathrm{mod}_1}(t_m) & \cdots & X_{\mathrm{mod}_n}(t_m) \end{bmatrix}. \qquad (2)
$$
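The matrices of Eqs. (1) and (2) are later vectorized column-wise to form the response vectors used in the cost function; in NumPy this is a Fortran-order flatten. The data below are illustrative.

```python
import numpy as np

# Hypothetical response matrix: m = 3 time samples, n = 2 response channels.
X_exp = np.array([[1.0, 4.0],
                  [2.0, 5.0],
                  [3.0, 6.0]])

# Column-wise vectorization (concatenate the columns of the matrix),
# as used to form the response vectors in the Bayesian cost function.
x_vec = X_exp.flatten(order="F")
print(x_vec)   # [1. 2. 3. 4. 5. 6.]
```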

For optimal model updating, it is necessary to define a scalar cost function to measure the discrepancy between an updated model and test responses, while taking into account the “quality” of the updated model. The following

Bayesian cost function, based on a weighted least-squares formulation in which the weighting matrices are the inverse covariance matrices of measurement and modeling uncertainty, is ideal for this purpose:

$$
J = \left( \vec{X}_{\exp} - \vec{X}_{\mathrm{mod}} \right)^T F_{XX} \left( \vec{X}_{\exp} - \vec{X}_{\mathrm{mod}} \right) + \left( \theta_{\mathrm{ref}} - \theta_{\mathrm{mod}} \right)^T F_{\theta\theta} \left( \theta_{\mathrm{ref}} - \theta_{\mathrm{mod}} \right). \qquad (3)
$$

The cost function is a sum of two non-negative terms. The first term penalizes the error between the model and test responses. The experimental and model response vectors $\vec{X}_{\exp}$ and $\vec{X}_{\mathrm{mod}}$ are obtained by vectorizing the matrices $X_{\exp}$ and $X_{\mathrm{mod}}$ column-wise, i.e. concatenating the columns of each matrix to form one long vector. The second term penalizes the variation of parameters from their prior values. The currently estimated parameter vector $\theta_{\mathrm{mod}}$ is used to compute the response $X_{\mathrm{mod}}$. The reference parameter vector $\theta_{\mathrm{ref}}$ corresponds to the prior model response. In a one-level model updating procedure, $\theta_{\mathrm{ref}}$ is the nominal parameter vector. In a hierarchical or multi-level model updating procedure, $\theta_{\mathrm{ref}}$ represents the updated parameter values resulting from the previous level of model updating. The response error in the first term and the parameter variation in the second term are weighted by the Fisher information matrices $F_{XX}$ and $F_{\theta\theta}$ respectively, where $F_{XX}$ is the inverse of the covariance matrix of measured response computed from replicate experiments and $F_{\theta\theta}$ is the inverse covariance matrix of the parameter estimates, i.e.,

$$
F_{XX} = S_{XX}^{-1}, \qquad F_{\theta\theta} = S_{\theta\theta}^{-1}. \qquad (4a,b)
$$
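The cost function of Eq. (3), with the Fisher information weighting of Eqs. (4a,b), can be sketched as follows. Names are illustrative, and the pseudo-inverse is used in place of the inverse to cover singular covariance matrices.

```python
import numpy as np

def bayesian_cost(x_exp, x_mod, S_xx, theta_ref, theta_mod, S_tt):
    """Eq. (3): weighted least-squares cost with Fisher information
    (inverse covariance) weighting.  The Moore-Penrose pseudo-inverse
    stands in for the inverse when a covariance matrix is singular."""
    F_xx = np.linalg.pinv(S_xx)   # Eq. (4a): response Fisher information
    F_tt = np.linalg.pinv(S_tt)   # Eq. (4b): parameter Fisher information
    dx = x_exp - x_mod            # response error (vectorized responses)
    dt = theta_ref - theta_mod    # parameter variation from the prior
    return float(dx @ F_xx @ dx + dt @ F_tt @ dt)
```

Both terms are non-negative quadratic forms, so J is bounded below by zero and is minimized by trading response agreement against parameter movement away from the prior.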

The Moore-Penrose pseudo-inverse is implied whenever a covariance matrix is singular. In our case, the covariance matrix $S_{XX}$ was computed over six replicate tests of the experiment used to update the model, i.e. full frontal impact at 30 mph. Only the response at the radiator center (RADC) from zero to 25 ms was used for model updating. In this case the matrices in Equations (1) and (2) are just vectors, so that

$$
\vec{X}_{\mathrm{mod}} = X_{\mathrm{mod}}, \qquad \vec{X}_{\exp} = X_{\exp}, \qquad (5a,b)
$$

i.e. n = 1 in Eqs. (1) and (2). The covariance matrix $S_{XX}$ was obtained from the PC-based covariance matrix of generic experimental uncertainty by linear covariance propagation, where $X_{\exp}$ was taken to be the mean of the experimental response from the 30 mph straight frontal impact condition. In general, larger values in the Fisher information matrix imply that we know more (hence are more certain) about the corresponding response terms, which in turn gives more weight to the corresponding response measurements; smaller values imply that we know less, giving those measurements less weight. In short, the Bayesian cost function is a scalar that weights response errors inversely with their uncertainties, while also weighting parameter variations inversely with their uncertainties.

4.2 Surrogate Model
Parameter estimation is performed by minimizing the Bayesian cost function of the form shown in Eq. (3) to search for a set of optimal model parameters. Although the same high-fidelity computer model used for sensitivity and effects analysis can be used during the iterative model updating process, the response evaluations are typically very computationally intensive. A wise alternative is to construct a fast-running surrogate model to represent response in multi-dimensional parameter space [5]. To construct a surrogate model for a range of model parameters, model response time-histories are rearranged such that the response vectors of the same variable are collected in the same matrix over the model parameter vectors. Since there is only one variable corresponding to the sensor location at the radiator center, we have only one such matrix, here represented by $X_{\mathrm{sur}}$. Let

$$
X_{\mathrm{sur}} \approx U_{\mathrm{sur}} \Sigma_{\mathrm{sur}} V_{\mathrm{sur}}^T \qquad (6)
$$

be the singular value decomposition (SVD) of the surrogate model response matrix with only the desired number $k$ of modes (PCs) retained. Each row $V_{\mathrm{sur}}^k$ of the right singular vectors $V_{\mathrm{sur}}$ is a function of the model parameters associated with each simulation, i.e.

$$
V_{\mathrm{sur}}^k = V_{\mathrm{sur}}^k(\theta^k), \qquad (7)
$$

where $\theta^k$ represents the parameter vector of the $k$th simulation. The surrogate model is developed by fitting a polynomial function to each of the right singular vectors, $V_{\mathrm{sur}_j} = V_{\mathrm{sur}} e_j$, where $e_j$ is the $j$th column of an identity matrix. Thus, we obtain the continuous approximation

$$
\hat{V}_{\mathrm{sur}_j}(\theta) = P_j(\theta) \approx V_{\mathrm{sur}_j}(\theta) \text{ at } \theta = \theta^k, \qquad (8)
$$

where $P_j$ represents the polynomial function fitted for the $j$th right singular vector. The surrogate model is then of the form

$$
X_{\mathrm{sur}}(t;\theta) \approx \sum_{j=1}^{r} U_{\mathrm{sur}_j}(t)\, \Sigma_{\mathrm{sur}_j}\, \hat{V}_{\mathrm{sur}_j}^T(\theta), \qquad (9)
$$

where each column of $U_{\mathrm{sur}}$ contains the time dependency of the column vector $X_{\mathrm{sur}}(t;\theta)$. The singular values, $\Sigma_{\mathrm{sur}}$, and the left singular vectors, $U_{\mathrm{sur}}$, come directly from Eq. (6).

4.3 Parameter Estimates
Minimization of the Bayesian cost function in Equation (3) results in a set of parameter estimates. When a model is linear, Bayesian estimation may be accomplished in a single step. For nonlinear models, a recursive algorithm may be derived in which the model is linearized locally and minimization of the cost function proceeds iteratively in small steps until convergence is achieved. The most general approach, however, is to minimize the cost function with an optimization algorithm such as the Nelder-Mead simplex, Levenberg-Marquardt nonlinear least squares, or quasi-Newton methods. The Nelder-Mead simplex method was selected here for its robustness. When a surrogate model is used to approximate the full-fidelity nonlinear model, the updated model must be verified by evaluating the full-fidelity model at the updated parameter values. Figures 7 and 8 show the iterative minimization of the Bayesian cost function and the corresponding evolution of the parameter estimates, respectively.
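The surrogate construction of Eqs. (6) through (9) can be sketched as below. For brevity a single scalar parameter is assumed, whereas the study fits polynomials over a parameter vector; the data shapes and helper names are illustrative.

```python
import numpy as np

def build_pc_surrogate(X_sur, thetas, k=2, deg=2):
    """PC-based surrogate sketch: truncated SVD of the response matrix
    (columns = simulations), then a polynomial fit of each retained
    right-singular-vector coordinate over a scalar parameter."""
    U, s, Vt = np.linalg.svd(X_sur, full_matrices=False)
    U, s, Vt = U[:, :k], s[:k], Vt[:k, :]            # Eq. (6), k PCs retained
    # Eq. (8): fit a polynomial P_j(theta) to the j-th right singular vector.
    polys = [np.polyfit(thetas, Vt[j], deg) for j in range(k)]

    def predict(theta):
        # Eq. (9): X(t; theta) ~ sum_j U_j(t) * sigma_j * P_j(theta)
        v_hat = np.array([np.polyval(p, theta) for p in polys])
        return U @ (s * v_hat)

    return predict
```

When the retained PCs capture the response exactly, the surrogate reproduces the training simulations at their parameter values; between training points it interpolates smoothly through the fitted polynomials.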


Figure 7. Minimization of Velocity-based Bayesian Cost Function

Figure 8. Parameter Estimation Using Velocity-based Objective Function
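The minimization shown in Figures 7 and 8 used the Nelder-Mead simplex method; a minimal sketch using SciPy's implementation is given below. The wrapper and tolerance settings are illustrative, not the study's actual configuration, and `cost` is a placeholder for Eq. (3) evaluated through the surrogate.

```python
import numpy as np
from scipy.optimize import minimize

def estimate_parameters(cost, theta0):
    """Minimize a (surrogate-based) Bayesian cost function with the
    Nelder-Mead simplex method, the optimizer selected in the text for
    its robustness.  Tolerances here are illustrative."""
    res = minimize(cost, np.asarray(theta0, dtype=float),
                   method="Nelder-Mead",
                   options={"xatol": 1e-8, "fatol": 1e-10, "maxiter": 5000})
    return res.x, res.fun
```

Because the surrogate is cheap to evaluate, the hundreds of cost evaluations the simplex method requires (as in Figure 7) are affordable; the full LS-DYNA model is then rerun only once, at the converged estimate, for verification.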

4.4 Qualification of Parameter Estimates
Figures 9 and 10 illustrate the parameter “qualification process” used to assess the quality of the estimates. Bayesian parameter estimation results in both a revised (a-posteriori) set of parameter estimates and a revised parameter covariance matrix,

$$
S_{\theta\theta}^{*} = \left[ S_{\theta\theta}^{-1} + T_{X\theta}^{T} S_{XX}^{-1} T_{X\theta} \right]^{-1}. \qquad (10)
$$
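Eq. (10), together with the “percent increase in confidence” measure plotted on the vertical axis of Figure 9, can be sketched as follows. Here $T_{X\theta}$ denotes the response sensitivity matrix appearing in Eq. (10); plain inverses are shown where the text would permit pseudo-inverses.

```python
import numpy as np

def revised_parameter_covariance(S_tt, S_xx, T):
    """Eq. (10): a-posteriori parameter covariance from the prior parameter
    covariance S_tt, the response covariance S_xx, and the response
    sensitivity matrix T."""
    S_star = np.linalg.inv(np.linalg.inv(S_tt) + T.T @ np.linalg.inv(S_xx) @ T)
    # Percent increase in confidence, (sigma/sigma* - 1) x 100%, per parameter
    # (the vertical axis of Figure 9); sigma are prior, sigma* revised std devs.
    gain = (np.sqrt(np.diag(S_tt)) / np.sqrt(np.diag(S_star)) - 1.0) * 100.0
    return S_star, gain
```

Since the bracketed sum adds a positive semi-definite term to the prior information, the revised variances never exceed the prior ones, and the confidence gain is never negative.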

The square roots of the diagonal elements of the revised covariance matrix are the updated standard deviations of parametric uncertainty, and are so designated by an asterisk superscript. The horizontal axis of the plot in Figure 9 shows the change in a parameter estimate normalized by the standard deviation of its initial (a-priori) estimate. Thus, for example, a 1, 2, or 3 on the horizontal axis represents a parameter change of 1, 2, or 3 standard deviations of the initial estimate. The vertical axis in Figure 9 represents the degree to which the “confidence” in an estimated parameter has been improved. For example, a 100% improvement occurs whenever the standard deviation of a revised parameter estimate has been reduced by a factor of two. An increase in this measure of “confidence” of less than 50% is considered to be a statistically weak estimate and may be rejected. The different “lobes” shown in Figure 9 are derived from the four conditions illustrated in Figure 10. In each case, the probability distributions of the prior (a-priori) and revised (a-posteriori) estimates are shown by the solid and dotted lines, respectively. In case (a), the mean of the revised distribution has shifted by one revised standard deviation; in case (b) it has shifted by two revised standard deviations. In case (c) the mean of the revised distribution has shifted by the sum of one prior standard deviation and one revised standard deviation; in case (d) it has shifted by twice that amount. These conditions lead to the four sets of curves (left and right sets of curves are symmetric about the origin) shown in Figure 9. A parameter estimate whose revised mean and standard deviation result in a point falling within the center (white) lobe may be considered to confirm the prior estimate. A parameter estimate plotted by a point falling within the next set of white lobes (i.e. 
between curves (b) and (c)) may be considered to represent a statistically significant revision to the prior estimate. An estimate plotted by a point in the third set of white lobes (i.e. outside of curve (d)) should be rejected as inconsistent with the assumed prior distribution of uncertainty. When this happens, one might conclude that the initial estimate of parameter uncertainty was too small and rerun the estimation with a larger uncertainty on the prior estimate. Often the new estimate will change even more so that the point stays within the “reject” lobe. This is an indication of a systematic modeling error, or perhaps a bias error in the data. The shaded zones between the white lobes may be considered “gray areas” subject to engineering judgment.
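The qualification logic of Figures 9 and 10 can be sketched as a simple classifier. The thresholds are inferred from cases (a) through (d) as described above and should be treated as illustrative rather than as the exact lobe boundaries.

```python
def qualify_estimate(delta, sigma_prior, sigma_rev):
    """Classify a revised parameter estimate per Figures 9 and 10 (sketch).
    `delta` is the change in the estimate; `sigma_prior` and `sigma_rev`
    are the prior and revised standard deviations.  Thresholds follow the
    four cases (a)-(d) described in the text and are illustrative."""
    gain = (sigma_prior / sigma_rev - 1.0) * 100.0
    if gain < 50.0:                               # confidence gain below 50%
        return "statistically weak"
    d = abs(delta)
    if d <= sigma_rev:                            # within curve (a): confirm
        return "confirm"
    if d <= 2.0 * sigma_rev:                      # between (a) and (b): gray
        return "gray"
    if d <= sigma_prior + sigma_rev:              # between (b) and (c): revise
        return "revise"
    if d <= 2.0 * (sigma_prior + sigma_rev):      # between (c) and (d): gray
        return "gray"
    return "reject"                               # outside curve (d)
```

The "gray" outcomes correspond to the shaded zones in Figure 9 that the text leaves to engineering judgment.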


In this case study, all but one of the parameter estimates fall below the threshold of statistical significance. All are seen to fall within the center lobe tending to confirm the prior estimate. This result is consistent with the sensitivity analysis that showed the nominal parameter estimates to yield the lowest values of the Bayesian cost function.


Figure 9. Qualitative Assessment of Velocity-based Model Updating Results


Figure 10. Comparison of Prior and Revised Probability Distributions

4.5 LS-DYNA Verification
As a final check on the parameter estimates, the updated values were input to the LS-DYNA model and rerun. The results shown in Figure 11 show very little improvement. (In this figure the colored uncertainty bands represent experimental uncertainty and are generated by the methods outlined in [1] and [6].) Under these conditions the “null hypothesis” is indicated, i.e. the prior model has not been rejected and may therefore be accepted. The fact that the simulated response still lags the measured response by a few milliseconds, and that the response characteristics of interest are insensitive to the selected parameters, implies the possible existence of an undiscovered modeling error.


Figure 11. Comparison of Reference and Velocity-based Updated Model Responses

5. Another Modeling Deficiency

5.1 Contact Algorithm
Full vehicle crash analysis involves interaction between all free surfaces, including contact at corners and edges. Test-analysis correlation can be degraded significantly if these interactions are not carefully handled. In all of the foregoing analysis, only the default contact search option for nodal penetration through shell surfaces was used; the optional shell exterior edge-to-edge option was not selected. Figure 12 shows a slice through the structure at the radiator center sensor at 20 ms using the two different contact definitions for the 30 mph zero degree frontal impact. With the enhanced contact algorithm and edge-to-edge contact in addition to node-to-surface contact, deformations at the upper radiator support at 20 ms are greater than those predicted with node-to-surface contact only. This is to be expected, since more elaborate consideration of the contact generally promotes earlier deformation. Selecting the edge-to-edge option in the contact definition, which extends the contact search to the exterior edges of shell elements, increased the computational cost, but the velocity correlation at the radiator center was greatly improved at the required airbag triggering time of 17 ms. Figure 13 shows that the blue line now coincides with the green line from 15 ms to 18 ms. The difference between the original model and the corrected model is two full milliseconds, as shown in Figure 13, a significant difference for timing the airbag triggering mechanism. The two-millisecond difference at 30 mph corresponds to approximately one inch of displacement differential. It was this realization that provided the first clue to identification of the model deficiency: the fact that neither the single parameter variations nor the Bayesian estimation was able to reduce the 2 ms phase lag in the model indicated the likelihood of a geometric or kinematic error.
Figure 14 shows a comparison with test data of velocity response at both the SDM and RADC locations, of the original “FEA Baseline” model and the updated “FEA” model.


Figure 12. Radiator Center Deformations at 20 ms


Figure 13. Velocity of Radiator Center with Enhanced Contact Algorithm and Edge-to-edge Contact Search Shown with Expanded Time Scale

Figure 14. Velocity at SDM and RADC for 30 mph Full Frontal Impact with Updated Bumper Beam Material Properties, Enhanced Contact Algorithm, and Edge-to-edge Contact Search

5.2 Validation Case
To further validate the model improvements identified in this study, including the enhanced bumper beam material properties and the extended contact option, we also compared the velocity at the three front sensors and the SDM for the other four impact conditions (listed in Section 5.3). Comparisons are shown in Figure 15 for the 26 mph zero degree frontal impact. They show that the velocities at the front sensors have been significantly improved. In the case of the left and right angle impacts, the improvements were moderate.

Figure 15. Velocity at SDM and RADC for 26 mph Frontal Impact with Updated Bumper Beam Material Properties, Enhanced Contact Algorithm, and Edge-to-edge Contact Search.

5.3 Updated Uncertainty Quantification
To demonstrate the improved predictive accuracy of the improved model, uncertainty bands based on single tests selected from each of five series of repeated tests were recalculated for both the baseline model and the improved model, using the method described in Reference [1]. (In Reference [1] we used data from 19 test-simulation pairs instead of only five.) The five impact conditions used in these calculations were:

• Zero degree frontal impact at 26 mph
• Zero degree frontal impact at 30 mph
• Thirty degree left angle impact at 26 mph
• Thirty degree left angle impact at 30 mph
• Thirty degree right angle impact at 26 mph

As shown in Figures 16 and 17, the uncertainty bands are significantly smaller for the improved model in the 0 – 25 ms time interval. “ModB” denotes the bias-shifted model with respect to which the uncertainty bands are plotted. The bands prior to 20 ms, including the required air bag deployment time of 17 ms, are much smaller than those of the baseline model. Significant improvement in predictive accuracy was shown at other impact conditions as well, thus validating the use of the edge-to-edge contact algorithm in the finite element model.


Figure 16. Uncertainty Quantification of the Baseline Model for 30mph Full Frontal Impact


Figure 17. Uncertainty Quantification of the Enhanced Model for 30mph Full Frontal Impact


6. Conclusions

Replacement of the generic stress-strain material model with the correct thermoplastic bumper beam stress-strain model improved the simulated response at the front sensor during the 10 to 15 ms period after impact. Furthermore, activation of an optional edge-to-edge contact search algorithm in LS-DYNA prevented penetration of one element through another in the simulation, and eliminated the 2 ms disparity between simulated and measured velocity drop during the 15 to 20 ms period after impact. The default contact search algorithm, originally selected for its computational efficiency, proved to be inadequate for this finite element model. The updated model was applied to other impact conditions for validation. Velocity traces at the front sensors were improved significantly for full frontal impact and moderately for the left and right angle impacts. This extended study showed that the model updates are reasonable and necessary, and resulted in better agreement with experimental data. The enhanced contact search is being applied to GM’s current vehicle crashworthiness modeling and simulations. Uncertainty bands for the updated model runs show improvement over those of the baseline model runs. This improvement enhances the possibility of using generic modeling uncertainty to aid in future vehicle development. The definition of model updating used in this report includes more than model calibration; it includes the detection, identification and correction of systematic errors, and the qualification of parameter estimates that result from model calibration. It is important to note that in the present application, this process led to the proper conclusion and demonstrated that parameter estimation, when properly applied, does not distort the model to fit one set of data, thereby making it worse instead of better when compared with data from other load conditions.
Rather, it indicated a need to search further for potential systematic errors, because parameter estimation was unable to reconcile the important differences between analysis and test. The qualification process applied to the revised parameter estimates indicated statistically weak estimates that tended to confirm the prior model. If statistically weak estimates had resulted in better agreement between analysis and test, that result would have indicated a need for more data, or data representing response features having greater sensitivity to the parameters being estimated. This case study should help to allay the fears of those in the model validation community who believe that model calibration cannot be trusted because of the danger of distorting a model to force-fit a particular set of data.

References

[1] Hasselman, T. K., Yap, K. C., Lin, C.-H. and Cafeo, J., “Uncertainty Quantification Applied to Predictive Accuracy Assessment for Numerical Crash Simulations,” The 21st Int’l Modal Analysis Conf. (IMAC), Kissimmee, FL, 2003.
[2] Lin, C.-H., Gao, R. and Cheng, Y.-P., “A Stochastic Approach for the Simulation of an Integrated Vehicle and Occupant Model,” The 17th Int’l Technical Conf. on the Enhanced Safety of Vehicles (ESV), Amsterdam, The Netherlands, 2001.
[3] Yap, K. C., Wathugala, G. W., and Hasselman, T. K., “An Updated Toolbox for Validation and Uncertainty Quantification of Nonlinear Finite Element Models,” The 7th Int’l LS-DYNA Users Conf., Dearborn, MI, 2002.
[4] Myers, R. H. and Montgomery, D. C., Response Surface Methodology: Process and Product Optimization Using Designed Experiments, John Wiley & Sons, 1995.
[5] Hasselman, T. K., Anderson, M. C., and Zimmerman, D. C., “Fast Running Approximations of High Fidelity Physics Based Models,” Proc. of the 69th Shock and Vibration Symposium, Minneapolis/St. Paul, MN, 1998.
[6] Hasselman, T. K., Anderson, M. C., and Gan, W., “Principal Components Analysis for Nonlinear Model Correlation, Updating and Uncertainty Evaluation,” The 16th Int’l Modal Analysis Conf. (IMAC), Santa Barbara, CA, 1998.