2004-01-0454

Concepts and Terminology of Validation for Computational Solid Mechanics Models John A. Cafeo GM R&D Center 30500 Mound Road Warren, MI-48090

Ben H. Thacker Southwest Research Institute 6220 Culebra Road San Antonio, TX 78238

ABSTRACT During the past couple of years, a committee has been formed under the auspices of the ASME Codes and Standards division and is meeting regularly. The purpose of the committee is to develop and publish a set of documents that describe a common process for verification and validation of computational solid mechanics models. There are many issues under discussion and many concepts under debate. In this paper we present some of the major concepts and the differing viewpoints, focusing on the concept of validation, and relate these to the automotive industry in particular.

INTRODUCTION The vehicle development process (VDP) is a very creative and complex activity that is full of uncertainties of many kinds. While the VDP may be viewed from many perspectives, we consider it to be a series of decisions. In the absence of uncertainty, this series of decisions could, in principle, be posed as a very complex multidimensional optimization problem. Decisions, however, are actions taken in the present to achieve an outcome in the future. Because it is impossible to predict the outcomes of these decisions with certainty, the characterization and management of uncertainty in engineering design is essential to the decision-making that is the core activity of the vehicle development process. Uncertainties are present throughout the vehicle development process, from the specification of requirements in conceptual design to build variation in manufacturing. Vehicle program managers are continually challenged with the task of integrating uncertain information across a large number of functional areas, assessing program risk relative to business goals, and then making program-level decisions. Engineers struggle to develop design alternatives in this uncertain environment and to provide the program managers with credible, timely, and robust estimates of a multitude of design-related vehicle performance attributes. Marketplace pressures to continuously shorten the vehicle development process drive the increasing use of numerical models (as opposed to physical prototypes) for providing estimates of vehicle performance attributes to support decision-making under uncertainty.

In order for the calculations from the numerical models to be useful, the decision makers must have confidence in the results. This confidence is formally developed through the model verification and validation (V&V) process. Model V&V is, of course, also of great interest to government and to other industries. The Defense Modeling and Simulation Office (DMSO) of the U.S. Department of Defense (DoD) has been a leader in the development of fundamental concepts and terminology for V&V applied to high-level systems engineering such as ballistic missile defense systems [1]. In response to a ban on production of new strategic weapons and on nuclear testing, the Department of Energy (DOE) initiated the Science-Based Stockpile Stewardship Program (SSP). An objective of the SSP is to maintain a high level of confidence in the safety, reliability, and performance of the existing nuclear weapons stockpile in the absence of nuclear testing. This has challenged the national laboratories to develop high-confidence tools and methods that can be used to provide, via numerical simulation, the evidence needed for stockpile certification in the complete absence of full-system testing [2, 3]. The American Society of Mechanical Engineers (ASME) has also recently formed a Standards Committee for the development of V&V procedures for computational solid mechanics.

The committee is composed of people from the product development industry, the engineering code development industry, government scientific laboratories, and the defense industry. Each of these groups has both common and distinct uses for numerical models in their work. A general overview of model verification and validation issues can be found in [4].

MODEL USAGE In industries where products are developed for sale in the marketplace, numerical models are used to support design decisions during the product development process. This enables the designers in the company to complete a product design more rapidly than if prototype hardware were used to check the design and answer design questions. A numerical model allows a more complete exploration of the design space and also allows the design to be optimized to perform in a particular way. During the early part of the design process, coarse models with relatively low confidence may be used to set direction. As the product development process continues, the design, as well as the models used in the design process, is refined. Refined models contain more details that must be chosen, and the results are expected to have a lower uncertainty associated with them. We typically assume that more detailed models are more accurate than coarse models; however, this is not generally true and is one of the important concerns being addressed by model V&V. Finally, because of the cost and time involved in developing complex codes and the wide availability of robust commercial software, industry usually chooses to run numerical models using commercially available codes.

In contrast, a military simulation, like ballistic missile defense, may be constructed to study the location and magnitude of damage for a given scenario. Typically, this kind of model is not simulated with commercially available software but is written from the ground up. The simulation may have a long life span and is continually updated as information is gathered. Another characteristic is that the code usually attempts to simulate a large number of non-physical behaviors, such as human decisions, past experience, and circumstantial data, e.g., photographs of past battle damage assessments. As a result, these models are typically referred to as system simulations to differentiate them from engineering simulations. In the military simulation, one difficulty lies in the fact that data can never be gathered from a full-system test. This means that we must validate the pieces of the simulation in a hierarchical fashion (as discussed in [4]) and then extrapolate these results to the full simulation.

Another related but different situation is a weapons simulation, where the objective is to calculate the amount and extent of damage caused by a bomb being used against a specific type of target, such as a concrete bunker. Simulating these types of events involves highly nonlinear and dynamic behavior, and both commercially available and in-house developed codes are generally used. A key challenge in weapons simulation is that it must rely on sub-models (ignition and explosive burn, for example) for which it is difficult, if not impossible, to gather high-quality data suitable for model validation. As with the military simulation, the weapons simulation must also be validated at the full-simulation level based on validated unit, component, and subsystem models.

The implications of model validation in each of these situations can be traced back to the need for measured data to compare to the simulation results. In the industrial case, the challenge is that, because of the speed-to-market issue, we typically do not have the hardware that we are designing. The automotive industry, in particular, is also challenged with producing a very complex product using a minimum amount of time and money. Therefore, no full-vehicle model validation is possible. What we believe is possible, however, is that we can validate the procedures used to construct models for the current product program based on past, similar programs. Our challenge, then, is to develop methodology that will allow the level of confidence calculated from past experience to be applicable to the current situation. Validating model development procedures instead of the models themselves carries with it unique challenges and assumptions unlike those faced by government or military applications. The key difference in the automotive industry is that we can often carry forward many experiences, structures, materials, designs, and models from past programs to new ones. Thus, the relevance of a model that is validated on the basis of the procedure used to develop it is one of the key challenges to establishing credibility in numerical models used in the automotive industry. In the remainder of this paper, we look at model validation in the automotive business context.

NUMERICAL MODELS SUPPORT DECISIONS IN THE VEHICLE DEVELOPMENT PROCESS The vehicle development process is the series of actions and choices required to bring a vehicle to market. For domestic (US) vehicle manufacturers, the VDP is structured around a traditional systems engineering approach to product development. The initial phase of the VDP focuses on identifying customer requirements and then translating them into lower-level requirements for various functional activities, including product planning, marketing, styling, manufacturing, finance, and a broad array of engineering disciplines. Work within the VDP then proceeds in a highly parallel fashion. Engineers design subsystems to satisfy the lower-level requirements; the subsystems are then integrated to analyze the vehicle's conformance to the customer requirements and to assess the compatibility of the subsystems. Meanwhile, other functional staffs work to satisfy their own requirements: product planning monitors the progress of the VDP to ensure that the program is proceeding on time and within its budget, marketing ensures that the vehicle design is appropriate to support sales and pricing goals, finance evaluates the vehicle design to ensure that it is consistent with the vehicle's established cost structure, manufacturing assesses the vehicle design to ensure that it is possible to build within the target assembly plant, and so on. This is typically the most complex and most iterative phase of the VDP, as literally thousands of choices and tradeoffs are made. Finally, the product development team converges on a compatible set of requirements and a corresponding vehicle design. Engineers then release their parts for production and the vehicle proceeds through a series of pre-production build phases culminating in the start of production.

Models that calculate the performance attributes (e.g., stress, noise level, etc.) of the vehicle are fundamental to the vehicle design process. The purpose of the model validation process is to establish confidence that the numerical model can predict the attribute. This helps the builders of the models during the model development phase: it enables them to change parameters or modify assumptions to improve the model's predictive power. The quantified confidence associated with a validated model also informs the person using the results from the model and helps them to estimate their subjective uncertainty during the decision-making process.

During the product development process, the person responsible for a decision will ask and answer two basic questions when presented with model results: 1) "Can I trust this result?" and 2) "Even if I can trust it, is it useful (i.e., does it help me make my decision)?" In this context, trust means credibility. In most cases there is no formal, objective measure of credibility. Instead, the credibility of the model results is equivalent to credibility in the modeler and the model prediction, a subjective measure largely based on previous experience. The chief engineer will ask the modeler to assess and report his confidence in the results, but this is difficult without a formal model validation process. What the engineer really needs is an objective measure of confidence to present to the decision maker. This can only be obtained as the product of the model validation process.

In practice, the processes of model development and validation most often occur in concert; aspects of validation interact with and feed back to the model development process (e.g., a shortcoming in the model uncovered during the validation process may require a change in the numerical implementation). It is instructive, therefore, to look at some of the current practices to begin to understand the role that nondeterministic analysis should play in this process.

MODEL VALIDATION: CURRENT PRACTICE Model validation often takes the form of what is commonly called a model correlation exercise. This deterministic process involves testing a piece of hardware that is being modeled and running the model at the nominal test conditions. Typically, only a single set of computational results and one set of test measurements are generated. These results (test and computational) are then compared. Often, the results are overlaid in some way (e.g., graphed together in a two-dimensional plot) and an experienced engineer decides whether or not they are "close enough" to declare that the model is correlated (this measure of correlation has been referred to as the "viewgraph norm" [5]). On initial comparison, the degree of correlation may be judged to be insufficient. The engineer will then adjust some of the model parameters in an attempt to make the model results match the test results more closely. If and when sufficient agreement is obtained, the model is accepted as a useful surrogate for further hardware tests.

During this process several important issues must be considered:

• This process, as described, implicitly assumes that the test result is the "correct answer" (i.e., a very accurate estimate of reality). In many cases this may be a good assumption, but experience has shown that testing is prone to error. Consequently, the test used should have a quantified repeatability. Also, every effort must be made to ensure that the test results are free from systematic error. The use of independent test procedures can help here; for example, the frequency results from a strain gage time history can be used to confirm the frequency results from an accelerometer. If the accuracy of the test results cannot be quantified and assured through procedural controls, then replicated experiments should be run to estimate the test uncertainty.

• Changing model parameters to obtain "better" agreement between test and analysis results (calibration) can lead to the false conclusion that the modified model is better than its original version. Often, many different sets of model parameters can be chosen that will force the same level of agreement at the observation points. This nonuniqueness means that the engineer must be very disciplined when choosing and/or changing the calibration parameters. Parameters should be calibrated only when there are significant reasons to believe that they are in error and, therefore, "need" to be calibrated.

• Experimental data that have been used to adjust the model should not be the sole source of data for establishing the claim that the model is validated. Additional data for model validation should be obtained from further experiments. Ideally, these experiments should be carefully and specifically planned for model validation, i.e., they should sufficiently "test" the model's predictive power and represent the design domain over which the model will be used. However, with great care, data from hardware development and prototype tests can sometimes be used as a surrogate for (or an augmentation to) further validation testing.
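
To make the comparison step less subjective than the "viewgraph norm," the test-versus-analysis difference can be summarized with a simple quantitative measure that also accounts for test repeatability. The sketch below is only illustrative; the response quantity, function name, synthetic data, and tolerance value are hypothetical and not taken from this paper.

```python
import numpy as np

def correlation_metric(test_replicates, simulation, tolerance):
    """Compare a simulated response history against replicated test data.

    test_replicates : 2-D array, one row per repeated test (hypothetical data)
    simulation      : 1-D array, model prediction at the same sample points
    tolerance       : allowable normalized RMS error (a project-specific choice)
    """
    test_mean = test_replicates.mean(axis=0)          # average measured response
    test_std = test_replicates.std(axis=0, ddof=1)    # test-to-test repeatability

    # Normalized RMS difference between the model and the mean test response
    rms_error = np.sqrt(np.mean((simulation - test_mean) ** 2))
    rms_reference = np.sqrt(np.mean(test_mean ** 2))
    normalized_error = rms_error / rms_reference

    # Fraction of points where the model falls inside +/- 2 standard deviations
    # of the test scatter -- a crude check against measurement repeatability
    coverage = np.mean(np.abs(simulation - test_mean) <= 2.0 * test_std)

    return normalized_error <= tolerance, normalized_error, coverage

# Synthetic (made-up) example: four replicate tests of a 5 Hz response
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 200)
truth = np.sin(2.0 * np.pi * 5.0 * t)
tests = truth + 0.05 * rng.standard_normal((4, t.size))
model = 0.95 * np.sin(2.0 * np.pi * 5.0 * t + 0.02)   # imperfect simulation

ok, err, cov = correlation_metric(tests, model, tolerance=0.10)
print(f"normalized RMS error = {err:.3f}, scatter coverage = {cov:.2f}, pass = {ok}")
```

A metric of this kind does not remove the engineer's judgment, but it makes the acceptance criterion explicit and repeatable rather than a visual impression.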

MODEL VALIDATION DEFINITION The validation definition that we use to start this discussion is from Ref. [6]: Validation is the process of determining the degree to which a model is an accurate representation of the real world from the perspective of the intended uses of the model. This seemingly simple and succinct definition has led to many long committee discussions. It is clear that all agree that validation can only be accomplished by comparing measured or field data to the outcomes of simulations or models. There is also general agreement on the committee that the rigorous approach is to use a statistical basis for this comparison. This recognizes that there are uncertainties present in both the test data and in the numerical model outcomes. In a statistical framework, the result of this comparison is a difference between test and analysis together with an associated tolerance bound or confidence level.

The difference between simulation outcomes and measured results has been termed fidelity [7]. Fidelity is an absolute measure of the agreement of model results with experimental data; it does not change with application demands. Validity, on the other hand, is a relative measure of model agreement with reality; validity determines whether the representation is close enough for an intended purpose and addresses the question of suitability. Validity changes with application demands. Thus, while the fidelity of a simulation is constant, the simulation could be valid for one application and not valid for another [7].

There are two different paradigms for using model results in decision-making. In the first paradigm, the decision makers establish that once a targeted level of fidelity has been reached, they will accept the model results as an adequate representation of reality. This means that the uncertainty of the model results is low in comparison to the other factors that affect the decision. In the second paradigm, the decision maker uses the model results together with the confidence levels established for them during fidelity assessment. The uncertainties can be low or high relative to the other factors that affect the decision, and the decision maker can use the confidence level to weight the information from the model. The first paradigm is actually a subset of the second with a pre-established confidence level.

In the first paradigm, the validation process consists of specifying a level of fidelity that will allow the results to be used for the intended purpose; then constructing a model, assessing its fidelity, and changing the model in an iterative process until the fidelity specification is met. This is the paradigm typically used by the DoD community. It should be noted that, in the DoD community, models are often used over a long time. One implication is that the actual model being validated is the one that will be used. In industrial practice, this is not typically the case, which leads to the second paradigm.

In industries like the automotive industry, where a product is being brought to market in a relatively short time frame, it is difficult to construct a model and iteratively improve it through assessing its fidelity. In fact, the reason for using the model in the first place is to avoid the cost and time of building prototype hardware. This implies that what we really are assessing during the model validation process is the set of procedures we use to construct the models. Hardware from a product in production is available for validation testing, as are the models used in the design process. An engineer in a product program will rely on "proven" methods for constructing a model to evaluate a certain attribute. Design decisions in large automotive companies are made by many different people, each with their own risk tolerance to using the outputs of a model in their decisions. This suggests a slightly different paradigm: numerical models used in a previous product development program can be assessed for fidelity using actual hardware data. Then, decision makers can claim that the model has been validated to a certain statistically based fidelity level. The idea is that different decision makers may choose whether or not to use the results in their product design decisions. If not, this will spawn an effort to improve the model outside of the actual product development program. For those decision makers who choose to use the model, the statistical measure of fidelity allows them to assess how much credibility to assign to the model results versus the other factors they are considering. The central difference in viewpoint stems from the number of different people who use the output of a model and the short time frame they have for making decisions. In this view, then, the validation process consists of updating a model to match a currently produced product and then assessing its fidelity. The procedures, together with the level of confidence in the agreement, constitute the validated model. Model improvement is undertaken when a specific decision maker finds that higher confidence is required.
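
As an illustration of a statistical test-versus-analysis comparison with an associated confidence level, the following sketch computes the difference between replicated test measurements and repeated model evaluations of a single scalar response, together with a two-sided confidence interval (Welch's t interval). The data, response quantity, and function name are hypothetical; this is one simple way to express fidelity statistically, not a metric prescribed by the committee.

```python
import numpy as np
from scipy import stats

def fidelity_with_confidence(test_samples, model_samples, confidence=0.95):
    """Test-minus-model difference for a scalar response, with a confidence interval.

    test_samples  : replicated physical measurements of the response
    model_samples : repeated model evaluations (e.g., sampled over uncertain inputs)
    """
    diff = np.mean(test_samples) - np.mean(model_samples)
    v_test = np.var(test_samples, ddof=1) / len(test_samples)
    v_model = np.var(model_samples, ddof=1) / len(model_samples)
    std_err = np.sqrt(v_test + v_model)

    # Welch-Satterthwaite effective degrees of freedom
    dof = (v_test + v_model) ** 2 / (
        v_test ** 2 / (len(test_samples) - 1) + v_model ** 2 / (len(model_samples) - 1)
    )
    half_width = stats.t.ppf(0.5 + confidence / 2.0, dof) * std_err
    return diff, (diff - half_width, diff + half_width)

# Hypothetical peak-stress data (MPa): five tests, fifty model evaluations
tests = np.array([212.0, 205.5, 218.3, 210.1, 207.9])
model_runs = np.random.default_rng(1).normal(204.0, 6.0, size=50)

bias, ci = fidelity_with_confidence(tests, model_runs)
print(f"test - model difference = {bias:.1f} MPa, 95% interval = ({ci[0]:.1f}, {ci[1]:.1f}) MPa")
```

A decision maker working in the second paradigm could then weigh this interval against the other factors affecting the decision, rather than requiring a fixed fidelity target up front.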

MODEL CALIBRATION The idea of model improvement is important in either paradigm we have discussed. Because many problems show significant sensitivity to physical and numerical parameters, it is often straightforward, and tempting, to adjust the parameters of highly sophisticated computer models to improve agreement with experimental results. It is essential that the model developers not know the experimental results during the model validation process. This temptation is avoided if the simulation results and the validation experiment results are kept separate until the end, when the comparisons are made. It is extremely important, however, that the test and analysis teams design the validation experiments together.

One fundamental question has arisen during the committee discussions: can one use the same data for calibrating a model and for validating it? The members generally agree that if one set of experimental data is used to calibrate a model, then another, independent set should be gathered for the validation process. However, if one takes a Bayesian statistical viewpoint while calculating the differences between test and analysis, this is not necessarily the case. A Bayesian analysis takes prior probability distributions of the unknowns and updates them to posterior distributions through Bayes' rule based on the evidence presented. In the Bayesian methodology, all uncertainties can be updated at the same time. One of the uncertainties is the difference between the set of test data and the analysis outcomes; others are the parameters that need to be calibrated in the model. The Bayesian analysis can provide updated estimates for both of these quantities at the same time.

We should state explicitly what is meant by a calibration parameter. A calibration parameter is one that the modeler is unsure about and has no way of estimating except through the model. If an independent method is available to measure the calibration parameter, then it is preferable to use it instead. The reduced data and analysis requirements associated with a Bayesian approach are attractive because of the severe time and cost constraints inherent to the VDP. This benefit, however, must be weighed against the rigor and other benefits that the bottom-up approach provides. As model V&V guidelines, methodology, and tools continue to be developed, we expect that a combined approach will serve the automotive industry best.
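
A minimal sketch of the Bayesian viewpoint described above is given below, assuming a single uncertain calibration parameter, a toy response model, replicated test values, and a known measurement scatter; all names and numbers are hypothetical. A full treatment (as in [10]) would also update a model discrepancy term at the same time.

```python
import numpy as np

def model_response(zeta):
    """Toy stand-in for a simulation output that depends on the calibration parameter."""
    return 10.0 / (2.0 * zeta)               # e.g., a resonant amplification factor

zeta_grid = np.linspace(0.01, 0.10, 500)      # candidate values of the parameter
prior = np.exp(-0.5 * ((zeta_grid - 0.05) / 0.02) ** 2)   # prior belief about zeta
prior /= np.trapz(prior, zeta_grid)

measured = np.array([95.0, 102.0, 98.5])      # replicated test values (hypothetical)
sigma_test = 5.0                              # assumed measurement scatter

# Gaussian likelihood of the data for each candidate parameter value
predictions = model_response(zeta_grid)
log_likelihood = np.zeros_like(zeta_grid)
for y in measured:
    log_likelihood += -0.5 * ((y - predictions) / sigma_test) ** 2

# Posterior via Bayes' rule, normalized numerically on the grid
posterior = prior * np.exp(log_likelihood - log_likelihood.max())
posterior /= np.trapz(posterior, zeta_grid)

zeta_map = zeta_grid[np.argmax(posterior)]
print(f"posterior mode of the calibration parameter: {zeta_map:.4f}")
```

The same machinery also yields the posterior spread of the parameter, which is what distinguishes a disciplined Bayesian calibration from simply tuning a value until the curves overlay.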

VERIFICATION AND VALIDATION PROCESS An overview of the verification and validation process and the role of uncertainty quantification was presented in Ref. [8]. Here, referring to Fig. 1, we discuss some of the important elements of model validation.

1. Identify and quantify the important features of and inputs to the model that are believed to be potentially significant contributors to error in model predictions. These can range from basic physics assumptions, to acknowledged numerical errors, to operating conditions, to material parameters. Model features that lead to uncertainties in model outputs should also be identified and quantified where possible (via uncertainty probability distributions or parameter sensitivity ranges). An initial subjective impact number should be assigned to each uncertainty to help target the resources available for validation.

2. Determine evaluation criteria - In the process of determining the degree to which a computer model is an accurate representation of reality, one must specify what physical quantities, or system response measures, should be used for comparisons between calculations and data during the validation process. This is an important specification because the physical modeling fidelity that is required to make accurate predictions can be markedly dependent upon which model output quantities are being validated. The determination of these evaluation criteria must account for the context in which the model is used, the feasibility of acquiring adequate experimental data, and the methodology needed to permit an evaluation. Often we do not compare the fundamental variables in a model because we cannot measure them, but rather we choose a surrogate.

3. Develop and verify the numerical model - Based on the outcome of Elements 1 and 2, an existing model must be modified or a new one developed. Features of the model such as load functions, boundary and initial conditions, constitutive models, etc. must be carefully assessed. Once developed, the numerical model must be verified before proceeding with the validation process. Model verification is concerned with identifying and removing errors in the model by comparing numerical solutions to analytical or highly accurate benchmark solutions. In short, verification deals with the mathematics associated with the model, whereas validation deals with the physics associated with the model [9]. Because mathematical errors can cancel, giving the impression of correctness (the right answer for the wrong reason), verification should be performed to a sufficient level before the validation activity begins. A grid refinement study must be performed because, barring input errors, insufficient grid refinement is typically the largest contributor to error in verification assessment (a simple grid-convergence check in this spirit is sketched after this list).

4. Design validation experiments - This includes both designed computer experiments and designed field or laboratory experiments. Validation experiments must be conducted so that experimental uncertainty can be accurately estimated, which means replicate experiments must be conducted. If the experimental variation has been previously quantified, this error can instead be applied to the output of a single experiment [10]. An uncertainty analysis procedure should be used that distinguishes and quantifies systematic (bias) and random errors. Also, experimentalists must understand the computational model assumptions so that validation experiments can match, as closely as possible, the code assumptions and requirements.

5. Generate model output distributions and run field experiments - If the model is complex (typical for detailed engineering calculations), it may be necessary to generate a surrogate or response surface representation of the full model. This adds another layer of uncertainty into the process.

6. Compare computer model output with field output - Calculate bias, variance, and possibly model tuning/calibration parameters. This is where some statistical modeling assumptions will have to be made.

7. Feed back the information into the current validation exercise and feed forward information to future validation activities. The idea is that we should learn something from this activity that we can apply to future activities.
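
As a companion to the grid refinement study mentioned in Element 3, the sketch below estimates an observed order of accuracy and a Richardson-extrapolated solution from three systematically refined meshes. The deflection values and refinement ratio are hypothetical; this is one common way to quantify discretization error during verification, not a procedure specified in this paper.

```python
import math

def grid_convergence(f_coarse, f_medium, f_fine, refinement_ratio):
    """Observed order of accuracy and Richardson-extrapolated value
    from scalar solutions on three systematically refined grids."""
    p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(refinement_ratio)
    f_extrapolated = f_fine + (f_fine - f_medium) / (refinement_ratio ** p - 1.0)
    return p, f_extrapolated

# Hypothetical tip deflections (mm) from meshes refined by a constant factor of 2
p, f_est = grid_convergence(1.920, 1.975, 1.994, 2.0)
print(f"observed order of accuracy ~ {p:.2f}, extrapolated deflection ~ {f_est:.3f} mm")
```

If the observed order differs sharply from the formal order of the discretization, that is usually a sign that the grids are not yet in the asymptotic range and further refinement is needed before validation comparisons are made.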

CONCLUSION This paper presented some of the conceptual arguments involved in model validation and illustrated how these concepts fit into the automotive industry. There are many open issues in the definition and practice of V&V. From a practical standpoint, the up-front costs associated with conducting a high-quality V&V program will certainly be formidable. Therefore, the long-term benefits of using a model to supplement testing must be balanced against the model development and V&V costs. It is certain that V&V will remain an issue of high importance in the field of computational mechanics, and much further research is needed to develop recommended practices for performing model V&V.

ACKNOWLEDGMENTS Various programs supported this work. The authors would like to acknowledge Southwest Research Institute for its support, as well as the members of the ASME Verification and Validation Standards Committee for their many valuable discussions.

REFERENCES 1. DoD, DoD Directive No. 5000.61: Modeling and Simulation (M&S) Verification, Validation, and Accreditation (VV&A), Defense Modeling and Simulation Office, www.dmso.mil/docslib.

2. Doebling, S.W., F.M. Hemez, J.F. Schultz, and S.P. Girrens, "Overview of Structural Dynamics Model Validation Activities at Los Alamos National Laboratory," Proc. 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials (SDM) Conference, AIAA 2002-1643, Denver, CO, 22-25 April 2002.
3. Oberkampf, W.L., T.G. Trucano, and C. Hirsch, "Verification, Validation, and Predictive Capability in Computational Engineering and Physics," Proc. Foundations for Verification and Validation in the 21st Century Workshop, Johns Hopkins University, Laurel, MD, 22-23 October 2002.
4. Oberkampf, W.L., T.G. Trucano, and C. Hirsch, Verification, Validation, and Predictive Capability in Computational Engineering and Physics, SAND2003-3769, Sandia National Laboratories, February 2003.
5. Trucano, T.G., et al., "Description of the Sandia Validation Metrics Project," SAND2001-0341, Sandia National Laboratories, Albuquerque, NM, 2002.
6. AIAA, Guide for the Verification and Validation of Computational Fluid Dynamics Simulations, AIAA-G-077-1998, American Institute of Aeronautics and Astronautics, Reston, VA, 1998.
7. Pace, Dale, personal communication.
8. Thacker, B.H., "The Role of Nondeterminism in Verification and Validation of Computational Solid Mechanics Models," Proc. SAE 2003 World Congress & Exposition, Reliability & Robust Design in Automotive Engineering, Paper 2003-01-1353, Detroit, MI, 4 March 2003.
9. Roache, P.J., Verification and Validation in Computational Science and Engineering, Hermosa Publishers, Albuquerque, NM, 1998.
10. Bayarri, M.J., J.O. Berger, D. Higdon, M.C. Kennedy, A. Kottas, R. Paulo, J. Sacks, J.A. Cafeo, J. Cavendish, C.H. Lin, and J. Tu, "A Framework for Validation of Computer Models," Proc. Foundations for Verification and Validation in the 21st Century Workshop, Johns Hopkins University, Laurel, MD, 22-23 October 2002.

Figure 1: Model verification and validation process. [Flowchart not reproduced. Recoverable elements: the Reality of Interest feeds a modeling path (Conceptual Model, Mathematical Model, Computer Model, Simulation Outcomes, via physics modeling, implementation, code and calculation verification, and uncertainty quantification) and an experimental path (Validation Experiment, Experimental Data, Experimental Outcomes, via experimental design, pre-test calculations, experimentation, and uncertainty quantification). Simulation and experimental outcomes are compared by statistical analysis for model validation; if agreement is not acceptable, the model or the experiment is updated, otherwise the process proceeds to the next reality of interest. The diagram distinguishes assessment activities from modeling, simulation, and experimental activities.]
