ASME Verification and Validation Symposium May 2-4, 2012 Planet Hollywood Resort Las Vegas, Nevada
Program Media Sponsor
Conference Bag Sponsor
Bronze Sponsor
STANDARDS TECHNOLOGY, LLC
Track 2 Sponsor
WELCOME LETTER

Welcome to ASME’s Verification and Validation Symposium!

Dear Colleagues,

On behalf of the Co-Chairs of the ASME Verification and Validation Symposium, we thank you and your colleagues for your attendance and participation at the first large-scale conference dedicated entirely to verification, validation, and uncertainty quantification of computer simulations. The call for abstracts brought an outstanding response, with over 240 abstracts received. The result is a program that brings together engineers and scientists from around the world representing many different disciplines that use computational modeling and simulation.

We hope this symposium provides a unique opportunity for you to discuss and exchange ideas and methods for verification of codes and solutions, simulation validation, and assessment of uncertainties in mathematical models, computational solutions, and experimental data. The presentations have been organized by both application field and technical goal and approach. This event gives scientists and engineers in various fields, who do not normally have an opportunity to interact, a chance to share verification and validation methods, approaches, successes and failures, and ideas for the future.

Thanks again for your attendance. We look forward to your valued participation at this ground-breaking conference.

Sincerely,
Symposium Co-Chairs

Ryan Crane ASME New York, NY, United States
Scott Doebling Los Alamos National Laboratory Los Alamos, NM, United States
Christopher Freitas Southwest Research Institute San Antonio, TX, United States
TABLE OF CONTENTS
General Information ............................ 2
Schedule-at-a-Glance ........................... 3
Keynote/Plenary Speakers ....................... 4-5
Technical Program .............................. 6-75
Author Index ................................... 76-78
Session Organizers ............................. 79-80
Exhibitors and Sponsors ........................ 81-82
About ASME ..................................... 83
ASME Standards and Certification ............... 83
ASME Officers and Staff ........................ 83
Hotel Floor Plan ............................... inside back cover
GENERAL INFORMATION

ACKNOWLEDGEMENT
The Verification and Validation Symposium is sponsored by ASME. All technical sessions, exhibits, and meals will take place at Planet Hollywood Resort & Casino. The plenary sessions and panel discussion will take place in Celebrity Ballroom 1. Please check the schedule for exact dates and times.

ASME TRAVEL POLICY
ASME is not responsible for the purchase of non-refundable airline tickets or the cancellation/change fees associated with canceling a flight. ASME retains the right to cancel a course/conference up until three weeks before the scheduled presentation date.
NAME BADGES Please wear your name badge at all times. Admission to all conference functions is on a badges-only basis (unless noted otherwise). Your badge also provides a helpful introduction to other attendees.
REGISTRATION HOURS, LOCATION
Registration is in the Sunset 1 Foyer on the mezzanine level. Registration hours are:
Tuesday, May 1: 5:00pm–7:00pm
Wednesday, May 2: 7:00am–6:00pm
Thursday, May 3: 7:00am–6:00pm
Friday, May 4: 7:00am–12:30pm

TAX DEDUCTIBILITY
Expenses of attending a professional meeting, such as registration fees and costs of technical publications, are tax deductible as ordinary and necessary business expenses for US citizens. However, recent changes in the tax code have affected the level of deductibility.

REGISTRATION FEES (On-Site Rates, On or After May 2)
ASME Member/Author: US$475
Non-Member: US$575
ASME Student Member: US$175
Student Non-Member: US$200
One Day: US$350
Life Member: US$150
Guest: US$100
Registration Fee includes:
• Admission to all technical sessions
• All scheduled meals
• Symposium program with abstracts

Guest Registration Fee includes:
• All meals
• Does NOT include admission to technical sessions

HANDICAPPED REGISTRANTS
Whenever possible, we are pleased to make arrangements for handicapped registrants. Advance notice may be required for certain requests. For on-site assistance, please visit the registration area and ask to speak with a symposium representative.

HAVE QUESTIONS ABOUT THE MEETING?
If you have any questions or need assistance, see an ASME representative, located in the registration area.

SESSION ROOM EQUIPMENT
Each session room is equipped with a screen and an LCD projector. There is also a laptop computer in each room. Speakers should have a copy of their presentation to load onto this computer via memory stick. It is recommended that authors/speakers bring all visual aids with them.
REGISTRATION POLICIES
1. Full registration fee includes admission to all technical sessions, exhibits, breakfast, coffee breaks, lunches, symposium program with abstracts.
2. One-day fee includes admission to technical sessions and exhibits for one day, meal functions for that day only and symposium program with abstracts.
3. All attendees, including authors, panelists, chairs, co-chairs, keynote and other speakers, must pay the appropriate registration fee.
4. No one will be allowed to attend the technical sessions without first registering and obtaining the official V&V symposium badge.

EXHIBITS
All exhibits are in Celebrity Ballroom 2 on the mezzanine level of Planet Hollywood Resort & Casino. The exhibits are open during breakfast, coffee breaks and lunches on Wednesday and Thursday. Attendees also have the opportunity to visit the exhibits during the following hours:
Wednesday, May 2 and Thursday, May 3: 7:00am–4:00pm

REFRESHMENT BREAKS
Wednesday, May 2 and Thursday, May 3: 10:00am–10:30am and 3:30pm–4:00pm
Friday, May 4: 10:00am–10:30am
SCHEDULE-AT-A-GLANCE
Time / Session # / Session / Room

TUESDAY, MAY 1, 2012
5:00pm–7:00pm    Registration    Sunset 1 Foyer

WEDNESDAY, MAY 2, 2012
7:00am–6:00pm    Registration    Sunset 1 Foyer
7:00am–8:00am    Continental Breakfast    Celebrity Ballroom 2
8:00am–10:00am   1-1   Plenary Session: Part 1 (General)    Celebrity Ballroom 1
10:00am–10:30am  Coffee Break/Exhibits    Celebrity Ballroom 2
10:30am–12:30pm  1-2   Plenary Session: Part 2 (General)    Celebrity Ballroom 1
12:30pm–1:30pm   Lunch/Exhibits    Celebrity Ballroom 2
1:30pm–3:30pm    2-1   Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 1    Sunset 3&4
1:30pm–3:30pm    4-1   Verification and Validation for Fluid Dynamics and Heat Transfer: Part 1    Wilshire A
1:30pm–3:30pm    5-1   Validation Methods for Impact and Blast    Wilshire B
1:30pm–2:15pm    9-1   Verification and Validation for Energy, Power, Building, and Environmental Systems    Celebrity Ballroom 1
2:15pm–3:30pm    9-2   Panel Session: Uncertainty Analysis of Building Performance Assessments (Track: Verification and Validation for Energy, Power, Building, and Environmental Systems)    Celebrity Ballroom 1
3:30pm–4:00pm    Coffee Break/Exhibits    Celebrity Ballroom 2
4:00pm–6:00pm    2-2   Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 2    Sunset 3&4
4:00pm–6:00pm    3-1   Validation Methods for Solid Mechanics and Structures: Part 1    Sunset 5&6
4:00pm–6:00pm    4-2   Verification and Validation for Fluid Dynamics and Heat Transfer: Part 2    Wilshire A
4:00pm–4:50pm    8-1   Validation Methods for Materials Engineering: Part 1    Wilshire B
4:50pm–6:00pm    11-3  Panel Session: ASME Committee on Verification and Validation in Computational Modeling of Medical Devices (Track: Standards Development Activities for Verification and Validation)    Wilshire B

THURSDAY, MAY 3, 2012
7:00am–6:00pm    Registration    Sunset 1 Foyer
7:00am–8:00am    Continental Breakfast    Celebrity Ballroom 2
8:00am–10:00am   2-3   Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 3    Sunset 3&4
8:00am–10:00am   3-2   Validation Methods for Solid Mechanics and Structures: Part 2    Sunset 5&6
8:00am–10:00am   4-3   Verification and Validation for Fluid Dynamics and Heat Transfer: Part 3    Wilshire A
8:00am–10:00am   6-1   Verification and Validation for Simulation of Nuclear Applications: Part 1    Wilshire B
10:00am–10:30am  Coffee Break/Exhibits    Celebrity Ballroom 2
10:30am–12:30pm  2-4   Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 4    Sunset 3&4
10:30am–12:30pm  3-3   Validation Methods for Solid Mechanics and Structures: Part 3    Sunset 5&6
10:30am–12:30pm  6-2   Verification and Validation for Simulation of Nuclear Applications: Part 2    Wilshire B
10:30am–12:30pm  12-1  Validation Methods: Part 1    Wilshire A
12:30pm–1:30pm   Lunch/Exhibits    Celebrity Ballroom 2
1:30pm–3:30pm    3-4   Validation Methods for Solid Mechanics and Structures: Part 4    Sunset 5&6
1:30pm–3:30pm    6-3   Verification and Validation for Simulation of Nuclear Applications: Part 3    Wilshire B
1:30pm–3:30pm    11-1  Standards Development Activities for Verification and Validation: Part 1    Celebrity Ballroom 1
1:30pm–3:30pm    12-2  Validation Methods: Part 2    Wilshire A
3:30pm–4:00pm    Coffee Break    Celebrity Ballroom 2
4:00pm–6:00pm    7-1   Verification for Fluid Dynamics and Heat Transfer: Part 1    Sunset 5&6
4:00pm–6:00pm    8-2   Validation Methods for Materials Engineering: Part 2    Sunset 3&4
4:00pm–6:00pm    11-2  Standards Development Activities for Verification and Validation: Part 2    Celebrity Ballroom 1
4:00pm–6:00pm    12-3  Validation Methods: Part 3    Wilshire A

FRIDAY, MAY 4, 2012
7:00am–12:30pm   Registration    Sunset 1 Foyer
7:00am–8:00am    Continental Breakfast    Celebrity Ballroom 2
8:00am–10:00am   7-2   Verification for Fluid Dynamics and Heat Transfer: Part 2    Sunset 5&6
8:00am–10:00am   10-1  Validation Methods for Bioengineering    Sunset 3&4
8:00am–10:00am   12-4  Validation Methods: Part 4    Wilshire A
8:00am–10:00am   13-1  Verification Methods    Wilshire B
10:00am–10:30am  Coffee Break    Celebrity Ballroom Foyer
10:30am–12:30pm  14-1  Panel Session: V&V from a Government and Regulatory Perspective    Celebrity Ballroom 1
KEYNOTE AND PLENARY SESSIONS Plenary 1 Wednesday, May 2 8:00am–10:00am Celebrity Ballroom 1, Mezzanine Level
William L. Oberkampf Consultant Georgetown, TX, United States Practical and Technical Challenges in Verification and Validation William L. Oberkampf received his PhD in 1970 from the University of Notre Dame in aerospace engineering. He has 41 years of experience in research and development in fluid dynamics, heat transfer, flight dynamics, and solid mechanics. He served on the faculty of the Mechanical Engineering Department at the University of Texas at Austin from 1970 to 1979. From 1979 until 2007 he worked in both staff and management positions at Sandia National Laboratories in Albuquerque, New Mexico. During his career he has been deeply involved in both computational simulation and experimental activities. During the last 20 years he has been focused on verification, validation, uncertainty quantification, and risk analyses in modeling and simulation. He retired from Sandia as Distinguished Member of the Technical Staff and is a Fellow of the American Institute of Aeronautics and Astronautics. He has over 160 journal articles, book chapters, conference papers, and technical reports, and has taught 35 short courses in the field of verification and validation. He recently co-authored, with Christopher Roy, the book Verification and Validation in Scientific Computing published by Cambridge University Press.
Arthur G. Erdman University of Minnesota Minneapolis, MN, United States Helping Solve Our Health Care Dilemma - Virtual Medical Device Prototyping Arthur G. Erdman, PhD, PE, is the Richard C. Jordan Professor and a Morse Alumni Distinguished Teaching Professor of Mechanical Engineering at the University of Minnesota, specializing in mechanical design, bioengineering and product design. In July 2007 he was selected as the director of the Medical Devices Center at the U of M. He is also the co-editor of the ASME Journal of Medical Devices. He received his BS degree at Rutgers University, his MS and PhD at RPI. Erdman has published over 350 technical papers, 3 books, holds 35 patents (plus 10 pending), and shares with his former students 9 Best Paper Awards at international conferences. He currently has a number of ongoing projects of which many are related to biomedical engineering and medical device design. He led the effort to create LINCAGES, a mechanism software design package that has been used worldwide. Erdman has had research collaborations with faculty in ophthalmology, neuroscience, epidemiology, cardiology, urology, orthopedics, surgery, dentistry, otolaryngology and sports biomechanics. He has consulted at over 50 companies in mechanical, biomedical and product design, including Xerox, 3M, Andersen Windows, Proctor and Gamble, HP, Rollerblade, Sulzer Medica, St. Jude Medical and Yamaha. He has received a number of awards including the ASME Machine Design Award and the ASME Outstanding Design Educator Award. Erdman is a Fellow of ASME and a Founding Fellow of AIMBE. He has served as chair of the Publications Committee and of the Design and Bioengineering Divisions of ASME. He has also been the chair of ten Design of Medical Devices Conferences which are held next to the University of Minnesota each April.
KEYNOTE AND PLENARY SESSIONS Plenary 2 Wednesday, May 2 10:30am–12:30pm Celebrity Ballroom 1, Mezzanine Level
Douglas B. Kothe Oak Ridge National Laboratory Oak Ridge, TN, United States Predictive Simulation Challenges and Opportunities in CASL: The Consortium for Advanced Simulation of Light Water Reactors
Douglas B. Kothe currently serves as the director of the Consortium for Advanced Simulation of Light Water Reactors, which is a US Department of Energy Innovation Hub located at Oak Ridge National Laboratory in Tennessee. Kothe conducted his PhD research at Los Alamos National Laboratory (LANL) from 1985-1987 as a graduate research assistant, where he developed the models and algorithms for a particle-in-cell application designed to simulate the hydrodynamically unstable implosion of inertial confinement fusion targets. He joined the technical staff at LANL in 1988 in the Fluid Dynamics Group (T-3), and worked in a variety of technical and management positions at LANL until 2006, when he joined Oak Ridge National Laboratory (ORNL) as director of science in the National Center of Computational Sciences. Kothe’s research interests and expertise are focused on development of physical models and numerical algorithms for the simulation of a wide variety of physical processes in the presence of incompressible and compressible fluid flow. A notable contribution is his development of methods for flows possessing interfaces having surface tension, especially free surfaces. Another has been the development and application of an advanced casting/welding simulation tool (known as “Truchas”) for the US Department of Energy Complex. Kothe has authored over 60 refereed and invited publications and written over one-half million lines of scientific source code.
Kothe graduated in 1983 with a BS in chemical engineering from the University of Missouri-Columbia, followed by his MS and PhD in nuclear engineering at Purdue University in 1986 and 1987, respectively.
Patrick J. Roache Consultant Socorro, NM, United States A Defense of Computational Physics: Popper’s Non-Verifiability vs. Computational Validation
Patrick J. Roache’s primary area of expertise is in the numerical solution of partial differential equations, particularly those of fluid dynamics, heat transfer, and electrodynamics, with special interest in verification and validation. He is the author of the original (1972) CFD book Computational Fluid Dynamics (translated into Japanese, Russian, and Chinese), the monograph Elliptic Marching Methods and Domain Decomposition (1995), the widely referenced Verification and Validation in Computational Science and Engineering (1995), the successor to the original CFD book Fundamentals of Computational Fluid Dynamics (1995), the successor to the original V&V book Fundamentals of Verification and Validation (2009), and the booklet A Defense of Computational Physics (2012).
Roache served as associate editor for Numerical Methods for the ASME Journal of Fluids Engineering from 1985 to 1988, and coauthored that journal’s innovative Policy Statement on the Control of Numerical Accuracy. He also chaired the AIAA Fluid Dynamics Subcommittee on Publication Standards for Computational Fluid Dynamics which produced the original AIAA Policy Statements on Numerical Accuracy. He co-edited an ASME symposium proceedings on Quantification of Uncertainty in CFD, wrote a chapter on that subject for Annual Reviews of Fluid Mechanics, and co-authored a chapter on V&V in the Handbook of Numerical Heat Transfer, 2nd Edition. He has taught eleven short courses (six for AIAA) on verification and validation.
Committee work and publications on verification and validation include ASCE Free Surface Flow Model Verifications, and ASME committees on V&V in Computational Solid Mechanics (V&V 10), V&V in Fluid Dynamics and Heat Transfer (V&V 20), and the newly formed V&V in Computational Nuclear System Thermal Fluids Behavior (V&V 30). Both V&V 10 and V&V 20 have resulted in ASME publications accepted as ANSI standards. He has served on the advisory editorial board of six international journals and on several review boards and committees. He has received career awards from the University of Cincinnati and the University of Notre Dame, and the ASME Knapp Award. With Prof. S. Steinberg, he pioneered the use of computer artificial intelligence (symbolic manipulation) in CFD and variational grid generation. He has served as adjunct faculty and visiting professor in engineering and mathematics at six universities. Roache is a Fellow of the ASME and Associate Fellow of AIAA. He received his PhD (1967) in aerospace engineering from the University of Notre Dame. For full resume, visit www.hermosapub.com/hermosa.
TECHNICAL SESSIONS
TUESDAY, MAY 1

REGISTRATION
Sunset 1 Foyer
5:00pm–7:00pm

WEDNESDAY, MAY 2

CONTINENTAL BREAKFAST
Celebrity Ballroom 2
7:00am–8:00am

PLENARY SESSION: PART 1
Celebrity Ballroom 1
8:00am–10:00am
Session Chair: Christopher Freitas, Southwest Research Institute, San Antonio, TX, United States
Session Co-Chairs: Scott Doebling, Los Alamos National Laboratory, Los Alamos, NM, United States; Ryan Crane, ASME, New York, NY, United States

Speaker: Arthur G. Erdman, University of Minnesota, Minneapolis, MN, United States
Helping Solve Our Health Care Dilemma - Virtual Medical Device Prototyping

Major advances in medical device design and manufacture require extensive and expensive product cycles that usually include animal and clinical trials. Competitive pressures often force initiation of animal trials without sufficient understanding of parameter selections based on bench tests and other preliminary analysis. This seminar will suggest that these limitations can be reduced through advancements in simulation-based medical device design and manufacture, including CAD, CFD, and FEA. In the future, device/tissue interaction results can be visualized at the medical device designer’s workbench, powered by interactive supercomputing and 3D virtual environments. Since the designer is interested in comparing the impact of particular design decisions, visual analytics techniques optimized for data comparison are used to explore high-dimensional datasets from tens to hundreds of simulation runs. The vision is the development of an integrated simulation-based environment, from personalized anatomical data to the design, optimization, and manufacturing of a medical device.

Speaker: William L. Oberkampf, Consultant, Georgetown, TX, United States
Practical and Technical Challenges in Verification and Validation

The term “verification and validation” (V&V) means different types of activities to various engineering and technology communities. All of these activities, however, are focused on assessing and improving the credibility, accuracy, and trustworthiness of products or services. As a result, engineering societies were the first to begin to codify best practices by way of engineering standards to promote consistency, quality, and safety, especially public safety. V&V standards were initiated by the Institute of Electrical and Electronics Engineers (IEEE) in 1986. Since that time, the IEEE has published eight standards that deal directly with V&V practices and procedures. In 1987, the American Nuclear Society (ANS) began publishing V&V standards and has published two standards to date. In 1994, the U.S. Department of Defense (DoD), although not generally in the business of publishing engineering standards, published new V&V terminology that significantly diverged from the accepted terminology and practices. The DoD terminology and practices were written from the perspective of modeling and simulation (M&S), as opposed to the software V&V of the previously published standards. This marked a radical change in perspective toward V&V, and this dichotomy is still a major challenge today. The M&S perspective requires the development of practices and procedures that are quite different from a software perspective.

This presentation will briefly survey the IEEE, ANS, and DoD perspectives on V&V, as well as the contributions by the American Institute of Aeronautics and Astronautics, the American Society of Mechanical Engineers, the American Society of Civil Engineers, and the International Organization for Standardization. These major organizations, as well as others, are all involved in developing V&V engineering standards, but each has differing constituencies, goals, and perspectives. Recommendations will be made concerning how differing goals and perspectives can be respected while clarification and constructive discussion between the software V&V and M&S V&V communities are improved.

The second half of the presentation will deal with technical challenges facing M&S V&V. Although there are many open issues, the four topics that will be briefly discussed have a major impact on the future development and use of M&S for risk-informed decision making. First, development of V&V plans is recognized as a critical activity, but little work has been done in advising practitioners how they should develop a V&V plan and what they should include in it. For example, some of the open issues include the difficulty in specifying the application domain and the validation domain for each tier of the validation hierarchy; specifying predictive accuracy requirements for each tier of the hierarchy; and estimating the cost and schedule needed to satisfy the requirements in the V&V plan. Second, the value of validation metrics in uncertainty quantification is increasingly recognized, but there are widely differing opinions on how they should be constructed. For example, validation metrics have been constructed where the result is given in terms of dimensional units of the difference between model predictions and experimental measurements of the system response quantities of interest. Metrics have also been constructed where the result is a probability measure for the level of agreement between model predictions and experimental measurements. Third, there is growing interest in extrapolating estimated model form uncertainty to application conditions of interest where no experimental data are available. Although it has been common to ignore model form uncertainty, strong arguments are being made to utilize an estimate of model form uncertainty at the application conditions of interest. This estimate can then be included in representing the predictive capability of a model. And fourth, the term predictive capability is commonly used in uncertainty quantification for model predictions. There are, however, diverse perspectives on what the term means in the uncertainty quantification community and what it could or should mean to be of maximum benefit to managers and stakeholders in the context of risk-informed decision-making. These four topics will be briefly addressed and suggestions made for constructive dialogue on each.
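Editor's note: as a purely illustrative companion to the two metric constructions mentioned in the abstract above, the short sketch below computes a difference-type metric in the units of the response and a probability-type agreement measure for hypothetical data. The numbers and the choice of a one-sample t-test are assumptions of this note, not part of the talk.

```python
# Illustrative sketch (not from the talk): two common ways to construct a
# validation metric from a model prediction and repeated measurements of a
# system response quantity of interest.
import numpy as np
from scipy import stats

y_model = 102.0                                      # model prediction (hypothetical units)
y_exp = np.array([97.8, 99.5, 101.2, 98.9, 100.4])   # hypothetical repeat measurements

# (1) Metric in dimensional units: difference between prediction and the
#     experimental mean, with the standard uncertainty of that mean.
E = y_model - y_exp.mean()
u_mean = y_exp.std(ddof=1) / np.sqrt(len(y_exp))
print(f"difference metric E = {E:.2f} +/- {u_mean:.2f} (same units as the response)")

# (2) Metric as a probability measure of agreement: here, the p-value of a
#     one-sample t-test that the measurements are consistent with the prediction.
t_stat, p_value = stats.ttest_1samp(y_exp, y_model)
print(f"probability-style agreement measure: p = {p_value:.3f}")
```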
COFFEE BREAK/EXHIBITS Celebrity Ballroom 2
10:00am–10:30am
PLENARY SESSION: PART 2 Celebrity Ballroom 1
10:30am–12:30pm
Session Chair: Scott Doebling, Los Alamos National Laboratory, Los Alamos, NM, United States Session Co-Chairs: Christopher Freitas, Southwest Research Institute, San Antonio, TX, United States, Ryan Crane, ASME, New York, NY, United States Speaker: Patrick Roache, Consultant, Socorro, NM, United States
A Defense of Computational Physics: Popper’s Non-Verifiability vs. Computational Validation

Sir Karl Popper (1902-1994) is usually considered the most influential philosopher of science of the first half (at least) of the 20th century. His assertion that true science theories are characterized by falsifiability has been used to discriminate between science and pseudo-science. More recently, some critics of computational physics (broadly interpreted) have used his falsificationism assertion (that science theories cannot be verified but only falsified) to categorically and preemptively reject claims of Validation of computational models. Both of these assertions will be challenged, as well as the applicability of the second assertion to modern computational models such as climate models, even if it were considered to be correct for scientific theories.

The critiques of Popper's philosophy of falsificationism will be given at three levels: (A) philosophy of science, (B) empirical data on how science is actually conducted in the 21st century, and (C) applicability to computational physics modeling and the question of Validation. (A) Popper's philosophy of falsificationism involves an "if and only if" dependence on falsifiability. It is universally agreed that falsifiability, the potential of a statement to be proven false, is a necessary component of a science statement, but Popper's extreme position of falsificationism claims it is also sufficient. This is at variance with more recent approaches by philosophers of science to the pseudo-science demarcation, and will be shown to be inadequate by Popper's own criterion. Further examples will be given of the frankly ridiculous extremes of this philosophy, the common-sense consideration of which should convince critics that this philosophy is not applicable to computational Validation.

(B) An empirical study by Hansson demonstrates that Popper's paradigm of science practice is not used in the great majority of papers published in the high-quality sample considered (69 of 70 papers published in Nature in the year 2000). Popper's basic paradigm of science practice involves an effort to disprove a science hypothesis which must be in the form of "All x are y." As Hansson points out, this would eliminate exploratory research and most actual science practice. (C) Even if the previous critiques are not accepted, falsificationism will be shown to be not applicable to computational Validation. Popper himself allowed for verification of a science theory (corresponding roughly to present normative use of "Validation" for computational models) in the sense of enumerable data (Popper's "numerical universality"), and this is the case for normative use of the term Validation. Most importantly, the distinction between Popper's approach to science verification and our approach to computational Validation can be best seen by distinguishing between the concepts of Truth and accuracy. Once consideration is given to questions with more than a binary outcome, Truth is widely recognized to be a highly problematical concept in philosophy. Popper (and his contemporary adversaries, the logical positivists) were pre-occupied with Truth. Computational modelers are concerned with mere accuracy. For example, the Truth position (in its extreme manifestation) cannot accept multiple theories; only one theory can have a possibility of being True (although it cannot be proven so). By contrast, computational modelers may accept more than one model as being usefully accurate, e.g. RANS turbulence models. Another example, a provocative historical one, will be given of computational Validation of a theory that is not True. In a similar vein, the often mis-quoted assertion that "All models are wrong" will be challenged.

A summary contrast of Popper's falsificationism vs. computational Validation includes the observation that falsificationism, if it has any merits in its extreme form, is applicable at most to Kuhn's categories of highly ambitious "revolutionary science" or "crisis science," whereas computational Validation fits into the more modest "normal science." Of course, falsifiability is a tremendously important concept to science. But in the final analysis, Popper's philosophy of falsificationism, i.e. "falsifiability only," (a) is not defensible philosophically, (b) is not used significantly in (is not normative of) modern science practice, and (c) is neither applicable to modern computational physics modeling, nor endorsed by most of its practitioners. In any case, to avoid disputation and agonizing over what Popper or we may mean by Truth, we might grant his statement that “every scientific statement must remain tentative forever” in some rarefied and hopefully harmless sense, but note that Validation of computational physics models is thereby positioned in the same category as Newton’s laws of motion and gravity, Einstein’s theories, entropy, Darwinian evolution, conservation of mass, Fourier heat conduction, etc. We computational physics modelers are in good, respectable company.

This presentation is excerpted from the booklet "A Defense of Computational Physics" (2012) by the author.

Speaker: Douglas B. Kothe, Oak Ridge National Laboratory, Oak Ridge, Tennessee, United States
Predictive Simulation Challenges and Opportunities in CASL: The Consortium for Advanced Simulation of Light Water Reactors

The Consortium for Advanced Simulation of Light Water Reactors (CASL) is the first U.S. Department of Energy (DOE) Energy Innovation Hub, established in July 2010 for the modeling and simulation (M&S) of nuclear reactors. CASL connects fundamental research and technology development through an integrated partnership of government, academia, and industry that extends across the nuclear energy enterprise. The CASL partner institutions possess the interdisciplinary expertise necessary to apply existing M&S capabilities to real-world reactor design issues and to develop new capabilities that will provide the foundation for advances in nuclear energy technology. CASL applies existing M&S capabilities and develops advanced capabilities to create a usable environment for predictive simulation of light water reactors (LWRs). This environment, designated the Virtual Environment for Reactor Applications (VERA), incorporates science-based models, state-of-the-art numerical methods, modern computational science and engineering practices, and uncertainty quantification (UQ) and validation against data from operating pressurized water reactors (PWRs), single-effect experiments, and integral tests. With VERA as its vehicle, CASL develops and applies models, methods, data, and understanding while addressing three critical areas of performance for nuclear power plants (NPPs): reducing capital and operating costs per unit of energy by enabling power uprates and lifetime extension for existing NPPs and by increasing the rated powers and lifetimes of new Generation III+ NPPs; reducing nuclear waste volume generated by enabling higher fuel burnup; and enhancing nuclear safety by enabling high-fidelity predictive capability for component performance through the onset of failure.

CASL is focused on a set of Challenge Problems that encompass the key phenomena currently limiting the performance of PWRs, with the recognition and expectation that much of the capability developed will be broadly applicable to other types of reactors. To provide solutions to these Challenge Problems, CASL executes in six technical focus areas to ensure that VERA: (1) is equipped with the necessary physical and analytical models and multi-physics integrators; (2) functions as a comprehensive, usable, and extensible system for addressing essential issues for NPP design and operation; and (3) incorporates the validation and UQ needed for credible predictive M&S. The utility of VERA to reactor designers, NPP operators, nuclear regulators, and a new generation of nuclear energy professionals is an important CASL performance metric.

An overview of the CASL mission and vision is given, along with the CASL goals and strategies and its technical plan currently being executed by the six CASL focus areas. Particular emphasis is given to the CASL Validation and UQ (VUQ) Focus Area, which is working to provide tools and methodologies for the quantification of uncertainties and associated validation of VERA models and integrated systems, which are essential to the application of M&S to nuclear reactor design and safety. Challenges, opportunities, and current progress are illustrated with example simulation results achieved to date on the CASL Challenge Problems.

LUNCH
Celebrity Ballroom 2
12:30pm–1:30pm

UNCERTAINTY QUANTIFICATION, SENSITIVITY ANALYSIS, AND PREDICTION

2-1 UNCERTAINTY QUANTIFICATION, SENSITIVITY ANALYSIS, AND PREDICTION: PART 1
Sunset 3&4
1:30pm–3:30pm
Session Chair: Sanjeev Kulkarni, Boston Scientific, Maple Grove, MN, United States
Session Co-Chair: Ben Thacker, Southwest Research Institute, San Antonio, TX, United States

Quantification of Uncertainties in Detonation Simulations V&V2012-6024
Ma Zhibo, Wang Ruili, Zhang Shudao, Li Hua, Institute of Applied Physics and Computational Mathematics, Beijing, China

When systematic testing is difficult to implement, modeling and simulation (M&S) becomes an important approach for obtaining information in reliability certification, and the uncertainty of M&S plays a key role in certification frameworks such as QMU (Quantification of Margins and Uncertainties). Because the final performance of a detonation process is determined by the initial conditions and the behavior of the materials, M&S of a detonation system can be described by a functional, which implies that the total uncertainty of M&S is introduced from three sources: object modeling, physics modeling, and computing. The total uncertainty of M&S can therefore be decomposed into three parts. M&S is expected to undertake the prediction task, but before this it must go through verification and validation (V&V), in which uncertainty is quantified through comparison between the numerical result and a standard result of high confidence. Uncertainty quantification (UQ) can thus be divided into two steps. In the first, uncertainty is directly quantified through comparison with standard results for some object models, whose design variables form a verified domain for verification activity (or a validated domain for validation activity); in the second, other object models lie in an applied domain in which M&S is used for scientific prediction and no standard result is available for comparison.

In order to quantify the uncertainty corresponding to the applied domain, a hypothesis is put forward: the total uncertainty varies with the design variables of the object models, and, when the physics modeling and the computing method remain unchanged, the relationship between uncertainty and design variables is considered to remain effective as long as the design variables in the applied domain are not too far from the verified domain. The comparison information from some object models can therefore be used to determine this relationship, which is then extrapolated to the applied domain. A series of one-dimensional Riemann problems with a strong discontinuity at the interface of two regions is designed to demonstrate the method; 49 and 8 object models are designed in the verified and applied domains, respectively. The example shows that the uncertainty in the applied domain can be inferred from the uncertainty in the verified domain, but the precision of the quantification degrades when the object model goes far away from the verified domain.

Validation of Physics-Based Computer Simulations of Non-Stationary Random Processes via Hypothesis Testing in the Time Domain V&V2012-6022
Jeff Sundermeyer, Caterpillar, Inc., Mossville, IL, United States

Caterpillar products are tested and used in widely varying applications and environments. Even when executing a particular machine operation repeatedly (i.e. truck loading with a wheel loader) at the same location, with the same material pile, and with the same operator, there is inescapable ensemble variability that occurs between each pass of loading the truck. This variability will be present in any quantity of interest that one would care to measure during the event (e.g. cylinder or pin forces, machine velocity, etc.). In fact, even the length of the event itself will likely vary from one sample to the next. If a numerical simulation of the event is done using some mathematical model of the operator, machine, and earthen material, then a virtual time history can be produced corresponding to any variable that could be measured during the test. It is clearly unreasonable to expect the virtual time history to exactly match any one of the measured time histories. Rather, the best that can be hoped for is that the virtual time history is believably a member of the same population of time histories from which the test samples were randomly observed. The question of common membership in a population brings to mind the principles of statistical hypothesis testing. In this work, hypothesis-testing concepts (which are typically used with scalar random variables with no time dependence) are expanded to cover dynamic random processes that are not necessarily ergodic. Two classes of problems are considered: (1) one candidate curve versus n test curves, and (2) m candidate curves (perhaps created during a Monte Carlo simulation exercise) versus n test curves. The null hypothesis in both problem classes is that the time histories in the candidate set are members of the same population from which the test time histories were sampled. If too much statistical significance is associated with the candidate set in any of the statistical tests, then the null hypothesis of common membership must be rejected. It is intended that this hypothesis-testing scheme could be a part of a larger model verification and validation process that any virtual product development organization could employ.

In addition to the hypothesis-testing process, this work also illustrates how it is possible to randomly generate additional time histories from an initial seed set of test time histories that are all members of some common population. These statistically generated time histories will exhibit the same ensemble variability in event length, amplitude, frequency content, and phase that was observed in the test set. Artificially generated time histories can be used to explore the likelihood that certain features of interest might manifest themselves in the more extreme cases. They might also be used to propagate variability through a chain of analyses, such as trying to populate the distribution of fatigue damage rates associated with the ensemble variability in loading for a particular event. Generally speaking, they could be used whenever a larger sample of time histories is desired, but it is impractical to obtain them either through Monte Carlo simulation or via experiment.
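Editor's note: the following minimal sketch illustrates the general idea of feature-based hypothesis testing for ensemble membership described above. The RMS feature, the percentile-based acceptance band, and the synthetic data are assumptions of this note, not the author's procedure.

```python
# Minimal sketch (assumptions, not the author's code): test whether one simulated
# time history is plausibly a member of the same population as n measured ones by
# reducing each history to a scalar feature and checking its rank among the tests.
import numpy as np

rng = np.random.default_rng(0)
n_tests, length = 20, 500
test_histories = [rng.normal(0.0, 1.0, length) for _ in range(n_tests)]  # hypothetical measured data
candidate = rng.normal(0.0, 1.05, length)                                # hypothetical simulation output

def feature(history):
    """Scalar summary of a time history; here the root-mean-square amplitude."""
    return np.sqrt(np.mean(history**2))

test_features = np.array([feature(h) for h in test_histories])
cand_feature = feature(candidate)

# Two-sided empirical test: reject common membership if the candidate's feature
# falls in the extreme tails of the test-feature sample.
lo, hi = np.percentile(test_features, [2.5, 97.5])
reject = not (lo <= cand_feature <= hi)
print(f"candidate RMS = {cand_feature:.3f}, test range [{lo:.3f}, {hi:.3f}], reject H0: {reject}")
```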
Quantification of Dryout Power Uncertainty and Propagation in a Best-Estimate and Uncertainty Analysis for Loss of Flow Safety Analysis V&V2012-6030
Yuksel Parlatan, Zlatko Catovic, David Burger, Ontario Power Generation, Pickering, ON, Canada, Upendra S. Rohatgi, Brookhaven National Laboratory, Upton, NY, United States

This study focuses on developing a new methodology (1) to quantify dryout power prediction uncertainty using code validation exercises, and (2) to propagate it in Best Estimate and Uncertainty (BEAU) predictions. The method improves the quantification of uncertainties and results in significant improvement in predicted safety margins compared with the currently utilized method. A thermalhydraulics code is used to predict Dryout Power (DOP) using a Critical Heat Flux (CHF) correlation which is based on full-scale, reactor-representative experiments. The current model uses the CHF correlation fitting error (residuals) as the uncertainty in predicting CHF as part of the BEAU predictions. This was found to more than double-count the DOP prediction uncertainty and led to overly conservative predictions. Sensitivity studies identified the DOP prediction uncertainty as the most important uncertain parameter affecting the conclusions. This led to revisiting the basis of the modeling uncertainty for the DOP uncertainty.

In the new method, the DOP prediction uncertainty, both the bias and the variation in the bias, is objectively determined by comparing code predictions against the actual test results in validation exercises; in other words, by treating the code as a black box and simulating the experiments on which the CHF correlation in the code is based. This has resulted in a much smaller dryout power prediction uncertainty compared to the case using the CHF correlation uncertainty.

Further inspection of the validation exercise results suggested that the code has already implicitly propagated relatively large uncertainties in best-estimate predictions. The variation in the bias from the validation exercises for the relevant CHF test data points was found to exceed the aggregate dryout power measurement uncertainty, which is calculated based solely on propagation of experimental measurement uncertainties. Hence, it was determined that there was no need to explicitly propagate dryout power prediction uncertainty in the Best Estimate and Uncertainty results, as that would amount to double accounting of uncertainties. Significant improvement in results is expected in terms of BEAU analysis safety margins.

This study highlights the importance of sensitivity studies for ranking uncertain parameters, and of revisiting the basis of the most important uncertain parameters to remove conservatisms. The study further shows an example of how to do this for DOP predictions. By comparing the measurement uncertainties with prediction uncertainties, it also provides a means to eliminate double accounting of uncertainties.
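Editor's note: a toy numerical sketch of the bias-and-scatter comparison described above follows. The values and the ratio-based bias definition are hypothetical, not OPG's methodology or data.

```python
# Hedged sketch of the general idea (not OPG's methodology or data): estimate a
# code's prediction bias and the variation of that bias from validation runs,
# then compare the scatter with the aggregate measurement uncertainty.
import numpy as np

measured = np.array([5.10, 4.85, 5.40, 5.05, 4.95])   # hypothetical dryout powers (MW)
predicted = np.array([5.02, 4.90, 5.28, 5.12, 4.88])  # hypothetical code predictions (MW)

ratio = predicted / measured
bias = ratio.mean() - 1.0          # systematic over/under-prediction
bias_scatter = ratio.std(ddof=1)   # variation in the bias across validation points

u_measurement = 0.04               # hypothetical aggregate measurement uncertainty (relative)
print(f"bias = {bias:+.3%}, scatter of bias = {bias_scatter:.3%}")
if bias_scatter > u_measurement:
    print("scatter already exceeds measurement uncertainty: adding it again would double-count")
```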
Intrusive Analysis for Uncertainty Quantification, Verification, Validation of Simulation Models V&V2012-6036
Oleg Roderick, Mihai Anitescu, Argonne National Laboratory, Argonne, IL, United States

We develop intrusive, hybrid uncertainty quantification methods applicable to modern, high-resolution simulation models, with the goal of providing realistic margins of uncertainty for high-complexity engineering systems and assisting code validation and licensing. Our main application is simulation models of nuclear engineering.

We seek to construct surrogate models of uncertainty by augmenting sampling of the model over the uncertainty space with results of intrusive analysis that provide additional information from each model run. Our two main intrusive techniques are automatic differentiation, which provides sensitivity information on the model, and dimensionality reduction, applied to the model state, which provides imperfect, computationally cheap approximations to be sampled instead of running the full model. Our toolset now includes gradient-enhanced polynomial regression methods with improved selection of the polynomial basis; stochastic-process-based analysis that uses very small training samples and provides error bounds and statistical metrics for the prior regression model; and a framework for the use of dimensionality-reduced representations of both the uncertainty space and the model state space in the construction of convenient representations of uncertainty.

In practical applications, the use of intrusive, hybrid methods of uncertainty quantification provides an improvement of one to two orders of magnitude in point-wise error in comparison with such standard techniques as linear approximation and straightforward polynomial chaos expansion. Our approach is also effective on very small training sets, where high-order approximation by standard techniques is not possible. We are currently investigating the process of calibrating the imperfect (lower-fidelity) data to recover the statistical information on the model in situations where full model sampling is prohibitively expensive. This work has a practical focus on dimensionality reduction and calibration for Navier-Stokes flow models with uncertainty.
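Editor's note: the sketch below illustrates gradient-enhanced polynomial regression in one dimension, the simplest form of the value-plus-derivative fitting mentioned above. The cubic basis and toy model are assumptions of this note, not the authors' toolset.

```python
# Illustrative sketch of gradient-enhanced polynomial regression in one uncertain
# parameter: each model run contributes both a value row and a derivative row
# (e.g. from automatic differentiation) to the least-squares system.
import numpy as np

def model(x):            # hypothetical expensive model and its derivative
    return np.sin(x), np.cos(x)

x_train = np.array([0.0, 0.8, 1.6])
basis      = lambda x: np.array([1.0, x, x**2, x**3])      # cubic polynomial basis
basis_grad = lambda x: np.array([0.0, 1.0, 2*x, 3*x**2])   # its derivative

rows, rhs = [], []
for x in x_train:
    f, df = model(x)
    rows.append(basis(x));      rhs.append(f)    # value information
    rows.append(basis_grad(x)); rhs.append(df)   # sensitivity information

coef, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
x_new = 1.2
print("surrogate:", basis(x_new) @ coef, " true:", model(x_new)[0])
```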
Efficient Computation of Info-Gap Robustness For Finite Element Models V&V2012-6043
Christopher Stull, Francois Hemez, Los Alamos National Laboratory, Los Alamos, NM, United States

The finite element method has revolutionized engineering analysis for solid mechanics, heat transfer, and electromagnetics problems. At its inception in the late 1950s, engineers could not have predicted the variety or scale of problems this tool would come to address, but the general formulation of the finite element method, together with the considerable advancements in computing, has led to considerable and rapid growth of its application space over the past two decades. As with any tool, however, the finite element method can easily be abused when analysts lack a sufficient understanding of its myriad features (e.g. element formulations, analysis options, etc.). This research focuses on a decision analysis framework, anchored in info-gap decision theory, a theory that provides a mechanism by which to model and manage the analyst's lack of knowledge. The analyses espoused by this framework will aim to demonstrate whether or not predictions produced by a finite element model are sensitive to arbitrarily selected modeling assumptions, so that analysts may become better informed as to the risks associated with adopting these assumptions. To demonstrate the implementation of this framework, finite element models of varying levels of complexity will be constructed, where uncertainties associated with the parameters comprising these models will reflect the lack of knowledge on the part of the analysts. Additionally, emphasis will be placed on mitigating the computational burden of assessing the info-gap robustness of the most complex models by considering the adjoint solution of the finite element problem. It will be demonstrated that the adjoint methodology can offer many advantages to the analyst, depending upon the formulation of the info-gap problem.
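Editor's note: for readers unfamiliar with info-gap robustness, the toy sketch below computes the largest uncertainty horizon for which a worst-case prediction still meets a requirement. The one-parameter model and fractional-error uncertainty model are assumptions of this note, not the authors' finite element workflow.

```python
# Minimal sketch of info-gap robustness: find the largest uncertainty horizon alpha
# for which the worst-case prediction still satisfies a performance requirement.
import numpy as np

k_nominal = 1.0e4          # hypothetical stiffness-like model parameter
load = 250.0               # applied load
limit = 0.03               # requirement: displacement must stay below this value

def displacement(k):
    return load / k        # stand-in for a finite element prediction

def worst_case(alpha):
    # Info-gap fractional-error uncertainty model: k in [k0*(1-alpha), k0*(1+alpha)].
    k_low = k_nominal * (1.0 - alpha)
    return displacement(max(k_low, 1e-12))  # displacement grows as stiffness drops

# Robustness = largest alpha whose worst case still meets the requirement.
alpha_hat = 0.0
for alpha in np.linspace(0.0, 0.99, 1000):
    if worst_case(alpha) <= limit:
        alpha_hat = alpha
    else:
        break
print(f"info-gap robustness alpha_hat ~ {alpha_hat:.3f}")
```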
Probabilistic Analysis for Validation and Verification of Computational Models V&V2012-6044
Xiangyi (Cheryl) Liu, Dassault Systemes Simulia Corp, Providence, RI, United States, Nuno Rebelo, Dassault Systemes Simulia Corp, Fremont, CA, United States

Computational analysis models such as finite element analysis (FEA) and computational fluid dynamics (CFD) have been used widely in various industries to analyze critical product performance characteristics, such as fatigue life and safety factors. To apply these computational techniques, a set of input parameters is required, which may include geometry parameters, material properties, and boundary and loading conditions.

As all engineering and manufacturing processes are stochastic in nature, these input parameters all have built-in uncertainties. Manufacturing tolerances, as tight as they might be with modern precision manufacturing techniques, will affect the geometries of the computational models. Material properties may vary from one batch to another, especially for new alloy and polymer materials, the properties of which are very sensitive to fabrication conditions. Moreover, compared with geometry and material parameters, the boundary and loading conditions perhaps have a lot more variation, depending on the in-service or testing conditions of the product. Therefore, having a single set of fixed input parameters neglects the uncertainties and will not capture the distribution of performance characteristics observed in physical testing or in service, which also poses a challenge for validation and verification of computational models.

Monte Carlo simulation methods have long been considered the most accurate means of estimating the probabilistic properties of uncertain system responses resulting from known uncertain inputs. To implement a Monte Carlo simulation, a defined number of system simulations to be analyzed are generated by sampling values of the random variables, following the probabilistic distributions and associated properties defined for each. Combining Monte Carlo simulation methods with computational modeling allows incorporation of uncertainties in the input parameters and provides a more realistic stochastic view of computational modeling validation and verification.

In this study, using an Abaqus analysis of an implantable stent as an example, uncertainties in boundary conditions, specifically the size and compliance of the blood vessel where the stent is deployed, and their effects on the fatigue safety factor of the stent are simulated using Monte Carlo analysis in Isight. The example demonstrates how seamlessly Abaqus and Isight are integrated, and how easy it is to incorporate probabilistic analysis in computational modeling verification and validation.
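Editor's note: a minimal sketch of the Monte Carlo propagation described above is shown below, with a hypothetical closed-form response standing in for the Abaqus/Isight stent model. The input distributions and the response surface are assumptions of this note.

```python
# Hedged sketch of Monte Carlo propagation of input uncertainty: sample uncertain
# inputs from their distributions and collect the resulting distribution of a
# performance output (a real study would call the finite element model instead).
import numpy as np

rng = np.random.default_rng(42)
n_samples = 10_000

# Hypothetical uncertain inputs: vessel diameter (mm) and vessel compliance (a.u.)
diameter   = rng.normal(3.0, 0.15, n_samples)
compliance = rng.lognormal(mean=0.0, sigma=0.2, size=n_samples)

def fatigue_safety_factor(d, c):
    # Stand-in response surface in place of the FEA model.
    return 2.0 - 0.4 * (d - 3.0) - 0.3 * (c - 1.0)

fsf = fatigue_safety_factor(diameter, compliance)
print(f"mean FSF = {fsf.mean():.3f}, 1st percentile = {np.percentile(fsf, 1):.3f}")
print(f"P(FSF < 1.0) = {(fsf < 1.0).mean():.4f}")
```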
A comparison of the terms “total validation uncertainty” vs. “model form uncertainty.” It is argued that the uncertainty result from the validation exercise should not be designated as “model form uncertainty” because this term suggests that the uncertainty is a property or attribute of the model itself (in parallel to “model form error”) whereas in fact it is a property of the validation exercise, not of the model. There is a meaningful sense of “model form uncertainty”, e.g. the incertitude of using incompressible flow vs. compressible flow equations, but this sense of “uncertainty” is usually not expressible in the “error bar” sense.
VERIFICATION AND VALIDATION FOR FLUID DYNAMICS AND HEAT TRANSFER 4-1 VERIFICATION AND VALIDATION FOR FLUID DYNAMICS AND HEAT TRANSFER: PART 1 Wilshire A
The combination of aleatory and epistemic uncertainties. A suggested minor modification of the Total Validation Uncertainty formulation of ASME V&V 20 to make the treatment of numerical uncertainty more easily justifiable and conservative for highconsequence applications. Instead of combining experimental, numerical, and input parameter uncertainties using RSS summation, the numerical uncertainty is added arithmetically to the RSS summation of the others.
1:30pm–3:30pm
Session Chair: W. Glenn Steele, Mississippi State University, Mississippi State, MS, United States Session Co-Chair: Hugh Coleman, University of Alabama at Huntsville, Huntsville, AL, United States
10
TECHNICAL SESSIONS
WEDNESDAY, MAY 2
References: [1] ASME Committee V&V 20, (2009), ASME V&V 20-2009. Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer, 30 November 2009. (An ANSI Standards document)
A Generic Probabilistic Framework for Uncertainty Quantification and Validation Metrics of Engineering Analysis Models V&V2012-6103 Liping Wang, GE Global Research, Schenectady, NY, United States, Arun K. Subramaniyan, GE Global Research Center, Niskayuna, NY, United States, Don Beeson, Gene Wiggs, Vaira Saravanan, GE Aviation, Cincinnati, OH, United States
[2] Roache, Patrick J. (2009), Fundamentals of Verification and Validation, Hermosa Publishers, Albuquerque, September 2009. [3] Oberkampf, W. L., and Roy, C. J. (2010), Verification and Validation in Scientific Computing, Cambridge University Press, Cambridge, UK. [4] Roy, Christopher J. and Oberkampf, William L. (2011), A Comprehensive Framework for Verification, Validation, and Uncertainty Quantification in Scientific Computing, Comput. Methods Appl. Mech. Engrg., 200 (2011), pp. 2131–2144.
Model validation metrics are one of the key elements in verification and validation (V&V) for engineering analysis models. A validation metric provides a quantitative measure that characterizes the agreement between model predictions and observations. Due to the complexity of physics models, multiple sources of uncertainty, and lack of experimental data, it is often difficult to calculate accurate quantitative validation metrics for engineering analysis models. Currently, most engineering analysis models are validated using only deterministic error estimation. This estimation takes no account of uncertainty in the comparison. Therefore, conclusions drawn from this type of comparison are only qualitative, such as “fair agreement” or “generally good agreement.” Also, there is no unified validation metric that can be applied to all engineering models under the wide range of available simulation and test data scenarios. To achieve a true and more useful quantitative measure of the comparison between the simulation prediction and the experimental measurements, probabilistic and statistical methods must be applied to take into account the multiple sources of uncertainty and to handle different model and data scenarios.
Verification and Validation of Heat Transfer and Friction Factor in Tubes with Various Twisted Tape Inserts at One Phase Flow V&V2012-6097 Stanislav Tarasevich, Artur Giniyatullin, Anatoly Yakovlev, Andrey Zlobin, Kazan National Research Technical University named after A.N. Tupolev, Kazan, Russia
The present numerical and experimental work has been conducted in order to study the heat transfer and friction factor characteristics in tubes with smooth and ribbed twisted tape inserts. The numerical simulations were carried out using the commercial CFD software package ANSYS FLUENT. The mathematical modeling involves the prediction of flow and heat transfer behavior. The flow through the tube fitted with a twisted tape is turbulent and incompressible with constant properties, and is assumed to be steady. The working fluid is water. The finite volume method is employed to solve the governing partial differential equations.
This presentation will discuss a generic probabilistic framework for uncertainty quantification and validation metrics of engineering analysis models under a wide range of available simulation and test data scenarios. First, uncertainty quantification and sensitivity analysis of computer models using probabilistic methods will be discussed. This includes the enhanced Kennedy and O’Hagan Bayesian method for model calibration, model updating, prediction, and uncertainty quantification. Second, uncertainty quantification and sensitivity analysis using statistical methods to handle different test data scenarios will be discussed. These scenarios include both sparse and sufficient datasets with only measured inputs or with both inputs and outputs. Third, validation metrics using the enhanced Kennedy and O’Hagan Bayesian framework and statistical methods will be discussed. The confidence intervals and posterior distribution of the model inadequacy generated by the Bayesian approach are used to provide the desired quantitative validation metrics for the simulation model. In cases where some or all measured input values are unavailable in the test data, statistical methods are used for computing validation metrics. A discussion of interval analysis, the Kolmogorov-Smirnov test, the area metric, the Bayes factor, and frequentist metrics will be included. Finally, detailed engineering demonstration results will be provided.
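As one illustration of the kinds of metrics listed above, the Python sketch below computes a two-sample Kolmogorov-Smirnov statistic and a simple area metric between empirical CDFs; the prediction and observation samples are synthetic placeholders, not data from the presentation.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    predictions = rng.normal(10.0, 1.0, size=500)    # model output samples (synthetic)
    observations = rng.normal(10.4, 1.2, size=40)    # sparse test data (synthetic)

    # Two-sample Kolmogorov-Smirnov test.
    ks_stat, p_value = stats.ks_2samp(predictions, observations)

    # Area metric: integrate |F_pred - F_obs| over the pooled support.
    grid = np.sort(np.concatenate([predictions, observations]))
    F_pred = np.searchsorted(np.sort(predictions), grid, side="right") / predictions.size
    F_obs = np.searchsorted(np.sort(observations), grid, side="right") / observations.size
    area_metric = np.sum(np.abs(F_pred - F_obs)[:-1] * np.diff(grid))

    print(ks_stat, p_value, area_metric)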
The most popular turbulence models in engineering practice, including the standard k-ε turbulence model, the Renormalization Group (RNG) k-ε turbulence model, the realizable k-ε turbulence model, the Spalart-Allmaras turbulence model, and the Reynolds Stress Model (RSM), are examined in the study. The calculated heat transfer results for smooth twisted tapes were compared with the well-known Manglik and Bergles correlation for tubes with twisted tape inserts. The friction factor results were compared with the Ibragimov correlation. The Renormalization Group (RNG) k-ε turbulence model gives the best agreement with the Manglik and Bergles correlation. Both near-wall modeling and wall function approaches are examined for this model. The numerical analysis is made for smooth twisted tapes at y = 2.5, 4, and 6 (y = S/d, where S is the pitch of the tape for a 180° turn). The pitch between ribs is equal to S and S/2. The angle between the ribs and the tube axis is 45 degrees. The rib heights are h = 0.5, 1, and 1.5 mm. The range of Reynolds number is Re = 10,000–170,000. The time-independent incompressible Navier-Stokes equations and the various turbulence models are discretized using the finite volume technique. Second-order upwind and central difference schemes are used to model the convective and diffusive terms in the governing equations. For all calculations the segregated solution approach using the SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) algorithm is used. The turbulence intensity is kept at 10% at the inlet. A velocity inlet was used for the inlet boundary condition, and a pressure-outlet boundary condition was used to define the outlet static pressure.
Validating Expensive Simulations with Expensive Experiments: A Bayesian Approach V&V2012-6106 Arun K. Subramaniyan, GE Global Research Center, Niskayuna, NY, United States, Liping Wang, GE Global Research, Schenectady, NY, United States, Don Beeson, Gene Wiggs, Vaira Saravanan, GE Aviation, Cincinnati, OH, United States
Numerical data obtained for ribbed twisted tape inserts were compared with our experimental data. The CFD simulation results are in good agreement with existing correlations and our experimental data, despite the fact that the simulations sometimes did not account for all enhancement mechanisms and effects.
Validation is the task of ensuring that the simulation model represents the real physical phenomena within acceptable accuracy. Validating simulation models is a critical step before the model can be used for real applications such as design. The basic concept of validation has remained unchanged over centuries of scientific endeavor: compare simulation results with experimental
data. Ideally, if we had a statistically large number of simulations and experiments, it would be straightforward to use traditional statistical methods for validation. However, it is quite rare to have a sufficiently large number of simulations and experiments in real engineering applications. Modern system-level engineering models are typically complex and computationally very expensive. For example, a system-level non-linear structural finite element model of an aircraft engine can have 5–10 million nodes, with a 20-hour computational time for one run on 10 processors. An unsteady CFD simulation of a small sub-component like a gas turbine blade can take 6 weeks on 100 processors per simulation! Even with the fastest computers available today, it is impractical to perform a large number of simulation runs.
formulation, which allows for direct resolution of incident over-pressure wave reflections and transmissions, accounting for complex interior geometries with associated pressure shadowing. The unstructured grid elements, or control volumes, are tetrahedra. The solution method is based on solving the time-dependent Euler equations using a Riemann solver technique – a weighted averaged flux method. This paper presents the details of the model formulation as well as solution verification assessments and simulation validation for an actual detonation experiment in a multi-compartment structure. In this V&V assessment, the procedures defined in the ASME V&V 20 Standard are used to calculate the numerical uncertainty in a simulation, perform solution verification, and perform validation of a simulation using experimental data. An end-to-end application of the ASME V&V 20 Standard is presented.
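For readers unfamiliar with the solution verification step mentioned above, the sketch below shows one widely used grid-convergence (GCI) calculation of the kind associated with ASME V&V 20; the three solution values and the refinement ratio are made up and do not come from the paper.

    # Observed order of accuracy and grid convergence index (illustrative numbers).
    import math

    f1, f2, f3 = 101.2, 103.0, 108.5   # fine, medium, coarse grid solutions
    r = 2.0                            # constant grid refinement ratio

    p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)   # observed order of accuracy
    e21 = abs((f2 - f1) / f1)                                  # relative error, fine vs. medium
    gci_fine = 1.25 * e21 / (r**p - 1.0)                       # GCI with safety factor 1.25

    u_num = gci_fine * abs(f1)   # one common choice of numerical uncertainty estimate
    print(p, gci_fine, u_num)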
On the other hand, there are several challenges with acquiring good experimental data. For large engineering systems such as aircraft engines and gas turbines, it is not unusual to work with only a few experimental measurements. This is because system level experiments are few and far between mainly due to the associated costs. A system level ground test of an aircraft engine can cost $12 million and thus is not repeated often! Also, it is not always possible to measure all required quantities at all locations in the system due to space/cost constraints for adding instrumentation. For example, a multistage compressor might only have instrumentation to measure flow parameters at the inlet and exit of the compressor making it very difficult to validate a compressor model without measurements at each stage.
Validation of a New CFD and Fire-Modeling Code V&V2012-6167 David Keyser, Mark Butkiewicz, Survice Engineering Co, Lexington Park, MD, United States, Adalberto Castelo, Survice Engineering Co, Belcamp, MD, United States
The Problem: Fire Integrity of Advanced Ship Structures. One serious concern is the reliability of these new composite and aluminum materials under operating conditions when exposed to fire. ONR supported an SBIR, N07-098, to improve the ability to predict the residual structural integrity of these nontraditional shipbuilding structures during and after a damaging fire.
This talk will highlight the issues with validating expensive simulations with very sparse experimental data. Bayesian techniques have the advantage of working with sparse datasets. Their applicability to real engineering problems will be discussed. Kennedy & O’Hagan introduced the concept of calibrating simulation models while simultaneously computing the discrepancy between the calibrated simulator and experimental data. The computed discrepancy can be used for probabilistic validation. Validating computationally expensive engineering models of aircraft engines and gas turbines using Bayesian techniques will be showcased.
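A very small sketch of the Kennedy and O’Hagan idea referenced above is given below: a simulator parameter is point-calibrated and the remaining discrepancy is modeled with a Gaussian process. The toy simulator, the synthetic data, and the use of a point estimate (rather than a full Bayesian treatment) are simplifying assumptions for illustration only.

    import numpy as np
    from scipy.optimize import minimize_scalar
    from sklearn.gaussian_process import GaussianProcessRegressor

    def simulator(x, theta):          # cheap stand-in for an expensive model
        return theta * np.sin(x)

    rng = np.random.default_rng(2)
    x_obs = np.linspace(0.2, 3.0, 8)                                   # sparse experiments
    y_obs = 1.3 * np.sin(x_obs) + 0.1 * x_obs + rng.normal(0, 0.02, x_obs.size)

    # Step 1: calibrate theta (a full KOH treatment would sample its posterior).
    res = minimize_scalar(lambda t: np.sum((y_obs - simulator(x_obs, t))**2),
                          bounds=(0.5, 2.0), method="bounded")
    theta_hat = res.x

    # Step 2: model the residual discrepancy delta(x) with a Gaussian process.
    delta = y_obs - simulator(x_obs, theta_hat)
    gp = GaussianProcessRegressor(normalize_y=True).fit(x_obs.reshape(-1, 1), delta)
    delta_mean, delta_std = gp.predict(np.array([[1.5]]), return_std=True)
    print(theta_hat, delta_mean, delta_std)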
The Objective: Develop a software tool to model the residual structural integrity of structures during and after a fire. This could include fire growth and spread, convection, thermal conduction through the structure, the resulting changes in material properties, burning and off-gassing of the structure, softening, fracture, creep, and charring.
The Approach: SURVICE Engineering Company was awarded all three phases of this effort. The Apollo™ core CFD module was further advanced to implement multiple CPU core processing. In addition, methodologies for combustion and convective and radiative heat transfer were developed and integrated into the Apollo™ fire-sustainment module.
Application of V&V 20 to the Simulation of Propagating Blast Loads in Multi-compartment Structures V&V2012-6258 Christopher Freitas, Southwest Research Institute, San Antonio, TX, United States
Verification and Validation: The validation test cases were executed to provide comparisons of Apollo’s™ flow and combustion outputs to other M&S benchmarks, including the JASPO Fire Prediction Model and the NIST Fire Dynamics Simulator. This presentation will give a top-level overview of these comparisons, and then go into detail on two test cases which provide quantitative assessments of accuracy for the core CFD of Apollo™.
A Computational Fluid Dynamics (CFD) based methodology has been developed for the simulation of internal blast propagation through complex, multi-compartment structures. Specifically the BPM DDM – internal Blast Propagation Model Distributed Data Module – is a CFD code that simulates the detonation of AIREX threats and over-pressure expansion of the detonation products in the spatial region interior to a U.S. Navy surface combatant. The BPM DDM is one of many DDMs all incorporated into the Navy’s Advanced Survivability Assessment Program (ASAP). ASAP is a computational tool that performs time dependent, deterministic simulations of threat engagements of U.S. Navy surface combatants. Survivability and vulnerability assessments are then made through a probabilistic evaluation of a family of deterministic simulations in which input parameters are varied per simulation. The BPM DDM is an inviscid Computational Fluid Dynamics (CFD) code solving the conservation equations of fluid mechanics on an unstructured tetrahedral grid system. The numerical solution of these equations allows for the prediction of the dynamic, three-dimensional, time-dependent evolution or expansion of the explosive gas-phase products throughout the multi-compartment ship structure. Given threat initial data such as locations, masses, and detonation times, the BPM DDM resolves the time-dependent pressure, density, energy and velocity fields resulting from threat detonations. Ship geometry is resolved using an unstructured grid
i. ASME Nozzle Flow Test Case: A superheated steam flow test case was used to validate the core CFD module. It is one of several well-known, semi-empirical fluid dynamics problems used to verify and validate the results of the core CFD portion of Apollo™. The validating results are calculated from ASME’s Performance Test Code 19.5, Flow Measurement.
ii. Vortex-Shedding Test Case: The second of the critical tests conducted on the Apollo™ CFD was modeling the fluid dynamics of the oscillating flow caused by the interference of a cylinder placed normal to the flow of a fluid across it. The pioneering work was carried out by Vincenc Strouhal, who found that the sound produced by a wire was directly related to the vortex-shedding frequency and led to the dimensionless Strouhal number:
St = f d / U (1)
where f is the vortex-shedding frequency, d the cylinder diameter, and U the flow velocity. The earliest fluid-dynamical studies of the vortex-shedding processes from circular cylinders are usually attributed to von Kármán, who observed the characteristic flow patterns now known as the von Kármán vortex street.
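As a small illustration of Eq. (1), assuming its textbook form with a nominal St of about 0.2 for a circular cylinder in the subcritical regime (the velocity and diameter below are arbitrary):

    # Expected vortex-shedding frequency from the Strouhal relation St = f*d/U.
    St = 0.2      # assumed Strouhal number
    U = 10.0      # free-stream velocity, m/s (illustrative)
    d = 0.05      # cylinder diameter, m (illustrative)

    f_shedding = St * U / d   # Hz
    print(f_shedding)         # 40 Hz for these illustrative values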
computer time and memory; however, the results obtained provide very good accuracy and indicate that the code is well suited to predicting the outcomes of future explosive detonations.
Comparison of Numerical Simulation and Small Scale Compressed Gas Blast Generator Experiments V&V2012-6087 Matthew V. Grimm, Brandon J. Hinz, Karim H. Muci-Küchler, South Dakota School of Mines and Technology, Rapid City, SD, United States
VALIDATION METHODS FOR IMPACT AND BLAST 5-1 VALIDATION METHODS FOR IMPACT AND BLAST Wilshire B 1:30pm–3:30pm Session Chair: Vicente Romero, Sandia National Laboratories, Albuquerque, NM, United States Session Co-Chair: Dawn Bardot, HeartFlow, Redwood City, CA, United States
Exposure to a shock wave and the subsequent overpressure created by an explosive blast can result in serious injury to the human body even if external signs of trauma are not present. Gaining a better understanding of the mechanisms contributing to those injuries can result in the design of personal protective equipment (PPE) for blast protection. Compressed gas blast experiments can be used to explore the mechanical response of PPE systems and instrumented surrogate headforms to blast loading scenarios in a laboratory environment. Likewise, numerical simulations can be used to quickly study multiple variables involved in a compressed gas blast but experimental data is needed to validate simulation results.
Numerical Simulation of a 100-ton ANFO Detonation V&V2012-6077 Paul Weber, Kyle K. Millage, Applied Research Associates, Inc., Arlington, VA, United States, Joseph E. Crepeau, Henry J. Happ, Charles E. Needham, Applied Research Associates, Inc., Albuquerque, NM, United States, Yefim Gitterman, Geophysical Institute of Israel, Lod, Israel This work describes the results from a government-owned hydrocode (SHAMRC) that simulated an explosive detonation experiment with approximately 100,000 kg of ANFO and 2,080 kg of Composition B (CompB). The explosive charge was approximately hemispherical and detonated in desert terrain. Many types of observation equipment were used to collect the data, including pressure gauges, seismometers, and cameras.
This paper presents an experimental setup designed to test a material’s response to blast loading using a small-scale compressed gas blast generator. The compressed gas blast generator is an open-ended shock tube which creates a shock wave when the diaphragm that separates the high pressure and low pressure (ambient air) regions ruptures. Different diaphragm materials and driver gases were used to adjust the strength of the shock wave and the subsequent overpressures in the free field. The overpressures were measured with a piezoelectric pressure sensor positioned off-axis from the exit of the compressed gas blast generator in order to preclude the interaction of the discharge flow with the free-field overpressure measurements.
SHAMRC has been thoroughly validated and is particularly applicable to high-explosive detonations. Both two-dimensional (2D) and three-dimensional (3D) simulations were conducted. The 2D simulation initial conditions assumed that the explosive masses were concentric hemispheres with the CompB inside the ANFO; the explosive mass was consistent between the experiment and the simulation. The 2D assumption allowed for much quicker run times and, as expected, captured the major experimental phenomena. The 3D simulation initial conditions assumed that the experimental geometry could be represented as quarter-hemispherical with symmetric boundary conditions to reduce the run time (a slight deviation from the experiment). The 3D representation captured all relevant features of the experimental geometry (including the CompB being interspersed in the main ANFO mass).
A three-dimensional Eulerian model of the experiment described above was created in Abaqus/Explicit to compare with the experimental results. Since small elements were required to accurately simulate the propagation of the shock wave in the air, only one quarter of the experimental setup was modeled to reduce the computational time needed to run the model. The model used linear eight-node Eulerian brick elements, and fluid flow was restricted perpendicular to the model’s two planes of symmetry via zero-velocity boundary conditions in order to enforce symmetry. All other external free-field boundaries were assigned free inflow/outflow boundary conditions to prevent the formation of a reflected shock and to facilitate a clean incident shock profile. Peak overpressure, positive phase duration, and specific impulse for the experiments and the numerical model were compared, and good agreement was obtained between them.
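The sketch below shows how the three comparison quantities named above can be extracted from a pressure-time trace; the Friedlander-like trace is a synthetic placeholder, not experimental or simulated data from this work.

    import numpy as np

    t = np.linspace(0.0, 0.01, 2001)                     # time, s
    p = 50e3 * (1 - t / 0.004) * np.exp(-t / 0.002)      # overpressure, Pa (synthetic)

    peak_overpressure = p.max()
    positive = p > 0.0
    t_pos = t[positive]
    positive_phase_duration = t_pos[-1] - t_pos[0]                  # s
    specific_impulse = np.sum(p[positive][:-1] * np.diff(t_pos))    # Pa*s, rectangle rule

    print(peak_overpressure, positive_phase_duration, specific_impulse)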
The 2D and 3D simulations both yielded overpressure and overpressure impulse waveforms that agreed qualitatively with experiment, including the capture of the secondary shock observed in the experiment. The 2D simulation predicted the primary shock arrival time correctly, although the predicted secondary shock arrival time was early. 2D predicted overpressure impulse waveforms agreed very well with the experiment, especially at later calculation times, and prediction of the early part of the impulse waveform (associated with the initial peak) was better quantitatively for 2D compared to 3D. The 3D simulation also predicted the primary shock arrival time correctly, and secondary shock arrival times in 3D were closer to the experiment than the 2D results. The 3D predicted overpressure impulse waveform had better quantitative agreement than 2D for the later part of the impulse waveform (associated with the secondary shock). Zone-size sensitivity studies were conducted, and increasing the zone count from 16 to 225 million in 2D and from 512 to 1,728 million in 3D did not affect the results significantly.
Comparison of Numerical Simulation and Flow Visualization Experiments of Air Flow into Temporary Ballistic Wound Cavities V&V2012-6098 Brandon J. Hinz, Karim H. Muci-Küchler, South Dakota School of Mines and Technology, Rapid City, SD, United States A common type of battlefield injury involves high speed projectiles of different sizes and shapes hitting the human body, particularly the extremities. Gaining a better understanding of the mechanisms involved in those injuries can result in better strategies for providing medical care. One aspect that still requires additional research is the contamination of ballistic wounds resulting from the suction created during the formation of the temporary wound cavity. Studies published in the open literature have shown that in
The results of this study show that SHAMRC may be used reliably to predict phenomena associated with this 100-ton detonation. The ultimate fidelity of the simulations was limited by both
perforating projectile wounds, airborne debris such as skin, cloth, and soil particles is introduced into the wound either by the projectile or by the suction created in the temporary wound cavity. The debris can transport bacteria that lead to infection, which results in delayed wound healing or more serious complications. The amount of suction and ultimately the bacteria distribution in ballistic wounds can vary depending on parameters such as projectile velocity, caliber, and mass. Numerical simulations can be used to study the suction effect, but experimental data are needed to validate the simulation results.
composite and steel components. The analytical response does vary from the experimental behavior for an all-composite structure undergoing large deformation. However, the peak deformation is accurately simulated.
This paper presents an experiment developed to provide an initial assessment of numerical simulations of the air flow into perforating projectile wounds. The experiment used rectangular prism targets made of soft tissue surrogate materials (either PERMA-GEL or ballistic gelatin). These targets were shot with 0.45-in caliber lead projectiles fired from air rifles at speeds up to 200 m/s. The air flow into the temporary cavity of the targets was visualized using a vapor curtain placed near the projectile entry location. High speed digital cameras captured the formation of the temporary wound cavity and the movement of the vapor curtain during the tests. Quantitative data was extracted from selected frames of the high speed videos using 2-D motion analysis software. To simulate the experiment, a Coupled Eulerian-Lagrangian (CEL) model was created in Abaqus/Explicit. In the model, the mechanical behavior of the tissue surrogate target was represented using a hyper-elastic constitutive relation. A small pre-made cylindrical channel was added to the tissue surrogate targets to avoid using techniques such as element erosion when modeling the passage of the projectile through the material. Qualitative and quantitative results from the model were compared with the results from the laboratory tests.
Space frames have been commonly used in vehicles to enhance the structural strength of the vehicle while reducing its overall weight. When a vehicle, with an internal space frame structure, is subjected to an impact load, the individual frames and joints of the space frame structure play a critical role in mitigating the generated shocks. In order to properly design the space frame structure, it is important to predict how these shocks move through the members of the space frame. While performance of space frame structures under static loads is well-understood, research on space frame structures that are subject to impact loading is minimal.
Finite Element Validation of Low Impact Response On A LabScale Space Frame Structure V&V2012-6171 Jagadeep Thota, Mohamed Trabia, Brendan O’Toole, University of Nevada Las Vegas, Las Vegas, NV, United States
In this work, a lab-scale cubical space frame structure made of hollow square members connected through bolted joints is studied. The structure is shaped to allow acceleration signals to travel in three orthogonal directions. Low-velocity, non-destructive impact tests are carried out on this structure using a force hammer. The resulting acceleration signals at the identified locations on the frame members are recorded. Two finite element (FE) models of the lab-scale space frame structure are developed: a model created completely from solid elements and a model comprising only beam elements. These FE models were subjected to boundary conditions and impact loads similar to those in the experiment. Acceleration signals from the FE models were compared with the experimental data. The natural frequencies of the space frame structure from the experiment and from the FE models were also compared. The beam-element FE model predictions matched the experimental data better.
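Two simple comparison measures of the kind described above are sketched below; they are generic illustrations (percent frequency error and a normalized RMS error on acceleration histories), not necessarily the measures used by the authors, and all numbers are placeholders.

    import numpy as np

    f_test = np.array([18.2, 44.7, 61.3])     # measured natural frequencies, Hz (placeholder)
    f_fe = np.array([18.9, 43.1, 64.0])       # FE-predicted frequencies, Hz (placeholder)
    freq_error_pct = 100.0 * (f_fe - f_test) / f_test

    time = np.linspace(0, 10, 500)
    a_test = np.sin(time)                      # placeholder acceleration signals
    a_fe = 0.95 * np.sin(time + 0.05)
    nrmse = np.sqrt(np.mean((a_fe - a_test)**2)) / (a_test.max() - a_test.min())

    print(freq_error_pct, nrmse)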
Keywords: Air flow visualization, numerical simulation of ballistic wounds, contamination of ballistic wounds.
Analytical Methods for Blast Loaded Composite Structures V&V2012-6170 Stacy Nelson, Brendan O’Toole, Jagadeep Thota, University of Nevada, Las Vegas, Las Vegas, NV, United States
Verification of Dynamic Impact Response of Metal Cask under Aircraft Engine Crash V&V2012-6185 Sanghoon Lee, Woo-Seok Choi, Ki-Young Kim, Je-Eon Jeon, Ki-Seog Seo, Korea Atomic Energy Research Institute, Daejeon, Korea (Republic)
Explosion-resistant containers and chambers could eventually become important for the safe storage and disposal of explosive materials and munitions. Lightweight explosion-proof vessels made of fiber-reinforced composite materials are of specific interest, as their decreased mass allows for ease of transportability. When developing fiber-reinforced composite structures for dynamic loading, such as explosion-resistant containers, efficient analysis techniques are required. The objective of this study is to develop an analytical approach that can be implemented when designing blast-loaded composite structures. Efficient analysis procedures to predict both the elastic and post-failure responses of dynamically loaded composite structures are developed. Three case studies are examined to verify the accuracy of the developed analytical methods, specifically their ability to predict the peak strains and displacements in blast-loaded composite structures. In each of the three cases, an open-ended cylinder of composite material, with or without an inner liner of steel, is subjected to a centrally placed, internal explosive load. A series of explosive tests is also completed with cylinders closely mimicking the structures that are analytically modeled. A comparison of the analytical and experimental results indicates that the developed analysis techniques should accurately simulate the deformation behavior and predict the peak strains in blast-loaded composite structures consisting of both composite and metallic materials. The analytical methods very accurately simulate the elastic response of a blast-loaded composite structure as well as the large deformation response of a structure consisting of
After the 9/11 terror attack, the physical protection of hazardous facilities such as spent nuclear fuel storage systems against a targeted aircraft crash has gained increasing attention. Several systems for spent nuclear fuel storage have been developed and are currently in operation, such as concrete casks, metal casks, concrete modules, storage buildings of various types, and so on. Some countries have tried to assess the safety of spent fuel storage systems against a targeted aircraft crash by numerical simulation and/or physical tests using full-scale or reduced-scale models. The logic, procedure, and methodology used in the assessment are of great importance, and verification of the assessment result is also an integral part of the assessment. However, verification is challenging because of the complexity of the engineering systems involved and the high cost of testing. In this research, the structural integrity of the dual-purpose metal cask currently under development by the Korea Radioactive Waste Management Corporation (KRMC) is evaluated through dynamic impact analyses, and the results are verified by tests under high-speed missile impact representing the targeted aircraft crash condition. The impact condition is carefully chosen through a survey of accident cases and recommendations from the literature.
The missile impact velocity is set to 150 m/s, the measured velocity of the airplane that struck the Pentagon in the 9/11 terror attack. A large commercial aircraft currently operated in Korea is selected, and a simplified missile simulating the engine of the aircraft is designed from an impact load history curve provided in the literature. Among the possible impact orientations, one is chosen that would cause the maximum damage to the containment boundary of the cask, which stands freely on a concrete pad. The missile hits the upper part of the cask near the lid closure.
buildings [5-7]. As building automation systems (BASs) using Ethernet and TCP/IP networking technologies are becoming a common feature of commercial buildings in the U.S., installing additional sensors and integrating more intelligent building control systems into BASs can be easily accomplished. With this evolution in building control systems, however, assuring reliable and accurate control of a system becomes very challenging in light of potential uncertainties and signal errors. By adding uncertainty quantification and verification and validation (V&V) procedures to real-time system operations, the reliability and accuracy of system controls and operations can be improved by monitoring and removing errors/uncertainties in the model prediction. This concept can eventually be developed into an automated fault diagnostic and correction tool for building control systems.
In the analyses, the focus is on the evaluation of the containment boundary integrity of the metal cask. The metal cask consists of a cask body made of carbon steel with a bolted closure, a canister made of stainless steel with a welded closure, and a dummy weight simulating the fuel basket. The cask body and its lid closure form the containment boundary of the cask. Implicit dynamic analysis is performed using LS-DYNA. Material properties for the cask and the impacting missile are obtained from tests using three types of testing equipment to cover a wide range of strain rates. The test data are appropriately processed for input to LS-DYNA. The bolt prestresses are measured using bolt strain gauges and reflected in the analysis by a dynamic relaxation process. The analysis shows that the cask gains quite significant rotational and translational velocity due to the impact and flies in the direction of impact. The bolted lid closure suffers significant damage, but no bolt failure or lid opening that would lead to leakage of the cask contents is expected from the simulation.
In this paper, a methodology to estimate uncertainties and to validate/verify the model predictions against measurements in the real-time operations of data-driven, model-based building control systems is explored. The total uncertainties on the model predictions are estimated based on the sensitivities and systematic/random errors associated with each argument [8-10]. An example of an economizer control for a building ventilation system is provided to demonstrate the details of the uncertainty analysis and V&V procedure. References: [1] Galasiu, A.D. and Veitch, J.A., 2006, Occupant preferences and satisfaction with the luminous environment and control systems in daylit offices: a literature review, Energy and Buildings, 38 (7), pp. 728-742.
The analysis results are compared with the results of tests using a 1/3-scale model, and they show very good agreement. Since the test is very expensive and could not be repeated, a statistical approach to the verification is practically impossible for this case. Instead, mechanical responses at several points on the cask, such as strain and acceleration histories, are measured and compared with the responses obtained by simulation to verify and support our assessment using numerical simulation. The leak rate of the containment boundary is finally measured using helium mass spectrometry, and it shows that the lid closure, including the bolts, suffers significant damage but does not breach the containment boundary, which supports our conclusion from the numerical simulation.
[2] Schuman, J., Rubinstein, F., Papamichael, K., Beltran, L., Lee, E.S., and Selkowitz, S., 1992, Technology reviews: Lighting systems, Technical Report No. LBL-33200, Lawrence Berkeley National Laboratory, CA. [3] Fisk, W. F. and De Almeida, A. T., 1998, Sensor-based demand-controlled ventilation: a review, Energy and Buildings, 29 (1), pp. 35-45. [4] Emmerich, S. J. and Persily, A. K., 1997, Literature Review on CO2-Based Demand-Controlled Ventilation, ASHRAE Trans., 103 (2). [5] Yun, K.T., Cho, H., Luck, R., and Mago, P.J., 2011, Real-Time Combined Heat and Power (CHP) Operational Strategy Using a Hierarchical Optimization Algorithm, J. Power and Energy, 225 (4), pp. 403-412. [6] Cho, H., Luck, R., and Chamra, L.M., 2010, Supervisory Feed-Forward Control for Real-Time Topping Cycle CHP Operation, J. Energy Resource. Technol., 132 (1). [7] Henze, G.P., Dodier, R.H., Krarti, M., 1997, Development of a predictive optimal controller for thermal energy storage systems, Int. J. HVAC&R Res., 3 (3), pp. 233-264.
VERIFICATION AND VALIDATION FOR ENERGY, POWER, BUILDING, AND ENVIRONMENTAL SYSTEMS
[8] ISO, 1993, Guide to the Expression of Uncertainty in Measurement (corrected and reprinted 1995), International Organization for Standardization.
9-1 VERIFICATION AND VALIDATION FOR ENERGY, POWER, BUILDING, AND ENVIRONMENTAL SYSTEMS Celebrity Ballroom 1 1:30pm–2:15pm
[9] ASME, 2005, Test Uncertainty, ASME PTC19.1-2005, American Society of Mechanical Engineers. [10] Coleman, H. W. and Steele, W. G., 1999, Experimentation and Uncertainty Analysis for Engineers, 2nd Edition, John Wiley & Sons.
Session Chair: Godfried Augenbroe, Georgia Institute of Technology, Atlanta, GA, United States Session Co-Chair: Heejin Cho, Pacific Northwest National Laboratory, Richland, WA, United States
Engine Performance Model Calibration with Unmatched Data V&V2012-6232 Michael Gorelik, Jacob Obayomi, Jack Slovisky, Dan Frias, Honeywell Aerospace, Phoenix, AZ, United States, John McFarland, Michael Enright, David Riha, Southwest Research Institute, San Antonio, TX, United States
Uncertainty Quantification of Model-Based Building Control Systems V&V2012-6225 Heejin Cho, Pacific Northwest National Laboratory, Richland, WA, United States, Godfried Augenbroe, Georgia Institute of Technology, Atlanta, GA, United States
Model calibration is the practice of using observations of a model output to update or enhance model predictions, usually through the process of updating the values of uncertain model inputs. Most formulations that have appeared in the literature are based on the nonlinear regression framework. This framework, however, operates under the assumption that for each observation of an output (Y), the corresponding values of the independent inputs (X) are also known (in other words, the input and output data are matched with each other). This paper presents a case study involving the calibration of a gas turbine engine performance model, in which the assumption that the independent inputs have known
A complex building energy system often requires a data-driven, model-based building control system to enable energy-efficient, cost-effective, and environmentally friendly operations. A few examples are: an occupancy sensor and/or photosensor based lighting control system [1-2], a CO2 occupancy sensor based demand-controlled ventilation system [3-4], and a predictive model based power generation/thermal storage control system for
values does not hold. The case study will use experimental observations of engine performance (horsepower and fuel flow) from a population of engines. Due to manufacturing variability, the compressor blades for the engines show unit-to-unit variations in their geometry. Blade inspection data are available for the characterization of these geometry variations, and CFD analysis can be linked to the engine performance model, so that parametric blade geometry can be considered a model input. The objective of the case study is then to use the engine performance and blade geometry data to update other model inputs, such as efficiency adders and turbine tip clearances. The challenge is that the correspondences between the performance measurements and the blade geometry inspection data are not known; that is, it is not known which performance values correspond to which blade geometry values. This formulation has significant practical applications, for instance enabling the use of a substantial set of expensive legacy engine test data for model calibration.
This overview paper and presentation set the stage for the other technical presentations in the session: Uncertainty Analysis of Building Performance Assessments.
VERIFICATION AND VALIDATION FOR ENERGY, POWER, BUILDING, AND ENVIRONMENTAL SYSTEMS 9-2 PANEL SESSION: UNCERTAINTY ANALYSIS OF BUILDING PERFORMANCE ASSESSMENTS Celebrity Ballroom 1 2:15pm–3:30pm Session Chair: Godfried Augenbroe, Georgia Institute of Technology, Atlanta, GA, United States Session Co-Chair: Heejin Cho, Pacific Northwest National Laboratory, Richland, WA, United States Validating CFD Models for the Carbon Capture Simulation Initiative V&V2012-6153 Emily Ryan, Boston University, Boston, MA, United States, Tadeusz Janik, Xin Sun, Pacific Northwest National Laboratory, Richland, WA, United States, Rick Wessel, The Babcock & Wilcox Company, Barberton, OH, United States
This paper will present a unique calibration approach, not based on the nonlinear regression framework, in order to address the issue of unmatched input/output data. A probabilistic engine performance model will be developed, and the predicted distribution of engine performance will be calibrated against the observed distribution. The calculations will be accelerated through the use of Gaussian process response surface approximations and the First Order Reliability Method (FORM). A Bayesian formulation will also be used in order to gain additional insight into the uncertainty associated with the solution. The above analysis is part of a case study developed under DARPA funding, which is gratefully acknowledged.
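The sketch below illustrates the distribution-to-distribution calibration idea described above in a deliberately simplified form: a single input parameter is tuned so that the predicted output distribution matches the observed one, using a Wasserstein distance as the mismatch measure. The toy performance model, the distance choice, and all numbers are assumptions; the actual work uses Gaussian process response surfaces, FORM, and a Bayesian formulation.

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import wasserstein_distance

    rng = np.random.default_rng(3)
    observed_hp = rng.normal(1000.0, 25.0, size=200)   # observed performance (synthetic)
    blade_geom = rng.normal(0.0, 1.0, size=2000)       # unmatched geometry variation (synthetic)

    def predicted_hp(efficiency_adder):
        # Toy performance model standing in for the engine model plus CFD link.
        return 990.0 + 10.0 * efficiency_adder + 20.0 * blade_geom

    res = minimize_scalar(lambda a: wasserstein_distance(predicted_hp(a), observed_hp),
                          bounds=(-5.0, 5.0), method="bounded")
    print(res.x)    # calibrated efficiency adder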
The Carbon Capture Simulation Initiative (CCSI) is a partnership among national laboratories, industry, and academic institutions that is developing state-of-the-art computational modeling and simulation tools to accelerate the commercialization of carbon capture technologies from discovery to development, demonstration, and ultimately widespread deployment. Part of the CCSI program includes the development of device-scale computational fluid dynamics (CFD) simulations. The CFD simulations are investigating the operation of large scale, multiphase fluidized bed and moving bed reactors for the removal of CO2 from the exhaust gas of coal fired power plants. In order to use the information generated by the CCSI simulations with confidence, it is critical that the physical and mathematical models are continually and methodically validated with experimental data at various scales. At the V&V2012 Symposium we will present our work on developing a validation plan to identify and quantify the CFD simulation errors and uncertainty in conceptual and computational models measured in relation to experimental data. The validation efforts will be supported by experimental data from a variety of sources, including facilities at the National Energy Technology Laboratory (NETL) and NETL-supported industrial experiments at external locations, and from available literature data and published validation studies. A multi-tier validation methodology is proposed which divides the complexities of the full scale carbon capture system into simpler sub problems. We will start by validating simple unit problems which represent pieces of the multi-physics of the entire carbon capture system. We will next move on to considering the effects of upscaling and coarse graining methodologies which are used to deal with the geometrical issues associated with simulating a full scale carbon capture system. Validation of the upscaling focuses on the development of filtered models and their validation with experimental data and fine scale models. Next we will consider decoupled and coupled laboratory and pilot scale validation cases to investigate the effects of using the filtered models in larger scale systems and the combined effects of upscaling and unit problem coupling on the accuracy of the simulations. The final tier of the validation plan is the intermediate and full scale carbon capture systems. The overall goal of all of the validation tasks is to quantify our confidence in the predictions of the full scale carbon capture simulations. There is no validation data available for a full scale carbon capture system. Therefore, we will use the information gained from the smaller scale validation problems to quantify our confidence in our full scale simulations. Based on the individual tiers
Risk-conscious Design and Retrofit of Buildings for Low Energy V&V2012-6244 Godfried Augenbroe, Georgia Tech, Atlanta, GA, United States The paper gives an overview of the research conducted during the first two years under the NSF-EFRI-SEED grant “Risk-conscious design and retrofit of buildings for low energy.” The research started from the premise that current assessment of building performance is predominantly based on deterministic methods, which has led to a building stock that in many instances does not meet its predicted performance. The ongoing research not only offers a departure from current practice but changes our thinking about building commissioning, performance contracting, auditing, control, and operation. It first highlights the recent surge in the need for uncertainty analysis in many other disciplines and the development over the last decade of robust methods to quantify the impact of assumptions and model simplifications on the outcomes. We discuss how an uncertainty analysis takes the uncertainty distributions of the dominant model parameters as inputs. The paper shows how the uncertainty quantification (UQ) of model parameters can be accomplished for current building energy models, using example parameters such as local wind speed and wind pressure, and active mass in lumped models of spatial zones. The methodology for a rigorous UQ is discussed with respect to different sources of uncertainty and quantification based on comparison with higher-fidelity models, measurements, or expert judgment. It is speculated how the research outcomes will influence design and retrofit decisions and how decisions related to investments in energy saving are supported in a risk-conscious way. For the industry at large, additional knock-on effects are expected, in particular in areas such as auditing methods, building regulation, and performance contracting, as these trades adapt to the new reality of rigorous uncertainty-based assessments.
of the validation plan we will be able to separate the effects of coupling vs. upscaling vs. model selection on the overall error of the full scale simulations. To provide a comprehensive assessment for the predictive uncertainty and confidence of CCSI CFD models and simulations, the validation activities are complemented by related model sensitivity and uncertainty quantification analyses.
Bayesian Calibration of Building Energy Models V&V2012-6255 Yeonsook Heo, Argonne National Laboratory, Argonne, IL, United States, Godfried Augenbroe, Georgia Institute of Technology, Atlanta, GA, United States, Ruchi Choudhary, Department of Engineering, University of Cambridge, United Kingdom
Uncertainty and Sensitivity Analysis of Complex Energy System Models V&V2012-6184 Heejin Cho, Pacific Northwest National Laboratory, Richland, WA, United States, Rogelio Luck, W. Glenn Steele, Mississippi State University, Mississippi State, MS, United States
In the current practice of energy retrofit projects, professionals perform energy audits and build energy models to benchmark the performance of existing buildings and predict the effect of interventions on savings. In order to enhance the reliability of energy models, they typically calibrate the models against monitored energy use data. The calibration methods used for this purpose are deterministic and expert-driven. As a result, they produce deterministic models that do not provide information about the underperformance risks associated with each intervention. The inability to quantify underperformance risks has hampered not only reaching optimal decisions that reflect stakeholders’ objectives but also engaging building owners to invest in energy retrofits. Especially in the market of performance contracting, the expression of a guarantee is crucial for service providers and building owners to mutually agree on their decisions with high confidence.
A computational model of a complex energy system is often required to evaluate and optimize the design and performance of the actual system. When systems and their models are complex (i.e., containing large numbers of parameters and requiring extensive computational time to converge under time-varying conditions), assuring the reliability and accuracy of the models becomes very challenging, and a methodical and efficient way to estimate uncertainty is necessary. The quantification of uncertainty is an essential feature of the verification and validation (V&V) procedures used to validate simulation results against experimental measurements [1]. In addition, a long-term (e.g., whole-year) evaluation of system performance, which is often necessary when the system performance depends on weather conditions or varying operational circumstances, makes uncertainty analysis even more difficult.
We propose a Bayesian approach as the new calibration method, such that the resulting calibrated models can quantify the technical and financial risks of interventions in a decision-making context. The Bayesian approach quantifies uncertainties remaining in the model in the form of probabilistic outcomes. In addition, Bayesian calibration models can incorporate additional uncertainties coming from retrofit interventions to compute probabilistic outcomes of retrofit performance. Probabilistic predictions can be translated into a single value according to decision-makers’ objectives and risk attitude. We demonstrate through a case study that Bayesian calibration models serve as the core methodology to support risk-conscious decision-making.
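A minimal, single-parameter illustration of the Bayesian calibration idea described above is sketched below: a grid posterior is computed for one uncertain input of a toy building energy model given monitored energy use, producing a probabilistic rather than deterministic calibration. The model, the parameter, the noise level, and the data are all placeholders.

    import numpy as np

    monthly_use = np.array([120.0, 132.0, 125.0, 140.0])   # monitored energy use (placeholder)

    def energy_model(infiltration):                        # toy building energy model
        return 100.0 + 60.0 * infiltration

    theta = np.linspace(0.1, 1.0, 500)                     # assumed infiltration-rate grid
    sigma = 8.0                                            # assumed observation/model noise
    loglike = np.array([-0.5 * np.sum((monthly_use - energy_model(t))**2) / sigma**2
                        for t in theta])
    posterior = np.exp(loglike - loglike.max())            # flat prior assumed
    posterior /= posterior.sum() * (theta[1] - theta[0])   # normalize on the grid

    print(theta[np.argmax(posterior)])                     # posterior mode of the parameter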
This paper presents an approach to quantify uncertainties in transient simulation results from complex energy system models due to simulation input parameters. The uncertainty in the simulation result is composed of contributions from the errors due to modeling assumptions and approximations, the numerical solution of the equations, and the simulation input parameters [1]. This study primarily focuses on determining uncertainties due to simulation input parameters. In many energy system applications, a numerical model consists of model parameters, initial conditions, and transient external inputs. The sensitivity (i.e., partial derivative) with respect to each model parameter and initial condition at each time step can be determined by perturbing each of the arguments about a nominal value. The sensitivity to the time-varying external inputs can be determined in a similar manner by calculating sensitivities at each time step; however, this numerical procedure can be greatly simplified using the principles of linearity and superposition. The proposed method utilizes the impulse response and the convolution integral to estimate the sensitivities to time-varying external inputs. Finally, the total uncertainties in the final result due to the simulation input parameters can be estimated based on the sensitivities and the systematic/random uncertainties associated with each argument [2-4]. An example, which consists of a solar thermal energy system with energy storage, is provided to demonstrate the details of the uncertainty analysis procedure.
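The sketch below demonstrates the impulse-response/convolution shortcut described above on a toy linear first-order system: the impulse response is computed once and then convolved with an input perturbation, and the result matches a direct re-simulation. The discrete model and numbers are illustrative assumptions.

    import numpy as np

    dt, n = 60.0, 200                 # time step (s) and number of steps
    a = np.exp(-dt / 600.0)           # discrete pole of a first-order system (tau = 600 s)
    b = 1.0 - a

    def simulate(u):                  # y[k+1] = a*y[k] + b*u[k], zero initial condition
        y = np.zeros(n)
        for k in range(n - 1):
            y[k + 1] = a * y[k] + b * u[k]
        return y

    impulse = np.zeros(n)
    impulse[0] = 1.0
    h = simulate(impulse)                             # impulse response (one extra model run)

    du = 0.1 * np.sin(np.linspace(0, 6, n))           # perturbation of the external input
    dy_conv = np.convolve(h, du)[:n]                  # output perturbation via convolution
    dy_direct = simulate(du)                          # direct simulation for comparison

    print(np.max(np.abs(dy_conv - dy_direct)))        # essentially zero for this linear model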
Effects of Sub-Optimal Component Performance on Overall Cooling System Energy Consumption and Efficiency V&V2012-6256 Javad Khazaii, Georgia Institute of Technology, Atlanta, GA, United States Northwest National Laboratory, Richland, WA, United States, Godfried Augenbroe, Georgia Institute of Technology, Atlanta, GA, United States
In the following paper, which is prepared based on the outcomes of the first author’s PhD thesis, we quantify the effects of equipment nameplate tolerances on the overall energy consumption and efficiency of commonly used commercial cooling systems. The main target of this paper is to discuss and present the methodology for calculating the probability that a specific cooling system deviates from a certain efficiency level by a certain margin, and to use these results to guide practitioners and energy performance contractors to select and guarantee system performance more realistically. By doing so, we present the establishment of a systematic approach for developing expressions of risk and reliability in commercial cooling system consumption and efficiency calculations, and thus advocate the use of expressions of risk as design targets.
References: [1] ASME, 2009, Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer, ASME V&V 20-2009, American Society of Mechanical Engineers. [2] ISO, 1993, Guide to the Expression of Uncertainty in Measurement (corrected and reprinted 1995), International Organization for Standardization. [3] ASME, 2005, Test Uncertainty, ASME PTC19.1-2005, American Society of Mechanical Engineers.
Based on the findings of that thesis, this paper will contribute to increasing our fundamental understanding of performance risk in selecting and sizing certain HVAC design concepts by: (1) better understanding the performance risk involved in the design and operation of commercial buildings with regard to system efficiency; (2) changing the outcomes in building and systems design by developing and making
[4] Coleman, H. W., and Steele, W. G., 2009, Experimentation, Validation, and Uncertainty Analysis for Engineers, 3rd Edition, John Wiley & Sons.
the risk expressions the design target; (3) arriving at overall energy savings of up to 30% by introducing a new risk-based method for proper system selection; (4) avoiding unjustified guarantees of unachievable levels of efficiency by encouraging the introduction of risk language into current ruling energy-efficiency standards, which can translate into grounds for an increase of up to 10% in the acceptable baseline system energy consumption allowed by these agencies; and (5) influencing manufacturers by convincing the testing agencies to set a more restrictive standard for the acceptable tolerance on individual equipment power consumption, by showing that this can translate into up to 5% in overall energy savings.
the uncertainty in individual analysis inputs and the uncertainty in analysis outcomes. In addition, such analyses support verification of the model under consideration and provide guidance on how to appropriately invest resources if it is necessary to carry out additional experimental work or some other form of assessment to reduce the uncertainty in analysis inputs and thus reduce the uncertainty in analysis outcomes. For the preceding reasons, incorporation of procedures to perform uncertainty and sensitivity analyses is fundamental to assessing results and verifying a modular code with the complexity required to implement the xLPR methodology. The paper presents an integrated uncertainty and sensitivity analysis using xLPR pilot study results, including a demonstration of the probabilistic methods and approach used to couple existing models and software as modules within a software framework.
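As a generic illustration of the sampling-based uncertainty and sensitivity analysis workflow discussed above (not the xLPR code itself), the sketch below propagates sampled inputs through a placeholder response and ranks input importance with Spearman rank correlations; the input names, distributions, and response are assumptions.

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(4)
    n = 1000
    crack_growth_rate = rng.lognormal(0.0, 0.5, n)        # hypothetical inputs
    weld_residual_stress = rng.normal(1.0, 0.2, n)
    inspection_interval = rng.uniform(0.5, 2.0, n)

    # Placeholder response standing in for a code outcome of interest.
    output = 0.8 * crack_growth_rate + 0.3 * weld_residual_stress**2 + 0.1 * inspection_interval

    for name, x in [("crack_growth_rate", crack_growth_rate),
                    ("weld_residual_stress", weld_residual_stress),
                    ("inspection_interval", inspection_interval)]:
        rho, _ = spearmanr(x, output)
        print(name, round(rho, 2))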
COFFEE BREAK/EXHIBITS Celebrity Ballroom 2 3:30pm–4:00pm
Uncertainty Quantification of Micro-Structure Based Plasticity Models Using Evidence Theory V&V2012-6102 Shahab Salehghaffari, Masoud Rais-Rohani, Mississippi State University, Starkville, MS, United States
UNCERTAINTY QUANTIFICATION, SENSITIVITY ANALYSIS, AND PREDICTION 2-2 UNCERTAINTY QUANTIFICATION, SENSITIVITY ANALYSIS, AND PREDICTION: PART 2 Sunset 3&4 4:00pm–6:00pm
The principles of evidence theory are used to develop a methodology for uncertainty quantification of advanced plasticity models that account for strain rate and temperature history effects as well as the coupling of rate- and temperature-dependence with material hardening. Such models include a large number of material constants whose correct determination through fitting of the model to monotonic and reverse loading stress-strain curves at different temperatures and strain rates is jeopardized by various sources of uncertainty. Such uncertainties are caused by variability (aleatory uncertainty), which originates from randomness inherent in the material microstructure and properties, and by incertitude (epistemic uncertainty), which arises from vagueness or lack of knowledge about the material model and the determination of its constants. The evidence-based uncertainty modeling approach is applied to the internal state variable based Bammann-Chiesa-Johnson (BCJ) plasticity model. Various combinations of the required stress-strain curves at different ranges of strain rate and temperature from different experimental sources are considered in the fitting process to determine all possible sets of BCJ material parameters, each of which is considered a piece of evidence for the representation of uncertainty. All uncertain parameters are represented in interval form. Rules for identifying intervals that are in agreement, conflict, or ignorance are discussed and subsequently used to construct a separate belief structure for each uncertain parameter, with the degree of belief in every interval quantified by its basic belief assignment value. A joint belief structure of all BCJ material constants is constructed to present a unique belief structure that accounts for the effects of all the uncertain parameters. To reduce the computational cost of the uncertainty propagation procedure, radial basis function (RBF) surrogate models relating material parameters to simulation results are adopted and are maximized and minimized in each discrete proposition of the constructed joint belief structure to determine bounds on the structural responses. The deformed diameter and length of an impacted 7075-T651 aluminum alloy cylinder are considered as the simulation results in the uncertainty propagation procedure. Target proposition sets are defined based on the observed experimental data for the simulation results, and their evidence-based uncertainty measures (belief and plausibility) are estimated for uncertainty measurement. The large gap between the estimated belief and plausibility indicates the existence of a large amount of epistemic uncertainty in the BCJ plasticity model, and the estimated high values of plausibility guarantee the validity of the model.
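A minimal sketch of the belief/plausibility computation described above is given below, with focal elements taken as intervals of a response quantity and a target proposition defined as an interval of acceptable values; all intervals and basic belief assignments are invented for illustration.

    # Dempster-Shafer belief and plausibility for an interval target proposition.
    focal_elements = [((24.6, 26.5), 0.40),   # (response interval, basic belief assignment)
                      ((25.0, 28.0), 0.35),
                      ((23.0, 30.0), 0.25)]
    target = (24.5, 27.0)                     # hypothetical target proposition

    def subset(a, b):       # interval a contained in interval b
        return b[0] <= a[0] and a[1] <= b[1]

    def intersects(a, b):   # intervals a and b overlap
        return a[0] <= b[1] and b[0] <= a[1]

    belief = sum(m for iv, m in focal_elements if subset(iv, target))
    plausibility = sum(m for iv, m in focal_elements if intersects(iv, target))
    print(belief, plausibility)   # the gap between them reflects epistemic uncertainty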
Session Chair: James O’Daniel, USACE/ERDC, Vicksburg, MS, United States Session Co-Chair: Edwin Harvego, Idaho National Laboratory, Idaho Falls, ID, United States Uncertainty and Parameter Sensitivity Analyses for the U.S. NRC: Understanding and Verification of the Extremely Low Probability of Rupture (xLPR) in Reactor Primary System Pressure Piping V&V2012-6084 Patrick Mattie, Cedric Sallaberry, Jon Helton, Sandia National Laboratories, Albuquerque, NM, United States, David Rudland, U.S. Nuclear Regulatory Commission, Rockville, MD, United States The Nuclear Regulatory Commission (NRC) Standard Review Plan (SRP) 3.6.3 describes Leak-Before-Break (LBB) assessment procedures that can be used to demonstrate compliance with the 10CFR50 Appendix A, GDC-4 requirement that primary system pressure piping exhibit an extremely low probability of rupture. SRP 3.6.3 does not allow for assessment of piping systems with active degradation mechanisms, such as Primary Water Stress Corrosion Cracking (PWSCC), which is currently occurring in systems that have been granted LBB exemptions. The NRC staff is working cooperatively with the nuclear industry and U.S. DOE National Laboratories to develop a modular-based code and probabilistic assessment methodology to directly demonstrate compliance with the regulations. This tool, called the xLPR (eXtremely Low Probability of Rupture) code, must demonstrate the effects and uncertainties of both active degradation mechanisms and the associated mitigation activities. The novelty of simply being able to perform a complex calculation is past. If probabilistic calculations are to be used in support of important decisions, penetrating questions must be answered about the nature, quality, and significance of the calculated results. Uncertainty analysis and sensitivity analysis are central to answering such questions, where the objective of uncertainty analysis is to determine the uncertainty in analysis outcomes that results from uncertainty in analysis inputs, and the objective of sensitivity analysis is to determine the effects of the uncertainty in individual analysis inputs on the uncertainty in analysis outcomes. Appropriately designed uncertainty and sensitivity analyses contribute to the usefulness and credibility of an analysis by providing an unbiased representation of the uncertainty present in analysis outcomes and an assessment of the relationships between
Roll-Up of Validation Results to a Target Application V&V2012-6118 Richard Hills, Sandia National Laboratories, Albuquerque, NM, United States
examples also illustrate many of the difficulties associated with the roll-up of validation experimental results to the application, as well as some of the limitations of the present methodology. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy’s National Nuclear Security Administration under contract DE-AC04-94AL85000.
Suites of experiments are performed over a validation hierarchy to test computational models for complex applications. Experiments within the hierarchy can be performed at different conditions than those for an intended application, with each experiment designed to test only part of the physics relevant for the application. The experiments may utilize idealized representations of component geometries, with each experiment returning measurement types (e.g., temperature, pressure, flux, first arrival times) that may be different from the response quantity of interest for the application. Issues associated with the roll-up of hierarchical results to an application prediction include properly weighting individual experimental results to best represent the application, assessing whether the suite of experiments adequately tests the anticipated physics of the application, and characterizing the additional uncertainty in an application prediction due to lack of coverage of the application physics by the physics addressed by the validation experiments.
Simulating the Dynamics of the CX-100 Wind Turbine Blade: Part I, Model Development, Verification and Validation V&V2012-6120 Kendra Van Buren, Sez Atamturktur, Clemson University, Clemson, SC, United States, Mark G. Mollineaux, Stanford University, Stanford, CA, United States, Francois Hemez, Los Alamos National Laboratory, Los Alamos, NM, United States Verification and Validation (V&V) must be considered as an indispensable component for developing credible models for the simulation of wind turbines. The purpose of this presentation is to elucidate the process of a completely integrated V&V procedure as applied to the CX-100 wind turbine blade developed at Sandia National Laboratories. Design specifications of the geometry of the CX-100 blade are used to develop a three-dimensional finite element (FE) model for vibration analysis. A computationally efficient model is achieved by segmenting the blade geometry into six sections with homogenized, isotropic material properties. Data collected from experimental modal tests conducted at Los Alamos National Laboratory are used for calibration and validation. The scientific hypothesis that we wish to confirm by applying V&V activities is the possibility of developing a fast-running model that can predict the low-order vibration dynamics with sufficient accuracy.
The focus of the present work is to assess the impact that the lack of physics coverage has on the uncertainty in prediction for an application. This assessment requires the development of a model for the relation between the validation measurements and the desired response quantities for the targeted application. This is accomplished through the development of a meta-model that possesses the following features: The meta-model is robust to the presence of model parameter uncertainty in both the models for the validation experiments and the target application, as well as measurement uncertainty in the validation data. The meta-model accommodates possible incomplete physics coverage of the application by the validation suite. The meta-model allows mixed validation measurement and application response quantity types and addresses the impact of validation experiments performed at conditions different from those of the anticipated application.
In this study, the mesh size for the FE model is selected such that the overall numerical uncertainty caused by truncation effects is either similar to, or smaller than, the test-to-test variability. This rationale guarantees that predictions are sufficiently accurate relative to the level of uncertainty with which physical tests can be replicated.
The meta-models are constructed as weighted combinations of validation experimental models over neighborhoods around each measurement location, time, and type. Sampling techniques are used over these neighborhoods to characterize the dependence of the validation and application models on the important model arguments (i.e., model parameters and independent variables) so that the meta-models best represent the behavior of the application model for the response quantity of interest. To ensure robustness, two approaches are used to evaluate the meta-model weights. The first is based on an objective function defined to explicitly accommodate the trade-off between the ability of the meta-model to resolve the target application model and the sensitivity of the meta-model to parameter and measurement uncertainty. The second approach is based on partial least squares regression. The trade-off between resolution and sensitivity is addressed through the number of latent variables utilized for the regression.
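A toy sketch of the second route is given below (synthetic data, not the authors' models): the application response is regressed on the validation-experiment responses with partial least squares, and the number of latent variables sets the resolution/sensitivity trade-off.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression

    rng = np.random.default_rng(1)
    theta = rng.normal(size=(200, 4))                # sampled uncertain model parameters
    # sampled validation-experiment responses (with measurement noise) and application response
    X_val = theta @ rng.normal(size=(4, 6)) + 0.05 * rng.normal(size=(200, 6))
    y_app = theta @ rng.normal(size=(4, 1))

    pls = PLSRegression(n_components=2)   # fewer latent variables -> more robustness, less resolution
    pls.fit(X_val, y_app)
    print("meta-model weights (one per validation response):", pls.coef_.ravel())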
A sensitivity analysis is first performed and organized using a Phenomenon Identification and Ranking Table to identify parameters of significant influence. Calibration to natural frequencies is performed in a two-step approach to decouple the material parameters from parameters that describe the boundary condition using i) a free-free configuration of the blade and ii) a fixed-free configuration. Calibration is viewed as a problem of inference uncertainty quantification where measurements are used to infer the uncertainty of model parameters. A subsequent validation assessment is grounded in the test-analysis correlation of mode shape deflections, an independent dataset that has not been exploited during calibration. This work highlights the V&V steps implemented to quantify the model uncertainties and further quantify the prediction uncertainty caused by our imperfect knowledge of the idealized material description. Part II of this effort incorporates a decision analysis framework to assess the effect on prediction accuracy of incomplete knowledge of models implemented to simulate a different configuration of the CX-100 blade tested at the National Renewable Energy Laboratory.
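A schematic of the frequency-calibration step might look like the following; the frequency function and the measured values are placeholders, not the CX-100 finite element model or the Los Alamos test data.

    import numpy as np
    from scipy.optimize import least_squares

    f_measured = np.array([4.35, 12.1, 25.7])   # hypothetical test frequencies, Hz

    def f_predicted(p):
        e_scale, = p                            # scaling on the homogenized stiffness
        # nominal model frequencies scale as sqrt(stiffness) for a fixed mass distribution
        return np.array([4.20, 11.80, 25.10]) * np.sqrt(e_scale)

    def residuals(p):
        return (f_predicted(p) - f_measured) / f_measured

    fit = least_squares(residuals, x0=[1.0], bounds=([0.5], [2.0]))
    print("calibrated stiffness scaling:", fit.x[0])
    print("remaining relative residuals:", residuals(fit.x))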
The methodology is applied to a series of example transport problems that represent complete and incomplete coverage of the physics of the target application by the validation experiments. The methodology estimates the uncertainty that is introduced due to the lack of coverage of the application physics, due to experiments performed at different conditions than those of the application, and due to uncertainties in the validation exercise (model parameter and measurement uncertainty). Relative assessment of the two approaches is accomplished through comparison of meta-model results to the original target application model results, and through a sensitivity analysis. The results indicate that the partial least squares approach is superior for the examples considered. The
Simulating the Dynamics of the CX-100 Wind Turbine Blade: Part II, Model Selection Using a Robustness Criterion V&V2012-6121 Kendra Van Buren, Sez Atamturktur, Clemson University, Clemson, SC, United States, Francois Hemez, Los Alamos National Laboratory, Los Alamos, NM, United States
only imprecise information is available. Uncertainty quantification is important to obtain realistic descriptions of material and structural behavior. The application of artificial intelligence in structural mechanics is a new and alternative way to construct relationships between material loads and material responses under uncertainty. The artificial neural network concept can be used to identify uncertain stress-strain-time dependencies from data series obtained by experimental investigations. Recurrent neural networks for fuzzy data have been developed to operate as material formulations within the FEM. The network parameters are identified using swarm intelligence. A new particle swarm optimization (PSO) approach is presented, which can deal with fuzzy data. This makes it possible to create special network structures that account for the physical boundary conditions of the investigated materials. The neural networks can be trained directly with experimentally obtained stress and strain processes or indirectly using measured load and displacement processes of specimens. The indirect approach requires a numerical model of the experiment. FE models with neural network based material formulations are used to compute displacements due to the applied forces. The network parameters are identified by an inverse analysis. The new PSO approach is applied to minimize the distance between experimentally and numerically obtained uncertain displacements.
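The sketch below shows a bare-bones particle swarm optimizer minimizing a displacement-discrepancy objective; the objective function is a placeholder, and the authors' fuzzy-data extension is not reproduced.

    import numpy as np

    rng = np.random.default_rng(2)

    def discrepancy(w):
        # stand-in for || u_measured - u_FE(w) || over candidate network parameters w
        return np.sum((w - 0.3) ** 2, axis=1)

    n_particles, n_dims, n_iters = 30, 5, 200
    x = rng.uniform(-1.0, 1.0, (n_particles, n_dims))
    v = np.zeros((n_particles, n_dims))
    p_best, p_val = x.copy(), discrepancy(x)
    g_best = p_best[p_val.argmin()].copy()

    for _ in range(n_iters):
        r1, r2 = rng.random((n_particles, n_dims)), rng.random((n_particles, n_dims))
        v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
        x = x + v
        val = discrepancy(x)
        improved = val < p_val
        p_best[improved], p_val[improved] = x[improved], val[improved]
        g_best = p_best[p_val.argmin()].copy()

    print("best parameters:", g_best)
    print("best objective:", p_val.min())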
Several plausible modeling strategies are available to develop finite element (FE) models. One such example is the selection of solid, shell, or beam elements for modeling the vibration of slender structures such as wind turbine blades. The best modeling strategy remains unknown, while each strategy implies different modeling assumptions. This lack of knowledge constitutes a model-form uncertainty that must be considered and quantified when developing credible models. This presentation proposes a robustness criterion to compare two plausible strategies to model the vibration of the CX-100 wind turbine blade tested experimentally at the National Renewable Energy Laboratory. Part I of this effort entails creating a verified and validated FE model to predict the free-free and fixed-free vibrations of the CX-100 wind turbine blade. Part I entails quantifying different types of uncertainty, caused by truncation of the mesh size or material variability, on a limited run budget. Here, a different configuration is analyzed in which large masses are added to load the blade in bending during vibration testing. The two plausible modeling strategies involve modeling these large masses with solid elements or with a combination of point-mass and spring elements.
In order to get realistic material formulations, results of experimental investigations with different loading scenarios and boundary conditions are required. All available results are separated into training and validation data. The validation of neural network based material formulations is realized by load-displacement-time dependencies which are not used for parameter identification. Recurrent neural networks with different architectures and initial parameter sets are trained concurrently. The best network is selected by a weighted sum of training and validation errors.
A decision analysis framework is proposed to select one of the two FE representations, given their respective sources of uncertainty. This framework departs from the conventional approach that considers only test-analysis correlation to select the model that provides the highest degree of fidelity-to-data. Rather, it proposes to explore the trade-offs between fidelity-to-data and robustness-to-uncertainty. Robustness to model imprecision and inexactness provides a method to augment our imperfect knowledge of the phenomenology being simulated, thus lending credibility to the predictions. This analysis studies the ability of the two alternative FE models to predict the experimentally obtained natural frequencies. The effect of the imperfect representation of added masses is quantified by varying parameters of the two competing FE models. The robustness criterion proposed for model selection studies the extent to which prediction accuracy deteriorates as the lack-of-knowledge is increased. Trade-offs of fidelity-to-data and robustness-to-uncertainty are used to compare the accuracy of each FE model. Credibility originates from the modeling strategy that offers the best compromise between fidelity and robustness.
The developed recurrent neural network approach is verified against model-based solutions. Application capabilities within the FEM are also demonstrated for engineering practice. A Study of Bayesian Inference Based Model Extrapolation Method V&V2012-6133 Zhenfei Zhan, Yan Fu, Ren-Jye Yang, Ford Motor Company, Dearborn, MI, United States Model validation is a process to assess the validity and predictive capability of a computer model by comparing simulation results with test data for the model's intended use. One of the key difficulties for model validation is to evaluate the quality of a computer model at different test configurations in the design space and to extrapolate the evaluation results to untested new designs. In this paper, an integrated model extrapolation framework based on Bayesian inference and Response Surface Models (RSM) is proposed to estimate the performance of designs outside of the original design space. Bayesian inference is applied to quantify the distributions of the bias between test and CAE data in the validation domain. The RSM is used to extrapolate the hyper-parameters of the bias distributions to the untested domain. The prediction interval of the performance responses at the new designs is then calculated to guide decision making. This paper investigates the effects of different RSMs and sample sizes of the validation design points. A real world vehicle design example is used to demonstrate the proposed methodology.
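A compressed sketch of the bias-extrapolation idea follows (synthetic numbers, not the paper's vehicle data or its full Bayesian treatment): estimate the test-minus-CAE bias at validated configurations, fit a response surface to it, and carry the bias, with a crude interval, to an untested design.

    import numpy as np

    x_valid = np.array([1.0, 1.5, 2.0, 2.5])   # validated design-variable settings (hypothetical)
    bias = np.array([0.8, 1.1, 1.7, 2.1])      # observed test - CAE bias at each setting

    coeff = np.polyfit(x_valid, bias, deg=1)                      # response surface for the bias
    resid_std = np.std(bias - np.polyval(coeff, x_valid), ddof=1)

    x_new, cae_new = 3.0, 10.0                 # untested design and its CAE prediction (hypothetical)
    bias_new = np.polyval(coeff, x_new)
    print(f"corrected prediction ~ {cae_new + bias_new:.2f} "
          f"+/- {2.0 * resid_std:.2f} (rough 95% interval)")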
Artificial Intelligence for Identification and Validation of Material Formulations with Uncertain Data V&V2012-6127 Steffen Freitag, Rafi L. Muhanna, Georgia Institute of Technology, Savannah, GA, United States The realistic description of dependencies between actions and responses of engineering structures requires adequate structural models. This includes descriptions for geometry, loads, boundary conditions, and material behavior. Numerical tools, e.g. the finite element method (FEM), are available to evaluate the structural models in order to compute structural responses such as displacements, reactions or structural reliability. In case of new materials, tests are required to investigate structural behavior. But obtained measurements are usually limited and imprecise. The selection or development of adequate structural models (e.g. for stress-strain-time dependencies), identification of their parameters, and model validation are difficult challenges, if
Benchmark Random Eigenvalue Problems for Verification and Comparison of QMU Methodologies V&V2012-6136 George Lloyd, Timothy Hasselman, ACTA Inc., Torrance, CA, United States
VALIDATION METHODS FOR SOLID MECHANICS AND STRUCTURES 3-1 VALIDATION METHODS FOR SOLID MECHANICS AND STRUCTURES: PART 1 Sunset 5&6 4:00pm–6:00pm
QMU (quantification of margins and uncertainties) refers to the quantification of the degree to which operational margins of a system are circumscribed by a mathematical model of the response of the system, given the uncertainties associated with the system and its forcing environment, for a specific set of decision criteria. Since different decision criteria can exhibit radically different sensitivities to the same perturbation, it is imperative to specify the decision criteria at the outset. Within this scope and within the realm of structural dynamics, a wide variety of QMU methodologies have been proposed and tried during the last forty years. These methodologies differ among themselves through the choices of parametric and non-parametric descriptions of aleatoric uncertainties, the schemes employed to estimate measures of epistemic uncertainty by model averaging or regression of residuals in some fashion, by the assumptions reflected in limiting the domain and variations of model parameters in which the methodology can yield meaningful estimates, and by their ability to be utilized within the context of existing deterministic modeling tools.
Session Chair: Don Simons, TASC, El Segundo, CA, United States Session Co-Chair: Ben Thacker, Southwest Research Institute, San Antonio, TX, United States Establishing Uncertainty in Structural Modeling of Reinforced Concrete V&V2012-6010 Randy James, Anatech Corp., San Diego, CA, United States Reinforced concrete, one of the most versatile and common structural components in lifeline civil and safety-related nuclear applications, is also the most difficult to model analytically because of its inherent nonlinear behavior. This is especially true for structural assessments where modeling of severe damage and prediction of structural failure is required, such as establishing structural fragility under seismic or pressure loading, evaluating performance under extreme accident conditions, forensic assessments for root cause analysis, or even determining safety and reliability of aging and degraded structures. These predictive analyses require not only good validation of modeling methods but also establishing the uncertainty in the modeling to better interpret the analysis results when performing risk assessments. Modeling uncertainty has both aleatory and epistemic components. Aleatory uncertainty is associated with the variability of parameters, such as in-situ material properties, for which a probabilistic range of values can be determined. The methodology for establishing the aleatory uncertainty is a well-established process using probabilistic variations and randomly generated combinations of those parameters.
In cases of most concern, continuous non-linear structural systems are approximated by discrete and often reduced-order linearized mathematical models. Such models yield eigenvalues and eigenvectors (collectively, “eigenparameters”), which provide an expansion basis for general mathematical solutions. In particular, the eigenparameters can be used to study the sensitivity of the response of a system to perturbations, due perhaps to faults incurred by one or more elements that impact a decision criterion in a QMU process. Of course, because most system descriptions are stochastic in nature, the resulting eigenparameter models are themselves stochastic. One of the great difficulties in ascribing concrete merits to any particular methodology has been that for even modestly complex systems, which are dependent upon numerous assumptions along with comparable measures of second-order uncertainties, expected measures of predicted uncertainties are found generally to be “in rough agreement.” This is to be expected from fluctuation-dissipation arguments, where sensitivities of decision metrics to incertitude increase under large irreversibilities and near resonances. Typically, most practical methodologies engender severe approximations (theoretically or computationally), and the comparisons which result tend to be indeterminate, particularly with regard to estimation of decision error rates, which are more dependent upon fidelity with respect to capturing individual outcomes in the tails than on ensemble expectations.
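As a concrete picture of what a random eigenvalue problem can look like (this toy two-degree-of-freedom system and its stiffness scatter are illustrative, not the benchmark problems proposed in the abstract), Monte Carlo sampling gives the spread of the eigenparameters directly:

    import numpy as np

    rng = np.random.default_rng(3)
    n_samples = 2000
    freqs = np.empty((n_samples, 2))

    for i in range(n_samples):
        # lognormal scatter on two spring stiffnesses; unit masses assumed
        k1, k2 = rng.lognormal(mean=np.log(100.0), sigma=0.1, size=2)
        k = np.array([[k1 + k2, -k2],
                      [-k2,      k2]])
        lam = np.linalg.eigvalsh(k)              # with unit masses, eigenvalues of M^-1 K
        freqs[i] = np.sqrt(lam) / (2.0 * np.pi)  # natural frequencies, Hz

    print("mean frequencies:", freqs.mean(axis=0))
    print("5th/95th percentiles:", np.percentile(freqs, [5, 95], axis=0))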
There is also epistemic uncertainty associated with inaccuracies in modeling a process, whether due to limited scientific knowledge about the process itself or limitations in the analytical modeling used to determine the outcome for any given set of aleatory variations. This epistemic uncertainty concerns the mesh fidelity, the type of element formulations and integration used, the equilibrium iteration algorithms, convergence tolerances, and effects of imposed boundary conditions, but mainly depends on the ability of the constitutive models to simulate actual material behavior and the interactions among structural elements, such as the concrete and rebar. This uncertainty could be assessed in a similar manner as the aleatory variability, but the probabilistic procedure for generating combinations of these parameter variations is not well defined. Unlike aleatory uncertainty, where the variation generally concerns a range of parameter values, epistemic uncertainty concerns the theoretical basis and numerical implementations of how the values are processed to arrive at a result. Thus, the analytical effort needed to consider variations and range of combinations in these epistemic modeling parameters would be substantial. Generally, the variability in the modeling for this type of uncertainty is estimated based on experience and judgment of the analysts involved in the assessment.
As part of an effort to better understand in a quantifiable way the merits and pitfalls of different QMU methodologies, this paper will discuss recent developments in random eigenparameter research, the formulation of meaningful benchmark problems based on these developments, and how this work can be used to enhance comparison among different QMU methodologies to more fully understand when the use of each is most appropriate, given the nature of a system, its excitation environment, and the decision metrics which have been selected.
This paper presents an engineering approach for establishing the epistemic uncertainty in modeling reinforced concrete structures, including loading conditions resulting in severe damage. The approach works in conjunction with validation efforts for the modeling methods and is consistent with probabilistic based methods. The approach is to use similar modeling for blind, pretest predictions of structural performance for comparison to
structural test data. The structural tests should include the range of structural performance under consideration. The modeling uncertainty for the modeling methods used can then be determined using probabilistic methods to compare the predicted analysis results with the test results. This presentation will describe the method and an example application for developing the pressure fragility versus temperature for a primary containment system of a nuclear power plant design. In addition, discussions will be provided on the proposed variation of the method for cases where the validation modeling is for post-test simulation of structural tests.
by an EF5 tornado. The end product is a design that requires a multi-discipline engineer that can wear the civil engineering hat to produce basic requirements, a simulation and modeling engineer that can model the problem with proper geometry and boundary conditions, a mechanical engineer who can design proper locking mechanisms and shelter armor, a production engineer who can oversee proper construction and adherence to engineering drawings and then a test and validation engineer that can head up a test program that validates all requirements are met. Current safe rooms on the market are basic steel boxes fabricated at local welding shops around the country that are remarkable examples of common sense feel safe engineering. Although the safe rooms are marketed as a safe product that will protect a family during bad weather the vast majority of safe rooms are not safe at all. To solve the problem of building a true safe room requires good requirements, engineering analysis, and a test and verification program to validate that engineering models are correct.
Verification and Validation of an IPG Connector-Attach Model V&V2012-6148 Hui Jin, Michael Eggen, Medtronic, Inc., Mounds View, MN, United States An implantable pacemaker (IPG) usually consists of a connector module and an electronic mechanical assembly (EMA). The connector module is often attached to the EMA by fasteners or locking mechanisms and adhesives. Under mechanical load, the connector deforms with respect to the EMA, causing the electrical interconnect components between the connector and the EMA to be stressed. Regulating the stresses in these interconnects is one of the major design and development tasks affecting device reliability. Finite element modeling has been used at Medtronic to evaluate and optimize connector-attach designs. The accuracy or predictability of these models is often a key factor in design-related decision making. This work assesses the connector-attach model's accuracy and predictability through a model verification and validation (V&V) process. The connector-attach model V&V is carried out on a typical IPG for a chosen loading condition using a bottom-up approach. The model accuracy is assessed by comparing the model and the experimental results, from the material level to the component level and finally to the assembly level. Through a numerical DOE, the model uncertainty is quantified at the assembly level. The V&V is planned and executed under the guidance of ASME V&V 10-2006, and all V&V activities and outcomes are thoroughly documented. The V&V outcome showed that the model predictions at different levels agreed well with the corresponding experimental results for the intended use of the model, while the thorough documentation of the process allows further improvement of the model accuracy through model updating and better characterization of the input variables. The accuracy of an IPG connector-attach model has thus been assessed in a bottom-up V&V process. The process has verified and validated not only a particular device model but also a group of modeling methods, material and interaction models, and the associated parameters. With these modeling methods, material models, and parameters, similar connector-attach designs can be modeled and evaluated with more confidence prior to prototyping. Throughout this work, ASME V&V 10-2006 has provided a necessary conceptual framework and a useful guide for model development, accuracy assessment, and uncertainty quantification of both the model and the experiment.
Using commercial off the shelf software analysis packages it is possible to build a solid model that replicates a proposed design for each safe room version. Traditional wind loads are applied with the software to validate frame design and geometry. Wind loads generated from flow modeling of 250 MPH winds are used to verify that traditional wind loads match CFD models. Final testing for impact and debris are conducted at the Texas Tech Wind Science Laboratory. Debris testing includes launching 9lb 2x4 boards at shelter components at speeds of 150 MPH. High speed video and ShockWatch shipping sensors are utilized to visualize shock and deformation of the frame during test. Notes and images from debris testing are compared to computer deformation models to ensure that the computer models match real world shelter reactions. Design and Construction of Relational Database for Structural Modeling Verification and Validation V&V2012-6041 Weiju Ren, ORNL, Oak Ridge, TN, United States With rapid development of computational technologies, structural modeling and simulation are becoming increasingly popular for engineering design in recent years. Virtual demonstration and testing of engineering design concepts become highly desirable in large engineering projects for several technical and economic advantages including the possibility of extensive or even exhaustive virtual testing, significant project cost reduction, and elimination of real life loss or property damage from experimental accidents in the digital world. However, the increasing use of modeling and simulation, particularly for nuclear design and constructions, has also caused various concerns. If modeling and simulation results are erroneous, it will not only send research and development to a wrong track or lead program management to incorrect directions, but also possibly cause disastrous consequences in the following engineering constructions. To prevent such mishaps, verification and validation are needed to firmly establish credibility of modeling and simulation results. In this regard, we face two of the most important issues at the present: 1) a clear and commonly acceptable definition for verification and validation, and 2) a tangible procedure or system for developing confidently defendable verification and validation.
Safe Room Design and Verification Through Analysis and Destructive Testing V&V2012-6003 Alex Clark, Self Reliance Systems, LLC, Huntsville, AL, United States, Rodney Clark, Grassmere Dynamcis, LLC, Gurley, AL, United States
This presentation will discuss some ideas about developing a tangible information management system for verification and validation of structural modeling and simulation. Preferred major steps in structural modeling and simulation will first be reviewed. Corresponding to these major steps, verification and special experimental data generation activities for validation must be established. With a bottom-up approach to build credibility of the modeling and
With the recent natural disaster in North Alabama many companies have taken to building safe rooms that are designed to protect occupants from tornadoes as large as an EF5. As one would expect it is a complicated engineering problem to design an affordable but safe unit that can live in the environment produced
simulation hierarchy, significant virtual and experimental data and information will be generated. Such data and information constitute crucial evidence that provides the core components for verification and validation. Challenges and possible resolutions to systematic and effective accumulation of such evidence for credibility of the modeling and simulation results will be discussed. In the discussion, a modular digital database system will be used as an example for designing and constructing a devoted verification and validation database with various desired characteristics including pedigree tracking, metadata documentation, effective searching, relational data automatic linking, interfacing with modeling and simulation software and so on. The system is expected to provide a tangible structure for the workflow of verification and validation corresponding to the modeling and simulation steps.
oversize balls. A theoretical approach is proposed using the contact-angle-change equation of Hertz's contact theory for the steel balls and the deflection equation of a rigid-body model of the block and rail. An experimental study was also performed, and the stiffness curves, as well as the block deformation in the vertical and transverse directions, were measured. Compared with the stiffness calculated by the theoretical model, the difference in the experimental stiffness values was less than 4.5%. The experimental results showed that the block was deformed by the oversize balls. A Pearson correlation analysis of the transverse deformations and the relative error in stiffness gives a coefficient of 0.998, which indicates a significant correlation. Based on these investigations, this study proposes a stiffness-modification equation, obtained from the best match between the calculated and measured results.
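For orientation, the sketch below evaluates the Hertzian point-contact relation that underlies such ball-rail stiffness calculations; the ball size, preload levels, and material values are illustrative, and the modified-stiffness equation proposed in the abstract is not reproduced.

    import numpy as np

    E, nu = 210e9, 0.3                   # steel ball and raceway (illustrative)
    R = 1.5e-3                           # assumed equivalent contact radius for a 3 mm ball, m
    E_star = E / (2.0 * (1.0 - nu**2))   # effective modulus for identical materials

    F = np.linspace(10.0, 200.0, 5)                               # preload per ball, N
    delta = (9.0 * F**2 / (16.0 * R * E_star**2)) ** (1.0 / 3.0)  # Hertz approach, m
    k = 1.5 * F / delta                                           # dF/d(delta), N/m

    for Fi, ki in zip(F, k):
        print(f"F = {Fi:6.1f} N  ->  contact stiffness ~ {ki * 1e-6:5.1f} N/µm")

Because the approach grows as F^(2/3), the contact stiffness increases with preload, which is why oversize balls change the measured guideway stiffness.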
Validation of a Mathematical Model for an Automotive Cooling Module System Considering Both Vibration Isolation Capability and Fatigue Lives of Isolators V&V2012-6065 Dong-Hoon Choi, Chang-Hyun Park, Hanyang University, Seoul, Korea (Republic)
Experimental Correlation of an n-Dimensional Load Transducer Augmented by Finite Element Analysis V&V2012-6020 Tim Hunter, Wolf Star Technologies, Milwaukee, WI, United States Typical designs in structural applications undergo complex loading. In order to accurately predict the behavior of complex structures accurate representation of loads is required. Traditional methods of obtaining loads involve specialized load transducers and / or modification of structures to be sensitive to specific components of load. Presented in this paper is an alternative approach to load measurement which leverages the Finite Element Method in conjunction with a physical sample to produce an n-Dimensional load transducer. Experimental verification of this method is presented along with independent approaches to load measurement. A numerical method is presented when combined with proper test methodology produces an optimal n-Dimensional load transducer.
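A minimal sketch of the idea behind an n-dimensional load transducer follows (the sensitivity matrix and strain readings are made up): unit-load strain sensitivities taken from an FE model are assembled into a matrix, and measured strains are mapped back to loads by least squares.

    import numpy as np

    # strain per unit Fx, Fy, Mz at each gauge, taken from unit-load FE runs (made-up values)
    S = np.array([[ 2.1e-6,  0.4e-6, -0.2e-6],
                  [ 0.3e-6,  1.8e-6,  0.1e-6],
                  [-0.5e-6,  0.2e-6,  3.0e-6],
                  [ 1.0e-6, -1.1e-6,  0.8e-6]])   # a redundant gauge improves conditioning

    strain_measured = np.array([2.3e-4, 1.9e-4, 2.8e-4, 0.5e-4])
    loads, *_ = np.linalg.lstsq(S, strain_measured, rcond=None)
    print("recovered [Fx, Fy, Mz]:", loads)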
An automotive cooling module system is composed of a cooling module and two upper and two lower rubber isolators. The isolators are mounted between a cooling module and a carrier to isolate the car body from vibration due to the rotation of the cooling fan. Also, the isolators should be durable against fatigue loads originating from fan rotation and road disturbance. Thus, the isolators are required to be designed to maximize both vibration isolation capability and fatigue lives. In order to facilitate isolator design, we built and validated a mathematical model for the cooling module system taking both responses simultaneously into account because both response values are largely affected by same parameter values of the model. We evaluated the vibration isolation capability using natural frequencies of the system and evaluated the vibration fatigue lives of the isolators using areas under power spectral density curves (called PSD areas hereafter) obtained from loading histories imposed on the isolators.
VERIFICATION AND VALIDATION FOR FLUID DYNAMICS AND HEAT TRANSFER 4-2 VERIFICATION AND VALIDATION FOR FLUID DYNAMICS AND HEAT TRANSFER: PART 2 Wilshire A 4:00pm–6:00pm
In this study, an optimization technique was adopted to efficiently determine the dynamic stiffness values of the isolators by simultaneously minimizing the deviations between natural frequencies obtained by a modal analysis using the mathematical model and by impact hammer modal testing, and the deviations between PSD areas obtained by a vibration analysis using the mathematical model and by sine-sweep vibration testing. We employed a regression-based sequential approximate optimizer and successfully obtained an optimization result. The maximum relative errors (between simulation and experiment) of the natural frequencies and PSD areas were found to be 5% and 5.7%, respectively, which illustrates that we obtained a very good correlation.
Session Chair: Arthur Ruggles, University of Tennessee, Knoxville, TN, United States Session Co-Chair: Prasanna Hariharan, US Food and Drug Administration, Silver Spring, MD, United States Verification and Validation in CFD and Heat Transfer: ANSYS Practice and the New ASME Standard V&V2012-6128 Dimitri Tselepidakis, R. Lewis Collins, ANSYS Inc., Lebanon, NH, United States
To further validate the correlated axial dynamic stiffness values of the lower and upper isolators, dynamic testing to measure the axial dynamic stiffness value of each isolator was performed using the MTS 810 system and a load cell (load capacity is 500 N). The relative errors between the correlated and experimental values of the lower and upper isolators were found to be 7.8% and 5%, respectively, which validated that the axial dynamic stiffness values of the lower and upper isolators were well correlated.
Verification and validation (V&V) is an essential part of the process of simulation software development at ANSYS. A reliable and quantifiable degree of accuracy from simulation predictions has been a cornerstone of ANSYS success over the past four decades. The ANSYS Quality System meets both ASME NQA-1 and ISO 9001 quality standards and is continuously improved as new standards are established. In 2009, ASME published a new standard that recommends procedures for verifying and validating fluid/thermal simulations. ASME V&V 20-2009, Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer, defines the concepts of verification and validation, provides definitions of error and uncertainty, and describes best practices for V&V in this area of engineering simulation. This paper presents three simple test cases as examples using the ANSYS
Experimental Results and Analysis Results of a Linear Guideway V&V2012-6054 Dein Shaw, PME/National Tsing Hua University, Hsin Chu, Taiwan, Wei-Lin Su, SKF, Hsin Chu, Taiwan The preload of a preloaded linear guideway is determined by four kinds of
FLUENT software product, demonstrating the process of compliance with the ASME V&V 20 Standard. General ANSYS V&V procedures for CFD are also briefly discussed in the context of the new ASME Standard, summarizing the degree of congruence as well as differences and issues.
showed that the actively water-cooled divertor will be overloaded under certain plasma conditions. One proposed method to mitigate these thermal loads is to design a protective “scraper element” that will serve as a plasma shield. The scraper element geometry is dictated by the plasma physics and must be manufactured with submillimeter precision and located with millimeter accuracy in an extremely confined operational space. The scraper element will be constructed using Carbon Fiber Composite (CFC) Monoblocks with water cooling channels as these have already received qualification for the ITER reactor and can handle steady-state heat loads up to 20 MW/m^2. Water must be used to provide active cooling of the CFC Monoblocks, and twisted tape inserts provide protection against critical heat flux. Within these parameters, an acceptable scraper element design is subject to constraints including maximum CFC temperature, maximum fluid pressure drop, and maximum fluid temperature rise.
In the first code verification example we verify the Navier-Stokes equations in a Couette flow with a pressure gradient. The steady viscous flow between two parallel plates is a relatively simple two-dimensional problem, and uniform Cartesian meshes are employed. In the second example we verify the steady energy equation in an anisotropic conduction heat transfer problem. Solution verification is then applied to a single-phase incompressible flow downstream of an axisymmetric abrupt expansion in a circular pipe with a constant wall heat flux. The flow is particularly interesting in that the spatial peak heat transfer coefficient downstream of the pipe expansion is many times the fully developed heat transfer coefficient for the same Reynolds number. This high heat transfer coefficient occurs in the region of the reattachment of the shear layer to the tube wall, typically a distance of 5-15 step heights downstream of the abrupt expansion, depending on the Reynolds number and the size of the expansion. Finally, the validation procedure is applied to the same case, the turbulent heat transfer in a pipe expansion, and the Nusselt number variation along the heated wall is compared to experimental data in the validation assessment.
This paper presents the application of the ASME Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer (ASME V&V 20-2009) to a model scraper element geometry. Code verification is achieved through the use of a mature commercial CFD code, ANSYS CFX v13. Solution verification is performed, including grid convergence as well as input parameter uncertainty propagation. Solution validation focuses on the most complex dynamics in the model: the twisted-tape fluid flow and heat transfer. Manglik and Bergles published "Heat transfer and pressure drop correlations for twisted tape inserts in isothermal tubes" (JHT, 1993), which provides a survey of several experimental results, including plots with visual estimates of data scatter.
In total, this paper presents an example of each of the three procedures: code verification, solution verification and validation. Code verification was exercised on the Navier-Stokes and heat transfer equations, and solution verification and validation is achieved using the GCI method on a non-isothermal, turbulent flow with well documented experimental data. In general, ANSYS FLUENT is well verified and validated for this class of CFD and heat transfer problems; nevertheless, even these simple examples illustrate the value of thorough V&V in identifying where improvements are required in the mathematical models and where better experimental data are needed.
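For reference, the GCI bookkeeping mentioned here reduces to a few lines; the sketch below uses placeholder three-grid solutions rather than the paper's results.

    import numpy as np

    f1, f2, f3 = 101.2, 102.8, 106.1   # fine, medium, coarse solutions of some quantity (placeholders)
    r = 2.0                            # constant grid refinement ratio

    p = np.log((f3 - f2) / (f2 - f1)) / np.log(r)      # observed order of accuracy
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)            # Richardson extrapolation
    gci_fine = 1.25 * abs((f1 - f2) / f1) / (r**p - 1.0)

    print(f"observed order p = {p:.2f}")
    print(f"extrapolated value ~ {f_exact:.1f}")
    print(f"GCI (fine grid) ~ {100 * gci_fine:.2f}% of the fine-grid solution")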
A CFD model of the twisted tape is solved to determine the pressure drop and the convective heat transfer coefficient. The uncertainty in the numerical simulation includes grid refinement, input parameter uncertainty, and the experimental data uncertainty approximated from Manglik and Bergles. Input parameters considered include heat flux, mass flow rate, fluid viscosity, density, specific heat, thermal conductivity, and inlet temperature. The results indicate the CFD model under-predicts twisted-tape performance, with a numerical solution uncertainty on the order of ten percent.
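A small worked sketch of the ASME V&V 20 comparison-error bookkeeping behind a statement like this is given below; the simulated value, data value, and the input and experimental uncertainty fractions are illustrative assumptions (only the roughly ten percent numerical uncertainty is taken from the abstract).

    import math

    # E = S - D, u_val = sqrt(u_num^2 + u_input^2 + u_D^2); if |E| is comparable to u_val,
    # the modeling error cannot be distinguished from the combined uncertainties.
    S, D = 4200.0, 4600.0          # simulated and measured heat transfer coefficient, W/m^2-K (assumed)
    u_num = 0.10 * S               # numerical (grid/iterative) uncertainty, ~10% as cited
    u_input = 0.06 * S             # propagated input-parameter uncertainty (assumed)
    u_D = 0.05 * D                 # experimental data uncertainty (assumed)

    E = S - D
    u_val = math.sqrt(u_num**2 + u_input**2 + u_D**2)
    print(f"E = {E:+.0f}, u_val = {u_val:.0f} -> |E| / u_val = {abs(E) / u_val:.2f}")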
CFD Modeling of Twisted Tape Cooling for Fusion Reactor Components V&V2012-6129 Joseph B. Tipton, Jr., The University of Evansville, Evansville, IN, United States, Arnold Lumsdaine, Jeffrey H. Harris, Oak Ridge National Laboratory, Oak Ridge, TN, United States, Alan Peacock, European Commission, Garching, Germany, Jean Boscary, Max-Planck-Institut für Plasmaphysik, Garching, Germany
This verification and validation process also helps to elucidate large sources for numerical uncertainty and, therefore, pathways forward for improvement. Specifically, the input parameter uncertainty is dominated by the mass flow rate. This leads to the identification of several challenges for CFD modeling of twisted tapes including thin wall tape meshing versus approximation and flow profile development.
With the high cost of building and operating fusion experiments, computational simulations are an essential part of designing and analyzing fusion components and facilities. “Improved integrated models that utilize advanced computational simulation techniques to treat geometric complexity, and integrate multi-scale and multiphysics effects, will be key tools for interpreting phenomena from multiple scientific disciplines and fusion experiments while providing a measure of standardization in simulation.” (Research Needs for Magnetic Fusion Energy Sciences, Report of the Research Needs Workshop, 2009, pg. 365).
V&V of a CFD Procedure for the Simulation of Water Flow through Perforated Plates Similar to the Ones of PWR Fuel Element End Pieces V&V2012-6158 José A. Barros Filho, André A. C. Santos, Moysés A. Navarro, Comissão Nacional de Energia Nuclear - Centro de Desenvolvimento da Tecnologia Nuclear, Belo Horizonte, Minas Gerais, Brazil
The Wendelstein 7-X stellarator is a case in point. It is an experimental fusion machine that is currently under construction at the Max Planck Institute for Plasma Physics in Greifswald, Germany. Unlike the tokamak geometry (such as the ITER experiment under construction in Cadarache, France), the stellarator produces a more stable plasma, as it does not require a large plasma current for plasma stability. The trade-off is that the stellarator requires complex 3D magnets in order to produce a stable field. This results in a complex divertor geometry. Recent physics calculations
Reliable methods for Verification and Validation (V&V) of CFD simulations are currently the main concern for the full acceptance of CFD calculations by nuclear regulatory agencies. The Thermal Hydraulics Laboratory of the Centro de Desenvolvimento de Tecnologia Nuclear-CDTN/CNEN has recently started a CFD V&V program intended to develop and validate procedures for the design and safety analysis of advanced fuel element components for the Brazilian nuclear plants. In this work a V&V process was performed for a CFD simulation procedure devised to estimate the pressure loss of water flow through perforated plates
similar to the ones of PWR fuel element end pieces. The validation was carried out against experiments with plates covering a wide range of geometric features similar to those used in end pieces. The ASME V&V 20 methodology was applied to the simulation of one perforated plate. Eight levels of mesh refinement were used, with the refinement ratio set to 1.3. To evaluate the range of applicability of the simulation procedure, the results obtained in the V&V study were used to define the parameters for the simulation of the other plates. The calculation domain was a scaled-down representation of the real plate whose dimensions were chosen so as to keep invariant the most important parameters that represent the flow structure. It is composed of a cross section of the real plate with only one hole in 45-degree symmetry. The width of the section outside the plate was determined so as to keep the flow area ratio invariant. This geometry was adopted after a thorough study comparing longer and wider domains with more holes, which showed that the results did not change significantly. Fully hexahedral meshes, generated by extrusion from the inlet to the outlet of the section, were used. The simulations were performed with the commercial code ANSYS CFX 13.0. The standard k-ε turbulence model was used. The iteration target was 10^-8 for the momentum root-mean-square residual. In spite of the fine level of refinement reached and the large number of meshes used, the results confirm the non-asymptotic convergence behavior for this kind of problem. A moderately refined mesh gave the best averaged agreement between experiments and simulations, with a reasonably low numerical uncertainty of ~4%. This mesh was used in the simulations of the other plates. The observed comparison error between experimental and numerical results for all tested plates was satisfactory, showing a maximum error of 7.7%. The authors express their thanks to the Fundação de Amparo à Pesquisa do Estado de Minas Gerais - FAPEMIG for the financial support.
consistency analysis. The most sensitive parameters are identified and used in the subsequent quantification of predictivity. We show how this quantitative validation methodology is used to identify and improve the models and parameters among the multiphysics components of our simulation that contribute the largest error. We have used this validation process to demonstrate and advocate a validation procedure that is philosophically influenced by the Scientific Method, made quantitatively rigorous by Bayesian Inference, but made simple and practical through the use of a consistency constraint. Simple enough that the simulation scientist and experimentalist can perform the validation independent of a Markov-chain Monte Carlo statistical analyst. Validation and Uncertainty Quantification of a Turbulent Buoyant Helium Plume V&V2012-6177 Philip Smith, Anchal Jatale, Diem Nguyen, The University of Utah, Salt Lake City, UT, United States Large-scale buoyant plumes appear in the natural and built environments in the form of fires, steam vents, etc. We have been using large eddy simulations (LES) to model the performance of these plumes for predictive applications. Formal quantitative validation of the performance of the LES tool has been performed by using data from a one meter diameter Helium Plume at Sandia National Laboratory. The validation/uncertainty quantification (VUQ) methodology that we have employed is a data consistency method first proposed by Frenklach and coworkers from University of California, Berkeley. The methodology has been expanded for applications where function evaluations are expensive (as in this LES simulation where each realization requires a small, dedicated computer cluster of several hundred processors) and where the experimental data are very difficult to obtain and thus are sparse.
A Validation Methodology for Quantifying Uncertainty in High Performance Computer-based Simulations with Sparse Experimental Data V&V2012-6172 Philip Smith, Sean Smith, Jeremy Thornock, Diem Nguyen, Ben Schroeder, Anchal Jatale, University of Utah, Salt Lake City, UT, United States
Our VUQ framework produces the posterior uncertainty in applicable ranges of models as well as model parameter values. An input uncertainty map, which specified the model inputs and parameters with associated uncertainties or ranges, was initially prepared to identify the active variables for the final nonlinear validation. With the help of this map a design of experiment with 3 active parameters was constructed using a central composite design. A total of 15 cases were run with the LES simulator and data were collected at three different heights (0.2 m, 0.4 m, and 0.6 m above the helium inlet) in the domain. A surrogate model was constructed and used to search for a subspace where all experimental and simulation data are consistent. The predictions for the time-averaged velocities were consistent with all experimental observations at all locations. The use of VUQ framework also gives a measure of consistency between the model predictions and experimental data. This approach does not only parameterize the parameter uncertainty region but transfer the uncertainties of the experimental data into the model directly. Thus, allowing one to determine more-realistic bounds on model predictions.
The practical utility of the results of a numerical simulation is proportional to the degree to which the error and uncertainty in the simulation results have been quantified. We have used large eddy simulations (LES) on high performance computer clusters (up to 400,000 processors) to produce applied simulations for combustion systems. The intended use of these simulations is to assist in the design and operation of new energy applications. As such, the available data for the intended use are both expensive and sparse. Computational and experimental data must be integrated through a range of experimental scales and through a hierarchy of complexity levels. We have built a hierarchical validation framework to inform the prediction of the complex application with data and models from simpler systems lower in the hierarchy. Through the use of Bayesian inference, we provide a mathematically sound foundation for this method. Our approach is to draw on prior information and to exploit a consistency requirement among the available experimental data sets and the simulations of these sets, in order to quantify the uncertainty in model parameters, boundary conditions, experimental error, and simulation outputs and thereby produce predictivity.
Simulation Studies of Air and Liquid Impinging Jet Heat Transfer V&V2012-6026 Xin Gu, Olga Filippova, Yong Zhou, Rick Shock, Exa Corporation, Burlington, MA, United States
This paper presents the methodology and provides a specific practical example with a three level hierarchy. This example is that of a validation hierarchy for predicting combustion efficiency from industrial flares using LES. Each LES simulation requires a cluster of ~1000 cores. Experimental data are expensive wind tunnel measurements. We present the approach to obtain the uncertainty interval for the sparse data that are used as the likelihood in the
Turbulent jet impinging orthogonally onto a hot plate produces among the highest level of heat transfer coefficient encountered for single phase convections. Thus, it is a very important type of benchmark test for thermal validations. In this paper, we are using our commercial fluid flow solver, PowerFLOW to predict the heat transfer of air and liquid jet impinging onto a heated plate, where
the results are compared to experimental data from Schroeder and Garimella [1] and Morris et al. [2], respectively.
generated impulse for testing is equivalent to the Extreme Service Condition Pressure (usually equivalent to CFDP) for the system requiring certification.
First, the air impinging jet case setup and results are discussed. The air is supplied from a plenum containing a porous medium that allows air flow only in the normal direction, in order to reduce the turbulence. At the other end of the plenum there is a small round orifice with a diameter of 6.35 mm (D) and a length-to-diameter aspect ratio of 1. After the air exits the orifice, it impinges onto a rectangular heated plate placed 4D away from the orifice exit. The rectangular heated plate has a size of 20 mm x 20 mm, with a constant heat flux of 7.5 kW/m^2. The Reynolds number is 20,000 based on the orifice diameter as the characteristic length. Four mesh resolutions are used: h36, h48, h72, h96. The heat transfer coefficient is averaged within annular rings formed by equally dividing the heated plate from its center. The results show very good resolution independence, with the highest resolution yielding slightly better agreement. Next, the liquid impinging jet results are discussed. The geometry setup is almost identical to the air impinging jet; the only difference is that the heated plate has a different size, 10 mm x 10 mm. The liquid used is FC77, with a molecular Prandtl number of 25 at 20°C. Three mesh resolutions are used: h36, h48, h72. Since the liquid molecular Prandtl number is a strong function of temperature, we found that setting the near-wall molecular Prandtl number to its value at the wall temperature, approximately 18 in this case, gives results that compare very well with experiments.
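The ring-averaging step described here is simple to reproduce; the sketch below bins a local heat transfer coefficient, h = q''/(T_wall - T_jet), into annuli from the stagnation point using a synthetic wall-temperature field (only the heat flux, plate size, and orifice diameter are taken from the abstract).

    import numpy as np

    q_flux, T_jet, D = 7500.0, 293.0, 6.35e-3            # W/m^2, K, orifice diameter (m)
    x = y = np.linspace(-10e-3, 10e-3, 201)              # 20 mm x 20 mm plate
    X, Y = np.meshgrid(x, y)
    r = np.hypot(X, Y)
    T_wall = T_jet + 15.0 + 40.0 * (r / r.max())         # synthetic wall-temperature rise

    h = q_flux / (T_wall - T_jet)                        # local heat transfer coefficient
    edges = np.linspace(0.0, r.max(), 11)                # ten rings from the center
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (r >= lo) & (r < hi)
        print(f"r/D = {0.5 * (lo + hi) / D:4.2f}:  h_avg = {h[mask].mean():6.0f} W/m^2-K")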
For this presentation we specifically examine the M20 breech assembly of the 105mm M119 Towed Howitzer. A total of three M20 breech assemblies were tested until catastrophic failure. Magnetic particle inspections were performed periodically throughout testing to document the crack initiation and propagation in each test asset. These results were compared with numerical simulations of the breech assembly using models based on principal strain to determine the fatigue life to crack initiation. The numerical models are simulated using a modification of the commercial software FE-SafeTM, to account for conditions occurring within the breech. The material properties required for these simulations have been experimentally generated at Benét Laboratories. Comparisons between the Breech Fatigue Lab and the numerical simulations have validated that the numerical models can accurately predict crack initiation location and life cycle count for the breech assembly. In order to establish a final safe fatigue life for the system, an additional three breech assemblies will be dynamically tested to failure and compared with the numerical simulations to further validate the models. Validation of Enrichment Based Multi-Scale Method for Modeling Composite Materials V&V2012-6224 Andrew Littlefield, US Army RDECOM-ARDEC Benet Labs, Watervliet, NY, United States, Michael Macri, U.S. Army, ARDEC, Benet Labs, Watervliet, NY, United States
[1] V.P. Schroeder, S.V. Garimella, Heat transfer from a discrete heat source in confined air jet impingement, Heat Transfer 1998, Proceedings of 11th IHTC, Vol.5, August 23-28, 1998, Kyongju, Korea [2] G.K. Morris, S.V.Garimella, R.S. Amano, Prediction of jet impingement heat transfer using a hybrid wall treatment with different turbulent Prandtl number functions, Journal of Heat Transfer, Vol. 118, Issue 3, 562-569 (1996)
Fielded and future military systems are increasingly incorporating composite materials into their design. Many of these systems undergo rapid and severe changes in temperature, as well as subject the composites to environmental conditions which can cause micro damage leading to variations of the mechanical properties on the global scale. For these applications, it is critical to develop the ability to accurately model the response of composite materials, to enable engineers to accurately predict the response of the system.
VALIDATION METHODS FOR MATERIALS ENGINEERING 8-1 VALIDATION METHODS FOR MATERIALS ENGINEERING: PART 1 Wilshire B 4:00pm–4:50pm
A widely popular approach to resolve this is to assume homogenized effective properties throughout the composite. However, this ideology breaks down in critical areas when large thermal strains and micro-damage are prominent. To alleviate these limitations, a multiscale enriched partition of unity (POU) method is proposed which uses a structural based enrichment approach, allowing macro-scale computations to be performed with the micro-structural features explicitly considered. POU strategies have an advantage in that the enriched local function space may be easily varied from one node to the other allowing variances in the microstructure, such as temperature gradients and localized damage to fibers.
Session Chair: Dawn Bardot, HeartFlow, Redwood City, CA, United States Session Co-Chair: Nuno Rebelo, Dassault Systemes Simulia Corp, Fremont, CA, United States 105mm M20 Howitzer Breech Fatigue Life Modeling and Simulation V&V2012-6223 David Alfano, Michael Macri, U.S. Army, ARDEC, Benet Labs, Watervliet, NY, United States Conducting live fire testing for tank and artillery breeches to determine fatigue life is an excessively costly and time-consuming process. To alleviate these issues, Benét Laboratories conducts dynamic fatigue life testing and simulations at a significantly reduced cost and schedule within a laboratory environment. Fatigue testing certifies the safety of armament systems and ensures that high quality equipment is provided to the Warfighter. Dynamic fatigue testing is implemented according to International Test Operations Procedure 3-2-829, Cannon Safety Test. Following this procedure, the breech assembly is cycled at the Cannon Fatigue Design Pressure (CFDP) until failure occurs. Statistical analyses are then performed on the test data to determine the Safe Fatigue Life.
Aluminum metal matrix composite samples were fabricated and standard material property testing was conducted. The tests were conducted at room temperature, 500F, and 1000F. This data was then compared to the numerical predictions to validate them. Adjustments were made to the numerical models to account for any differences found between simulation and experiment.
Laboratory testing consists of imposing ballistic equivalent pressure loads onto the components in order to simulate live-fire. The Breech Fatigue Lab is capable of producing pressures comparable with live cannon fire in both magnitude and rate, such that the
Nanotechnology is inherently a multi-scale and multi-physics challenge. Multi-scale modeling is becoming an essential tool for the design and fabrication of devices at the micro/nano-scale.
Key Challenges Facing Verification and Validation of Multi-Scale Modeling at the Nanoscale V&V2012-6238 Behrouz Shiari, University of Michigan, Ann Arbor, MI, United States
A major category of multi-scale simulations is called concurrent multi-scale modeling. This approach links methods appropriate at each scale together in a combined model, where the different scales of the micro/nano-system exchange information through coupled or matching interfaces. Numerically, concurrent multi-scale modeling can significantly reduce simulation time. However, the biggest challenge of concurrent multi-scale modeling is verification and validation of the method across the complete range of length and time scales.
The ASME V&V 40 subcommittee on Verification and Validation in Computational Modeling of Medical Devices has been formally established under the jurisdiction of the ASME Verification and Validation (V&V) Standards Committee. This subcommittee is part of a series of four V&V subcommittees, including computational solid mechanics (V&V 10), and computational fluid dynamics and heat transfer (V&V 20). While V&V 10 and V&V 20 are primarily focused on the development of more general guides and standards related to their respective computational methodologies, the focus of V&V 40 is tailored toward the development of standards and guides for computational models of medical devices.
In this article, we discuss the challenges of verification and validation of a new concurrent finite-temperature coupled atomistic/continuum discrete dislocation method. The developed method employs an atomistic description of small but key regions of the system, consisting of millions of atoms, coupled concurrently to a finite element model of the periphery. The method is unique because, unlike other concurrent methods, it couples a continuum region containing any number of discrete dislocations to an atomistic region, and permits accurate, automatic detection and passing of dislocations between the atomistic and continuum regions. The method has the ability to treat dislocations as either atomistic or continuum entities within a single computational framework. This feature makes the method useful for understanding the major role of defects in the physical and mechanical performance of micro/nano-devices.
This presentation will provide an overview of the standard development process, which establishes consensus, and the activities of the V&V 40 subcommittee. A brief history of the formation of the subcommittee will be provided, followed by an overview of the V&V committees and their general V&V methodology. Finally, the subcommittee charter, activities, and future plans will be presented. Verification and Validation Methodologies for Prosthetic Heart Valves: Review and Considerations V&V2012-6105 Andrew Rau, Exponent, Inc., Philadelphia, PA, United States, Tina Zhao, Edwards Life Sciences, Irvine, CA, United States, Shiva Arjunon, Georgia Institute of Technology, Atlanta, GA, United States
The verification and validation of the developed method has been examined in the following three steps; the major challenges at each step are described.
Recent initiatives by both the U.S. Food and Drug Administration (FDA) and medical device manufacturers have sought to increase the credence given to computational modeling of medical devices for regulatory submissions. In order to establish model validity for predicting device performance, comprehensive methodologies for verification and validation (V&V) are necessary. Standardized methods or best practices for V&V of computational models for heart valve prostheses are not well established or documented. An overview of existing standards and methodologies for V&V of heart valve prostheses is presented, along with a discussion of important elements of V&V which should be examined in more detail. Additionally, a recommended methodology for V&V of both structural and fluid mechanical models, which focuses primarily on biological prostheses, is suggested. Physical parameters of interest from both structural and fluid models are highlighted, and a recommended analytical approach for quantifying those parameters is presented.
As in all concurrent methods, an interface between the atomistic and continuum regions of our model is inevitable. In the first step, the seamless transfer of data between the atomistic and continuum regions of the simulations is verified in 1D and 2D. The detailed treatment of the model at the interface between the atomistic and continuum regions is outlined. The results show that the numerical technique implemented in the method can overcome the fundamental incompatibility of the nonlocal atomistic description and the local continuum description at the interface. In the second step, relatively simple multi-scale validation simulations are designed and a one-to-one comparison is made between fully atomistic and multi-scale results. The limitation of running long-time and large-size fully atomistic simulations to verify the large multi-scale simulation results is addressed. In the third step, validation of the model is sought against experimental data gathered from nano-indentation and nano-scratching tests. Challenges in experimentally validating the simulations are reported.
Computational Modeling Verification and Validation for Stents: Review Current Standards and Develop Example Problem V&V2012-6231 Sanjeev Kulkarni, Boston Scientific, Maple Grove, MN, United States, Tina M. Morrison, FDA, Silver Spring, MD, United States, Senthil K. Eswaran, Abbott Vascular, Santa Clara, CA, United States, Atul Gupta, Medtronic, Inc., Santa Rosa, CA, United States, Brian Choules, Richard Swift, Cook MED Institute, West Lafayette, IN, United States, Brian P. Baillargeon, Cordis Corporation, Fremont, CA, United States, Payman Saffari, NDC, Fremont, CA, United States, Xiao-Yan S. Gong, Medical Implant Mechanics LLC, Laguna Niguel, CA, United States, Soudabeh Kargar, University of Alabama, Huntsville, St. Paul, AL, United States, Dawn Bardot, HeartFlow, Redwood City, CA, United States,
STANDARDS DEVELOPMENT ACTIVITIES FOR VERIFICATION AND VALIDATION 11-3 PANEL SESSION: ASME COMMITTEE ON VERIFICATION AND VALIDATION IN COMPUTATIONAL MODELING OF MEDICAL DEVICES Wilshire B
4:50pm–6:00pm
Session Chair: Carl Popelar, Southwest Research Institute, San Antonio, TX, United States Session Co-Chairs: Andrew Rau, Exponent, Inc., Philadelphia, PA, United States, Ryan Crane, ASME, New York, NY, United States
The use of computational modeling (CM) in the design and evaluation of medical devices is becoming more prevalent in regulatory submissions. Additionally, the regulatory agencies’ initiation and promotion of eco-systems to facilitate usage of CM is further encouraging. Increased use results in the need for
Overview of ASME V&V 40 Committee: Verification and Validation in Computational Modeling of Medical Devices V&V2012-6253 Carl Popelar, Southwest Research Institute, San Antonio, TX, United States
establishing credibility and the need to develop guidelines and methodologies for the verification and validation (V&V) process. The charter of the recently established ASME V&V 40 Committee is to provide such guidelines for the V&V process, specifically intended for medical devices. Medical devices are unique among mechanical structures such as automobiles and airplanes because they interact with biological structures (e.g., blood vessels).
obtained from animal or bench-top experiments (i.e., can the simulation be extended to the human case when the validation is done in animal or bench-top experiments?). iii) Guidelines for performing and interpreting qualitative validation, which is the approach most often used today, based on a literature review. In this approach, the final validation uncertainty is not quantified. Experimental and model results are presented, and the reader draws his or her own conclusion on the quality of the correlation. This approach is considered inferior to the quantitative approach but is sometimes the only practical solution.
This committee has been structured to include various medical device sub-groups such as Stents, Endovascular grafts, and Heart Valves. The focus of this abstract is to describe the activities of the Stent sub-group. Specifically, the goals of the Stent subgroup are:
In addition, the talk will also provide recommendations on example problems to be included in the standard, such as the FDA's benchmark nozzle and blood pump models (https://fdacfd.nci.nih.gov).
1. To review currently available V&V standards, methods, or best practices for CM as applied to stents. 2. To identify gaps in available V&V best practices, establish a roadmap to eliminate these gaps, and work with other subgroups toward developing a comprehensive standard for CM; and
Verification and Validation of Computational Solid Mechanics Models used for Medical Devices V&V2012-6247 Atul Gupta, Medtronic, Inc., Santa Rosa, CA, United States, Steven Ford, Edwards Lifesciences, Irvine, CA, United States, Tina M. Morrison, FDA, Silver Spring, MD, United States, William A. Olson, Ethicon Endo-surgery, Blue Ash, OH, United States, Nuno Rebelo, Dassault Systemes Simulia Corp, Fremont, CA, United States, Xiangyi (Cheryl) Liu, Dassault Systemes Simulia Corp, Providence, RI, United States, Sanjeev Kulkarni, Boston Scientific, Maple Grove, MN, United States, Bob Tryon, VEXTEC Corporation, Brentwood, TN, United States, Jinhua Huang, GE Healthcare, Magnetic Resonance, Waukesha, WI, United States, Anita Bestelmeyer, Arun Nair, BD Medical, Franklin Lakes, NJ, United States
3. To develop a demonstrative example by applying the aspects of the proposed standard to a generic stent design. The ASTM stent design, which is a non-proprietary stent, will be used as the platform for developing this example. All activities and successes of the Stent Subgroup will be presented and discussed in this presentation. Outline for the New ASME Standard for the Verification and Validation of Computational Fluid Dynamics (CFD) Simulation in Medical Devices V&V2012-6243 Marc Horner, ANSYS, Inc., Evanston, IL, United States, Dawn Bardot, HeartFlow, Redwood City, CA, United States, Jeff Bodner, Medtronic Corporation, Minneapolis, MN, United States, Ricky Chow, Lake Region Medical, Chaska, MN, United States, Prasanna Hariharan, US Food and Drug Administration, Silver Spring, MD, United States
Computational methods have become a mainstay in product development and reliability determination. As computational modeling became more integral to these processes, a need for a common approach to both verifying and validating the integrity of computational models arose. This was required in order to ensure a common strategy by which models could be considered both adequately precise and accurate for the purposes of proving out designs. V&V10 represents the culmination of efforts to create a standard methodology to verify and validate such models.
Computational fluid dynamics (CFD) is routinely used in the design and development of a wide array of medical devices such as ventricular assist devices, inhalers, and stents, to name only a few. Device manufacturers also frequently use CFD data to demonstrate safety and efficacy of their products as a part of the regulatory submission to the FDA. However, guidelines for using CFD to demonstrate product safety have not reached consensus. Recently, FDA in collaboration with medical device industries and academia conducted a round robin study to determine the current state and limitations of CFD modeling as applied to medical devices. The results from the round-robin study highlighted the need for proper validation of the CFD models before using them for demonstrating device safety.
The medical device arena, however, contains many unique modeling challenges (e.g., simulating the in-vivo environment), to the extent that a new effort to draft guidelines for such devices was initiated in the form of V&V 40. The benefits of such efforts are twofold. First, these guidelines will help establish an industry-specific means of verifying and validating models. Second, by following such guidelines, an increase in submission efficiency may be realized as both device manufacturers and FDA representatives will have a common blueprint for successful simulation and evaluation of devices.
Currently, there are no industry-wide consensus standards available for verification and validation (V&V) of CFD simulation for medical devices. The goal of the V&V 40 CFD subgroup is to develop a methodology for validating a CFD simulation in a medical device setting. Content will be largely borrowed from the existing ASME V&V 10 and V&V 20 standards. The approach will be to provide guidance for biomedical-specific areas where the V&V 10/20 approach does not apply or is unclear. This includes the following topics:
V&V 40 represents the new effort to establish medical-device-specific guidelines for verifying and validating computational models. This effort is represented by the solid modeling group and three device-specific subgroups: stents, heart valves, and endovascular devices. Each group is tasked with ascertaining which topics are of greatest importance for future incorporation into standard guidelines. Subgroups work independently on their individual efforts while also communicating vertically to the senior committee members in order to ensure adherence to the V&V 40 mission.
i) Guidelines for evaluating the effect of geometry uncertainty on simulation uncertainty that typical users of V&V 10 and 20 do not encounter. Examples include geometry reconstruction, variations in patient populations, and previously discounted phenomena that now have biological consequences such as blood cell damage.
The solid mechanics group is focusing its efforts on cross-disciplinary topics that address broad challenges in simulation for medical devices using finite element analysis. The top five initiatives were selected based on their importance through a survey among the group members. Those initiatives are:
ii) Guidelines for validation of device use in humans based on data
• Verification strategies • Addressing product variability • Material model calibration • Boundary condition variability • Systematic means of determining worst case boundary conditions
computational modeling as an extremely powerful tool for evaluating the performance and durability of medical devices. The fidelity of these computational models is critically important given the reliability and durability requirements of medical devices. Enhancing the fidelity of these models requires establishing credibility and developing guidelines and methodologies for the verification and validation (V&V) process. The goal of the recently established ASME V&V 40 Committee is to provide such guidelines for the V&V process, specifically intended for medical devices.
Another initiative within the solid mechanics subgroup is to investigate the gaps that may exist between V&V 10 and the future requirements of V&V 40 for medical device applications. The V&V 40 effort has begun to create the conditions necessary to formalize its intent. The solid modeling group is focusing its efforts on key topics of interest, including gaps it may find in V&V 10 when used for the simulation of medical devices. It is our hope to be able to report developments at future venues.
The ASME V&V 40 committee has been structured to include a general methodology subgroup and various device-specific subgroups in order to provide guidelines covering both common and device-specific topics. The committee consists of General Methodology, Solid Mechanics, Fluid Mechanics, Heart Valve, Stent, and Endovascular subgroups.
V&V 40 - Verification & Validation in Computational Modeling of Medical Devices General Methodology Subcommittee: A Review of Best Practices and Unique Issues in the Medical Device Community V&V2012-6248 Dawn Bardot, HeartFlow, Redwood City, CA, United States, Brian Choules, Cook MED Institute, West Lafayette, IN, United States, Hui Jin, Medtronic, Inc., Mounds View, MN, United States, Tina M. Morrison, FDA, Silver Spring, MD, United States, Kenneth Perry, Echobio, Bainbridge Island, WA, United States, Carl Popelar, Southwest Research Institute, San Antonio, TX, United States, David Quinn, Veryst Engineering, Needham, MA, United States, Timothy Rossman, Mayo Clinic, Rochester, MN, United States, Yong Zhao, BetterLife Medical, LLC, Simi Valley, CA, United States
The focus of this presentation is on the activities of the Endovascular subgroup. Since there are several commonalities in topics of interest between this and other subgroups, such as Stent and Solid Mechanics, the primary focus of this group will be the challenging aspects of V&V specific to endovascular devices. Specifically, the goals of the Endovascular subgroup are: 1. To review currently available V&V standards, available methods, and published literature data for V&V of computational modeling specific to endovascular devices. 2. To identify gaps in available V&V best practices, establish a roadmap to address these gaps, and work with other subgroups toward developing a comprehensive standard. 3. To focus on topics relevant to challenging and unique aspects of V&V of computational modeling of stent-graft devices that are not covered by the Solid Mechanics and Stent subgroups. When possible, available published literature data will be referenced as examples throughout this guideline.
Formalized in 2010, the V&V 40 - Verification & Validation in Computational Modeling of Medical Devices Committee Charter is to provide procedures to standardize verification and validation for computational modeling of medical devices. The goal of a Verification and Validation Standard is to provide: common language and definitions, conceptual framework, methodology, guidance for implementation, and best practices.
The activities and progress of the Endovascular Subgroup will be presented and discussed at the 2012 ASME Verification and Validation Symposium.
In this technical presentation the General Methodology Subcommittee will discuss the unique challenges associated with validation of medical device simulations. These simulations are typically assessed against evidence based comparators such as in vitro, ex vivo, animal, and clinical studies. However, these comparators may not perfectly reflect the clinical application of the medical device. Additional uncertainties unique to medical devices include physiologic parameters, anatomical geometries, and limited data on patient population physical properties. In light of these challenges, a computational model credibility assessment matrix is presented as a way to incorporate the risk and consequence associated with simulation results in the validation plan.
THURSDAY, MAY 3 CONTINENTAL BREAKFAST Celebrity Ballroom 2
7:00am–8:00am
REGISTRATION Sunset 1 Foyer 7:00am–6:00pm UNCERTAINTY QUANTIFICATION, SENSITIVITY ANALYSIS, AND PREDICTION
Verification and Validation in Computational Methods for Medical Devices V&V2012-6254 Payman Saffari, NDC, Fremont, CA, United States, Brian Choules, Cook MED Institute, West Lafayette, IN, United States, Atul Gupta, Medtronic, Inc., Santa Rosa, CA, United States, Christine Scotti, W.L. Gore & Associates, Flagstaff, AZ, United States
2-3 UNCERTAINTY QUANTIFICATION, SENSITIVITY ANALYSIS, AND PREDICTION: PART 3 Sunset 3&4
8:00am–10:00am
Session Chair: William Bryan, ANSYS, Inc., Canonsburg, PA, United States Session Co-Chair: Francesco D’Auria, University of Pisa, GRNSPG, Pisa, Italy
Computational modeling of medical devices has become prevalent in recent years. Today, this technology is widely used throughout different stages of the medical device development process, such as design, manufacturing, testing, evaluation, and regulatory submission. The regulatory agencies have recognized
A Bayesian Approach for Identification under Uncertainty using Support Vector Machines
of safety that includes load uncertainty can be made. When the uncertainty, U, is substantially lower than the margin, M, safety is very likely assured. When the uncertainty, U, is substantially greater than the margin, M, safety is very unlikely. When the uncertainty, U, is on the order of the margin, M, safety is uncertain. In all three cases, a conclusion regarding the design is reached.
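To make the margin-versus-uncertainty comparison above concrete, here is a minimal illustrative sketch in Python; the resistance, load, and uncertainty values and the cut-off ratios are assumptions for illustration only, not values from the presentation.

# Minimal sketch of the margin-vs-uncertainty comparison described above.
# R, S, U, and the cut-off ratios are illustrative assumptions only.
def qmu_assessment(R, S, U):
    """Return the margin M = R - S and a coarse qualitative judgment."""
    M = R - S
    if U < 0.5 * M:      # illustrative threshold for "U substantially lower than M"
        verdict = "safety very likely assured (U << M)"
    elif U > 2.0 * M:    # illustrative threshold for "U substantially greater than M"
        verdict = "safety very unlikely (U >> M)"
    else:
        verdict = "safety uncertain (U on the order of M)"
    return M, verdict

M, verdict = qmu_assessment(R=100.0, S=70.0, U=12.0)
print(f"Margin M = {M:.1f}, uncertainty U = 12.0 -> {verdict}")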
V&V2012-6144 Sylvain Lacaze, Samy Missoum, University of Arizona, Tucson, AZ, United States Bayesian update approaches require the computation of a likelihood and the availability of a prior distribution of the update parameters. It is well known that the likelihood is nearly impossible to compute accurately for medium or high-dimensional problems with costly simulations. This issue is greatly amplified if several correlated responses are used for identification purposes since the likelihood would be a joint distribution.
The presentation will address strategies for decision making in the context of limited data for both load and resistance, which in practice is almost always the case. In practice, we are usually concerned with incorrect predictions, i.e., inferences which, in theory, are provably untrue (something was determined to fail or not fail, based on a given threshold, but the reality is that the opposite occurred). Two types of incorrect inferences are addressed: they are referred to as Type I and Type II errors. A Type I error is one in which failures are over-predicted; a Type II error is one in which non-failures (or misses) are over-predicted, or in which failure is under-predicted. The latter would seem to be of greatest concern from the standpoint of mission risk.
Support Vector Machines (SVMs) provide a first step in overcoming the aforementioned hurdles. Indeed, an SVM is a classification approach that enables the construction of an explicit boundary of a feasible identification domain in the parameter space where the errors between the experimental and computational data are acceptable. The boundary is constructed as a function of the update parameters and the uncontrollable parameters using a limited number of design of experiments samples. The SVM is then refined using an adaptive sampling scheme referred to as Explicit Design Space Decomposition. Once the boundary is constructed, the likelihood can be estimated in a straightforward manner using basic Monte Carlo sampling. Note that, in cases where several responses are needed, only one SVM boundary is needed and their correlation is implicitly accounted for. Another advantage of the classification approach stems from the fact that it can handle discontinuous responses. This characteristic of the classification approach, the need for only one SVM, and the availability of an adaptive sampling scheme to obtain an accurate boundary make this approach very competitive even in comparison with metamodeling approaches such as Gaussian processes.
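The classification idea described above can be sketched as follows; this is a minimal illustration assuming NumPy and scikit-learn, with a placeholder error function and acceptance threshold, and it omits the adaptive Explicit Design Space Decomposition refinement used by the authors.

# Minimal sketch of the SVM-based feasible-domain idea described above.
# The response error, threshold, and prior are illustrative placeholders.
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def response_error(theta):
    """Placeholder for |computational - experimental| response error."""
    return np.abs(np.sin(3 * theta[:, 0]) + 0.5 * theta[:, 1] - 0.7)

# 1. Design of experiments over the update parameters.
theta_doe = rng.uniform(-1.0, 1.0, size=(200, 2))
labels = (response_error(theta_doe) < 0.2).astype(int)  # 1 = acceptable error

# 2. Train an SVM classifier; its decision boundary approximates the
#    boundary of the feasible identification domain.
svm = SVC(kernel="rbf", C=10.0, gamma="scale").fit(theta_doe, labels)

# 3. Basic Monte Carlo over the prior: the fraction of prior samples
#    classified as feasible gives a crude feasibility/likelihood estimate.
theta_prior = rng.uniform(-1.0, 1.0, size=(100_000, 2))
feasible_fraction = svm.predict(theta_prior).mean()
print(f"Estimated feasible-domain probability: {feasible_fraction:.3f}")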
The strategy of basing critical decisions on the probability of underestimating the probability of failure, in the context of limited data or information, is presented as an alternative to standard reliability theory where an average or expected probability of failure is normally used. In this context, the probability of under-estimating the probability of failure depends on the selected threshold; the lower the threshold the lower the probability of under-estimating failure probability. In other words, the risk of mission failure is more conservatively estimated by choosing a lower threshold. Continued Development of Statistically Based Validation Process for Computational Simulation V&V2012-6154 John Doty, University of Dayton, Dayton, OH, United States, Jose Camberos, Kirk Yerkes, Mitch Wolff, U.S. Air Force Research Laboratory, Dayton, OH, United States
The methodology is applied to finite element model updating using modal data (frequencies and mode shapes). In particular, the proposed approach allows one to efficiently include the MAC matrix as well as the frequencies (taken individually, and not in a global residual) for identification of the parameters. Several test examples will be presented and discussed.
This presentation addresses alternative strategies for quantification of margins and uncertainties (QMU) suitable for decision-making in a technical environment where, for example, one-of-a-kind missions are flown and false alarms can be tolerated from a conservative-risk point of view, or rare events are being observed where missed detections cannot be tolerated.
We continue the development of a statistically-based process for validation of computational experiments. The focus of this methodology is uncertainty quantification (UQ), sensitivity analysis (SA), and variance reduction. The two types of uncertainty (aleatory and epistemic) are incorporated into the methodology. For this investigation, the behavior of the simulation inputs is presumed unknown (aleatory due to lack of knowledge) and is therefore modeled using equally probable uniform distributions. These distributions have known and quantifiable uncertainties that are used to determine the sampling residuals, which are expected to follow a random normal distribution (epistemic with known mean and variance). Statistically-based sample sizes and uncertainty propagation limits are determined based upon a priori tolerance specifications as well as Type I and Type II risks. Newer quasi-random sampling procedures are demonstrated to be superior to classical pseudo-random sampling procedures. Further examples are presented that elaborate on the methodology and how it is extensible to all validation processes.
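As an illustration of the quasi-random versus pseudo-random comparison mentioned above, the following minimal sketch (assuming SciPy 1.7+ and an arbitrary placeholder response, not the authors' simulation) samples uniformly distributed inputs both ways.

# Minimal sketch contrasting pseudo-random and quasi-random (Sobol) sampling
# of uniform inputs; the integrand is an illustrative stand-in for a
# simulation response.
import numpy as np
from scipy.stats import qmc

def model(x):
    """Placeholder response: smooth function of two uniform inputs on [0,1]^2."""
    return np.exp(-x[:, 0]) * np.sin(np.pi * x[:, 1])

n = 1024
rng = np.random.default_rng(1)

# Classical pseudo-random (Monte Carlo) sampling.
x_mc = rng.random((n, 2))

# Quasi-random Sobol sampling (scrambled, base-2 sample size: 2**10 = 1024).
x_qmc = qmc.Sobol(d=2, scramble=True, seed=1).random_base2(m=10)

print("MC  mean estimate:", model(x_mc).mean())
print("QMC mean estimate:", model(x_qmc).mean())
# For smooth responses the Sobol estimate typically shows lower scatter at
# the same sample size than pseudo-random sampling.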
QMU is a term whose meanings and implications are in transition in different communities. In most contexts, it is used to refer to a systematic framework within which uncertainties can be quantified against operational margins. Traditional QMU acknowledges the existence of uncertainties in loads, as well as inherent variability due to tolerances and epistemic incertitude, by which to judge the adequacy of a design. When a structure is designed to resist a load, R, and is subjected to a load, S (smaller than R, in this case), the margin of safety of operation of the structure is M. When both load and resistance are deterministic, the margin of safety is deterministic. If there is uncertainty, +U, in the load, and it is known precisely (in some sense), then a judgment
Modeling and simulation (M&S) techniques are frequently used in the design and development of new systems to gather insight into the system's likely physical behavior. Recent trends in uncertainty quantification (UQ) are also being applied to validation of computational simulations. Many techniques exist that allow one to use statistics to optimize a design as well as reduce risk before moving forward to testing. The utility of these techniques is far-reaching, as many engineering projects use M&S during development. One step in the development of a model is the validation process. In general, the goal of a validation process is related to quality assurance by demonstrating that a set of inputs with known uncertainties will result in an output with a specified tolerance at a desired confidence
Quantification of Margins and Uncertainties in the Decision Making Process V&V2012-6146 Timothy Hasselman, George Lloyd, ACTA Inc., Torrance, CA, United States, Thomas Paez, Thomas L. Paez, Consulting, Durango, CO, United States
level. Unfortunately, the validation process, in order to meet such tolerances, can be quite lengthy and expensive. This investigation is part of a methodology under development to improve the effectiveness and efficiency of the validation process. This topic is of considerable interest to the engineering community involved in testing and is the subject of ongoing and continued research.
surrogate models, and it is determined that the 3rd-order Optimal Response Surface surrogate accurately reflects the calibration data and predicts the validation data from the original model within acceptable precision and risk bounds. The work within this paper is one step in the continued development of a validation process and of a quantifiable model confidence metric. This paper addresses sensitivity to parameter uncertainty, experimental design, creation of surrogate models, and validation of the surrogate models. We apply these steps to an aircraft synchronous generator model. This work compares surrogate models and the development of model confidence metrics using statistical residual analysis.
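A minimal sketch of the response-surface idea discussed above follows; the quadratic basis, the placeholder "simulation" function, and the sample sizes are assumptions for illustration, not the aircraft generator model or the authors' Optimal Response Surface designs.

# Minimal sketch of fitting a 2nd-order polynomial response surface to
# calibration runs and checking it against held-out validation runs.
import numpy as np

rng = np.random.default_rng(3)

def simulation(x):
    """Placeholder expensive model with two inputs."""
    return 3.0 + 2.0 * x[:, 0] - 1.5 * x[:, 1] + 0.8 * x[:, 0] * x[:, 1] + 0.3 * x[:, 1] ** 2

def design_matrix(x):
    """Quadratic response-surface basis: 1, x1, x2, x1^2, x1*x2, x2^2."""
    x1, x2 = x[:, 0], x[:, 1]
    return np.column_stack([np.ones_like(x1), x1, x2, x1 ** 2, x1 * x2, x2 ** 2])

# Calibration and validation designs over the input space.
x_cal = rng.uniform(-1, 1, size=(30, 2))
x_val = rng.uniform(-1, 1, size=(10, 2))
y_cal = simulation(x_cal)

# Least-squares fit of the surrogate coefficients.
beta, *_ = np.linalg.lstsq(design_matrix(x_cal), y_cal, rcond=None)

# Residual check of the surrogate against the validation points.
residuals = simulation(x_val) - design_matrix(x_val) @ beta
print("max |validation residual| =", np.max(np.abs(residuals)))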
Uncertainty Quantification Analysis in Multiphase Flow CFD Simulations with Application to Coal Gasifiers V&V2012-6155 Mehrdad Shahnam, Chris Guenther, Department of Energy National Energy Technology Laboratory, Morgantown, WV, United States, Aytekin Gel, ALPEMI Consulting, LLC, Phoenix, AZ, United States, Tingwen Li, URS Corporation, Morgantown, WV, United States
Model Validation Metric and Model Bias Characterization for Multiple Dynamic System Performances under Uncertainty V&V2012-6160 Zhimin Xi, University of Michigan - Dearborn, Dearborn, MI, United States, Yan Fu, Ren-Jye Yang, Ford Motor Company, Dearborn, MI, United States
In recent years, the dramatic increase in the use of computer simulation methods to design a diverse set of complex engineering systems has reshaped the system design process; in particular, the use of physical models or prototypes has been reduced, resulting in significant savings in cost and time in the design cycle. In spite of their widespread use and success, current state-of-the-art computer simulation approaches usually fall short in the crucial aspect of providing objective or statistically meaningful confidence levels for the predicted results. The Department of Energy's National Energy Technology Laboratory (NETL) is developing an integrated approach to combine theory, computational modeling, experiment, and industrial input to develop physics-based methods, models, and tools to support the development and deployment of advanced gasification-based devices and systems. NETL has launched an R&D program to quantify uncertainty in model predictions for reacting gas-solids systems, such as a gasifier. NETL will be developing a practical framework to quantify the various types of uncertainties and assess the impact of their propagation in the computer models of the physical system, in order to give quantitative error bars on the simulation data to help the designers, decision makers, and operators of these gasifiers. Such quantitative measures will play a crucial role in the assessment of the operational reliability and investment risk of proposed new designs and time-to-market cycles. This paper presents an overview of the multiphase UQ activities at NETL and how UQ is being used in the area of coal gasification. In particular, this paper explores parameter sensitivity in the coal pyrolysis and gasification kinetics and how UQ is being applied to evaluate these sensitivities in key response variables inside a gasifier. This investigation extends these results to demonstrate how this approach could be used for commercial scale-up, design, and optimization.
Quantification of the accuracy of analytical models (math or computer simulation models) and characterization of the model bias are two essential processes in model validation. Available model validation metrics, whether qualitative or quantitative, do not consider the influence of the number of experimental data points on the model accuracy check. In addition, the quantitative measure from the validation metric does not directly reflect the level of model accuracy, i.e., from 0% to 100%, especially when there is a lack of experimental data. If the original model prediction does not satisfy the accuracy criteria compared to the experimental data, then instead of revising the model conceptually, characterization of the model bias may be a more practical approach to improve the model accuracy, because there is probably no ideal model which can predict the actual physical system with no error. So far, there is a lack of effective approaches that can accurately characterize the model bias for multiple dynamic system performances. To overcome these limitations, the first objective of this study is to develop a model validation metric for the model accuracy check that considers different numbers of experimental data. Specifically, a validation metric using the Bhattacharyya distance (B-distance) is proposed, with three notable benefits. First, the metric directly compares the distributions of two sets of uncertain system performances from model prediction and experiment rather than the distribution parameters (e.g., mean and variance). Second, the B-distance quantitatively measures the degree of accuracy from 0% to 100% between the distributions of the uncertain system performances. Third, a reference accuracy metric with respect to different numbers of experimental data can be effectively obtained so that a hypothesis test can be performed to identify whether the two distributions are identical in a probabilistic manner. The second objective of this study is to propose an effective approach to accurately characterize the model bias for dynamic system performances. Specifically, the model bias is represented by a generic random process, where realizations of the model bias at each time step could follow arbitrary distributions. Instead of using the traditional Bayesian or Maximum Likelihood Estimation (MLE) approach, we propose a novel and efficient approach to identify the model bias using a generic random process modeling technique. A vehicle safety system with 11 dynamic system performances is used to demonstrate the effectiveness of the proposed approach.
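The following minimal sketch (assuming NumPy) illustrates the Bhattacharyya coefficient/distance concept referenced above for comparing predicted and measured response distributions; the binning choice and sample data are illustrative, and the authors' reference-metric and hypothesis-test procedure is not reproduced here.

# Minimal sketch of a Bhattacharyya-coefficient/distance comparison between
# predicted and measured response samples. The coefficient BC lies in [0, 1]
# and can be read loosely as a 0-100% agreement measure; D_B = -ln(BC).
import numpy as np

def bhattacharyya(samples_model, samples_test, bins=30):
    """Bhattacharyya coefficient and distance between two sample sets."""
    lo = min(samples_model.min(), samples_test.min())
    hi = max(samples_model.max(), samples_test.max())
    edges = np.linspace(lo, hi, bins + 1)
    p, _ = np.histogram(samples_model, bins=edges)
    q, _ = np.histogram(samples_test, bins=edges)
    p = p / p.sum()
    q = q / q.sum()
    bc = np.sum(np.sqrt(p * q))
    return bc, -np.log(max(bc, 1e-12))

rng = np.random.default_rng(2)
pred = rng.normal(10.0, 1.0, size=5000)   # model-predicted performance samples
meas = rng.normal(10.3, 1.2, size=40)     # limited experimental data
bc, db = bhattacharyya(pred, meas)
print(f"Bhattacharyya coefficient = {bc:.3f}, distance = {db:.3f}")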
Validation of Surrogate Models for an Aircraft Synchronous Generator V&V2012-6159 Jon Zumberge, U.S. Air Force Research Laboratory, Dayton, OH, United States, John Doty, University of Dayton, Dayton, OH, United States, Thomas Wu, University of Central Florida, Orlando, FL, United States A statistically-based process is under development for validation of computational simulations with experimental data. The focus of this investigation is the validation of surrogate models for simulations of an electrical generator model. Several candidate surrogate model designs were statistically analyzed for predictive power. Modern Design of Experiments was used to develop Central Composite Designs as well as 2nd-order and 3rd-order Optimal Response Surfaces and to compare them to standard engineering parametric designs. Sample sizes are statistically controlled for Type I and Type II risk for the Optimal Response Surface designs. Validations were performed for all the design
VALIDATION METHODS FOR SOLID MECHANICS AND STRUCTURES
proposed by JAXA (Natori et al. [1]). In this configuration, the radial ribs are initially straight and are elastically deformed through the deployment process to form an appropriate parabolic shape by tensile forces applied by the hoop and tie cables. In the structural design process, optimization algorithms are employed to satisfy the required surface accuracy in terms of the cross-sectional properties of the radial ribs and the hoop/tie cable tensile forces, adopting the Elastica-based finite element model proposed by Tanaka [2].
3-2 VALIDATION METHODS FOR SOLID MECHANICS AND STRUCTURES: PART 2 Sunset 5&6 8:00am–10:00am Session Chair: Atul Gupta, Medtronic, Inc., Santa Rosa, CA, United States Session Co-Chair: Boris Jeremic, University of California, Davis, CA, United States
The present study describes the simulation-based design verification of this reflector with appropriate uncertainty estimation. Such a design verification method is required for two reasons. First, the space reflector design should consider several uncertainty factors, e.g., the approximation error of the numerical analysis model, manufacturing and assembly errors (especially in the control of the cable natural length), and uncertainty in thermal deformation and material degradation on orbit. Conventionally, the error budget is allocated to the uncertain factors based on experience. Then, the performance of the designed structure is confirmed to have sufficient margin against the budget even in the worst case. However, the budget accumulation method will be difficult to adopt for future highly precise space structures, because the budget tends to overestimate the uncertainty. The second reason is that ground tests will be difficult for larger-size space structures from the aspect of facility cost. Therefore, the establishment of a simulation-based design verification method is quite important.
Validation of the “Digital Twin” of a Reusable, Hot Structure, Hypersonic Vehicle V&V2012-6051 Randall Allemang, Structural Dynamics Research Lab, Cincinnati, OH, United States, Stephen Michael Spottswood, Structural Sciences Center, Wright-Patterson AFB, OH, United States, Thomas G. Eason, Structural Sciences Center, Wright-Patterson AFB, OH, United States The challenges of providing a validation strategy for a reusable, hot structure, hypersonic vehicle represent an extension of current validation methods used in the risk analysis of several other complicated, modeled systems where little or no relevant data is available for the ultimate validation experiment(s). Notable examples include manned spaceflight, nuclear weapons, and environmental modeling. The validation strategy will need to begin with project validation experiments at the highest level (full vehicle system) and end with scientific validation experiments at the material and component levels. The major challenges to the validation of the hypersonic vehicle include: 1) the appropriate use of data mining, 2) limits imposed by the use of existing test facilities, 3) resolving blind epistemic uncertainty (accounting for what is not known), 4) proper use of expert panel elicitation, 5) identification of appropriate inputs and physics, 6) focus on quantification of margins and uncertainties, and 7) changing the modeling-testing culture. Of these challenges, the need is greatest to focus the validation strategy on providing the quantification of margins and uncertainties (QMU) as an outcome of attempting to attain specific validation metrics. Secondarily, the need is very great to alter the modeling-testing culture in light of the need to validate the reusable, hot structure, hypersonic vehicle with a minimum of physical testing.
This research investigates the numerical accuracy of the simplified model of the radial rib/hoop cable reflector as the first step of the proposed simulation-based design verification. First, the simplified model is introduced to demonstrate the out-of-plane parabolic-shape deformation of a straight beam with non-uniform cross-sectional shape subjected to nodal forces simulating the cable forces. Then, a theoretical analysis based on an extensible, shearable Elastica that accounts for structural nonlinear behavior is carried out to validate the mathematical model. The numerical accuracies of several finite-element analysis codes, including commercial and in-house codes, are then investigated in comparison with the theoretical result. It is found that each code has sufficient accuracy but exhibits a characteristic tendency depending on the theoretical formulation of the finite element or the numerical integration. For structural design purposes, evaluations of the sensitivity are also important to investigate the effect of uncertainty factors, such as Young's modulus and tie cable tensile load, on the shape accuracy of the structure. Since the sensitivities differ only slightly between the analysis codes, most codes with an adequate finite element formulation are found to be useful for designing highly precise structures.
Structural Design Verification Using Simplified Model of Highly Precise Large-Scale Space Reflector V&V2012-6071 Nozomu Kogiso, Osaka Prefecture University, Sakai, Osaka, Japan, Hiroaki Tanaka, National Defense Academy in Japan, Yokosuka, Kanagawa, Japan, Takeshi Akita, Kosei Ishimura, Japan Aerospace Exploration Agency, Sagamihara, Kanagawa, Japan, Hiraku Sakamoto, Tokyo Institute of Technology, Tokyo, Tokyo, Japan, Yoshiro Ogi, The University of Tokyo, Tokyo, Tokyo, Japan, Yasuyuki Miyazaki, Nihon University, Funabashi, Chiba, Japan, Takashi Iwasa, Tottori University, Tottori, Tottori, Japan
Finally, experimental verification of the simplified model is planned to compare the deformed shape with the numerical results and to identify uncertain factors including measurement errors and environmental uncertainties. Currently, the experimental equipment, including several measurement devices, is being constructed. Some results will be demonstrated at the symposium. References: [1] Natori, M. C. et al., A Structure Concept of High Precision Mesh Antenna for Space VLBI Observation, Proc. 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference, (2002), AIAA-2002-1359.
Large-scale deployable space reflectors have been used for several space missions, such as HALCA and ETS-VIII, developed by the Japan Aerospace Exploration Agency (JAXA). In these missions, the reflectors are required to have high surface accuracy, as well as light weight and reliable deployability. For future space missions that will use shorter wavelengths, such as millimeter-wave, more precise surface accuracy with even larger diameter reflectors will be required. In order to realize such future missions, a new type of modularized reflector consisting of radial ribs and hoop cables was
[2] Tanaka, H., Optimum Design of Large-Deformed Beam Using Elastica-Based FEM, 22nd Space Structure and Materials Symposium (2006), pp. 35-37 (in Japanese).
Verification and Validation of Resonant Beam Structural Dynamics Simulations V&V2012-6162 Nathan Spencer, Michael D. Jew, Robert Fagan, Sandia National Laboratories, Livermore, CA, United States
Validation and Verification for Real-Time Acceleration Estimation of a Vibrating Source in Frequency Domain V&V2012-6002 Paul Lin, Cleveland State University, Cleveland, OH, United States
Finite element verification and validation efforts for an aerospace system are presented. Validation of the complete system model is problematic due to the limited and historical nature of available test data and economic barriers to acquiring additional data through environmental full-system tests. To address these challenges, structural dynamic simulations of the physical system are used to design resonant beam test fixtures. Simplified hardware is mounted onto the resonant beams and excited with force inputs to generate the desired frequency and magnitude response levels. These tests are able to rapidly and economically generate data sets for validation of the corresponding simulations. Prior to validation, verification of the resonant beam simulations is performed via an order verification procedure. Post-processing and data management approaches are also presented which increase the efficiency of executing both the verification and validation processes. The results of the validation process vary depending on the response measure selected and are presented for peak accelerations, modal frequencies, and shock response magnitudes.
Background and Significance Accelerometers are widely used to directly measure the acceleration of a vibrating source. A Fast Fourier Transform (FFT) frequency analyzer can transform acceleration data from the time domain to signal amplitude data in the frequency domain. The sharp peaks of the signal amplitudes indicate the fundamental frequencies. For example, onboard the International Space Station, it is critical to know which vibrating sources are active that may impact the quality of the microgravity environment. To avoid mistakes in source identification, it is desired to use a pair of data (frequency and acceleration) at any given time. Simulation Validation on Real-Time Estimation of Acceleration in Frequency Domain In the frequency domain, the power spectral density (PSD) is first generated. Then, Parseval's theorem is used. The theorem states that the root mean square (RMS) value of a signal computed in the time domain is equivalent to that computed in the frequency domain. The equivalent RMS acceleration can be calculated by
Simulation of Non-Linear Components on Washing Machine Simulation V&V2012-6176 Shair Mendoza, MABE, Queretaro, Queretaro, Mexico
A_RMS = [ Σ p(i) · Δf ]^(1/2), where i = 0, 1, 2, …, n/2; n is the number of samples in the time domain; p(i) is the PSD value at frequency f(i); and Δf is the frequency resolution.
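The relation above can be checked with a short sketch (assuming NumPy/SciPy); the 60.18 Hz tone and its amplitude are illustrative stand-ins, not flight data.

# Minimal sketch of the Parseval-based RMS estimate above: the RMS of a
# signal computed in the time domain should closely match
# sqrt(sum(PSD * delta_f)) computed from the one-sided PSD.
import numpy as np
from scipy.signal import welch

fs = 1000.0                                   # sampling rate, Hz
t = np.arange(0, 64.0, 1.0 / fs)
accel = 4.9e-3 * np.sqrt(2) * np.sin(2 * np.pi * 60.18 * t)  # ~4.9 milli-g RMS tone

# Time-domain RMS.
a_rms_time = np.sqrt(np.mean(accel ** 2))

# Frequency-domain RMS from the one-sided PSD.
f, psd = welch(accel, fs=fs, nperseg=2 ** 15)
delta_f = f[1] - f[0]
a_rms_freq = np.sqrt(np.sum(psd) * delta_f)

print(f"time-domain RMS = {a_rms_time * 1e3:.2f} milli-g")
print(f"freq-domain RMS = {a_rms_freq * 1e3:.2f} milli-g")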
This paper deals with the simulation process for a front-load washing machine suspension which includes a non-linear component, the shock absorber; in addition, the paper presents the experimental data used to validate the simulation results.
However, this theorem cannot be used to estimate the equivalent acceleration at any given time. This study developed a numerical technique to quantify the RMS acceleration in real time. The technique was validated via simulation in choosing the appropriate frequency increment and a desired convolution mask to reconstruct the PSD data.
The distinct bodies in the present analysis are linked together through linear and nonlinear force elements. In order to simulate these types of force elements, stiffness and damping curves were built to represent the damping force of the nonlinear elements. The model has been validated through experimental data.
Verification of Acceleration Estimation in Frequency Domain via Actual Testing Verification of this technique was conducted through actual testing. Acceleration can be the combinational effects of several vibrating sources at the same time. Thus, the technique can only be verified when a single vibrating source is active. The following data shows a test result of a source inside the Space Station vibrating at 60.18 Hz.
Validation of Shock Dynamics Models of Bolt Jointed Assembly V&V2012-6066 Qiang Wan, Xiao Shifu, Institute of Structural Mechanics, CAEP, Mianyang, Sichuan, China
With frequency resolution of 0.0305 Hz, the difference between the estimated acceleration and actual acceleration is 0% in X direction (1.9 milli-g vs. 1.9 milli-g), 2.1% in Y direction (4.9 milli-g vs. 4.8 milli-g), and 0% in Z direction (1.0 milli-g vs. 1.0 milli-g), respectively. Upon successful verification, the technique was then applied to acceleration estimation when multiple vibrating sources are present.
Model validation refers to the process of assessing the accuracy of a set of predictions from a computational model with respect to experimental measurements over some domain of the simulation input parameters for a particular application. This paper demonstrates the application of model validation techniques to a transient structural dynamics problem. The study of interest is one in which a structural interface is loaded with a transient dynamic impulse and a shock wave propagates through a bolted joint assembly. The objective is to validate that the component can be represented with adequate fidelity in the system-level model. A set of experiments is conducted on the bolted joint assembly in which the strain responses to shock loads are measured. A discussion of the features or characteristics of the response data of interest is given. This is followed by a description of the finite element model used to analyze the response of the assembly, and a discussion of the sensitivity analysis and parameter screening process. A parameter effect analysis is performed to determine which input parameters are most responsible for explaining the total variability of the output between the experiment and the finite element model. A discussion of the model revisions and an assessment of the predictive fidelity of the revised model are performed last.
Comparison of FEA Results to Physical Laboratory Results for F2077 IBFD Testing and Select F1717 Vertebrectomy Plate Testing V&V2012-6214 Brent Saba, Saba Metallurgical & Plant Engineering Services, LLC, Baton Rouge, LA, United States Finite Element Analysis (FEA) serves as a useful tool in the medical industry, particularly when it comes to FDA-required device testing. Results have been shown to match well with physical laboratory results in the cases of static axial compression and, to a degree, static torsion. In many cases, fatigue results show accurate prediction of the fracture location as well as the fatigue life. However, comparison of FEA and laboratory results has shown that laser stenciling of parts is potentially causing lowered fatigue fracture toughness of the surface. In fact, some fractures are not occurring
in the FEA-predicted high-stress zones, but instead in a separate moderate-stress zone associated with the stenciling. A fatigue fracture analysis shows that once a minute surface crack forms in this degraded surface area, full fracture can occur relatively quickly.
Verified and Validated Calculation of Unsteady Overdriven Hydrogen-Air Detonation V&V2012-6218 Christopher Romick, University of Notre Dame, South Bend, IN, United States, Tariq Aslam, Los Alamos National Lab, Los Alamos, NM, United States, Joseph Powers, University of Notre Dame, Notre Dame, IN, United States
VERIFICATION AND VALIDATION FOR FLUID DYNAMICS AND HEAT TRANSFER 4-3 VERIFICATION AND VALIDATION FOR FLUID DYNAMICS AND HEAT TRANSFER: PART 3 Wilshire A 8:00am–10:00am
The dynamics of one-dimensional overdriven hydrogen-air detonations, predicted in the inviscid limit as well as with the inclusion of mass, momentum, and energy diffusion, were investigated. A series of calculations in which the overdrive is varied was performed. Strongly overdriven detonations are stable; as the overdrive is lowered, the long-time behavior of the system becomes more complex. It was found that detonations propagating into a stoichiometric hydrogen-air mixture at 0.421 atm and 293.15 K develop single-frequency pulsations at a critical overdrive of f = 1.13. In the inviscid limit using shock-fitting, an oscillation at a frequency of 0.97 MHz was predicted for an f = 1.1 overdriven detonation, which agrees well with the value of 1.04 MHz observed in the equivalent shock-induced combustion experiment around a spherical projectile. The amplitude of these pulsations grows as the overdrive is lowered further. Decreasing the overdrive yet further, a bifurcation process occurs in which modes at a variety of frequencies are excited. The addition of physical mass, momentum, and energy diffusion has a stabilizing effect on overdriven detonations relative to the inviscid limit. In the viscous analog, the structure of these detonations is modulated, and the amplitude of the oscillations can be significantly decreased. Therefore, depending on the application, the use of the reactive Euler equations may be inappropriate, and the reactive Navier-Stokes equations may be a more appropriate model.
Session Chair: Prasanna Hariharan, US Food and Drug Administration, Silver Spring, MD, United States Session Co-Chair: Joel Peltier, Bechtel Corporation, Frederick, MD, United States Industrial Strength Verification and Validation of CFD at Hatch V&V2012-6192 Duane Baker, Hatch and Associates, Mississauga, ON, Canada, Tom Plikas, Umesh Shah, Jianping Zhang, Hatch, Mississauga, ON, Canada The verification and validation (V&V) of computational fluid dynamics (CFD) codes and CFD models for real industrial problems is a significant challenge in industry. Hatch provides engineering consulting services to clients primarily in the metals and energy industries and uses commercial CFD tools in innovative ways to solve a range of problems. The range of problems solved includes detailed flow distribution and mixing applications, chemical reactors, furnaces, etc. The physical processes are complex and often include challenges such as turbulent and laminar-turbulent transitions, combustion, chemical reactions, thermal radiation, multi-phase phenomena, etc.
Heating Elements Convective Thermal Flux Optimization: Comparison between Numerical Results and Experimental Evidence V&V2012-6241 Elisabetta Rotta, Mario Maistrello, Giancarlo Chiesa, Politecnico Di Milano, Milano, Italy
As a result of the complex physics of many of these problems, it is not possible to apply the more academic approach to V&V of refining one aspect of the model at a time to determine its impact on the solution. In addition, the academic approach is often applied to reduce the error to as low as possible, i.e., to machine round-off, grid-independent solutions, etc. The approach that will be discussed is how to determine the combination of the right level of detail in geometry, mesh, boundary conditions, physical properties, physical models, numerical accuracy, convergence, etc. for a given problem with imperfect knowledge (boundary conditions, heat sources, material properties, etc.). The important criterion in this approach is how to achieve the most economical solution for the client with a model that has the same order of accuracy or inaccuracy in all aspects. Another important criterion is to determine how the solution would most benefit from additional model accuracy, which aspects to refine next, etc.
The aim of the present paper is to describe the research and optimization, using analytical, numerical, and experimental methods, carried out to achieve an increase in the natural convection heat transfer of heating elements. The research results help the understanding of the phenomenon, allow optimization of radiator size and shape, and give an important reference for evaluating thermal performance in advance through geometric characterization of the component. The main goal is to optimize the heating element geometry to maximize the weight/heating power ratio, using solutions that raise the convective heat transfer coefficient. The heating element analyzed numerically and experimentally is a fixed-heat-flux radiator. The increase in the convective heat transfer coefficient is evaluated by analyzing the surface temperature. CFD models were defined and adopted to simulate the behavior of the radiator-air interface and eventually validated by comparing the simulations with laboratory results. The analysis is numerical and experimental, with cross-reference of both, to obtain a radiator configuration that increases the thermal performance/weight ratio and to optimize the heat transfer processes related to shape configuration. This is achieved mainly by optimizing natural convection at low surface temperatures. Passive techniques for augmenting natural convection heat transfer were mainly analyzed, with attention to modification of the fluid velocity field. The radiant heat has been experimentally evaluated and, in the experimental temperature range, has been estimated to be lower by a factor of 8. The numerical and experimental analysis has as its main goals the identification and development of configurations that allow higher thermal performance compared to a reference model, and the optimization of the transport processes associated with the heat flux and the model shape.
The method of industrial V&V will be illustrated by way of several real industrial case studies which provide details of the approach, the development of the models, the final simulation results, and a comparison with experiments, field validation data, or other analysis methods. Specific examples include Large Eddy Simulation (LES) of fully developed flow and pressure drop prediction of a unique process gas flow in a spiral wall corrugated metal pipe, mixing in the compartments of a pressurized oxidizing autoclave reactor, liquid-gas interface penetration depth prediction in a slurry pool, and air cooled ventilation system design for an aluminum casthouse.
The active heating element was chosen to reduce the number of parameters in the mathematical model and allow faster, more stable simulations. The heating body is driven by an electric heating element whose main characteristic is a uniform distribution of thermal power output along its length. To verify the constancy of the specific power, the surface temperature of the resistance was recorded by IR thermography at different power outputs (50 W to 150 W), confirming that a constant temperature profile can be assumed. A full-size radiator, consisting of three elements equipped with resistors in ceramic housings and filled with copper powder, was assembled. The tests were conducted in an open chamber in an industrial warehouse. A complete set of thermocouples was used to monitor the laboratory air temperature and the surface temperatures of the floor, ceiling, and outer walls of the room in which the radiator was tested.
validation plans have been proposed for stepping from computational modeling to PSM and then to the full-size ship environment. The process has to recognize the critical differences between the environments of scale modeling and those experienced by real ships. The sparseness of information on the exact state of cathodic regions, the condition of those regions (calcareous deposits are natural formations that change the electrochemical response of the exposed metal), and the complexity of the geometries all make comparison of real ships with either experimental or computationally obtained data challenging. Significant work has been completed on the detailed comparison of computational codes and physical scale modeling, as well as on cross-verification between computational codes. More importantly for the design community, PSM has demonstrated its capability through the design of robust systems that have performed as predicted on ship. Understanding the difficulties associated with the real shipboard environment and the resulting limitations on data collection is essential in developing any validation process. This presentation will introduce the audience to work performed at the Naval Research Laboratory focused on defining the validation problem for these systems and expanding the understanding of the computational, experimental, and real-ship response triad.
Results of numerical simulations carried out with the commercial CFD code Fluent are also presented. Simulations of the three-dimensional model of three elements of the constant-heat-flux radiator in the open chamber were compared with experimental data. The simulations assumed an outgoing thermal flux of 450 W from the inner water channel. Numerical results were compared both with the images captured by the IR camera and with the data obtained from the thermocouples. A good correlation between experimental and numerical results is demonstrated by qualitative and quantitative comparison between the temperature fields obtained from the numerical simulation and those obtained from the thermal camera. The simulations allow the temperature distribution on the surface of the three heating elements, the average surface temperature of the radiator, and the average heat transfer coefficient to be determined. These values are crucial parameters for optimizing the geometry of the heating elements to improve the thermal performance/weight ratio by maximizing the convective heat transfer coefficient. The improvement of the convective coefficient is derived from the surface temperature: for the same input power, if the average surface temperature remains constant while the heated surface is reduced, the mean convective heat transfer coefficient increases, ultimately reducing the weight of the heating element for the same thermal output and thus maximizing the power/weight ratio.
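As a minimal illustration of the energy bookkeeping behind the average convective coefficient discussed above, the following Python sketch backs the coefficient out of a surface energy balance. The surface area, temperatures, and the split between radiative and convective heat (interpreted here from the stated factor of 8) are hypothetical placeholders, not values from the paper.

```python
# Minimal sketch: back out an average convective heat transfer coefficient
# from a fixed-heat-flux radiator test. All numerical values below are
# hypothetical placeholders, not data from the paper.

Q_total = 450.0                          # W, imposed flux from the inner water channel
radiative_fraction = 1.0 / (1.0 + 8.0)   # radiant part assumed ~8x smaller than convective
Q_conv = Q_total * (1.0 - radiative_fraction)

A_surface = 0.85        # m^2, hypothetical exposed radiator surface area
T_surface_avg = 55.0    # deg C, hypothetical average surface temperature (e.g. from IR camera)
T_air = 20.0            # deg C, hypothetical laboratory air temperature

h_avg = Q_conv / (A_surface * (T_surface_avg - T_air))
print(f"average convective coefficient h = {h_avg:.1f} W/m^2K")
```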
Performance of an Energy Recovery Facility Made of Pulsating Heat Pipes Filled with Mixing Working Medium V&V2012-6070 Guozhen Xie, Lirong Zhang, Beijing University of Civil Engineering and Architecture, Beijing, China It is useful for energy saving to reclaim a quantity of heat from a high-temperature heat transfer medium to a low-temperature one with an energy recovery facility in an air-conditioning or heating system. In this investigation, a pulsating heat pipe recovery facility filled with a mixed working medium was developed for the first time and its performance was investigated by simulation and experiment. The properties of the mixed working medium filled into the pulsating heat pipes, i.e., the heat transfer elements, were predicted by software. The performance of the energy recovery facility was studied experimentally under simulated summer working conditions. An energy recovery efficiency of the facility is defined, and the relationship of this efficiency to the angle between the pulsating heat pipes and the wind direction, the wind speed, and the inlet temperature at the heating end of the pulsating heat pipes was tested and analyzed. The results show that the energy recovery efficiency increases both with increasing inlet temperature at the heating end and with decreasing wind speed, and that a maximum value of the efficiency is obtained when the angle is 60 degrees.
Validation Across Scales: Challenges of Shipboard System Design V&V2012-6207 Virginia DeGiorgi, Naval Research Laboratory, Washington, DC, United States Corrosion is a major concern for the US Navy. Cathodic protection (CP), which utilizes the relative electrochemical reactivity of different materials, is a primary tool against shipboard corrosion. The present US Navy design methodology for CP systems is the experimental technique of physical scale modeling (PSM). In PSM both geometry and electrolyte conductivity are scaled, and an aging technique is applied to each model to capture the effects of calcareous deposit formation. The US Navy CP system community (designers and operators) recognizes PSM as the only accepted design method. The ability to evaluate systems quickly is one consideration that has led to the proposed use of computational methods for the design of CP systems. In order to gain acceptance, computational modeling results are compared to PSM results rather than shipboard data. This is due partly to the lack of shipboard data and partly to the complexity of the shipboard environment. While not a simple task, it is possible to control experimental conditions so that there is an exact match between computational and experimental environments. The question that is currently unanswered is how to extend the computational-experimental validation process to full-size ship conditions. Various
Towards Justifying Model Extrapolations V&V2012-6076 Gabriel Terejanu, Todd Oliver, Robert Moser, Chris Simmons, University of Texas at Austin, Austin, TX, United States Computational models serve the ultimate purpose of predicting the behavior of systems under scenarios of interest. Due to various limitations, models are usually calibrated and validated with data collected in scenarios other than the scenarios of interest. Thus, model extrapolation becomes one extra challenging step, on top of the calibration and validation steps. Indeed, there is no agreement in the scientific community today on what the best practices for validation are, or on how to corroborate validation results with predictive assessments. The main objective of this talk is to uncover the necessary conditions for claiming that a model has predictive capability and to show how to quantify
its credibility. A comprehensive approach is proposed to justify extrapolative predictions for models with known sources of error. One example of models with known sources of error appears in continuum mechanics, where the governing equations are derived using fundamental laws such as conservation of mass, momentum, or energy. Here, localized errors appear through the introduction of constitutive laws to describe the response of a material to external forces. While some of the constitutive relations are derived from first principles, others are purely phenomenological. As such, these phenomenological relations are uncertain and need to be augmented with stochastic models to fully capture the behavior of the material.
1. The RDS-facility (Reference Data Set for the selected facility): this includes the description of the facility, the geometrical characterization of every component of the facility, the instrumentation, the data acquisition system, the evaluation of pressure losses, the physical properties of the materials, and the characterization of pumps, valves, and heat losses;
2. The RDS-test (Reference Data Set for the selected test of the facility): this includes the description of the main phenomena investigated during the test, the configuration of the facility for the selected test (with a new evaluation of pressure and heat losses if needed), and the specific boundary and initial conditions;
3. The QP (Qualification Report) of the code calculation results: this includes the description of the nodalization developed following a set of homogeneous techniques, the achievement of steady-state conditions, and the qualitative and quantitative analysis of the transient with the characterization of the Relevant Thermal-Hydraulic Aspects (RTA);
An important problem in this context is Bayesian model calibration, where Markov Chain Monte Carlo (MCMC) methods are used to obtain samples from posterior distributions. For this kind of problem, however, a single evaluation of the likelihood involves a forward propagation of model error in order to construct conditional probabilities, making the computational cost of MCMC prohibitive. Therefore, new efficient sampling algorithms are needed to address model calibration in the general context of localized model error. In this study, approximate Bayesian computation will be used to address this problem of implicit likelihood models. The advantage of this approach over the classical one of explicit likelihoods is that it provides a more accurate modeling of the phenomenon under consideration.
4. The EH (Engineering Handbook) of the input nodalization: this includes the rationale adopted for each part of the nodalization, the user choices, and the systematic derivation and justification of every value present in the code input with respect to the values indicated in the RDS-facility and the RDS-test.
Validation Opportunities as Simulation Dimensionality Escalates V&V2012-6028 Arthur Ruggles, University of Tennessee, Knoxville, TN, United States Existing V&V standards for Computational Fluid Dynamics (CFD), such as V&V 20, focus validation on specific state parameters at a specific location and time in a simulation outcome. The uncertainty in the predicted outcomes is then defined for the chosen validation parameters through comparison with data from experiments. The state parameters in coupled multi-physics systems are controlled locally at a specific time by a summation of modes, such as eigenmodes. Each mode may represent a different physical artifact, with pressure wave propagation and enthalpy wave propagation offered as well known examples. The pressure wave, while normally associated with pressure, actually perturbs all the state variables according to that eigenmode. The enthalpy wave may also perturb all the state variables. The evolution of the state variables at the specific location and time chosen for validation is controlled by the evolution of the most dominant modes. Measurement of one or two state variables is generally not adequate to ascertain if these modes are properly represented in the simulation.
The connection between model calibration, validation, and prediction is made through the introduction of alternative uncertainty models used to represent the localized errors. In addition, a validation metric is introduced to provide a quantitative characterization of the consistency of model predictions when compared with validation data. The talk will discuss the challenges in carrying out the proposed methodology on an illustrative example.
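As a minimal sketch of the approximate Bayesian computation idea mentioned above, the simplest rejection-sampling variant is shown below; the forward model, summary statistics, prior range, and tolerance are hypothetical stand-ins, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n_obs=20):
    """Hypothetical forward model with stochastic (localized) model error:
    a stand-in for a real simulator whose likelihood is only implicit."""
    return theta + rng.normal(0.0, 0.5, size=n_obs)

def summary(data):
    # Simple summary statistics; real applications need carefully chosen summaries.
    return np.array([data.mean(), data.std()])

y_obs = simulate(theta=1.3)      # synthetic "observations"
s_obs = summary(y_obs)

n_draws, eps = 20000, 0.15
prior = rng.uniform(-3.0, 3.0, size=n_draws)   # prior samples for theta
accepted = []
for theta in prior:
    s_sim = summary(simulate(theta))
    if np.linalg.norm(s_sim - s_obs) < eps:    # accept if simulated summaries are close
        accepted.append(theta)

posterior = np.array(accepted)
print(f"accepted {posterior.size} draws, posterior mean ~ {posterior.mean():.2f}")
```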
VERIFICATION AND VALIDATION FOR SIMULATION OF NUCLEAR APPLICATIONS 6-1 VERIFICATION AND VALIDATION FOR SIMULATION OF NUCLEAR APPLICATIONS: PART 1 Wilshire B 8:00am–10:00am Session Chair: Richard R. Schultz, Idaho National Laboratory, Idaho Falls, ID, United States Session Co-Chair: Yassin Hassan, Texas A&M University, College Station, TX, United States
Historical data used to validate multiphysics codes developed for nuclear system simulation included only a few state variables, with pressure, temperature and velocity most prevalent. Further, the locations where these parameters are measured in the system are sparse. The simulation state space typically includes seven parameters. This allows the simulation many degrees of freedom for predicting the measured parameters. Novak Zuber, while employed by US NRC, famously claimed that compensating errors were introduced in some simulations to predict measured outcomes. Indeed, without more comprehensive assessment of the state space in experiments, it is impossible to assess if the simulation being validated properly represents the underlying physics, even though it may accurately predict the limited validation data. More comprehensive assessment of the state space in an experiment makes it possible to determine which modes control the local instantaneous parameter variations.
Supporting Qualified Database for Uncertainty Evaluation V&V2012-6007 Alessandro Petruzzi, Nuclear Research Group San Piero a Grado, Pisa, Pisa, Italy Uncertainty evaluation constitutes a key feature of the BEPU (Best Estimate Plus Uncertainty) process. The uncertainty can be the result of a Monte Carlo type analysis involving input uncertainty parameters or the outcome of a process involving the use of experimental data and connected code calculations. These uncertainty methods are discussed in several papers and guidelines (IAEA-SRS-52, OECD/NEA BEMUSE reports). The present paper discusses the role and the depth of the analysis required to merge suitable experimental data on one side with qualified code calculation results on the other. This aspect is mostly connected with the second approach to uncertainty mentioned above, but it can also be used in the framework of the first approach.
A bridge is proposed between the dimensionality of the simulation and the dimensionality of the validation. In early times scaling was employed to identify the dominant physics for a system model with the objective of limiting the physics in the model to allow simulation with limited computational resources. Today models with comprehensive physics can be readily solved, and it is common for physics to be included in the model that may not be important to the simulation at hand. This may lead to dimensionality in the simulation that exceeds that required. Scaling is proposed to
Namely, the paper discusses the features and structure of the database that includes the following kinds of documents:
identify the modes dominant to a validation activity. The dominant eigenmodes are used to reconstruct the validation state space and to identify the state variables most effective in representing the dominant physics. This reduces the effective dimensionality of the model for validation, and reduces the data collection obligations for a validation that effectively disallows compensating errors.
Minas Gerais - FAPEMIG, for the financial support. Consideration of Simulation and Testing Uncertainty for Meeting Control Valve Actuator Resonance Frequency Design Requirements V&V2012-6080 J. Adin Mann III, Gregory D. Westwater, Neal Willer, Alessandro C. Guariento, Emerson Process Management, Marshalltown, IA, United States
The nuclear industry is moving toward integration of nuclear physics, thermo-fluid, and structural simulation. This will extend the state space to dozens of parameters, and in some cases many modes will be important to a simulation outcome. Complexity of model geometry, boundary and initial conditions escalates along with the movement toward comprehensive integrated model physics. All of this increases the need for well designed and comprehensive validation data if compensating errors are to be avoided. The focus on dominant modes in validation state space selection emphasizes underlying physics in the validation process, and is more important as the dimensionality of simulation state space increases.
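One common, concrete way to identify the dominant modes discussed above is a proper orthogonal decomposition (SVD) of a snapshot matrix of measured or simulated state variables; the sketch below uses synthetic snapshot data and is not the authors' specific eigenmode procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical snapshot matrix: rows = state variables at measurement locations,
# columns = time instants from an experiment or simulation.
n_states, n_times = 200, 400
t = np.linspace(0.0, 1.0, n_times)
x = np.linspace(0.0, 1.0, n_states)[:, None]
snapshots = (np.sin(2 * np.pi * x) * np.cos(10 * t)          # e.g. a pressure-wave-like mode
             + 0.3 * np.exp(-x) * np.sin(3 * t)               # e.g. an enthalpy-wave-like mode
             + 0.01 * rng.normal(size=(n_states, n_times)))   # measurement noise

# Proper orthogonal decomposition via SVD of the mean-subtracted snapshots.
fluct = snapshots - snapshots.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(fluct, full_matrices=False)

energy = s**2 / np.sum(s**2)
n_dominant = int(np.searchsorted(np.cumsum(energy), 0.99)) + 1
print(f"{n_dominant} modes capture 99% of the fluctuation energy")
# U[:, :n_dominant] spans a reduced validation state space of the kind discussed above.
```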
When using commercial software for simulations, there are aspects of the computational process which can only be explored through experimentation. Likewise, with a physical structure, experiments are needed to quantify the variability in the system response. This paper describes the work performed to quantify the uncertainty in the simulation and testing results for predicting the resonance frequency of control valve actuator structures. Resonance frequency is a critical design requirement for valve actuators to ensure that the actuator structure will not amplify the sources of vibration excitation. Examples of vibration excitation include fluid forces within the piping system, nearby mechanical equipment, and earthquakes. The requirements regarding earthquakes are particularly stringent for critical valves in nuclear power plants. These requirements are expressed as a minimum natural frequency, thus there is a need to predict the lowest expected value. In some cases there is also a requirement on accuracy of the predicted natural frequency, which assumes that tested values are the benchmark. There is therefore a need to quantify uncertainty in terms of the lowest possible value and the actual value of the predicted natural frequency.
Verification and Validation of a Thermal Stratification Transient Experiment CFD Simulation V&V2012-6038 André A. C. Santos, Hugo C. Rezende, Moysés A. Navarro, Comissão Nacional de Energia Nuclear - Centro de Desenvolvimento da Tecnologia Nuclear, Belo Horizonte, Minas Gerais, Brazil Thermal stratification and striping are observed in many piping systems, including those of nuclear power plants. The periodic occurrence of these thermal transients leads to fatigue and may induce undesirable failures and deformations in the piping. To gain understanding of these phenomena, experimental and numerical programs have been set up at CDTN/CNEN. Experiments were conducted in a test section with a geometry similar to the steam generator injection nozzle of a Pressurized Water Reactor (PWR). Numerical simulations of these experiments were performed with the commercial finite volume Computational Fluid Dynamics code CFX 13.0. A vertical symmetry plane along the pipe was adopted to reduce the geometry to one half, reducing mesh size and minimizing processing time. The two-equation RANS RNG k-epsilon turbulence model with scalable wall function and the full buoyancy model were used in the simulation. In order to properly evaluate the numerical model, a Verification and Validation (V&V) process was performed according to an ASME standard. Numerical uncertainties due to mesh and time step were evaluated. Three progressively refined meshes, with approximately 2x10^5, 6x10^5 and 3x10^6 nodes, and three time steps, with values of 0.075 s, 0.115 s and 0.169 s, were used to calculate the Grid Convergence Index (GCI) and evaluate the numerical uncertainties of the temperature profiles at fifteen selected thermocouple positions. The results showed that the mesh is responsible for most of the numerical uncertainty in the temperature profiles, especially in steep gradient regions. Validation was performed by comparing numerical and experimental results, taking into account all involved uncertainties and calculating a validation error for the model. The validation results showed that the region of highest temperature difference, which is most critical for the piping integrity, was well predicted, with a relatively low validation error. It was also observed that the external temperature difference showed good agreement between experimental and numerical results over the evaluated time. In past studies a qualitative evaluation of the results would have been considered sufficient, without evaluating important aspects such as the influence of mesh and time step on the results. Applying a V&V procedure allowed an objective analysis of the numerical results and of the modeling quality. Even though V&V procedures are far from becoming a consensus, this study highlights the importance of proper quantitative evaluation of numerical results. The authors express their thanks to the Fundação de Amparo à Pesquisa do Estado de
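The Grid Convergence Index evaluation described in this abstract follows the standard three-grid procedure (Celik et al. / ASME V&V 20); a minimal sketch is given below, with hypothetical solution values and representative cell sizes in place of the study's actual data.

```python
import math

def gci_fine(phi1, phi2, phi3, h1, h2, h3, Fs=1.25, iters=50):
    """Grid Convergence Index for the fine grid (phi1, h1), following the
    standard three-grid procedure (Celik et al. / ASME V&V 20)."""
    r21, r32 = h2 / h1, h3 / h2
    e21, e32 = phi2 - phi1, phi3 - phi2
    s = 1.0 if e32 / e21 > 0 else -1.0
    p = abs(math.log(abs(e32 / e21))) / math.log(r21)     # initial guess for the observed order
    for _ in range(iters):                                 # fixed-point iteration for p
        q = math.log((r21**p - s) / (r32**p - s))
        p = abs(math.log(abs(e32 / e21)) + q) / math.log(r21)
    ea21 = abs((phi1 - phi2) / phi1)                       # relative error, fine vs medium grid
    return Fs * ea21 / (r21**p - 1.0), p

# Hypothetical temperatures at one thermocouple position from three meshes
# (~3e6, 6e5, 2e5 nodes -> representative sizes h proportional to N**(-1/3)).
gci, p_obs = gci_fine(phi1=341.2, phi2=342.0, phi3=344.6,
                      h1=(3e6)**(-1/3), h2=(6e5)**(-1/3), h3=(2e5)**(-1/3))
print(f"observed order p = {p_obs:.2f}, GCI_fine = {100 * gci:.2f}%")
```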
Because the actuator structures contain multiple bolted joints, the connection between bolted elements can have a significant impact on the resonance frequency. Further, the actuator structures have a range of mass and stiffness values, and thus the influence of the joints and the best computational formulation for the joint contacts can vary between structures. A complicating unknown for simulations is that, with some commercial codes, the underlying equations of the contact formulation and their implementation are confidential and are also modified with yearly revision releases. Thus, while the contact formulation is critical to accurate predictions, the contact formulation which produces the best results can change between revisions. This forces an experimental approach to determine the best contact formulation and the uncertainty produced by the simulations. Further, the flexibility or rigidity of the test structure, and likewise of the valve, can also impact the predictions. Finally, when testing is the benchmark for the simulation results, the variability in the testing results can affect the perceived accuracy of the simulation results. When working on a new actuator design, all these factors can mean that prior work does not provide the critical parameters required for accurate simulation results. Thus, for new structures an approach is needed to not only use past work but also perform testing within a range of simulation values in order to establish a bound on the expected uncertainty of the simulations for the new actuator. Extensive studies were carried out to establish (1) the impact of the contact formulation on the simulation results, (2) the impact of the testing methods on the testing results, and (3) the impact of expected variation in the structure components on the testing results. As a consequence, an approach is being developed to quantify the uncertainties in the simulation and testing results to ensure that an actuator design meets a design requirement for a minimum resonance frequency and, when needed, to ensure that the simulation method is validated to achieve a minimum difference between predicted and tested values.
Verification of the Kuosheng BWR/6 TRACE Model with Load Rejection Startup Test V&V2012-6117 Kuan Yuan Lin, Chunkuan Shih, Institute of Nuclear Engineering and Science, National Tsing Hua University, Hsinchu, Taiwan, Jong-Rong Wang, Hao-Tzu Lin, Institute of Nuclear Energy Research, Atomic Energy Council, R.O.C., Taoyuan, Taiwan
(Coleman and Steele, 2009). These measurements may occur in quantities that serve as either inputs or outputs to a model. The latter will be referred to as system response quantities (SRQs). When aleatory uncertainties are present in the experimental data used for model validation, then the metric used to assess model validity (i.e., the validation metric) should be statistical in nature. When epistemic uncertainties are present, a similar approach can be used based on imprecise probability theory (Walley, 1991; Ferson et al., 2003).
In this research, we have modified a previous TRACE model of the Kuosheng Nuclear Power Plant and used start-up test data to verify the model. TRACE (TRAC/RELAP Advanced Computational Engine) is a best-estimate reactor systems code for analyzing thermal-hydraulic behavior in light water reactors. It can support a more accurate and detailed safety analysis of nuclear power plants. In this TRACE model, we modified the simulation of the main steam line piping from one loop to four loops and reset the safety/relief valve located on the main steam line. In addition, we used one bypass valve to represent the real six bypass valves and made sure its flow rate satisfied the design value of 35% of the steam flow rate. The load rejection start-up test was chosen to verify the Kuosheng TRACE model. Our main purpose in this research is to examine the performance of the TRACE modeling of the main steam line, turbine control valve, and bypass valve. The result of the 100% power load rejection test was compared with the start-up test data. Several important thermal parameters were considered, such as the steam dome pressure, the reactor vessel water level, the steam flow, and the feedwater flow. At the beginning of the test, the turbine control valve closed and the turbine bypass valve opened to release steam from the main steam line, and the closure of the TCV caused a reactor scram. The analysis successfully simulated the decrease of feedwater flow due to the rise of dome pressure caused by the fast closure of the turbine control valve and the initially high core power. However, the increase of dome pressure was smaller than the predicted value and the SRV did not open as expected. The Kuosheng NPP TRACE model was successfully established using the TRACE code. The verified results of this TRACE model show respectable accuracy in the analysis of the 100% power load rejection.
A common misconception is that a model can only be validated within the experimental uncertainty bounds of a measured SRQ. Consider the hypothetical case where the random variations in the measured SRQ come solely from random uncertainty in the system inputs (i.e., the measurement of the SRQ itself is perfect). With sufficient observations/samples of the SRQ, one can accurately characterize the experimental Cumulative Distribution Function (CDF) of the SRQ. If the random uncertainties in the system inputs are propagated through the model, one can also obtain a CDF of the SRQ from the model. If these two CDFs are equal, then the model has accurately predicted the effects of the system input uncertainties on the SRQ. For this idealized case, the model would be considered perfect even though there may be considerable uncertainty in the experimentally measured SRQ (arising purely due to uncertainties in the system inputs). For example, if the SRQ was peak material temperature with a measured value of 500 K +/- 200 K (with 95% confidence), then this information could still be used to establish the accuracy of the model (i.e., it would be shown to be perfect in the hypothetical example given above). In this talk, a simple algebraic model with one input and one output will be used to illustrate the concepts discussed above. In addition, the more realistic cases of imperfect measurements of the SRQ and imperfect models will be considered. The statistical validation metric that we will focus on is the area validation metric (Ferson et al., 2008; Oberkampf and Roy, 2010). Approaches to incorporate epistemic uncertainties will also be briefly discussed. This approach to model validation in the presence of uncertainty will be compared and contrasted to those from two recent ASME standards: ASME V&V 10-2006 (ASME, 2006) and ASME V&V 20-2009 (ASME, 2009).
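A minimal sketch of the area validation metric referenced above (Ferson et al., 2008), computed as the area between the empirical CDFs of the model-predicted and measured SRQ, is shown below with hypothetical samples.

```python
import numpy as np

def area_validation_metric(model_samples, exp_samples):
    """Area between the empirical CDFs of model-predicted and experimentally
    measured system response quantities (Ferson et al., 2008)."""
    model = np.sort(np.asarray(model_samples, dtype=float))
    exp = np.sort(np.asarray(exp_samples, dtype=float))
    support = np.sort(np.concatenate([model, exp]))
    # Right-continuous empirical CDFs evaluated on the merged support.
    F_model = np.searchsorted(model, support, side="right") / model.size
    F_exp = np.searchsorted(exp, support, side="right") / exp.size
    widths = np.diff(support)
    return float(np.sum(np.abs(F_model - F_exp)[:-1] * widths))

rng = np.random.default_rng(2)
# Hypothetical peak-temperature SRQ: model CDF from propagating input
# uncertainty through the model, experimental CDF from repeated measurements.
model_T = rng.normal(500.0, 100.0, size=2000)
exp_T = rng.normal(510.0, 95.0, size=30)
print(f"area validation metric = {area_validation_metric(model_T, exp_T):.1f} K")
```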
COFFEE BREAK/EXHIBITS Celebrity Ballroom 2 10:00am–10:30am
References ASME, Guide for Verification and Validation in Computational Solid Mechanics, American Society of Mechanical Engineers, ASME Standard V&V 10-2006, New York, NY, 2006.
UNCERTAINTY QUANTIFICATION, SENSITIVITY ANALYSIS, AND PREDICTION
ASME, Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer, American Society of Mechanical Engineers, ASME Standard V&V 20-2009, New York, NY, 2009.
2-4 UNCERTAINTY QUANTIFICATION, SENSITIVITY ANALYSIS, AND PREDICTION: PART 4 Sunset 3&4 10:30am–12:30pm
Coleman, H. W. and Steele, W. G., Experimentation, Validation, and Uncertainty Analysis for Engineers, 3rd Ed., John Wiley and Sons, New York, 2009. Ferson, S., Kreinovich, V., Ginzburg, L., Myers, D. S., and Sentz, K. Constructing Probability Boxes and Dempster-Shafer Structures, Sandia Technical Report SAND2002-4015, January 2003.
Session Chair: Hugh Coleman, University of Alabama at Huntsville, Huntsville, AL, United States Session Co-Chair: Robert Ferencz, Lawrence Livermore National Laboratory, Livermore, CA, United States
Ferson, S., Oberkampf, W.L., and Ginzburg, L., Model Validation and Predictive Capability for the Thermal Challenge Problem, Computer Methods in Applied Mechanics and Engineering, Vol. 197, pp. 2408-2430, 2008. Oberkampf, W. L. and Roy, C.J., Verification and Validation in Scientific Computing, Cambridge University Press, Cambridge, 2010.
Model Validation Issues in the Presence of Uncertainty V&V2012-6173 Christopher J. Roy, Virginia Tech, Blacksburg, VA, United States
Roy, C.J. and Oberkampf, W.L., A Comprehensive Framework for Verification, Validation, and Uncertainty Quantification in Scientific Computing, Computer Methods in Applied Mechanics and Engineering, Vol. 200, pp. 2131-2144, 2011 (DOI:10.1016/j.cma.2011.03.016).
Model validation is the assessment of a model relative to experimental data (Oberkampf and Roy, 2010; Roy and Oberkampf, 2011). This talk discusses issues pertaining to model validation in the presence of uncertainty. While the focus is on aleatory (i.e., random) uncertainty, epistemic (i.e., lack of knowledge) uncertainty is also briefly addressed. Typically, uncertainties arise due to random errors that occur during experimental measurements
Walley, P., Statistical Reasoning with Imprecise Probabilities, Chapman and Hall, London, 1991.
Numerical Study of Feasible Nano-Scale Light Trapping Limits via Wavelet Transform Optimization V&V2012-6190 Shima Hajimirza, John R. Howell, The University of Texas at Austin, Austin, TX, United States
not strive to separate LOK uncertainties from those having a basis of characterized variability. Other important limitations of this validation approach will be discussed. Uncertainty separation is also disregarded as a matter of philosophy in professed Bayesian-type model validation approaches.
Light trapping is an important technique for increasing the efficiency of solar cells. For thin-film cells with dimensions comparable to or less than the incident light wavelength, surface nano-scale patterning provides light trapping capabilities not present in conventional PV cells. However, the complications that arise from such dynamics make the analysis of light trapping via nano-scale patterning challenging. Numerical and experimental methods are used to study these limits, but such approaches are of very limited extent. Inverse optimization is a systematic numerical approach that allows the limits of light trapping to be found more efficiently, and it serves as an alternative to exhaustive search simulations or experimental methods.
The Oberkampf & Roy validation framework does accommodate a separation of the two types of uncertainty, but some significant limitations of the approach will be described. A simple and economical method for segregating and propagating LOK and variability in the Real Space approach to model validation (and model conditioning and extrapolative prediction) will be outlined. The approach does not employ transform discrepancy measures like the VV20 subtractive-difference validation metric or the Oberkampf & Roy area metric to characterize discrepancy between experiment and simulation results. The advantages of the Real Space validation methodology will be explained. Like in Oberkampf & Roy, an approximate Probability Bounds representation is used for the two types of uncertainty.
In this work, we use inverse optimization to study light trapping in thin-film amorphous silicon cells using metallic surface nano-scale patterns. We use a finite set of discrete Haar wavelets to describe an arbitrarily shaped surface pattern, and use global optimization to find the coefficients of the wavelets for optimal absorption enhancement in the thin-film silicon. The motivation for choosing a wavelet basis (vis-a-vis, say, a Fourier basis) is the feasibility of fabricating the resulting nano-structures. In addition, we analyze the effect of structural variations and numerical error on the performance of the designed patterns, which is a measure of the stability of such structures.
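As a small illustration of describing a surface pattern with a finite set of discrete Haar wavelets, the sketch below reconstructs a height profile from a handful of coefficients; the coefficient values are hypothetical, standing in for the design variables a global optimizer would tune.

```python
import numpy as np

def haar(t):
    """Mother Haar wavelet on [0, 1)."""
    return np.where((t >= 0) & (t < 0.5), 1.0,
                    np.where((t >= 0.5) & (t < 1.0), -1.0, 0.0))

def surface_profile(x, coeffs):
    """Surface height described by a finite set of Haar wavelet coefficients.
    coeffs maps (level j, shift k) -> coefficient; x is normalized to [0, 1)."""
    h = np.zeros_like(x)
    for (j, k), c in coeffs.items():
        h += c * 2.0**(j / 2.0) * haar(2.0**j * x - k)
    return h

# Hypothetical coefficient set (placeholders for the optimized design variables).
coeffs = {(0, 0): 12.0, (1, 0): -5.0, (1, 1): 3.5, (2, 3): 2.0}   # nm
x = np.linspace(0.0, 1.0, 512, endpoint=False)
height_nm = surface_profile(x, coeffs)
print(f"pattern peak-to-valley = {height_nm.max() - height_nm.min():.1f} nm")
```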
Addressing Uncertainty Treatment in Fire Modeling V&V2012-6222 JongSeuk PARK, Korea Institute of Nuclear Safety, Daejeon, Korea (Republic), Daeil Kang, Korea Atomic Energy Research Institute, Daejeon, Korea (Republic) Detailed analysis of fire scenarios is a key element of performance-based fire protection programs for operating nuclear power plants and is performed to determine the fire area vulnerability, including target elements. NFPA 805 requires fire modeling and uncertainty analysis to develop the fire scenarios in nuclear power plants. Fire modeling is used to determine the survivability of SSCs as well as to predict tenability within an analysis regime. Uncertainty analysis provides assurance that the performance criteria have been met in the fire protection program and produces a probability distribution for target failure time. The uncertainty evaluation of input data for the fire scenario analysis in a switchgear MCC room is performed with a set of 93 sampled calculations using the Latin Hypercube Sampling (LHS) technique. The 93 sampled calculations represent 95% probability with a 95% confidence limit based on the Wilks two-sided tolerance limit. FDS5, a field-model fire simulation code, is used for the fire modeling, and MOSAIQUE is used for the uncertainty analysis.
Treatment of Variability and Lack-of-Knowledge Types of Uncertainty in Various Model Validation Frameworks V&V2012-6193 Vicente Romero, Sandia National Laboratories, Albuquerque, NM, United States For model predictions supporting design, analysis, and decision-making, there is a substantial call in the literature for segregating uncertainties originating from stochastic variability from uncertainties originating from a fundamental lack of knowledge (LOK). It is also common to refer to this type of segregation as aleatory vs. epistemic, but in some model prediction and analysis contexts only epistemic uncertainties exist, yet a separation is still necessary between uncertainties originating from variability and those originating from fundamental LOK.
The fire is assumed to start within the interior of the cabinet, and the smoke, heat, and possibly flames are assumed to exhaust from the air vent at the top of the cabinet. The heat release rate per unit area is calculated as 3900 kW/m2. Material properties are adopted from NUREG-1934.
Accordingly, it would appear important to keep track of the two different types of uncertainty while conducting model validation activities. Indeed, ASME VV10 Guide for Verification and Validation in Computational Solid Mechanics mentions the importance of categorizing these two types of uncertainty (under the terminology aleatory and epistemic) in experimental and modeling aspects of validation activities. Nonetheless, there appears to be a dearth of model validation frameworks that allow or enable a segregation of variability and LOK.
The uncertainty evaluation shows that the majority of cable failures occur between 195 and 230 seconds for tray A and between 800 and 925 seconds for tray B. To protect the integrity of the cables from fires, the fire should be suppressed within 4 minutes, which is accounted for in the fire protection program. This study demonstrates that the combination of the FDS5 model and a limited number of fire scenarios sampled with the LHS technique leads to a practical approach to meeting NFPA 805 requirements.
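The 93-run sample size quoted above follows from the first-order two-sided Wilks formula; the sketch below reproduces that count and builds a basic Latin Hypercube design, with the number and distribution of uncertain fire-model inputs left as hypothetical placeholders.

```python
import numpy as np

def wilks_two_sided_n(coverage=0.95, confidence=0.95):
    """Smallest N such that the sample min/max bound a two-sided tolerance
    interval with the requested coverage and confidence (first-order Wilks)."""
    n = 2
    while 1.0 - coverage**n - n * (1.0 - coverage) * coverage**(n - 1) < confidence:
        n += 1
    return n

def latin_hypercube(n_samples, n_dims, rng):
    """Basic Latin Hypercube Sample on the unit hypercube: one random point
    per stratum, with an independent random stratum ordering per dimension."""
    u = np.empty((n_samples, n_dims))
    for d in range(n_dims):
        perm = rng.permutation(n_samples)
        u[:, d] = (perm + rng.random(n_samples)) / n_samples
    return u

rng = np.random.default_rng(3)
n = wilks_two_sided_n()                   # -> 93, matching the study above
# Hypothetical uncertain fire-model inputs, e.g. heat release rate, soot yield, ...
samples = latin_hypercube(n, n_dims=5, rng=rng)
print(f"Wilks 95/95 two-sided sample size: {n}, LHS design shape: {samples.shape}")
```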
The ASME VV20 Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer considers only models fashioned for predicting deterministic phenomena, or for predicting a singular realization of stochastic phenomena. Results are characterized as subject to epistemic uncertainty regarding where the true result of an experiment lies (given uncertainties involved in measurement of the result), and subject to modeling and simulation epistemic uncertainties in predicting test-article response given uncertainties in the experiment inputs and conditions and model-intrinsic uncertainties innately associated with the model. The epistemic-uncertainty based VV20 validation framework does
Uncertainty Quantification for Systems with Discontinuities using Dynamic-Biorthogonality Based Approach V&V2012-6229 Piyush Tagade, Han-Lim Choi, Korea Advanced Institute of Science and Technology, Daejeon, Korea (Republic) Digital simulation of complex large-scale systems is often uncertain due to unknown/poorly known physics, model parameters, initial
and boundary conditions.
Key performance indicators for sustainability (KPI-S) quantify various interrelated aspects of sustainability, such as consumption of energy, material, and water, and the production of wastes and emissions. KPI-S can be assessed for a range of manufacturing processes, from relatively atomic unit processes to composite processes that synthesize numerous lower-level processes into a complex system. In general, a system process is a hierarchical network of lower-level processes.
Researchers in varied fields have already emphasized the need for and importance of uncertainty quantification in simulation predictions. However, the required computational power makes uncertainty quantification in large-scale systems difficult. Stochastic spectral projection (SSP) based methods provide a computationally efficient uncertainty quantification framework for large-scale systems with accuracy comparable to Monte Carlo methods. However, the accuracy of SSP based methods deteriorates if the solution develops discontinuities.
For both process models and process measurements, availability and fidelity are typically quite heterogeneous across a given system process hierarchy. This issue complicates both KPI-S assessment and reliable decision-making. Two nominal situations occur, each requiring significant computational modeling. If KPI-S measurements are available at a higher level, then one may need to allocate the sustainability performance accurately among the lower-level processes. On the other hand, if lower-level KPI-S measurements are available, then one may need to aggregate the sustainability performances to a higher-level process. Furthermore, at any given level in the process hierarchy, KPI-S may be computed through surrogate measurements of relevant process parameters.
An example of such a discontinuity is the shocks that develop during transonic simulations. The simulation of such systems has attracted significant interest from the research community, and the literature is rich with methods for resolving discontinuities in deterministic simulations. However, the use of stochastic spectral methods for uncertainty quantification of systems with discontinuities leads to a Gibbs phenomenon, characterized by first-order spurious oscillations near discontinuities.
When KPI-S are computationally predicted without the possibility of validation against direct measurements, proper uncertainty quantification (UQ) increases confidence in sustainability performance assessments and decisions. Thus, this presentation will (1) describe the UQ issues with respect to allocation, aggregation, and surrogate measurements, and (2) present advances in the quantification of uncertainty for computational predictions of KPI-S in manufacturing processes.
The Gibbs phenomenon has been investigated by researchers and various solutions have been provided for its resolution for Fourier-type expansions of functionals. Recently, similar investigations have been carried out for the generalized polynomial chaos method and various techniques have been proposed for its resolution. However, there is no prior study that investigates the Gibbs phenomenon for dynamic biorthogonality based methods. The present paper investigates the dynamic biorthogonality based approach for the simulation of stochastic systems in the presence of discontinuous solutions. The solution field is decomposed into a mean and a random field. The random field is represented as a convolution of separable Hilbert spaces in the stochastic and spatial dimensions.
A challenging aspect of this UQ problem is the determination of appropriate uncertainty and sensitivity measures that are computationally feasible across heterogeneous process hierarchies. Hierarchical sensitivity analysis guides effective uncertainty analysis and reduction. Furthermore, process measurement and description requirements must be formalized to enable UQ for hierarchical allocation or aggregation. Both epistemic and aleatory uncertainties are considered in the approach. Epistemic uncertainties arise from ignorance about the involved processes, whereas aleatory uncertainties arise from inherent variability in processes.
The stochastic dimension is spectrally represented in a polynomial chaos basis while the spatial dimension is spectrally represented in an eigenfunction basis. Dynamic evolution equations are derived that preserve orthogonality in the polynomial chaos as well as the eigenfunction basis. The paper demonstrates that existing TVD methods satisfy the TVD property in the biorthogonality based approach. The resultant solution is a Fourier-type expansion of a sample in the eigenfunction basis, with samples from the stochastic basis acting as the respective coefficients. The resultant solution is post-processed using reprojection on a Gegenbauer basis to resolve the Gibbs phenomenon around discontinuities. The efficacy of the method is demonstrated for the simulation of the Burgers equation with uncertain initial conditions.
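The Gibbs phenomenon at issue can be reproduced with a few lines of Python by projecting a discontinuous response of a uniform random input onto a truncated Legendre polynomial chaos basis; the Gegenbauer reprojection used in the paper is only noted in a comment, not implemented here.

```python
import numpy as np
from numpy.polynomial import legendre

# Discontinuous response of a uniform random input xi ~ U(-1, 1),
# a stand-in for a shocked solution at a fixed space-time point.
u = lambda xi: np.sign(xi)

# Project u onto a Legendre polynomial chaos basis up to order P using
# Gauss-Legendre quadrature: c_k = (2k+1)/2 * integral of u * P_k over [-1, 1].
P = 15
nodes, weights = legendre.leggauss(64)
coeffs = np.array([
    (2 * k + 1) / 2.0 * np.sum(weights * u(nodes) * legendre.legval(nodes, np.eye(P + 1)[k]))
    for k in range(P + 1)
])

xi = np.linspace(-1.0, 1.0, 1001)
u_pce = legendre.legval(xi, coeffs)

# The truncated expansion overshoots near the discontinuity (Gibbs phenomenon);
# the paper post-processes such expansions by reprojection onto a Gegenbauer
# basis, which is not reproduced in this sketch.
print(f"max overshoot of truncated PCE: {u_pce.max() - 1.0:.3f}")
```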
VALIDATION METHODS FOR SOLID MECHANICS AND STRUCTURES 3-3 VALIDATION METHODS FOR SOLID MECHANICS AND STRUCTURES: PART 3 Sunset 5&6 10:30am–12:30pm Session Chair: Richard Swift, Cook MED Institute, West Lafayette, IN, United States Session Co-Chair: Krishna Kamojjala, University of Utah, Salt Lake City, UT, United States
Uncertainty Quantification for Sustainable Manufacturing Processes V&V2012-6165 Mark Campanelli, National Institute of Standards and Technology, Gaithersburg, MD, United States
Verification & Validation of Shock Response Predictions Including Uncertainty Quantification V&V2012-6039 E. Thomas Moyer, NSWC/Carderock, West Bethesda, MD, United States
Sustainable manufacturing (SM) combines environmental and social considerations with traditional economic considerations in the manufacture of goods. Managing the sustainability performance of manufactured goods is a major goal of SM. Resource-consuming and waste-producing processes influence a product's overall sustainability performance at all stages of the manufacturing value chain. Furthermore, manufacturing is but one life cycle phase with sustainability considerations. Other phases include raw material sourcing, transportation, usage, and end-of-life disposal/recycling. The focus here is on the sustainability of processes in the manufacturing phase.
The response of ship structure and equipment to shock loading is critical to the design of robust ships for Navy applications. Modeling & Simulation (M&S) is an important tool in the design process. A critical element to the successful use of M&S, however, is Verification & Validation (V&V) of the predictions. The V&V process must address the fact that some elements of the M&S process are uncertain leading to a family of potential outcomes. Additionally, the experimental data available for V&V contains
uncertainties as well. This paper develops a V&V/UQ (Uncertainty Quantification) framework for Navy shock applications. The methodology adopts point-to-point, geometrically selective, and global response comparison approaches. For the geometrically selective and global response comparisons, Principal Component Analysis is employed. Various statistical correlation methods are currently under development and consideration, and results are presented. One particular challenge addressed is the need to separate overall (either temporal or spectral) response characteristics from specific features (e.g., kick-off velocity or average initial acceleration).
control settings; featured rotorcraft with researchable or accurately estimable characteristics; and contained sufficient detail and clarity of output to allow a reasonable comparison to modeling results. We concentrated on a series of fifteen cases involving three rotorcraft systems that satisfied most of the above criteria. Our findings are that DESCENT does a consistently accurate job of predicting that safe autorotations are possible in situations where the maneuvers were demonstrated by flight test data. Since the primary application of the code is delineation of the conditions under which a best-case autorotation is safe for the pilot and vehicle, this is an extremely important finding. Furthermore, we found that the modeled pilot's control input during an autorotation maneuver closely follows real-world pilot input in most circumstances. This helps establish that accurate optimizations of the maneuver are being performed. Finally, a number of small improvements to the code were identified to help the model mimic non-optimal pilot response when appropriate. These enhancements all paid dividends in helping DESCENT gain fidelity to test results and more realistically model optimized maneuvers.
The methodologies being evaluated are tested on a simple single-degree-of-freedom system considering uncertainty in two random variables. The results provide indications of promising comparative metrics as well as allow the prediction of confidence measures. A second example including representative structure and mounted outfitting is investigated. Using this example, a comparison is made between correlations based on point-to-point comparisons and correlations based on Principal Component Analysis. Statistical measures for temporal comparisons are presented as well as for spectral comparisons. This example also demonstrates response-specific metric comparisons. Uncertainty in the modeling parameters is quantified. This example is also used to develop an approach to establishing design margins for application. Shock design requirements demand determination of a shock design environment which will ultimately be validated by shock qualification testing. These requirements need to be established early in the ship design process. In addition to the inherent uncertainties in the M&S predictions, the requirements must allow for flexibility during the evolution of the overall ship design. Since the system design must proceed in parallel with the ship design, sufficient margin must be provided without dramatically impacting weight and cost. A final example demonstrates the application of the statistical measures under consideration to a typical response measurement and prediction from actual shock test data. This example provides the initial starting point for end-use application.
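A minimal sketch of the kind of single-degree-of-freedom study mentioned above, propagating two uncertain inputs through a closed-form free-vibration response by Monte Carlo, is given below; the distributions and kick-off velocity are hypothetical, not the authors' data.

```python
import numpy as np

rng = np.random.default_rng(4)

def peak_displacement(omega_n, zeta, v0=2.0, t_end=1.0, n_t=2000):
    """Peak displacement of an underdamped SDOF system given an initial
    (kick-off) velocity v0, using the closed-form free-vibration response."""
    t = np.linspace(0.0, t_end, n_t)
    omega_d = omega_n * np.sqrt(1.0 - zeta**2)
    x = (v0 / omega_d) * np.exp(-zeta * omega_n * t) * np.sin(omega_d * t)
    return np.max(np.abs(x))

# Two uncertain inputs (hypothetical distributions): natural frequency and damping ratio.
n_mc = 5000
omega_n = rng.normal(2 * np.pi * 10.0, 2 * np.pi * 0.5, size=n_mc)   # rad/s
zeta = rng.uniform(0.02, 0.08, size=n_mc)

peaks = np.array([peak_displacement(w, z) for w, z in zip(omega_n, zeta)])
lo, hi = np.percentile(peaks, [5, 95])
print(f"peak displacement: median {np.median(peaks)*1e3:.1f} mm, "
      f"90% band [{lo*1e3:.1f}, {hi*1e3:.1f}] mm")
```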
Verification and Validation of Virtual Simulation Model for an Expandable Liner Hanger V&V2012-6095 Ganesh Nanaware, Tony Foster, Leo Gomez, Baker Hughes, Houston, TX, United States Expandable Liner hangers used for wellbore construction within oil and gas industry are complex mechanical systems. The consumable nature of the expandable products makes the accuracy and reliability of the virtual simulation predictions important for reducing the time and cost it takes to introduce a reliable and robust product to the competitive marketplace. This presentation summarizes the methodology of verification and validation of virtual simulation models used for the performance predictions of an expandable liner hanger. The virtual simulation model of an expandable liner hanger system comprises an adjustable swage to expand the hanger body, slips to hang the liner, and a packer to seal in variable diameter casing. Hypermesh® software is used to build the finite element analysis (FEA) models, and Abaqus® explicit simulation software is used to evaluate and predict the performance parameters such as required expansion force, hanging capacity, and seal integrity. An initial virtual simulation model is verified against simplified fundamental engineering calculations, past available data for similar types of products, and sound engineering judgment. Next, mechanical complexity is added into the verified simulation model to study the sensitivity of the various geometrical, operational and physical parameters. The sensitivity analysis is done based on Design of Experiments (DOE) using HyperStudy® software. Based on the initial sensitivity study, further simulations are performed to determine the effects of the identified variables or parameters on the simulated performance. The intimate relationship between modeling and experimental uncertainty is further explored by defining uncertainty as an integral part of the simulation model through the application of probability distribution functions (PDF) for each variable. The stochastic simulation is performed using probabilistic information for variables varying randomly using the Latin Hypercube sampling method. Sequential Optimization and Reliability Assessment (SORA) algorithm is also explored to optimize the design variables to achieve desired reliability. The simulation model results are validated against experimental data at which point any error between the model and the experimental data is identified. Depending on the quality of the experimental test and the error identified during validation, the virtual simulation model is then refined and run again until the model predicts similar performance compared to that of the experimental test. Finally, the validated
DESCENT Autorotation Model: V&V Strategy and Results V&V2012-6138 Andrew Drysdale, US Army Research Laboratory, Aberdeen Proving Ground, MD, United States The US Army Research Laboratory (ARL) employs a number of continually updated models in the execution of one of its core products, the assessment of the survivability and vulnerability (S/V) of Army inventory vehicles. One recent addition to its modeling suite is the DESCENT rotorcraft autorotation model. DESCENT accepts helicopter performance, mission, and environment characteristics as inputs and produces a time-history of the vehicle's optimized autorotative descent to impact. The ultimate (impact) state of the vehicle is then used in more comprehensive S/V analyses. This gives ARL the capability of assessing the survivability of inventory rotorcraft that have lost engine power or transmission system function in a flexible, user-defined mission configuration and environmental situation. Verification and validation (V&V) of the DESCENT code presented some unique challenges. The questions that the V&V must address are whether the model successfully predicts the conditions under which a safe autorotation is possible, and whether the internal pilot response optimization that informs those predictions is sufficiently faithful to real-world maneuvers. A typical procedure for V&V of ARL S/V models is extensive comparison with experimental data; for a rotorcraft autorotation code such data is either too dangerous or too expensive to acquire comprehensively. Instead, a literature search was undertaken for autorotation tests that recorded time-histories of state variables such as height, rotor speed, forward velocity, and
simulation model is used to predict the performance of further design and development iterations of the expandable liner hanger.
friction and deadband. Therefore, the model validation of the position feedback system was not satisfactory. Based on the observed nonlinear performance, a nonlinear model was developed by applying a square-wave voltage to the motor input and measuring the output of the potentiometer. Once the nonlinear model is added to the system model, the simulation result matches the experimental result very well.
This verification and validation methodology helped to study the effects of various geometric variables of the liner hanger slip design and their impact on the liner hanger performance. In summary, this methodology helped to achieve a significant reduction in development time and cost by minimizing the number of prototypes needed for an expandable liner hanger system.
In this experiment and simulation, linear and nonlinear models of the system are obtained for modeling purposes, and the major nonlinearities in the system, such as friction and dead zone, are investigated and integrated into the nonlinear model. Results of the real-time experiments are presented graphically and numerically, and the nonlinear modeling approach is shown to be feasible and more accurate. Further work may include online modeling of the linear and nonlinear system models using the recursive least squares method.
Validating Computational Models in the Presence of Uncertainty for the Response of Large-Scale Structures Subject To Impulsive Dynamic Loading with Limited Data V&V2012-6075 Michael D. Shields, Kirubel Teferra, Adam Hapij, Najib Abboud, Raymond Daddazio, Weidlinger Associates, Inc., New York, NY, United States
Model Validation for an Assembled Beam Based on Probabilistic Method V&V2012-6112 Chen Xueqian, Xiao Shifu, Liu Xin’en, Institute of Structural Mechanics, CAEP, Mianyang, Sichuan, China
Computational models of large and complex structures subject to impulsive loading can be both highly uncertain and time/resource intensive. Further complicating such problems is the desire to validate the results from these computational models against or use them as test predictions for a very limited pool of experimental data containing its own uncertainties. Experimental data for such systems is difficult to come by because large-scale tests are very expensive. As a result, it is common to validate computational models against as few as one experiment. Finally, the size and complexity of the computational models precludes traditional Monte Carlo methods from being employed in quantifying uncertainty and model validation.
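For the situation just described, where a model must be validated against as few as one experiment, a small-sample bootstrap (the approach proposed later in this abstract) can still attach confidence statements to the simulated response statistics; the sketch below uses hypothetical peak-response values, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical peak-response values from a small number of expensive simulations
# of a large-scale structural model, and a single experimental observation.
sim_peaks = np.array([412.0, 455.0, 398.0, 431.0, 467.0, 420.0, 444.0])   # e.g. kPa
exp_peak = 438.0

n_boot = 10000
boot_means = np.array([
    rng.choice(sim_peaks, size=sim_peaks.size, replace=True).mean()
    for _ in range(n_boot)
])
ci_lo, ci_hi = np.percentile(boot_means, [2.5, 97.5])
inside = ci_lo <= exp_peak <= ci_hi
print(f"bootstrap 95% CI for simulated mean peak: [{ci_lo:.0f}, {ci_hi:.0f}] kPa; "
      f"experiment {'inside' if inside else 'outside'} the interval")
```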
Verification and validation (V&V) is an important means of achieving highly credible simulations, and identifying and quantifying uncertain parameters is an important part of V&V. Model validation based on physical tests requires long study periods and high cost to quantify parameter uncertainties, so suppositional (simulated) tests are used here in place of physical tests. An assembled beam made of two cantilevers with bolted joints at the free tip is considered in this paper. According to different engineering demands, multi-level application questions are posed. These application questions include not only the traditional deterministic assessment but also uncertainty, interpolation, and extrapolation model assessments. To carry out these assessments, the test design of the three challenge problems from the Sandia National Laboratories (SNL) V&V team is used as a reference. The hierarchy of the test design for the assembled beam includes suppositional calibration and validation modal tests and suppositional shock tests. Only the highest-level application question is discussed in this paper: considering the uncertainty in the joints and boundaries, what is the probability that the response at a given point exceeds a certain threshold under a given random force power spectral density load? It is required that the probabilities of the acceleration and displacement responses exceeding the given thresholds of 350 m/s2 and 1.2 mm are both less than 0.01.
The present work will outline a methodology that uses a small-sample bootstrap Monte Carlo method to quantify the uncertainty in large computational models, coupled with a variety of metrics to verify and validate simulated structural response histories against a single set of experimental response histories. The primary objective is to capture the uncertainty in the computational model with as few simulations as possible and to use statistics of the resulting dataset to validate the model against the experiment with some level of confidence. The work will also be presented in the context of pretest response prediction. Modeling and Experimental Validation of A DC Motor with Nonlinear Position Feedback V&V2012-6215 Shouling He, Vaughn College of Aeronautics and Technology, Flushing, NY, United States
The finite element models of the two cantilevers and of the assembled beam with free-free boundary conditions are built with 3D brick elements (SOLID45 in ANSYS). The uncertainties of the boundary and joints are represented through the elastic modulus of the material at the root of the cantilever and at the joint of the assembled beam. For each suppositional calibration test, the corresponding elastic modulus of the subsystem is identified with the optimization module of ANSYS, and the parameter uncertainty is analyzed over all identified results. The uncertain elastic moduli are modeled as normal distributions under the assumption of Rayleigh proportional damping, and the damping coefficients are identified and quantified from the suppositional validation tests. Uncertainty propagation is carried out by Monte Carlo simulation, and a 95% confidence interval validation metric is used to judge whether the models are accepted. The validation and accreditation tests show that the models are accepted. Finally, the model is used to predict the response at the appointed point. The predictions show that the probabilities of the acceleration and displacement responses exceeding the given thresholds are 0.0546 and 0.0267, respectively; both probabilities are greater than the 1% acceptability threshold.
Modeling and Experimental Validation of a DC Motor with Nonlinear Position Feedback V&V2012-6215 Shouling He, Vaughn College of Aeronautics and Technology, Flushing, NY, United States
Modeling of electrical or mechanical systems is an essential stage in practical control design and applications. Control system designs that operate under varying conditions or require high-precision operation raise the need for nonlinear modeling and experimental validation. In this paper, we present nonlinear modeling of a DC motor rotating in both directions, with real-time experiments. The motor system discussed in the paper contains a high-quality, low-friction, 18-Watt graphite-brush DC motor with a directly mounted tachometer to measure the velocity. In addition, a rotary potentiometer is connected to the DC motor via a rubber belt to measure the rotary angle of the motor. When we modeled the motor system with velocity feedback in the traditional approach, the mathematical model was verified by both programming in MATLAB and simulation in the Simulink environment, and the simulation result was validated by the real-time experiment. However, the position feedback using the potentiometer connected through the rubber belt exhibits nonlinear behavior due to the
of beyond-DBA accidents. To provide an experimental background for in-vessel corium retention by external reactor vessel cooling (ERVC), intended for application at the Paks NPP, the CERES facility, an integral-type model of the vessel section cooling, has been developed and constructed.
Validation of a Beam Fluid Element Model for Structural Dynamic Analysis of Reactor Internals V&V2012-6198 Jin Seok Park, Korea Atomic Energy Research Institute, Taejeon, Korea (Republic)
Results of PMK-2 experiments for transients, DBA, and beyond-DBA accidents cover 55 off-normal events, such as cold- and hot-leg breaks of different sizes, primary-to-secondary leaks, natural circulation disturbances, and plant transients and accidents. Tests obtained from the CERES facility include four series of tests covering the expected range of parameters of the cooling loop to be implemented in the plant. The code validation methodology includes both qualitative and quantitative assessments. The qualitative method, which was almost exclusively applied in the early phase of code validation by integral-type experiments, has been applied to the PMK-2 and CERES tests. For the quantitative assessments, the Fast Fourier Transform Based Method has been applied. The report presents the results of these assessment activities and discusses the safety significance of validation and verification.
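For readers unfamiliar with the quantitative assessment mentioned above, the sketch below shows one common figure of merit of the Fast Fourier Transform Based Method, the average amplitude (AA); the signals are synthetic and the acceptability threshold in the docstring is indicative only.

```python
import numpy as np

def fftbm_average_amplitude(exp, calc):
    """Average Amplitude (AA) figure of merit used in FFT-based code assessment:
    AA = sum|FFT(calc - exp)| / sum|FFT(exp)|; lower is better, and values
    below roughly 0.3-0.4 are often read as acceptable agreement."""
    exp = np.asarray(exp, dtype=float)
    calc = np.asarray(calc, dtype=float)
    err_spec = np.abs(np.fft.rfft(calc - exp))
    exp_spec = np.abs(np.fft.rfft(exp))
    return err_spec.sum() / exp_spec.sum()

# Synthetic stand-ins for a measured and a computed pressure trace
t = np.linspace(0.0, 100.0, 2001)
measured = 1.0 + 0.20 * np.exp(-t / 30.0)
computed = 1.0 + 0.18 * np.exp(-t / 27.0)
print(f"AA = {fftbm_average_amplitude(measured, computed):.3f}")
```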
A seismic analysis model is required to evaluate the mechanical and structural integrity of reactor internals when an earthquake occurs. The finite element method is used to prepare a seismic analysis model of the reactor internals. The reactor internals are complex mechanical structures, so considerable analysis time is required to obtain seismic responses with a detailed finite element model. A beam element model combined with fluid elements, simplified from the detailed solid model, is introduced in order to reduce analysis time and provide convenience in handling. The beam fluid element model consists of BEAM4 elements, rotational spring elements, MASS21 elements, and FLUID38 elements. BEAM4 elements represent structures such as the reactor vessel, upper guide structure, and core support barrel. Rotational spring elements are adopted to simulate the connection condition between one cylinder and another. MASS21 elements represent the fluid contained in a cylinder, and FLUID38 elements represent the fluid between cylinders. We obtained the dynamic characteristics, natural frequencies and vibration modes, from both
CFD Validation of Acoustic Resonance on Main Steam Safety Valve in APR1400 V&V2012-6189 Sang-Gyu Lim, Sung-Chang You, Han-Gon Kim, Korea Hydro & Nuclear Power Co., Ltd., Central Research Institute, Daejeon, Korea (Republic)
the beam fluid element model and the structural vibration test. Though the beam fluid element model for the scaled-down reactor internals has only a few hundred elements, the estimated natural frequencies are almost the same as those obtained from the vibration test. The beam fluid element model may therefore be a useful analysis model for obtaining the spectrum response of reactor internals in a seismic event.
Many boiling water reactors (BWRs) have experienced steam dryer failures after extended power uprates (EPUs). Unit 2 of the Quad Cities (QC) nuclear power plant experienced significant damage to its steam dryers while operating normally under EPU conditions. According to the root-cause investigation, the steam dryers at Quad Cities had been exposed to high fatigue loading due to acoustic resonance vibration. This significant hydraulic source of acoustic resonance originated from the safety relief valves (SRVs) in the main steam line (MSL). After this experience, the Nuclear Regulatory Commission (NRC) required that applicants and licensees for new reactor types perform vibration and stress analyses for adverse flow effects such as acoustic resonance. Recently, many BWRs in Japan have implemented EPUs for economic benefits, but they had to evaluate the possibility of such failures by scaled-model tests and computational fluid dynamics (CFD). Okuyama performed a fundamental experiment on flow-induced acoustic resonance under various conditions with one or two side branches and showed the relationship between the Strouhal number (St) and acoustic resonance: in the St range of 0.3-0.5, a significant resonance wave occurred at the branch pipe. Based on Okuyama's results, Morita performed scaled-model tests and CFD code calculations for a real BWR case.
VERIFICATION AND VALIDATION FOR SIMULATION OF NUCLEAR APPLICATIONS 6-2 VERIFICATION AND VALIDATION FOR SIMULATION OF NUCLEAR APPLICATIONS: PART 2 Wilshire B 10:30am–12:30pm Session Chair: Hyung Lee, Bettis Laboratory, West Mifflin, PA, United States Session Co-Chair: Richard R. Schultz, Idaho National Laboratory, Idaho Falls, ID, United States Verification and Validation of Thermal-hydraulic System Codes for VVER Type Nuclear Power Plants V&V2012-6142 György Ézsöl, Attila Guba, László Szabados, MTA KFKIAEKI, Budapest, Hungary
In Korea, KHNP conducts an assessment of acoustic resonance vibration of the steam generator internals to prove their safety for the APR1400, using a commercial CFD code, ANSYS CFX 13.0. First, the applicability and reliability of the CFX code for predicting the acoustic-resonance phenomenon are validated based on Okuyama's results. The CFX code validation shows good agreement with the BWR test data, and the acoustic frequency predicted by the CFD calculation is aligned with the result calculated from the acoustic resonance correlation. Based on these results, an assessment of acoustic-resonance vibration is performed at the full-power condition of the APR1400, where the steam velocity in the MSL is about 40 m/s. The CFX code calculates low pressure fluctuations in the MSL at full power. In contrast, CFX predicts that the acoustic-resonance wave increases drastically in the MSL when the steam velocity approaches about 60 m/s. This paper presents the results of the benchmark analysis and the APR1400 analysis in comparison with the BWR test data and CFD data.
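A rough, illustrative screening of the side-branch resonance condition implied by the Strouhal-number range quoted above; the branch geometry and steam sound speed are assumptions, not APR1400 data, and only the two main-steam-line velocities come from the abstract.

```python
# Rough screening for acoustic resonance at a closed side branch.
c = 480.0   # speed of sound in steam (m/s), assumed
d = 0.15    # side-branch inner diameter (m), assumed
L = 0.60    # side-branch length (m), assumed

f_branch = c / (4.0 * L)        # quarter-wave acoustic frequency of the branch

for U in (40.0, 60.0):          # MSL steam velocities discussed in the abstract
    St = f_branch * d / U       # Strouhal number implied at this velocity
    prone = 0.3 <= St <= 0.5    # range reported by Okuyama for strong resonance
    print(f"U = {U:4.1f} m/s -> implied St = {St:.2f}, resonance-prone: {prone}")
```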
Research and development programs have been performed in Hungary at the MTA Atomic Energy Research Institute (AEKI) to verify and validate thermal-hydraulic system codes for safety analyses of VVER-type nuclear power plants. The computer codes applied in Hungary for safety assessment of the Paks nuclear power plant (VVER-440/213 type) are ATHLET and RELAP5, while CATHARE is used as an independent tool in support of the regulatory authority. The validation activities have been concentrated on these codes. The codes were developed for PWRs; therefore, validation for VVER applications is of primary importance. To obtain a VVER-specific experimental database, the PMK-2 facility, an integral-type thermal-hydraulic model of the Paks nuclear power plant, was developed and constructed to perform experiments covering transients, DBA accidents, and a set
Validation of CFD Calculations for the ISP-43 Rapid Boron Dilution Transient V&V2012-6161 Robert Brewster, CD-adapco, Melville, NY, United States, Emilio Baglietto, Massachusetts Institute of Technology, Cambridge, MA, United States
The potential of using dimensionless Pi groups derived from the field equations to identify important physical phenomena was originally identified by Zuber in the Hierarchical, Two-Tiered Scaling methodology. In this study, Pi groups were used to develop a quantified PIRT that identifies important phenomena and uncertainty contributors based on Pi group rankings. A larger Pi group represents a faster physical transfer process, implying higher importance during the transient and thus identifying important uncertainty contributors. The uncertainties of the code simulation results have to be quantified in order to perform a meaningful comparison between code simulation results and experimental results. This study employs Wilks' formula to determine the number of code runs needed to satisfy the prescribed acceptance criterion of 95% probability with 95% confidence.
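A minimal sketch of the first-order Wilks sample-size calculation implied by the 95%/95% criterion above (the first-order two-sided variant is included for comparison):

```python
def wilks_one_sided(coverage=0.95, confidence=0.95):
    """Smallest number of code runs n such that the largest of n outputs is a
    one-sided tolerance bound: 1 - coverage**n >= confidence."""
    n = 1
    while 1.0 - coverage**n < confidence:
        n += 1
    return n

def wilks_two_sided(coverage=0.95, confidence=0.95):
    """First-order two-sided criterion:
    1 - coverage**n - n*(1 - coverage)*coverage**(n-1) >= confidence."""
    n = 2
    while 1.0 - coverage**n - n * (1.0 - coverage) * coverage**(n - 1) < confidence:
        n += 1
    return n

print(wilks_one_sided())  # -> 59 runs for 95% probability / 95% confidence
print(wilks_two_sided())  # -> 93 runs for two-sided tolerance limits
```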
Soluble boric acid (H3BO3) is used to control the excess reactivity during the operation of pressurized water reactors (PWRs). During so-called boron dilution transients (BDT), the borated coolant water is diluted by mixing with deborated water. A sudden decrease in boric acid concentration in the coolant can lead to undesirable reactivity excursions in the core which constitute a safety concern.
The uncertainties in code modeling parameters can be reduced through a code calibration process using Bayesian statistics. A data accuracy comparison between code simulation results and experimental results can be performed provided that the uncertainties of both are quantified. Through this simulation-experiment comparison, the code's fidelity, capability, and limitations can be identified during the code validation process. Based on the code uncertainty quantification methods developed for reactor licensing in the BEPU approach, a comprehensive code uncertainty quantification method is investigated in this study for the purpose of code validation.
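As a toy illustration of Bayesian calibration, not the method of the abstract, the following grid-based posterior update calibrates a single modeling parameter of a stand-in scalar "code" against synthetic measurements with known noise:

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in scalar "code" with one uncertain modeling parameter theta
def code_output(theta):
    return 2.0 * theta + 1.0

# Synthetic experimental observations with known measurement noise
sigma_meas = 0.2
obs = code_output(1.5) + rng.normal(0.0, sigma_meas, size=5)

# Grid-based Bayesian update with a uniform prior on theta in [0, 3]
theta_grid = np.linspace(0.0, 3.0, 1001)
log_like = np.array([
    -0.5 * np.sum((obs - code_output(t)) ** 2) / sigma_meas**2
    for t in theta_grid
])
post = np.exp(log_like - log_like.max())
post /= post.sum()

mean = np.sum(theta_grid * post)
sd = np.sqrt(np.sum((theta_grid - mean) ** 2 * post))
print(f"posterior mean = {mean:.3f}, posterior sd = {sd:.3f} (prior sd was about 0.87)")
```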
International Standard Problem No. 43 (ISP-43) addresses the ability of computational fluid dynamics (CFD) to simulate certain types of rapid boron dilution transients. Experimental data were collected using the University of Maryland's 2x4 Thermal-hydraulic Loop (UM 2x4 Loop) and the Boron-mixing Visualization Facility. In the Loop experimental program, the dilute volume was simulated by cold water and the borated primary coolant was simulated by hot water. Temperature measurements at various locations in the test vessel indicate the degree of mixing of the dilute and primary volumes as a function of time. In this paper we perform solution verification and validation of the STAR-CCM+ CFD software against the ISP-43 experimental results. An accurate representation of the experimental apparatus has been constructed in the form of a CAD model. Numerical models were generated using five different nominal mesh sizes, resulting in a mesh size ratio of approximately 1.4. It is shown how STAR-CCM+'s meshing strategy lends itself to such parametric mesh studies. Extrapolated solutions and discretization uncertainties are estimated using different techniques (e.g., least squares and mixed-order Richardson extrapolation) to test their suitability for industrial problems with complex geometry and physics. The results are then compared to the measured ISP-43 data using validation metrics based on statistical confidence intervals, as proposed by Oberkampf & Barone. Sensitivity of the results to inlet boundary conditions (flow rate and turbulence levels), as well as to the initial temperature, is also investigated.
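A minimal sketch of three-mesh Richardson extrapolation and a GCI-style discretization uncertainty, using made-up solution values and the refinement ratio of roughly 1.4 mentioned above; the authors' least-squares and mixed-order variants are more general than this basic three-mesh version.

```python
import math

def richardson(f1, f2, f3, r):
    """Observed order, extrapolated value, and a GCI-style relative
    discretization uncertainty from fine (f1), medium (f2) and coarse (f3)
    mesh solutions with a constant refinement ratio r. Assumes monotonic
    convergence."""
    p = math.log(abs(f3 - f2) / abs(f2 - f1)) / math.log(r)  # observed order
    f_exact = f1 + (f1 - f2) / (r**p - 1.0)                  # Richardson extrapolation
    gci_fine = 1.25 * abs((f1 - f2) / f1) / (r**p - 1.0)     # fine-mesh GCI
    return p, f_exact, gci_fine

# Made-up mixing-temperature results (K) on three meshes
p, f_ex, gci = richardson(f1=341.2, f2=342.0, f3=343.5, r=1.4)
print(f"observed order p = {p:.2f}")
print(f"extrapolated value = {f_ex:.2f}")
print(f"discretization uncertainty (GCI) = {100.0 * gci:.2f}% of the fine-mesh value")
```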
V&V Activities in Progress on Sodium Fire Analysis Codes for Fast Reactors V&V2012-6183 Shuji Ohno, Hiroyuki Ohshima, JAEA, Ibaraki-ken, Japan, Yuji Tajima, ENO Suri Kaiseki Research, Chiba, Japan, Hiroshi Ohki, NDD Corporation, Ibaraki-ken, Japan Evaluation of thermal consequences is one of the important issues when considering the accidental situation in which chemically active liquid sodium leaks into an air atmosphere and causes a fire in a sodium-cooled nuclear power plant, i.e., a fast reactor. The authors have therefore developed a sodium fire analysis code system as a numerical simulation tool for plant safety evaluation and have been conducting verification and validation (V&V) activities to systematically demonstrate the reliability and accuracy of the simulation tool. This presentation describes the plan and current status of the V&V activities, focusing mainly on the zone-model-based sodium fire analysis code SPHINCS [1] as well as the field-model CFD code AQUA-SF [2].
Thermal Hydraulic Computer Code Validation Using Quantified PIRT for Uncertainty Quantification V&V2012-6164 Jeffrey Luitjens, Hu Luo, Brian Hallee, Qiao Wu, Oregon State University, Corvallis, OR, United States
The activities fundamentally refer to existing guidelines: Regulatory Guide 1.203 (USNRC) for the system code SPHINCS, and AIAA G-077-1998 and ASME V&V 20-2009 for the CFD code AQUA-SF. The ongoing V&V procedure includes the development of a Phenomena Identification and Ranking Table (PIRT) for the postulated accident scenarios to be evaluated and the construction of an assessment matrix for code validation planning. The matrix indicates and summarizes both separate effect tests (SET) and integral effect tests (IET), selected mostly from the existing experimental research database accumulated at our institute, JAEA. The presentation introduces a validation example with an SET in which the sodium spray fire analysis model in the codes predicts the burned amount of a falling sodium droplet with an error of less than 30% over a wide range of droplet combustion conditions. Further validation examples are shown for the SPHINCS code using IET data: the experimentally measured transient gas pressure behavior during a sodium spray fire in a confined 100 m³ steel vessel is well reproduced by the code.
The U.S. NRC revised the reactor licensing rules in 1988 to allow the use of best-estimate computer codes if the Best Estimate Plus Uncertainty (BEPU) approach is followed. The BEPU approach requires computer code models to be validated through a wide range of separate and integral effect tests, with code uncertainties quantified following the Code Scaling, Applicability and Uncertainty (CSAU) methodology. The important uncertainty contributors of thermal-hydraulic codes are traditionally identified through a Phenomena Identification and Ranking Table (PIRT), which is based on the knowledge and experience of experts. The BEPU approach and CSAU method were supported by the nuclear community immediately after their release, but with noticeable limitations. Nonparametric statistics based on Wilks' formula or Bayes' theorem have been proposed to replace the response surface method adopted in the CSAU method in order to account for more comprehensive uncertainty sources. To enhance the credibility and fidelity of the next generation of safety analysis computer codes, the capability to predict code accuracy within a given uncertainty range is required. Thus, a rigorous code validation method should be developed to account for both the code uncertainties and the experimental uncertainties.
[1] A. Yamaguchi, et al., 2001, Nucl. Technol., 136, 315-330. [2] T. Takata, et al., 2003, Nucl. Eng. Des., 220, 37-50.
Use of a Response Surface Methodology to Evaluate Comparative Ratios of Code Predictions to Experimental Results V&V2012-6227 Jonathan Adams, Ian Kiltie, Rolls-Royce Power Engineering PLC, Derby, United Kingdom
VALIDATION METHODS 12-1 VALIDATION METHODS: PART 1 Wilshire A 10:30am–12:30pm Session Chair: Edwin Harvego, Idaho National Laboratory, Idaho Falls, ID, United States Session Co-Chair: David Hall, SURVICE Engineering Company, Ridgecrest, CA, United States
Part of the work undertaken within the nuclear division at Rolls-Royce Power Engineering is the analysis of the thermal performance of nuclear reactor cores using thermal-hydraulic analysis computer codes developed and validated using experimental data obtained from testing.
Complications to Validation of Public Healthcare Cost in the U.S. Today V&V2012-6015 Andrew Loebl, Oak Ridge National Laboratory, Oak Ridge, TN, United States, Kelly Walker, Capitol Resource Associates, Annapolis, MD, United States
Effort has been directed into the use of existing code prediction/experimental result comparisons to highlight parameter ranges within the design space that require further experimental testing in order to strengthen the validation.
In 2009, total U.S. healthcare expenditures reached $2.4 trillion (17.3% of GDP), and they are estimated to rise to 19.3% of GDP by 2019. Today, public healthcare programs spend more than $1.3 trillion for more than 130 million beneficiaries. Economists agree that current spending is simply not sustainable. It is essential for this society's future that a national laboratory invest its computational sciences expertise and advanced hardware at scale, in collaboration with subject matter experts, to build a knowledge discovery infrastructure that separates and identifies the legitimate population of claims data from the corrupted population of claims data. Current systems for credit card transactions, financial transactions, and the stock market are models of a sort but are insufficient to address waste, fraud, and abuse.
The thermal-hydraulic analysis code performance is evaluated via a code-prediction-to-experimental-result ratio (CER), with a ratio close to unity indicating good agreement between the code prediction and the experimental result. Because of the combination of the large number of dependent variables influencing both the code predictions and the experimental results, and the sheer quantity of data points to be considered, a response surface methodology was evaluated to assess its applicability for further analysis of the data. Using the statistical analysis software Minitab, a response surface was fitted to the CER based on 8 input parameters of interest. The aim is to ascertain that no combinations of specific parameters, or ranges of parameters, exist that lead to poor CER results. Any bias in code prediction is undesirable, as the code is required to perform consistently over the full range of design parameters of interest in meeting the validation criteria.
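A schematic stand-in for the response-surface step described above, fitting a full quadratic surface to a synthetic CER dataset with 8 normalized inputs via ordinary least squares (the actual work used Minitab and proprietary data):

```python
import numpy as np
from itertools import combinations_with_replacement

rng = np.random.default_rng(7)

# Synthetic stand-in data: 8 normalized input parameters and a CER response
# scattered about unity.
n_obs, n_par = 500, 8
X = rng.uniform(-1.0, 1.0, size=(n_obs, n_par))
cer = 1.0 + 0.05 * X[:, 0] - 0.03 * X[:, 1] * X[:, 2] + rng.normal(0.0, 0.02, n_obs)

def quadratic_design(X):
    """Design matrix with intercept, linear, square, and interaction terms."""
    cols = [np.ones(len(X))]
    cols += [X[:, i] for i in range(X.shape[1])]
    cols += [X[:, i] * X[:, j]
             for i, j in combinations_with_replacement(range(X.shape[1]), 2)]
    return np.column_stack(cols)

A = quadratic_design(X)
coef, *_ = np.linalg.lstsq(A, cer, rcond=None)
fit = A @ coef

avg_err = np.mean(np.abs(fit - cer) / cer)
print(f"average relative error of the response-surface fit: {100.0 * avg_err:.1f}%")
```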
Applied subject matter expertise, unprecedented computing power, and massive architecture will address these analytics opportunities. This includes the growing need for "line-speed" data extraction and scalability ranging from petabytes to exabytes. It is important to reveal cost savings and to uncover an empirically based policy to create an optimal balance of limited resources in the public healthcare system.
An initial response surface fit based on 8 input parameters to all values of CER yielded a fit with an average error of approximately 20%, indicating that the selected input parameters could not be used to predict the corresponding CER with acceptable reliability. Producing a fit using the same 8 parameters to the values of the code predictions alone yielded a similar fit, also with an average error of approximately 20%. This is unsurprising, as the correlations utilised by the code are formed, in part, from the experimental results, and as such the experimental uncertainties become embedded within the correlations to some extent. It should also be noted that there are a significant number of other parameters that influence both the code-predicted and experimental results, with the fits limited to the 8 inputs of most interest. Omission of the full gamut of parameters is a contributing factor to the relative lack of fit.
It is not unusual for government to need to recover from fragmented, uncoordinated islands of information that were developed to meet individual program needs and must now cope with antiquated technology. A platform that facilitates information sharing among healthcare authorities and users, other federal public healthcare departments, state Medicaid departments, and stakeholders on a routine, timely basis is key to a modern infrastructure architecture for harnessing knowledge to provide cost, utilization, and quality optimization. Curatorship in commanding master data across public healthcare programs, and assuring access, dynamic verification, and validation at machine speed for ultra-scale knowledge discovery, ultimately provides a trusted data source for outcomes analysis, quality measures, and comparative effectiveness research.
The most interesting results are found by isolating the dataset to data points with a CER >1.1 and 0.6) the results of RELAP5 Mod3.2 calculations show reasonable agreement with the experimental data. The cases with high coolant quality at the fuel channel outlet are the most important for safety justification, because high coolant quality in fuel channels is usually a symptom of a rapid core-structure overheating process. Thus, the comparison of calculation results with the available experimental data of the RBMK designers showed that the developed RELAP5 model is reliable for modeling heat and mass transfer processes in the fuel channels and reactor cavity of the RBMK-1500. Such a model was successfully used in a RELAP5 analysis of processes in the reactor cooling circuit and reactor cavity in the case of a station blackout at the RBMK-1500.
A total of 23 RD-14M blowdown tests were selected for the code accuracy assessment, including single- and multi-channel tests with break sizes ranging from 15 to 48 mm. In general, CATHENA over-predicts void fraction during the initial fast transient phase of RD-14M large break tests, and under-predicts void fraction during the same phase of RD-14M small break tests. The worst underprediction is found in the two tests with the smallest break size (15 mm breaks). Six of the 23 tests were selected to perform an integrated uncertainty analysis with perturbations of the code modeling parameters. The 2-sided tolerance limits were derived from 155 uncertainty simulations for each test. Sensitivity analyses for spatial discretization, temporal discretization, break discharge coefficient, and break opening time were conducted to confirm the adequacy of the CATHENA idealization used in this code accuracy assessment. This project was undertaken to quantify the CATHENA accuracy and the uncertainty range in the predicted channel void fraction during the early blowdown phase of a LOCA. The work supports the improvement of the break discharge model in the CATHENA code by providing the assessment results for various break sizes and the sensitivity analysis to the break discharge model. The work confirmed the adequacy of the uncertainties considered in the code modeling parameters for large break LOCA tests, and supports the analysis of CANDU power plants when using a best estimate and uncertainty (BEAU) analysis methodology.
Another example is devoted to validation of the Accident Localisation System (ALS) model. The Ignalina NPP ALS is a pressure-suppression-type confinement performing the function of the last barrier on the path of radioactive emissions from the reactor to the environment. Validation of
Assessment of CFD Model for Hydraulic Characteristics V&V2012-6217 Joy Chou, Yuh-Ming Ferng, Ming Xi Her, National Tsing Hua University, Hsinchu, Taiwan
is designed to study the transport of hydrogen by convection and diffusion through an air-filled containment. The GOTHIC simulation predicted the hydrogen concentration, and the results showed the same trend as the experiment.
The molten salt reactor (MSR) is one of the most important fourth-generation nuclear reactor concepts. It can be used as a fast breeder reactor, to address future fuel shortages, or to burn down high-level wastes. Molten salt refers to a salt in the liquid phase that is normally solid at standard temperature and pressure. It has the advantages of high heat capacity, low viscosity, high boiling point, low vapor pressure, and low cost, but it also suffers from low thermal conductivity and a high Prandtl number. Based on a previous experiment, we analyzed the heat transfer capability and pressure drop of molten salt in a micro-channel. With the data acquired, we now use computational fluid dynamics (CFD) to simulate this experiment and find the model that best corresponds to the characteristics of molten salt. The micro-channel geometry is 300 mm in length, 1 mm in height, and 0.5 mm in width. A sensitivity test was performed for three proportional meshes, and the best model was chosen for further simulation. The fluid model was built with the FLUENT code, considering the momentum and energy equations. We assume steady state, laminar flow, a uniform velocity at the inlet, and a pressure boundary condition at the outlet. The simulated pressure drop at different velocities coincides well with the theoretical values, with errors below 2%. The flow in the micro-channel becomes fully developed at approximately 0.00667 mm from the inlet, which is only 0.0000223 times the channel length. The CFD predictions correspond with calculations obtained using the well-known correlation. In addition, the measured values of the pressure difference are larger than the correlation calculations, and we are currently investigating the main reason for this discrepancy. For future work, we will simulate the heat transfer capability by prescribing temperatures at the inlet, outlet, and wall of the model, with the properties of molten salt defined as polynomials fitted to data from the previous experiment.
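As a plausibility check of the kind the abstract describes, the sketch below estimates the laminar rectangular-duct pressure drop from the Shah & London friction-factor correlation for comparison against CFD; the fluid properties and velocity are placeholders, and only the channel dimensions come from the abstract.

```python
# Laminar pressure drop in a rectangular micro-channel (Shah & London correlation).
rho = 2000.0   # density (kg/m^3), assumed
mu = 3.0e-3    # dynamic viscosity (Pa*s), assumed
u = 0.5        # mean velocity (m/s), assumed
a, b, L = 0.5e-3, 1.0e-3, 0.3   # channel cross-section and length (m)

alpha = a / b                          # aspect ratio (short side / long side)
Dh = 4.0 * a * b / (2.0 * (a + b))     # hydraulic diameter
Re = rho * u * Dh / mu
fRe = 24.0 * (1 - 1.3553 * alpha + 1.9467 * alpha**2 - 1.7012 * alpha**3
              + 0.9564 * alpha**4 - 0.2537 * alpha**5)   # Fanning f*Re (Shah & London)
f = fRe / Re
dp = 4.0 * f * (L / Dh) * 0.5 * rho * u**2               # pressure drop (Pa)
print(f"Re = {Re:.0f}, predicted laminar dp = {dp / 1000.0:.1f} kPa over {L * 1000:.0f} mm")
```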
The Study of CFD Simulation for Air Ingress Accident with HTGR V&V2012-6209 Ming-Jui Chang, Yuh-Ming Ferng, Huai-En Hsieh, Bau-Shi Pei, National Tsing Hua University, Hsinchu City, Taiwan In an air-ingress accident of a very high temperature gas-cooled reactor (VHTGR), air enters the core through a broken pipe during a loss-of-coolant accident. When this accident occurs, the air entering the core may cause oxidation of the core and graphite structures and corrosion of the fuel, and the resulting oxidation and corrosion may lead to reactor safety issues. To observe these phenomena, we used three-dimensional computational fluid dynamics (CFD) to obtain the results. We established the three-dimensional model using GAMBIT and used FLUENT to simulate the accident. All initial conditions for the temperature, pressure, and gas concentrations were taken from the full-power operating condition, and those values were used in FLUENT as the boundary conditions. For the numerical analyses, we used the standard k-ε turbulence model with standard wall functions for the near-wall treatment. For the solution methods, the pressure-velocity coupling scheme was SIMPLE, and the spatial discretization used second-order upwind schemes for density, momentum, and turbulence. The solution was considered converged when the residuals fell below 10^-5. The results show that the air entered the core bottom in just ten seconds; the air then passed through the fuel region and generated carbon dioxide and carbon monoxide by oxidation of the fuel. Finally, we used FLUENT to simulate the benchmark multi-component model with a chemical reaction case from "Air Ingress Benchmarking with Computational Fluid Dynamics Analysis." We observed that the trend is identical, so we use those diffusion coefficients for the air-ingress case.
The Simulation with the Fukushima Hydrogen Explosion Accident by GOTHIC V&V2012-6204 Hung-Yi Shen, Bau-Shi Pei, Zhen-Yu Hung, Huai-En Hsieh, National Tsing Hua University, Hsinchu City, Taiwan
Critical Heat Flux Phenomena Observation for Nuclear Reactor Vessel Bottom V&V2012-6194 Huai-En Hsieh, Yuh-Ming Ferng, Bau-Shi Pei, National Tsing Hua University, Hsinchu City, Taiwan
A loss-of-coolant accident (LOCA) is a kind of design basis accident (DBA). During and after a loss-of-coolant accident in a light water reactor, hydrogen gas may be generated from chemical reactions and radiological decomposition of water, and flammable concentrations of hydrogen are of particular concern. We use the GOTHIC (Generation of Thermal Hydraulic Information for Containment) code to perform the safety analysis of the Mark-I containment, and we adopt the hydrogen flow rate data from MAAP (Modular Accident Analysis Program) to build the boundary conditions for the hydrogen burn inside the reactor service floor after the Fukushima accident of March 11, 2011. In the GOTHIC model, control volumes and junctions are used to divide the containment system into several functional blocks, and the hydrogen generation rate data are set as boundary conditions. We then analyze the hydrogen concentration as it varies with time and with elevation. The simulation results show that hydrogen accumulates much more readily in the upper regions than in the lower regions. Once the hydrogen concentration in air reaches 4%, a hydrogen explosion becomes possible.
In-Vessel Retention through External Reactor Vessel Cooling (IVR-ERVC) is a strategy that mitigates the consequences of severe accidents in which the core melts and relocates to the lower head of the reactor vessel. Critical heat flux is the critical phenomenon that determines whether the vessel is damaged when high-temperature debris drops to the vessel bottom without effective external vessel cooling. To observe the local heat transfer characteristics at the vessel bottom, we established a downward-facing critical heat flux experiment after the Fukushima accident in 2011. The test section replicates the RPV bottom geometry. The distance between the heat transfer surface and the coolant flow inlet is varied between 10 cm and 15 cm. The total heat transfer surface is 50 cm². The inlet flow rate is about 3.6 L/min of 25 °C subcooled water at atmospheric pressure. The maximum heat generation of the source is about 6.5 kW. To measure the downward-facing surface heat flux, 8 thermocouples are installed inside the heat source and 4 thermocouples are attached directly to the heat transfer plane; the distance between the near-wall interior thermocouples and the outside plane thermocouples is 1 cm. In the uncertainty analysis, each thermocouple has a 0.7% temperature measurement error, and the flow meter error is 0.3%. The overall systematic error is about 2%, and the random error is about 3%
A benchmark was established to assess the ability of GOTHIC to accurately predict the transport of hydrogen. The benchmark uses the Battelle-Frankfurt model containment (BFMC) test facility, which
that includes the environment pressure, the coolant water temperature, etc. The total heat loss is about 3%, calculated from the differences in heat transfer areas; this heat loss causes the surrounding water temperature to change. Finally, the experimental results show that the CHF temperatures are 88 °C and 104 °C for the 15 cm and 10 cm inlet distances, respectively. The heat fluxes calculated using Fourier's law are 0.2 MW/m² and 0.4 MW/m², and the shorter coolant inlet distance provided better heat transfer capability, increasing the CHF by 0.2 MW/m².
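A minimal sketch of the Fourier's-law heat-flux evaluation from a pair of thermocouples; the thermal conductivity and temperature readings are assumed values, and only the 1 cm thermocouple spacing comes from the abstract.

```python
# Heat flux at the downward-facing surface via Fourier's law, q = k * dT / dx.
k = 390.0                            # W/(m*K), assumed (copper heater block)
dx = 0.01                            # spacing between interior and surface TCs (m)
T_inner, T_surface = 110.0, 104.0    # illustrative readings (deg C)

q = k * (T_inner - T_surface) / dx   # W/m^2
print(f"q = {q / 1.0e6:.2f} MW/m^2") # same order as the reported 0.2-0.4 MW/m^2
```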
is usually a streamlining initiative with the intent of reducing the number of physical tests. These are just some of the issues that must be considered when guidance and regulations are developed to implement M&S into the certification process.
STANDARDS DEVELOPMENT ACTIVITIES FOR VERIFICATION AND VALIDATION
The process of software verification and validation is sometimes misunderstood and misinterpreted. Verification is the process of determining that a computational model accurately represents the underlying mathematical model and its solution. Validation is the process of determining the degree to which a model represents the physics of its intended use. This paper will discuss regulatory requirements for software used in the US nuclear industry. It will examine the process of software verification and validation incorporated at ANSYS, Inc. and the services provided to customers to help them meet regulatory requirements.
Verification and Validation in a Regulated Software Environment V&V2012-6074 William Bryan, ANSYS, Inc., Canonsburg, PA, United States
11-1 STANDARDS DEVELOPMENT ACTIVITIES FOR VERIFICATION AND VALIDATION: PART 1 Celebrity Ballroom 1 1:30pm–3:30pm Session Chair: William L. Oberkampf, Consultant, Georgetown, TX, United States Session Co-Chair: Francesco D’Auria, University of Pisa, GRNSPG, Pisa, Italy
The process of software verification and validation begins with verification. Code verification and solution (or calculation) verification combine to make up the process of verification. Code verification establishes that the code accurately solves the mathematical models incorporated in the code. It provides evidence that the mathematical models and solution algorithms are working correctly and that the code is error free for the simulation. Solution (or calculation) verification estimates the numerical accuracy of the calculation. It should establish confidence that the solution of the mathematical model is accurate.
Implementation of Verification and Validation in Certification by Analysis from a Regulatory Perspective V&V2012-6031 David Moorcroft, Federal Aviation Administration, Oklahoma City, OK, United States, Joseph Pellettiere, Federal Aviation Administration, Dayton, OH, United States Historically, physical testing has been required to certify that safety products meet regulatory requirements, which can impose a significant cost burden. Modeling and simulation (M&S) is increasingly used in research and development. Several government agencies now allow M&S to be used within the certification process for safety products. Some examples are: the Federal Aviation Administration with aircraft seats, the Federal Highway Administration with roadside safety structures, and the Food and Drug Administration with some medical devices. The physical testing requirements can be deterministic, limiting not only the data available for comparison in validation but also the degree of uncertainty quantification possible. This presentation will discuss verification, validation (V&V), and certification from a regulatory perspective in a deterministic testing world.
Code validation is established by comparisons of simulation results with an appropriate experimental result for a specific condition. There can be no validation without experimental data with which to compare the result of the simulation. The paper will also discuss approaches that quantify the degree of accuracy from the comparison of solution and data for specified variables at specified validation conditions. However, for most conditions validation is a matter of engineering judgment specific to the problem area. Uncertainty Estimates in Naval Surface Ship Hydrodynamics V&V2012-6094 Joel Park, Naval Surface Warfare Center Carderock Division, West Bethesda, MD, United States, Ahmed Derradji-Aouat, National Research Council of Canada, St. Johns, NF, Canada
Many V&V documents focus on the modeler. This leads to recommendations and explanations based on the needs of the modeler. Usually, only cursory attention is given to the decision maker, who is typically considered to be the customer. For a model with less than ideal performance, the customer can either accept the current performance or increase the funding to allow for improvements. For regulators, neither is an option. Requirements must be defined before there is even knowledge of the specific system, and these requirements must be universally applied. This can lead to many issues, such as how to differentiate between new software and off-the-shelf software which does not provide tangible code verification. Regulators may also have a different interpretation of validation metrics when considering one-sided pass-fail criteria or determinations of worst-case scenarios. Another issue is that certification testing is conducted to a passing condition which will be the basis for the model, leaving the ability of the model to predict failure and anomalies unknown. One of the biggest issues related to a deterministic certification reality is the lack of repeated testing, which limits the quantification of uncertainties. Physical testing is conducted to show compliance; once that is met, testing is completed. Without bounding the uncertainty in testing, the confidence in the M&S to replicate reality can be difficult to establish. This issue is made more challenging because M&S
The International Towing Tank Conference (ITTC) is a voluntary association of worldwide organizations that have responsibility for the prediction of hydrodynamic performance of ships and marine installations based on the results of physical and numerical modeling. ITTC was founded in 1932 in Hamburg, Germany. The 24th ITTC in Edinburgh, Scotland, in 2005 [1] initiated a Specialist Committee on Uncertainty Analysis (UAC). Final reports of the UAC were presented at the 25th ITTC in Fukuoka, Japan, in 2008 [2] and the 26th ITTC in Rio de Janeiro, Brazil, in 2011 [3]. The 24th ITTC [1] also initiated an inter-laboratory comparison of two surface ship models, which are geometrically similar to a pre-contract version of the U.S. Navy Arleigh Burke class destroyer DDG-51. The larger of the two models was 5.72 m in length and was tested at 18 towing basins around the world, including the U.S. Navy David Taylor Model Basin (DTMB) and IOT. The purpose of the proposed presentation is to outline some of the uncertainty estimates and procedures from the ITTC.
In addition to the final reports, the products of the UAC were five uncertainty analysis procedures. The ITTC adopted the ISO Guide to the Uncertainty in Measurement (GUM) [4] as the basis of the new procedures. The uncertainty procedures adopted by the ITTC include the following:
simulation tools or as a reason why they failed to get the most out of such software. The two barriers with the highest ratings were validation of analysis solutions and lack of analysis skills. These responses clearly indicate a need for an increase in the pool of competent engineering analysts and improved learning.
Guide to the Expression of Uncertainty in Experimental Hydrodynamics
Uncertainty Analysis: Instrument Calibration
Uncertainty Analysis: Laser Doppler Velocimetry Calibration
Uncertainty Analysis: Particle Imaging Velocimetry (PIV)
Freshwater and Seawater Properties
In this situation, JANCAE organizes the Material Modeling Committee as a practical approach to the study of nonlinear materials. The Committee was originally established in 2005 to study mainly rubber-like and resin materials. The findings, including hyperelastic and viscoelastic tests of elastomers and high-speed tensile tests of resin materials, are available on the website. Its research activities have since diversified into all aspects of material nonlinearity, including metal plasticity. The members study everything from theory to practical application, including material testing methods, test data handling, and parameter identification techniques for practical constitutive equations.
The model tests at IOT and DTMB were documented with uncertainty estimates. The resistance results at DTMB [5] were in agreement with earlier test results for DTMB model 5415, which is the same size as the ITTC model. The DTMB model 5415 data were reported in 1982. The results of the model test are being applied for validation of computational codes with research primarily sponsored by the U. S. Navy Office of Naval Research (ONR).
A recent focus of the Committee is the development of a unified user subroutine that can be used regardless of the FE software. The first result of this two-year project is called UMMDp, the Unified Material Model Driver for Plasticity, which covers constitutive equations for anisotropic metal plasticity including von Mises, Hill, Gotoh, Barlat, Banabic, Cazacu, Karafillis & Boyce, and Vegter. The key feature of UMMDp is that it provides a unified interface routine for several commercial FE codes. The program is available for Abaqus, ANSYS, ADINA, LS-DYNA, Marc, and Radioss.
References: [1] ITTC, 2005, Proceedings of the 24th International Towing Tank Conference. [2] ITTC, 2008, Proceedings of the 25th International Towing Tank Conference. [3] ITTC, 2011, Proceedings of the 26th International Towing Tank Conference. [4] BIPM, 2008, Evaluation of Measurement Data - Guide to the Expression of Uncertainty in Measurement, Joint Committee for Guides in Metrology, Bureau International des Poids et Mesures, Paris, France.
The authors will introduce JANCAE focusing its V&V related activities.
[5] Park, Joel T., Ratcliffe, Toby J., Minnick, Lisa M., and Russell, Lauren E., 2010, "Test Results and Uncertainty Estimates for CEHIPAR Model 2716," Proceedings of the 29th American Towing Tank Conference, pp. 219-228.
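To illustrate the GUM-style combination of uncertainties underlying the ITTC procedures listed above, a short sketch with invented repeat-run resistance data and assumed Type B contributions (none of the numbers are ITTC, IOT, or DTMB data):

```python
import numpy as np

# GUM-style combination of uncertainties for a towing-tank resistance measurement.
repeats = np.array([45.2, 45.6, 45.1, 45.4, 45.3, 45.5])  # resistance (N), repeat runs

u_A = repeats.std(ddof=1) / np.sqrt(repeats.size)  # Type A: std uncertainty of the mean
u_cal = 0.10    # Type B: load-cell calibration (N), assumed
u_speed = 0.08  # Type B: carriage-speed contribution (N), assumed

u_c = np.sqrt(u_A**2 + u_cal**2 + u_speed**2)      # combined standard uncertainty
U = 2.0 * u_c                                      # expanded uncertainty, coverage factor k = 2
print(f"mean R = {repeats.mean():.2f} N, U(k=2) = {U:.2f} N "
      f"({100.0 * U / repeats.mean():.2f}% of the mean)")
```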
Standardization of Verification & Validation for Computational Weld Mechanics V&V2012-6116 Dave Dewees, The Equity Engineering Group, Inc., Shaker Heights, OH, United States, Garrett Sonnenberg, Newport News Shipbuilding, Newport News, VA, United States
The Japan Association for Nonlinear CAE and its V&V Related Activities V&V2012-6109 Takaya Kobayashi, Mechanical Design & Analysis Corporation, Chofu, Tokyo, Japan, Hiroto Ido, LMS Japan K.K, Yokohama, Kanagawa, Japan, Junji Yoshida, University of Yamanashi, Kofu, Yamanashi, Japan, Hideo Takizawa, Mitsubishi Materials Corporation, Kitamoto, Saitama, Japan, Kenjiro Terada, Tohoku University, Sendai, Japan
Distortion, residual stress and altered material properties are a fundamental outcome of the welding process. The welding community is largely forced to apply a trial and error approach to obtaining a required end product, which is costly and timeconsuming. Computational welding mechanics (CWM) has emerged over the last decades to try and address these challenges; that is to reduce risk, cost and span, while improving quality and predictability. The use of CWM has however lagged behind related technologies such as computational solid mechanics (CSM) and computational fluid dynamics (CFD). This is in large part due to a lack of a standard verification and validation framework, particularly since the typical CWM analysis is considerably more involved than the typical CSM analysis, for example.
The nonprofit organization JANCAE, the Japan Association for Nonlinear CAE (chairperson: Kenjiro Terada, Tohoku University), offers several activities to Japanese domestic companies, universities, and software vendors to promote a deeper understanding of nonlinear CAE, including its main activity, the nonlinear CAE training course held twice a year. A cumulative total of over 3,300 engineers participated in this training course from 2001 to 2011. The course is designed so that the participants can share a common opportunity and facilities, based on a well-organized scheme, to strengthen their understanding of nonlinear computational mechanics and CAE technologies in practical use.
In 2007, the American Welding Society (AWS) established a technical committee (A9) with members representing the academic, research and industrial communities to develop a standard for CWM. This standard has now been completed and is in the balloting stage, with publication expected in 2012. The standard specifically addresses verification and validation of CWM. Verification tests the chosen mathematics, while validation demonstrates that the reality being modeled (e.g. distortion, residual stress, microstructure) is predicted with sufficient accuracy, robustness and reliability. The motivation, content and lessons learned from this multi-year process are presented here.
To help ensure that the JANCAE deliverables can meet the needs of engineering analysis, an extensive survey of industry needs is underway. As a first step, respondents were invited to participate in the survey during the nonlinear CAE training course held in November 2011. A total of 122 completed surveys were received from many industry disciplines. To contrast the needs of domestic industry with the needs of overseas industry, the questions were carefully prepared using the survey report of EASIT2 as a guide. EASIT2 is a research project funded by the European Union Lifelong Learning Programme and the project partners. Respondents were asked to rate a number of issues concerning CAE as to what degree they saw them as a barrier to the use of
The Use of V&V at EDF in Support to the Safety Demonstration V&V2012-6115 Christian Chauliac, EDF (Electricité de France), Villeurbanne, France
about the instrumentation used. Measurement of quantities involving derivatives of a primitive variable (such as shear stress or heat flux) would be graded higher than measurements of primitive variables (e.g., velocity) or their integrals (e.g., drag coefficient). The submitting authors would be credited with a publication, and use of the data in a V&V study would result in a citation. Subscriptions to the database would be sold to libraries and other entities that commonly subscribe to journals. Revenue generated through subscriptions would be used to maintain the data and make it available to subscribers. The publisher would also provide web-based services that allow a user to choose the measurement variables of interest and to downsample a dataset prior to download.
EDF is one of the main operators of nuclear reactors in the world with more than 60 nuclear power plants in activity in France and abroad. In order to get the operation license, EDF has to provide the Safety Authorities with a complete safety demonstration of the plants. For that purpose, EDF has developed its own engineering capabilities and uses a large variety of computing codes in various disciplines such as neutronics, thermal-hydraulics, structural mechanics, material science (in particular, behavior of material under irradiation), chemistry and physical-chemistry.
The existence of this journal and its review process would force a shift in thinking among experimentalists interested in validation, as well as among funding agencies that pay for experiments, since a traditional experiment aimed at discovery of physics would be graded poorly by this journal. Modelers interested in V&V would find a resource that makes the quality of a dataset for its intended purpose clear. Most importantly, the longevity of the database would be assured by the commercial interest of the publisher.
At the beginning of the French nuclear program (in the 1970s), simple physical models and numerical methods were integrated into these computing codes. The approach to V&V was rough, but the weaknesses of the whole approach were compensated for by large conservatisms in the assumptions used for safety demonstrations.
VALIDATION METHODS
The progressive development of more efficient computing hardware has opened the way to the extensive use of new codes with more accurate physical models and numerical methods. In particular, the introduction of 3D approaches is a major step. The goal is to better assess the actual plant behavior under nominal and accidental situations and, eventually, to gain additional operational margins.
12-2 VALIDATION METHODS: PART 2 Wilshire A 1:30pm–3:30pm Session Chair: Christine Scotti, W.L. Gore & Associates, Flagstaff, AZ, United States Session Co-Chair: Atul Gupta, Medtronic, Inc., Santa Rosa, CA, United States
These new capacities of the computing codes, but also their complexity and the questions they raise, the large scope of scientific disciplines involved, the use of both in-house and external codes, and sometimes the distribution of calculation tasks between EDF and subcontractors, have all led EDF to progressively set up rules to handle the codes with more formalized quality requirements, mainly based on experience feedback.
Validating M&S with T&E Data: Pitfalls, Problems and Potential Solutions V&V2012-6100 David Hall, SURVICE Engineering Company, Ridgecrest, CA, United States
The presentation will provide some illustrations of these requirements. First of all the requirements for the computing codes themselves (including the numerical solvers, the preprocessing and the post-processing tools) will be presented. Emphasis will be put on V&V and some examples of V&V results obtained for EDF 3D codes will be shown. Second, the questions raised by the application of the codes to safety demonstrations will be presented with the associated complementary requirements.
When we think of validating models and simulations (M&S), we all think of comparing M&S outputs with test data. But how many of us have ever actually been successful in accomplishing that comparison in any repeatable and statistically significant way? How do you validate models and simulations of complex systems using equally complex testing with many fiscal and technical constraints on data collection? Validating M&S of these complex systems is difficult for a number of reasons, not least of which is obtaining validation data for all of the functionality in the system at the level of detail required to compare to M&S inputs and outputs.
The EDF internal organization set up for the observance of these requirements will be described.
A primary key to successfully validating M&S with T&E data is to involve the modelers and analysts up front in the development of test plans, and particularly test data collection plans. M&S validation requirements are usually much more stringent than the data requirements of the program actually conducting the test; if the modelers are not involved up front, then it is likely that the data required to validate the model either won't be collected, or won't be collected with sufficient accuracy. This paper will discuss approaches to validation of M&S with T&E data that were originally developed by a project sponsored by the Office of the Secretary of Defense (OSD); the methodology combines a functional element validation approach with end-to-end validation. Examples of validation data requirements as compared to system test requirements will be given, with some anecdotal illustrations from past programs.
A Long-Term Solution to a CFD V&V Database V&V2012-6259 Barton Smith, Utah State University, Logan, UT, United States While there is widespread agreement that a permanent database for computational fluid dynamics verification and validation (V&V) data is needed, attempts to create such a database have met with failure due to fluctuations in funding and/or commitment of personnel. To date, national labs have been chosen to house and administer the database. We propose a new entity analogous in most ways to a journal. The database would be administered by a publisher that would employ (on a voluntary basis) editors and associate editors. All submissions to the database would be peer reviewed. Datasets would be graded according to how well they meet the requirements for verification or validation. For instance, in the case of validation, a dataset receiving a high grade would contain all information necessary as input to current and future CFD models (boundary conditions, inflow and outflow conditions, material properties), uncertainties on all quantities, and information
Applying Risk-Based M&S VV&A Techniques to Test and Laboratory Facilities V&V2012-6101 David Hall, SURVICE Engineering Company, Ridgecrest, CA, United States, James Elele, Jeremy Smith, Naval Air Warfare Center, Aircraft Division, Patuxent River, MD, United States, Charles Pedriani, SURVICE Engineering Company, Lexington Park, MD, United States
To determine the stiffness of the flexible element, simulation data (FEA), an analytic approach, and measurements have been used. When the ram is neglected, the flexible element can be regarded as a Kirchhoff plate [5-7]. Neglecting the ram, applying a pressure load at the center of the plate, and using a fixed bearing at the edge of the plate are obvious simplifications, but the analytic approach can still be used to check the simulation data for plausibility. Within the simulation, the boundary conditions can be represented more realistically, since contact modeling and elastic bearings can be implemented. The final validation of both the simulation and the analytic approach can only be done with real measurements. For that purpose a heatable test rig was constructed in which, instead of a pressure, a variable but defined load can be applied. When evaluating measurement data from several identical elements, or from testing the same element several times, large mismatches in stiffness resulted. One reason for this is the design-related tolerance in applying the load at the center of the element. Considering only the design-related tolerances of this specific test rig, a radial variance of 0.7 mm can occur theoretically while all elements are within production tolerance; real measurements showed radial variances of even more, 1.1 mm. To solve this problem, a friction bearing that works over a large temperature range was constructed and integrated into the test rig, which now allows measurements to be compared with other data. The allowed radial variance was determined to be no larger than 0.1 mm. Using the friction bearing, repeated measurements resulted in radial variances smaller than 0.05 mm.
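As an example of the analytic plausibility check described above, the classical clamped circular Kirchhoff-plate result w = P R^2 / (16 pi D) gives the element stiffness under a central point load; all dimensions and material values below are assumptions, not those of the actual element.

```python
import math

# Flexible element idealized as a clamped circular Kirchhoff plate with a
# central point load: w_center = P * R**2 / (16 * pi * D),
# with flexural rigidity D = E * t**3 / (12 * (1 - nu**2)).
E = 210.0e9   # Young's modulus (Pa), steel assumed
nu = 0.3      # Poisson's ratio
t = 0.5e-3    # plate thickness (m), assumed
R = 5.0e-3    # plate radius (m), assumed
P = 10.0      # central load (N), assumed

D = E * t**3 / (12.0 * (1.0 - nu**2))   # flexural rigidity (N*m)
w = P * R**2 / (16.0 * math.pi * D)     # center deflection (m)
print(f"D = {D:.2f} N*m, w_center = {w * 1e6:.2f} um, stiffness = {P / w / 1e6:.2f} N/um")
```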
Because there can be serious negative consequences from acting on erroneous M&S and analysis conclusions, the DOD has developed and is exercising a risk-based process for verifying, validating and accrediting M&S used in system acquisition. Basing the M&S VV&A process on risk allows the practitioner to focus activities on the areas of greatest potential impact to the program, and on those areas that reflect the most uncertainty in M&S outputs. Test and laboratory facilities can have even greater potential negative consequences for a program than M&S if there are errors present in the test and analysis results; test results are usually considered closer to the truth than M&S results, and consequently errors in those test results are more likely to result in inappropriate decisions being made based on those results. We are applying the M&S VV&A process to test and laboratory facilities being used to support the testing of a new identification-friend-or-foe (IFF) system at the Naval Air Systems Command. The use of erroneous outputs from the test facility and analysis process (should they exist) could influence design and implementation decisions.
This work concentrates on minimizing tolerances in real measurements to make them reproducible and reliable as this is an important requirement in order to validate simulation data.
This paper will discuss how the risk-based M&S VV&A process is being applied to the test and laboratory facility, issues associated with this different application of the process, and thoughts on the broader applicability of risk-based VV&A beyond the current application.
We like to thank the Dr. Johannes Heidenhain Stiftung for enabling this work and supporting us financially. Further the author likes to thank his coauthors and colleagues for helpful critics and discussions. Special thanks to Heiner Kinscher, Gerhard Ribnitzky and his team.
Minimizing of Tolerances of a Test Rig for Physical Testing and Validation of Simulation Data V&V2012-6119 Thomas Ottnad, Franz Irlinger, Tim C. Lüth, MiMed, Technische Universität München, Garching bei München, Germany
References: [1] Ottnad, T., Irlinger, F., Lueth, T.C., 2011, Test Environment for an Elastic Mechanism - Related to a Kirchhoff Plate - for Usage as Pressure Sensor. In Proc. ASME 2011 International Mechanical Engineering Congress & Exposition (IMECE 2011), Denver. [2] Gepp, S., Ottnad, T., Irlinger, F., Lueth, T.C., 2011, Fabrication of Micro-Dies for Extrusion of Polymer Melts, in Proc. SPE 2011 Annual Technical Conference of the Society of Plastics Engineers (ANTEC 2011), Boston, 1243-1247.
As computing technologies advanced in great leaps during the past decades and this progress is still going on a lot of computer simulations are well established today. Without the benefits of Computer-Aided Engineering tools the design of a lot of products would not be possible just thinking about lightweight design to give an example.
[3] Gepp, S., Ottnad, T., Kessling, O., Irlinger, F., Lueth, T.C., 2011, Druckabhängigkeit des Massenstroms von Polypropylenschmelzen durch Mikrodüsen kleiner 500 Mikrometer, Chemie Ingenieur Technik, 83(4), 552557. [4] T. Ottnad, S. Gepp, F. Irlinger, and T.C. Lueth, Optical Analysis of Extrudate Swelling of Polymer Melts, in Proc. SPE 2011 Technical Conference of the Society of Plastics Engineers (EUROTEC 2011), Barcelona.
Although most software tools are getting more and more reliable criticism of simulation data is mandatory. Proof of plausibility and accuracy are important requirements in order to apply simulation data on realistic problems.
[5] Kirchhoff, G, 1850, Über das Gleichgewicht und die Bewegung einer elastischen Scheibe, - Journal für die reine und angewandte Mathematik (Crelles Journal), 40, 51-88. [6] Altenbach, H., Altenbach, J., Naumenko, K., 1998, - Ebene Flächentragwerke. Grundlagen der Modellierung und Berechnung von Scheiben und Platten. Springer, Berlin.
An example can be seen in a flexible element which can be used as a pressure sensor element. Such a flexible element was presented at ASME IMECE 2011 [1] and is based on the idea to calculate the pressure of a fluid, for example a plastic melt during injection molding, by measuring the deflection of the element and knowing its stiffness. As control of pressure is a decisive part in plastics processing providing an easy way to integrate pressure sensors even when changing the design of the injection system is aimed for. Knowing the pressure of a plastics melt in injection molding allows to carry out investigations concerning design of nozzles, drop of pressures in nozzles, and swelling of the melt [2-4]. To keep this flexible element as simple as possible it is designed as a thin plate with a ram on its outer side.
[7] Timoshenko, S., Woinonwsky-Krieger, S., 1987, Theory of Plates and Shells, McGraw Hill, New York.
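The analytic plausibility check mentioned in the abstract above can be scripted in a few lines. A minimal sketch, assuming a clamped circular Kirchhoff plate under a central point load; all dimensions and material values are illustrative placeholders, not those of the actual sensor element:

```python
import math

# Illustrative values only; the real element's geometry and material are not given here
E = 210e9      # Young's modulus, Pa (assumed steel-like)
nu = 0.3       # Poisson's ratio (assumed)
h = 0.5e-3     # plate thickness, m (assumed)
a = 5.0e-3     # plate radius, m (assumed)
P = 10.0       # central point load, N (assumed)

# Flexural rigidity of a Kirchhoff plate
D = E * h**3 / (12.0 * (1.0 - nu**2))

# Center deflection of a clamped circular plate under a central point load,
# a classical result (see e.g. Timoshenko and Woinowsky-Krieger [7])
w_center = P * a**2 / (16.0 * math.pi * D)

# Effective stiffness used to cross-check an FEA result for plausibility
k_analytic = P / w_center
print(f"center deflection = {w_center * 1e6:.2f} um, stiffness = {k_analytic / 1e6:.3f} N/um")
```

An FEA stiffness that deviates from such an estimate by an order of magnitude would point to an implausible model or load case, even though exact agreement is not expected because of the simplified boundary conditions.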
Validation and Verification of Simulations that Incorporate Battlefield Behaviors V&V2012-6125 Jeffrey Smith, US Army Research Laboratory, WSMR, NM, United States, Jayashree Harikumar, Raymond F. Bernstein, Jr., New Mexico State University, Las Cruces, NM, United States
The U.S. Army employs simulation models to, among other uses, support its system acquisition decisions and to inform its assessments of vehicle survivability, vulnerability, and lethality. Most often, these simulations are of complex physics phenomena, for example ballistic fragmentation; however, recent advances in computer technology allow these simulations to include warfighter-induced battlefield behaviors. The inclusion of battlefield behaviors substantially increases the complexity of the simulations. Increased complexity creates confidence issues for both the analyst and the evaluator who use these simulations to address their analytical issues. One approach that addresses these confidence issues is the process of validation, verification and ultimately the accreditation of the simulation.
Validation and verification (V&V) of a simulation that models a system or a particular physical phenomenon is often complex but is ultimately deterministic. Battlefield or warfighter behavior is unpredictable, so how does one address V&V of a simulation that introduces battlefield behaviors to aid the study of engagement-level effects on platform survivability, vulnerability and lethality? In this paper, we suggest an approach that structures developmental activity for these simulations to support V&V, and show how this support aids in accreditation. We contend that V&V of simulations that include battlefield behaviors depends not only on the physical and behavioral battlefield phenomena, but also on the specific analytical questions to be answered by the study of those phenomena.
A Design for a V&V/UQ Discovery, Accumulation, and Assessment Process V&V2012-6174 Patrick Knupp, Keith Dalbey, Dena Vigil, Sandia National Laboratories, Albuquerque, NM, United States
An important but often overlooked set of issues arises when applying Verification, Validation, and Uncertainty Quantification to large modeling and simulation projects consisting of many different team members, codes, experiments, and levels of fidelity. How does one communicate between team members when there is no agreed-upon VVUQ terminology? How does one communicate the results of VVUQ to those outside the project in a transparent manner? How does one assess the work and measure progress towards completion? How can the work done be re-used later? How can one ensure that the right VVUQ data is collected? How can this data be assembled into a coherent narrative that increases confidence of the end-user? A preliminary step towards providing assistance to those who must deal with these issues is described in this talk and comes in the form of interactive software connected to a database. The software is used to query team members to collect, organize, assess, and present VVUQ data in the proper context. Use of the software adds value to the project in that it manages the VVUQ effort in its full complexity and combines the knowledge of multiple team members into a coherent whole.
Validation of Numerical Simulation Code to Predict Vortex-Induced Vibration around an Elastic Circular Cylinder V&V2012-6047 Daehun Song, Qiang Xu, Naoki Nishikawa, The University of Tokyo, Tokyo, Japan, Satoshi Someya, Agency of Industrial Science and Technology, Tsukuba, Ibaraki, Japan, Koji Okamoto, The University of Tokyo, Tokyo, Japan
Vortex-Induced Vibration (VIV) is one of the important phenomena in Fluid-Structure Interaction (FSI) for various industries such as civil engineering, mechanical engineering, and nuclear engineering. Since VIV is a quite complicated phenomenon, due to the complex dependence of vortex shedding on the oscillation induced by the vortices themselves, precise verification and validation are required. There are various experimental and numerical studies of VIV, but there are few studies that compare the two from a V&V viewpoint. For this research, the commercial program ANSYS CFX 12.0 was used for the numerical simulation. The natural frequency of the experimental system was precisely measured and applied to the numerical system, and the reduced damping was measured in both the experiment and the numerical simulation. In the numerical simulation, several turbulence models such as k-epsilon, RNG k-epsilon, k-omega, and SST (Shear Stress Transport) were used, and sensitivity tests were carried out with the mesh type and the non-dimensional time step varied. In the experiment, the Reynolds number ranged from 5.0E3 to 2.5E4 and the reduced velocity ranged from 0.8 to 4.0. The vortex shedding frequencies and the flow fields around an elastic cylinder were measured by the dynamic PIV measurement technique, a recent flow measurement technique with high spatial and temporal resolution.
The non-dimensional vortex shedding frequency and the Strouhal number were compared, as functions of reduced velocity, between the experimental results and the numerical simulation. The results show that not only the natural frequency but also the damping is an important factor for accurate simulation of VIV. In the numerical system, numerical damping causes a high damping factor and produces differences from the experiment.
COFFEE BREAK/EXHIBITS Celebrity Ballroom 2 3:30pm–4:00pm
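Referring back to the vortex-induced vibration abstract above: the comparison quantities are simple non-dimensional groups, and the post-processing can be sketched as follows. All numerical values are illustrative placeholders, not data from the study:

```python
def strouhal(f_shed, diameter, velocity):
    """Non-dimensional vortex shedding frequency St = f*D/U."""
    return f_shed * diameter / velocity

def reduced_velocity(velocity, f_natural, diameter):
    """Reduced velocity Ur = U/(fn*D), the usual abscissa of VIV response curves."""
    return velocity / (f_natural * diameter)

def reynolds(velocity, diameter, nu=1.0e-6):
    """Reynolds number, assuming water-like kinematic viscosity nu ~ 1e-6 m^2/s."""
    return velocity * diameter / nu

# Illustrative test point (not measured data from the paper)
U, D, fn, f_shed = 0.30, 0.02, 5.0, 3.0   # m/s, m, Hz, Hz
print(strouhal(f_shed, D, U), reduced_velocity(U, fn, D), reynolds(U, D))
```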
VERIFICATION FOR FLUID DYNAMICS AND HEAT TRANSFER
7-1 VERIFICATION FOR FLUID DYNAMICS AND HEAT TRANSFER: PART 1 Sunset 5&6 4:00pm–6:00pm Session Chair: Yassin Hassan, Texas A&M University, College Station, TX, United States Session Co-Chair: Dimitri Tselepidakis, ANSYS Inc., Lebanon, NH, United States
Verification Procedures for Blood Damage Modeling V&V2012-6019 Marc Horner, ANSYS, Inc., Evanston, IL, United States
Blood damage is a critical aspect of establishing the hemocompatibility of blood-contacting devices, especially blood pumps and heart valves. Devices with poor flow features may incur high rates of hemolysis and/or platelet damage/activation. The former reduces the oxygen-carrying capacity of blood while the latter increases the potential for blood clots. Either lowers hemocompatibility and therefore the long-term clinical benefit of the device.
Computational fluid dynamics (CFD) has become an important tool for investigating blood damage because it provides a more controlled way for comparing designs versus experiments, where variability in the blood supplied is a concern. The current state of the art is to calculate the fluid flow and pressure profiles using CFD and then integrate a blood damage equation along flow paths from the inlet to the outlet.
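The integration step described above can be illustrated for the simplest case of a constant-shear (Couette-like) exposure. The sketch below assumes a generic power-law damage model HI = C * t^alpha * tau^beta with placeholder coefficients (they are not the published correlation values discussed in the next paragraph), and contrasts a step-size-dependent accumulation scheme with a time-linearized one that recovers the closed form; it is not the author's implementation:

```python
# Illustrative power-law coefficients (placeholders, not the published correlation values)
C, alpha, beta = 1.0e-5, 0.8, 2.4

def hi_closed_form(tau, t):
    """Closed-form damage index for constant shear stress tau and exposure time t."""
    return C * t**alpha * tau**beta

def hi_naive(tau_history, dt):
    """Common pitfall: summing C*dt^alpha*tau^beta per step; the result depends on dt."""
    return sum(C * dt**alpha * tau**beta for tau in tau_history)

def hi_dose_based(tau_history, dt):
    """Time-linearized ('dose') accumulation, which recovers the closed form
    for constant shear regardless of the step size."""
    dose = sum((C**(1.0 / alpha)) * tau**(beta / alpha) * dt for tau in tau_history)
    return dose**alpha

tau, t_total, n = 100.0, 0.1, 1000          # constant 100 Pa for 0.1 s, illustrative
history = [tau] * n
dt = t_total / n
print(hi_closed_form(tau, t_total))          # reference value
print(hi_naive(history, dt))                 # step-size dependent, does not match
print(hi_dose_based(history, dt))            # matches the closed form
```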
The hemolysis index equation of Giersiepen et al. (Int. J. Art. Organs, 1990) is most commonly used to estimate damage to red blood cells. This equation relates hemoglobin release to shear stress and exposure time via a power-law equation. The non-linear time dependence has been a significant source of error in the literature and is the focus of this presentation. The author will present common pitfalls of implementing the damage equation along with two examples that permit verification of user-implemented algorithms via closed-form analytical solutions. A Couette flow is the first example and Hagen-Poiseuille flow is the second. Couette flow is convenient because the shear stress is constant throughout the domain, and Hagen-Poiseuille flow is reminiscent of the tubing that connects the pump to the arterial tree.
It is also important to establish the correct way to report blood damage. The author will therefore present a brief motivation for using mixed-cup measures to report blood damage, as this quantity is not discussed in the literature but is required to account for weighting the blood damage at the outlet with the exit flow profile.
Once verification is complete, one can move to the testing and validation phase for the device with confidence in the underlying numerical implementation.
Code Verification with Manufactured Solutions for Hydrodynamic Analysis Software Based on Lagrangian Numerical Method V&V2012-6023 Zhang Shudao, Wang Ruili, Li Hua, Ma Zhibo, Zhou Haibing, Institute of Applied Physics and Computational Mathematics, Beijing, China
In this paper, some simple three-dimensional manufactured solutions for code verification are presented for hydrodynamic analysis software based on Lagrangian-type numerical methods.
The numerical methods can be divided generally into two types according to the movement of the mesh. One is Eulerian, where the mesh is fixed, so material can move in and out of each cell. The other is Lagrangian, where the mesh moves with the material velocity; thus no convection computation is needed and the material interface can be described explicitly by the edges of the grid, which is very important for multimaterial problems, such as structural response under strong impact and implosion problems, where sharp material interfaces and material movements are of great concern. Lagrangian methods are also the key basis for arbitrary Lagrangian-Eulerian (ALE) methods, which are becoming more and more popular in the science and engineering computation community.
The spatial discretization of Lagrangian methods is performed in Lagrangian space for the so-called material derivatives, so the mesh moves and deforms in Eulerian space. That is to say, Lagrangian methods are in fact moving-mesh methods in Eulerian space. With the help of the transformation between the two spaces, we have constructed some simple manufactured solutions for the three-dimensional compressible Eulerian equations. These solutions are for isentropic flow fields, such as a divergence-free vortical field and pure stretching fields with zero and with non-zero divergence. The advantages of these manufactured solutions are that they are very concise and the additional source terms are simple, so they are very easy to apply. Furthermore, these solutions strictly obey the original equation of state. Some problems are discussed concerning the application of these solutions to code verification by way of convergence tests. It should be noted that the underlying concepts for the convergence metric in Lagrangian space and in Eulerian space are quite different. For Lagrangian simulation, a Lagrangian convergence metric should be used because the grid in Lagrangian space is fixed. Analytical integration of the additional source terms over the cell is preferred if possible; at a minimum, the integration must be precise enough, using a numerical method of sufficiently high order accuracy.
The manufactured solutions of this paper have been applied to some hydrodynamic analysis codes based on staggered-grid or cell-centered Lagrangian methods. The effectiveness and efficiency of the manufactured solution method for code verification have been demonstrated.
A Coefficient Based Source Term Evaluation for the Error Transport Equation V&V2012-6027 Ismail Celik, West Virginia University, Morgantown, WV, United States
The quantification of numerical uncertainty in computational fluid dynamics (CFD) almost always requires an estimation of the discretization errors involved in the numerical solution of the governing partial differential equations. This error can be estimated using the well-known Richardson Extrapolation (RE) method or variants of it. RE typically requires 3-4 sets of geometrically similar meshes with refinement ratios of at least 1.3. Due to some fundamental problems in reaching the asymptotic convergence range and, in many cases, the high cost involved, it is desirable that the discretization error be estimated on a single grid. The challenging problem in this approach is the accurate evaluation of the source term in the error transport equation (ETE) formulated for the discretization/iteration error in the variable that is being calculated on a given mesh. The present work focuses on practical ways of evaluating this source term. Several possibilities are presented with some applications to demonstrate the feasibility of these new methods.
A New Extrapolation-Based Method for Estimating Grid-Related Uncertainties in CFD V&V2012-6040 Tyrone Phillips, Christopher J. Roy, Virginia Tech, Blacksburg, VA, United States
Accurate quantification of discretization error requires that simulation results computed on systematically refined meshes converge at near the theoretical rate specified by the discretization scheme (i.e., the solution is in the asymptotic range). The asymptotic range is very difficult to reach for simulations of even modest complexity, which introduces additional uncertainty in discretization error estimates. The additional uncertainty is quantified using discretization uncertainty estimators, which generally consist of taking the absolute value of the error estimate and multiplying by a factor of safety. The uncertainty estimate is a +/- bound around the simulation result with the goal of bracketing the exact solution for 95 percent of the estimates. A method has been developed for Richardson extrapolation error and uncertainty estimators to quantify the accuracy of the estimates as a function of the distance of the solution from the asymptotic range. The goal of this research is to investigate the accuracy of Richardson extrapolation and a new uncertainty estimator for solutions to the Navier-Stokes and Euler equations for different unstructured grid topologies including quadrilaterals, hexahedra, and tetrahedra. The accuracy of the discretization error and uncertainty estimates is quantified using the effectivity index, which is the ratio of the estimated error (or
uncertainty) divided by the exact error. The exact solution to the governing equations is required to compute the effectivity index. Since the Euler and Navier-Stokes equations have limited exact solutions, the Method of Manufactured Solutions (MMS) is used. The Method of Manufactured Solutions is a code verification procedure where the exact solution is chosen a priori and should be smooth, compliant with boundary conditions, and physically realizable. Included in the study are the Euler equations, the laminar Navier-Stokes equations, and the turbulent Navier-Stokes equations using the k-omega and k-epsilon turbulence models. In total, each separate grid topology studied will include at least 30 error estimates. Solutions to the Euler and Navier-Stokes equations are computed using Loci-Chem, an unstructured, finite-volume flow solver, with quadrilateral, hexahedral, prismatic, and tetrahedral mesh topologies. Solutions computed with an in-house structured, finite-volume flow solver are also included. Preliminary results show similar error and uncertainty estimate accuracies when comparing results from the structured solver to results solved using Loci-Chem with mixed grid topologies. The results also show that the proposed uncertainty estimator bounds the exact solution for at least 95 percent of the estimates regardless of solver and grid topology.
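Several abstracts in this session refer to Richardson extrapolation, the observed order of accuracy, and factor-of-safety-based uncertainty estimates. A minimal three-grid sketch, assuming a constant refinement ratio and the commonly used factor of safety of 1.25 (an assumption here, not a value taken from these abstracts):

```python
import math

def observed_order(f_coarse, f_medium, f_fine, r):
    """Observed order of accuracy p from three solutions on grids with constant refinement ratio r."""
    return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

def richardson_extrapolate(f_medium, f_fine, r, p):
    """Estimate of the zero-spacing solution from the two finest grids."""
    return f_fine + (f_fine - f_medium) / (r**p - 1.0)

def gci_fine(f_medium, f_fine, r, p, fs=1.25):
    """Grid Convergence Index on the fine grid (relative uncertainty estimate)."""
    return fs * abs((f_medium - f_fine) / f_fine) / (r**p - 1.0)

# Illustrative values only (e.g. a drag coefficient computed on three grids)
f3, f2, f1, r = 1.120, 1.050, 1.020, 2.0     # coarse, medium, fine solutions; ratio
p = observed_order(f3, f2, f1, r)
print(p, richardson_extrapolate(f2, f1, r, p), gci_fine(f2, f1, r, p))
```

With a manufactured solution available, the effectivity index discussed above is then simply this estimated error (or uncertainty) divided by the exact error.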
A CFD Benchmark Study with an Analysis of Richardson Extrapolation in the Presence of a Singularity V&V2012-6072 Stéphane Gounand, Commissariat à l’Energie Atomique, Gif-sur-Yvette, France, Xavier Nicolas, Université Paris-Est, Marne-la-Vallée Cedex 2, France, Marc Medale, Institut Universitaire des Systèmes Thermiques Industriels, Marseille Cedex 13, France, Stéphane Glockner, Université de Bordeaux, Pessac Cedex, France
Richardson Extrapolation (RE) is a 100-year-old numerical tool which has proved to be useful in assessing the quality of numerical solutions obtained on successively refined grids. Its use has been recommended in various contexts, such as verification and validation of computer codes or best-practice guidelines for computer code users.
Our particular study lies in the field of Computational Fluid Dynamics (CFD), where a benchmark solution to a stationary 3D mixed convection flow in an open-ended cavity is sought. Four contributors, using four different computer codes and various discretization methods, provided numerical results for several quantities of interest: local extrema, boundary fluxes and integral values of the solution. RE was systematically applied to all of these quantities.
In our particular case, some difficulties are encountered while interpreting the behaviour of the RE process, especially the observed convergence order behaviour. Depending on the discretization method used and on the observed quantity of interest, the RE process can fail, or give a convergence order which is equal to, or smaller than, the consistency order. It is found that these difficulties can be mainly attributed to a singularity in the boundary conditions of the considered case. Singularities are often encountered in practical cases of interest; they are the norm rather than the exception. These singularities can be due to various causes: geometric (re-entrant corners), modeling hypotheses (boundary conditions, simplified models) or inherent to the solution (shocks). In this talk, we will show that, using a simple model function, we are able to analyse and reproduce various behaviours of the RE process observed in our singular benchmark case. In turn, the analysis provides some guidelines on the practical use of Richardson Extrapolation and some insights on when one can trust the extrapolated solution. In particular, we show that there are cases where RE can still be useful in reducing discretization errors, despite the fact that the numerical solutions are not in the asymptotic convergence range. We emphasize that the analysis presented here is quite general and not confined to our particular CFD case. Thus, it should be of interest to any computer code practitioner eager to assess the quality of his or her numerical results.
On the Use of Method of Manufactured Solutions for Code Verification of RANS Solvers Based on Eddy-Viscosity Models V&V2012-6140 Luís Eça, Instituto Superior Técnico, Lisbon, Portugal, Martin Hoekstra, Guilherme Vaz, MARIN, Wageningen, Netherlands
This presentation discusses the use of Manufactured Solutions for Code Verification of Reynolds-Averaged Navier-Stokes (RANS) solvers. In this exercise we will focus on time-averaged (statistically steady), incompressible flows. Recently, we have developed several Manufactured Solutions (MS) that mimic a near-wall turbulent flow. The proposed analytical functions cover the mean flow quantities and the dependent variables of several eddy-viscosity turbulence models: namely, the undamped eddy-viscosity of the Spalart & Allmaras and Menter one-equation models, k^(1/2)L from the one-equation (SKL) and two-equation (KSKL) models proposed by Menter, and the turbulence kinetic energy and turbulence frequency included in two-equation k-w models. The turbulence quantities are defined from automatic wall functions, and so they are supposed to reproduce the expected behaviour of these variables. All flow fields satisfy mass conservation, i.e., the mean velocity fields are divergence-free.
We address three types of exercises:
1. Calculation of the continuity and momentum equations with a manufactured eddy-viscosity field.
2. Calculation of the turbulence quantities transport equations with the manufactured mean flow field.
3. Calculation of the complete system of equations.
Two main topics are discussed: the effect of the turbulence model on the convergence properties of the RANS solver, and the difficulties imposed on the Method of Manufactured Solutions by the fact that, physically, all turbulence quantities must remain positive.
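As a generic illustration of the Method of Manufactured Solutions workflow discussed in these abstracts (not the specific manufactured solutions of either paper), the additional source term can be generated symbolically. A minimal SymPy sketch for a 1D steady diffusion operator with an assumed solution:

```python
import sympy as sp

x, k = sp.symbols('x k', positive=True)

# Choose a smooth manufactured solution compatible with the boundary conditions of interest
u_manufactured = sp.sin(sp.pi * x) * sp.exp(x)

# Governing operator L(u) = -d/dx( k du/dx ); the required source term is s = L(u_manufactured)
source = sp.simplify(-sp.diff(k * sp.diff(u_manufactured, x), x))
print(source)

# The source term is added to the discrete equations of the code under test; the
# discretization error is then u_numerical - u_manufactured, evaluated on refined grids.
source_fn = sp.lambdify((x, k), source)
```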
VALIDATION METHODS FOR MATERIALS ENGINEERING
8-2 VALIDATION METHODS FOR MATERIALS ENGINEERING: PART 2 Sunset 3&4 4:00pm–6:00pm Session Chair: Boris Jeremic, University of California, Davis, CA, United States Session Co-Chair: Krishna Kamojjala, University of Utah, Salt Lake City, UT, United States
Residual Stress Relief in a Tri-Axially Braided Composite V&V2012-6042 Timothy D. Breitzman, David H. Mollenhauer, Air Force Research Laboratory, WPAFB, OH, United States, Endel V. Iarve, Eric G. Zhou, University of Dayton Research Institute, Dayton, OH, United States
One of the current materials grand challenges is how to shorten the discovery-to-use timeline (currently 15-20 years for composites). Reliable prediction of strength and failure of composite materials is at the forefront of accelerating this qualification timeline. Understanding the distribution and magnitude of residual strain is an important milestone on the road to strength and failure prediction in textile and non-textile composite materials. Textile composites present specific challenges in accurate morphology prediction and representation within a computational stress analysis tool due to their complex structures resulting from the weaving process, the lay-up process, and the cure process. One way to determine local residual strains is to cut the composite with a saw, releasing the residual strains along the cut. The resulting deformation can then be related to local strain by various surface measurement techniques such as digital image correlation or moiré interferometry. In the current contribution, actual fiber tow morphologies were determined from X-ray computed tomography and were compared to those obtained from predictive simulation. Additionally, the local strain fields in a tri-axially braided composite arising in the vicinity of a saw cut due to release of thermal processing stresses were experimentally measured using the moiré interferometry technique and were compared to those obtained by 3D stress analysis. The digital chain method was used to define 3D solid models of the “as-processed” tow morphology and the independent mesh method (IMM) was used for stress analysis. A significant degree of morphological detail was required to achieve satisfactory agreement with experimental data. Three levels of morphological refinement are presented herein, including (i) correct tow path angle and curvature variation based on braid parameters, (ii) addition of the effect of compaction during the cure stage, and (iii) addition of the surface sanding effects during the moiré preparation stage. The results obtained by using the digital chain method in conjunction with the IMM were able to capture the sharp variations of the strain components observed by using the moiré interferometric technique, both in terms of spatial distribution and in magnitude, and provide accurate evaluation of the residual strain levels in tri-axially braided composites.
Probabilistic Modeling of Hydride-Assisted Cracking in CANDU Zr-2.5%Nb Pressure Tubes by Means of Linear and Non-Linear Multi-Variable Regression Analysis V&V2012-6058 Leonid Gutkin, Doug Scarth, Kinectrics, Toronto, ON, Canada
CANDU Zr-2.5%Nb pressure tubes are susceptible to hydride-assisted cracking (HAC) at locations of stress concentration, such as in-service flaws, where hydrided regions may develop, grow and fracture under applied loads, thereby leading to crack initiation. Repeated formation, growth and fracture of hydrided regions may result in crack propagation and, eventually, pressure tube rupture. Consequently, HAC requires major attention in probabilistic assessments of the CANDU reactor core performed according to Canadian nuclear standards. Such assessments rely on predictive models with probabilistic capabilities. This presentation illustrates how multi-variable regression analysis may be applied to different aspects of probabilistic modeling of HAC in CANDU reactors.
Linear multi-variable regression analysis has been applied to develop probabilistic predictive models for the HAC growth rate. Radial and axial rates are used to estimate the crack propagation across the tube wall and along the tube length, respectively. Both rates had previously been described by Arrhenius-type relations with temperature as a single explanatory variable. As more experimental data were obtained, it became possible to assess the potential effects of other variables. The effects of operating parameters, such as temperature, flux and time, have been found statistically significant, and the observed trends are consistent with our fundamental understanding of how changes in the Zr-2.5%Nb microstructure during operation would affect the HAC growth rate. The developed model for the axial HAC growth rate has been incorporated into the CSA Standard N285.8-10 as the representative predictive model.
Non-linear multi-variable regression analysis has been applied to develop probabilistic predictive models for HAC initiation due to hydrided region overload. Hydrided region overload occurs when the applied stress acting on a flaw with an existing hydrided region exceeds the stress at which the hydrided region was formed. The resistance of Zr-2.5%Nb to crack initiation due to hydrided region overload has been assessed statistically using relevant experimental data under ratcheting hydride formation conditions. The assessment results have been found consistent with our fundamental understanding of hydrided region overload, as well as with previous mechanistic modeling. The results have also been used to develop a comprehensive experimental program to further investigate the overload behavior of CANDU pressure tube material.
Simulation Validation of Functionalized Nanoporous Silica for Energy Absorption V&V2012-6062 Aijie Han, University of Texas-Pan American, Edinburg, TX, United States
Recently, a novel class of nanoporous materials with high energy density has drawn increasing attention for defense applications and for materials science and engineering. When hydrophobic nanoporous materials, e.g., silicalites or nanoporous silicas, are immersed in water at atmospheric pressure, the liquid phase cannot enter the nanopores due to the capillary effect. As the pressure increases to a critical value, pressure-induced infiltration can occur. As the pressure is reduced, for reasons that are still under investigation, the confined liquid remains in the nominally energetically unfavorable nanopores and, therefore, the excess solid-liquid interfacial energy cannot be released. Since the specific areas of the nanoporous materials, A, are usually in the range of 100-1000 m2/g, the efficiency of energy absorption of these systems, E = Δγ × A, can be much higher than that of conventional energy-absorbing materials such as reinforced polymers and shape memory alloys, with Δγ being the excess solid-liquid interfacial tension. This technique has immediate applicability in the development of car bumpers, soldier armor, protective layers in buildings and bridges, and healthcare products, to name a few. To adjust the infiltration pressure over a broader range and increase the recoverability, we investigated the selectivity, reusability and controllability of the system by adding chemical admixtures.
In previous simulation studies on the fundamental infiltration and defiltration mechanisms of nanofluidics, the liquid molecules were placed in vacuum nanotubes or nanochannels, and the studies focused on liquid-solid interactions, with the important role of the gas phase being ignored. A conceptual design of such an energy conversion/storage system is investigated here by using molecular dynamics (MD) simulations. We show that the gas-liquid interaction can be an indispensable factor in nanoenvironments. During pressure-induced infiltration, gas molecules in relatively large nanochannels can be dissolved in the liquid, leading to the phenomenon of non-outflow, while at a reduced pressure, gas molecules tend to form clusters in relatively small nanochannels, which triggers liquid defiltration. The results qualitatively validate the observations of liquid infiltration and defiltration in nanoporous silica gels. The simulation shows analogous liquid behaviors but does not provide a quantitative validation of the numerical results. More work is in progress to better match simulation with experiment.
Complementary Measurement Techniques for the Validation of Numerical Weld Models V&V2012-6110 Cory Hamelin, Ondrej Muránsky, Vladimir Luzin, Philip J. Bendeich, Lyndon Edwards, Australian Nuclear Science and Technology Organisation, Lucas Heights, NSW, Australia, René V. Martins, Joint Research Centre, Petten, Netherlands, Mike C. Smith, EDF Energy Nuclear Generation, Barnwood, Gloucester, United Kingdom
Predicting the residual stress field in fabricated components is a common goal in many safety-critical engineering disciplines, particularly when joining methods must be considered. Several research programs are currently underway that focus on the assessment and improvement of such computational methods; however, devising a standard technique for the validation of these analyses remains elusive, primarily due to variations in component size and complexity. In many cases, a combination of residual stress measurement techniques is employed to ensure accurate data acquisition.
Examples of complementary measurement techniques relating to conventional welding of components for the power generation industry are presented, with the aim of highlighting the benefits and shortcomings of some methods. This overview includes non-destructive surface X-ray, bulk neutron and synchrotron X-ray diffraction techniques, as well as invasive contour and deep-hole drilling techniques. Specific examples are provided to highlight the level of accuracy achieved both in the residual stress data acquired through the measurement techniques employed and in the predictions obtained via numerical weld models.
First, a benchmark study is presented where the residual stress field in a three-pass slot weld was measured via neutron and synchrotron X-ray diffraction techniques [1]. The results from each analysis were found to be in good agreement, suggesting the techniques provide accurate residual stress measurements. The results were then compared with a finite element analysis (FEA) of the welding procedure; it was discovered that the choice of material work hardening behavior used in the FEA can have a profound impact on the predicted residual stress field, depending on the amount of thermo-mechanical cyclic loading applied to the weldment. This work serves as a best-practice case study for the simulation of multi-pass welds in austenitic steels. The second example presented is an analysis of the weld overlay on a safety relief valve (SRV) for EDF Energy [2]. This study was part of an operational justification for the use of such overlays to prevent cracking in the valve. Due to the size of the SRV, incremental center-hole drilling and deep-hole drilling techniques were used to measure residual stresses in mock-up valves prior to and following application of the weld overlay. Multi-pass FEA of the dissimilar-metal weld was also performed to support the measured data; all three results were found to be in good agreement. References: [1] O. Muránsky, C.J. Hamelin, M.C. Smith, P.J. Bendeich, L. Edwards, Computational Materials Science, Vol. 54, 2012, pp. 125-134. [2] O. Muránsky, M.C. Smith, P.J. Bendeich, L. Edwards, Computational Materials Science, Vol. 50, 2011, pp. 2203-2215.
Validation of a Complex Pressure Vessel Integrity Assessment Using In-Service Data V&V2012-6178 Dave Dewees, Robert G. Brown, The Equity Engineering Group, Inc., Shaker Heights, OH, United States
Many advanced numerical and evaluation techniques are available for the assessment of pressure equipment, and these techniques are often combined when dealing with the most challenging integrity problems. Examples range from non-linear Finite Element Analysis (FEA) and advanced material modeling to determination of critical flaw sizes to avoid sudden brittle fracture. Each of these techniques has uncertainty associated with it, which in isolation should be adequately understood. When these techniques are combined, however, direct verification and validation (V&V) of the overall analysis and results are generally not possible. This presentation, rather than focusing on a micro-level V&V of individual parts of the analysis (which is of course important), offers a macro-level V&V based on extended in-service data from a series of process vessels subject to repeated thermal-mechanical cycling and cracking. Detailed simulation of weld residual stress, local post-weld heat treatment, and operating thermal and mechanical cycling is used as input to fatigue, crack growth and fracture assessments, and the results are compared with historical crack initiation and growth data over a 10-year operating history.
Verification of Welding Power Input in Two-Dimensional Welding Simulation V&V2012-6181 Dave Dewees, The Equity Engineering Group, Inc., Shaker Heights, OH, United States
Welding residual stress and distortion are caused by the non-uniform plastic strain in a structure due to drastic temperature gradients and the corresponding non-uniform thermal expansion. With this in mind, proper heat transfer modeling is fundamental to meaningful mechanical predictions. The fundamental input to the heat transfer model is the welding arc power, which is commonly represented as an assigned triple Gaussian function (Goldak double-ellipsoid model) or, more simply, as a uniform temperature. Methods for verification relative to the intended welding power are presented for the special case of 2D analysis, and results for the two methods are compared. This evaluation finds particular significance when the welding power, or more particularly the welding energy per unit length, is used in an attempt to characterize a given weld.
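The power-input check described in the last abstract can be illustrated by integrating an assumed volumetric heat source over the material and comparing the result with the intended arc power. The sketch below uses one common form of the Goldak double-ellipsoid source; the functional form, parameter values, and axis conventions are assumptions for illustration, not taken from the paper:

```python
import numpy as np

# Illustrative Goldak double-ellipsoid parameters (assumed values, not from the paper)
Q = 2000.0            # intended effective arc power, W
a = 4.0e-3            # half-width, m   (y, transverse)
b = 3.0e-3            # depth, m        (z, into the plate; z <= 0 is material)
cf, cr = 3.0e-3, 6.0e-3   # front/rear semi-axes, m (x > 0 front, x < 0 rear)
ff, fr = 0.6, 1.4         # front/rear fractions, ff + fr = 2

def goldak(x, y, z):
    """Volumetric heat source [W/m^3]; front and rear ellipsoids joined at x = 0."""
    c = np.where(x >= 0.0, cf, cr)
    f = np.where(x >= 0.0, ff, fr)
    coef = 6.0 * np.sqrt(3.0) * f * Q / (a * b * c * np.pi * np.sqrt(np.pi))
    return coef * np.exp(-3.0 * x**2 / c**2 - 3.0 * y**2 / a**2 - 3.0 * z**2 / b**2)

# Riemann-sum check that the source deposits roughly the intended power into the material (z <= 0)
n = 120
x = np.linspace(-5 * cr, 5 * cf, n)
y = np.linspace(-5 * a, 5 * a, n)
z = np.linspace(-5 * b, 0.0, n)
dx, dy, dz = x[1] - x[0], y[1] - y[0], z[1] - z[0]
X, Y, Z = np.meshgrid(x, y, z, indexing='ij')
power = goldak(X, Y, Z).sum() * dx * dy * dz
print(f"integrated power ~ {power:.1f} W (target {Q} W, up to quadrature error)")
```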
STANDARDS DEVELOPMENT ACTIVITIES FOR VERIFICATION AND VALIDATION
11-2 STANDARDS DEVELOPMENT ACTIVITIES FOR VERIFICATION AND VALIDATION: PART 2 Celebrity Ballroom 1 4:00pm–6:00pm Session Chair: David Moorcroft, Federal Aviation Administration, Oklahoma City, OK, United States Session Co-Chair: Kevin Dowding, Sandia National Laboratories, Albuquerque, NM, United States
Proposed New Standard: ANS-10.7, Non-Real-Time, High-Integrity Software for the Nuclear Industry V&V2012-6147 Charles Martin, Walter Horton, Defense Nuclear Facilities Safety Board, Washington, DC, United States
The views expressed are solely those of the author, and no official support or endorsement of this talk by the Defense Nuclear Facilities Safety Board or the Federal Government is intended or should be inferred. Proposed ANS 10.7, Non-Real-Time, High-Integrity Software for the Nuclear Industry, is a new draft standard that addresses rigorous, systematic development of high-integrity, non-real-time safety analysis, design, and simulation software, which includes calculations or simulations that can have critical consequences if errors are not detected but that are so complex that typical peer reviews are not likely to identify errors. The target class of software may include nuclear design and performance codes, codes used to assign safety classification levels to systems, structures and components at nuclear facilities, computational fluid dynamics or computational solid mechanics codes, complex Monte Carlo simulations, radiation dosimetry analysis codes, and nuclear medical physics analytical codes. Conformance with the mandatory provisions of this standard is intended to demonstrate adequate due diligence for safety-related applications of this class of software for the nuclear industry.
This standard is being developed under the umbrella of ANSI/ANS National Consensus Committee N17, Research Reactors, Reactor Physics, Radiation Shielding and Computational Methods. It addresses the activities necessary to fully verify and validate the model by specifying requirements for model development and validation (but it does not address the actual planning, design and conduct of validation experiments). The development of this standard has built upon a prior extensive, systematic, and well-documented effort (NUREG/CR-6263, High Integrity Software for Nuclear Power Plants: Candidate Guidelines, Technical Basis and Research Needs) sponsored by the U.S. Nuclear Regulatory Commission and peer reviewed by an expert panel. That effort involved review and evaluation of numerous national and international standards to construct a comprehensive set of guidelines, and to assess their available technical basis.
To be considered in conformance with this standard, the computer model must be evaluated with the intent to demonstrate that it is adequate to calculate the real-world behavior for which the computer code is to be used. All relevant phenomena determined to be important must be included in the model. A structured process must be used to identify and rank the component modeling phenomena based on their importance and their impact on the figures of merit for the calculation. The key phenomena, including the constitutive mathematical equations needed for model closure, must be defined for the calculation to be performed by the computer code. The level of detail in the model must be sufficient to answer the problem of interest. Because some phenomena are not as important as others, the level of detail for such phenomena may be graded commensurate with the importance of these phenomena. In addition to significant attention to model development and verification activities, quality assurance aspects are also covered during all phases of the software development lifecycle, including software requirements specification, software design, software implementation, system testing, software release, operation and maintenance, termination of support, and, integrated throughout the software lifecycle, software safety analysis, cyber security analysis, software configuration management, and problem reporting and corrective action.
The final draft of the standard has been balloted with the full ANS-10 Subcommittee. The ballot passed with comments, and the Working Group is currently resolving those comments. The Working Group anticipates a first consideration ballot with N17 in early 2012.
Activities Toward Establishment of Modeling & Simulation and V&V Guidelines in AESJ V&V2012-6208 Fumio Kasahara, Japan Nuclear Energy Safety Organization, Tokyo, Japan, Koshizuka Seiichi, University of Tokyo, Tokyo, Japan, Akitoshi Hotta, Tepsys, Tokyo, Japan
In recent years, computer simulation has been widely accepted in many industries as a tool for pursuing more optimized and efficient design processes. This trend increases the risk of unexpected failures and accidents when the disagreement between reality and simulation results becomes fairly large. Discussions of Modeling & Simulation need to include a methodology for ensuring simulation reliability. Model V&V (Verification and Validation), as named by Oberkampf, is now regarded as an indispensable element of advanced Modeling & Simulation. In nuclear industries, computer simulation has been applied in many phases of a plant life cycle, such as design, construction, operation, maintenance, inspection and decommissioning. As in license safety evaluation, the reliability of computer codes has been strictly checked based on experiments and operating experience. However, it is only recently that a methodology for formally quantifying the evaluation conservatism, such as CSAU, was established.
Not only in-house codes that come with written development records, but also codes imported from public libraries and commercial vendors have been employed, even though they do not come with sufficient evidence of quality assurance. In particular, commercial codes are usually supplied as an executable, and their development histories are protected as proprietary information. An excellent user interface enables less experienced engineers to produce plausible-looking answers, which increases hidden uncertainties or user effects. Under these circumstances, computer analysts need to recognize their responsibility to make the reliability of their simulation results visible in order to convince society. In the Computational Science and Engineering Division of AESJ (the Atomic Energy Society of Japan), a working group was started in 2011 to discuss the reliability of simulation. A major objective is producing a report that will become the basis of a V&V guideline applicable not only to the nuclear industries but also to other industries where advanced engineering simulations are necessary. The working group consists of about thirty members from academia, licensing authorities and industry (utilities, reactor vendors, fuel vendors, software houses, etc.). Through five meetings, relevant subjects were intensively discussed, such as the current status of V&V in Japan and Western countries, how to extract a common V&V framework covering various fields (structure, materials, thermal-hydraulics, neutronics, etc.), and difficulties
of satisfying V&V requirements of commercial codes.
In this presentation, we introduce the major discussions in this report, in particular a basic structure of M&S and V&V activities that includes a database of validation experiments and prediction (interpolation or extrapolation). Unlike existing V&V guidelines, prediction will be taken as within the scope of our V&V guideline, since computer simulations are normally required when prediction is necessary. This means we need to establish methods to estimate the enlarged uncertainties. Although the scaling theory in thermal-hydraulics is a possible solution, we feel the necessity of extending this discussion so that we can cover a wider spectrum of simulations.
State of the Art and Practice of the Process to Verify and Validate Embedded Software V&V2012-6211 Walter Horton, Defense Nuclear Facilities Safety Board, Washington, DC, United States
Automated control equipment and digital instrumentation and control systems are pervasive in a broad range of industries for safety and non-safety systems, including, but not limited to, the automotive, food, pharmaceutical, oil & gas, mining, and nuclear industries. This presentation discusses both the state of the art and the state of the practice of the process to verify and validate embedded software in automated control equipment in safety applications. The information in the presentation is a compilation of vendor interviews, consensus standard committee work, meetings with safety certification organizations, and assessments of defense nuclear facilities.
First, the presentation defines embedded software, non-embedded software, and verification and validation of software to establish a common framework. The definitions used in the presentation are from consensus industry standards. Next, the discussion outlines the similarities and differences in verification and validation of embedded versus non-embedded software. Two important similarities are: similar automated tools for software engineering tasks, and the availability of consensus standards. Several important differences include: the defect management process has no triage applied to known defects; an automated tool for defect management is a must for embedded software verification and validation; and the physical size and speed of memory is a hardware constraint for embedded software, for example CPU cycles and memory resources. Then, the presentation identifies several best practices for the verification and validation of embedded software. Three notable best practices are: evaluate all embedded software modules/subroutines with respect to importance, vulnerability, function, etc. to determine the appropriateness of exclusive in-house development; use a single contractor for the design, development, installation, and maintenance of an instrumented system, including the automation products/components; and have only one degree of separation between the buyer and user of instrumented systems.
Next, the presentation identifies the instances where cyber security and safety intersect in the verification and validation of embedded software. Three areas where cyber security and safety intersect include: initial installation of embedded software on the processor; re-version and version matching of the embedded software, including both public and private key infrastructure certificate versions; and patch updates (a special case of versioning) to correct known defects in the embedded software. Lastly, the discussion concludes with a brief summary of the information presented. The views expressed are solely those of the author, and no official support or endorsement of this talk by the Defense Nuclear Facilities Safety Board or the Federal Government is intended or should be inferred.
An Assessment of the Relationship of ASME NQA-1 and ASME V&V 20 Standards to the Draft V&V 30 Standard for Verification and Validation of Software for Nuclear Applications V&V2012-6235 Edwin Harvego, Richard R. Schultz, Idaho National Laboratory, Idaho Falls, ID, United States, Richard Hills, Sandia National Laboratories, Albuquerque, NM, United States, Ryan Crane, ASME, New York, NY, United States
To support the development of the draft American Society of Mechanical Engineers (ASME) V&V 30 Standard, an evaluation of related regulatory requirements and other consensus standards is needed to ensure consistency among the various requirements and standards applicable to software used to calculate nuclear system thermal-hydraulic behavior. ASME NQA-1, Quality Assurance Requirements for Nuclear Facility Applications, and ASME V&V 20, Standard for Verification and Validation in Computational Fluid Dynamics and Heat Transfer, are of particular interest because of their close relationship to topic areas to be included in the draft V&V 30 Standard.
ASME NQA-1 defines both requirements and non-mandatory guidance for the establishment and execution of quality assurance programs for nuclear facility applications, and is consistent with the quality assurance requirements for nuclear power plants defined by the Nuclear Regulatory Commission (NRC) in Title 10 of the Code of Federal Regulations (CFR), Part 50, Domestic Licensing of Production and Utilization Facilities, Appendix B, Quality Assurance Criteria for Nuclear Power Plants and Fuel Reprocessing Plants. This combination of requirements and non-mandatory guidance allows flexibility in the application of the entire Standard or portions of the Standard, depending on particular user needs. NQA-1, Part I, sets forth the 18 requirements for the establishment and execution of quality assurance programs that correspond to the requirements contained in Appendix B of 10 CFR Part 50. Although the requirements defined in NQA-1, Part I, are general in nature, the ASME V&V 30 Committee will need to ensure that the current standard under development fully meets the Part I requirements. In contrast, NQA-1, Part II, primarily deals with quality assurance requirements for work-related processes and activities, and therefore, for the most part, will not be applicable to the draft V&V 30 Standard. Parts III and IV of NQA-1 are appendices that provide non-mandatory guidance for some, but not all, requirements defined in Parts I and II. These appendices will likely be the primary area where the V&V 30 Committee may directly interface with the NQA Committee to discuss modifications/additions to the current guidance as they relate to validation of software used to calculate nuclear system thermal-hydraulic behavior.
As noted in V&V 20, “The scope of this Standard is the quantification of the degree of accuracy of simulation of specified validation variables at a specified validation point for cases in which the conditions of the actual experiment are simulated. Consideration of solution accuracy at points within a domain other than the validation points, i.e., a domain of validation is a matter of engineering judgment specific to each family of problems and is beyond the scope of this Standard”. This statement clearly limits the applicability of V&V 20 to the domain defined by the validation points. In contrast, the draft V&V 30 standard addresses the expansion of the domain to include the entire calculation envelope over which the software models must be validated. In other words, the draft V&V 30 standard complements V&V 20 by defining a methodology for experimental validation of a defined calculation envelope, which encompasses the operational and accident domain of the nuclear system of interest.
V&V 20 addresses uncertainties associated with model validation and
the model validation exercises, including methodology to characterize the uncertainty associated with using differences between model prediction and validation measurements to represent true model error. V&V 20 does not specify what metrics to use (i.e. what measurement types), nor does V&V 20 specify acceptance criteria (i.e., value for the metric) to declare the model as adequate or valid for a particular application. The draft V&V 30 standard addresses these metrics and acceptance criteria for nuclear applications of thermal-hydraulic computer simulation. V&V 20, combined with V&V 30, will define a formal methodology to assess model solution verification and model validation as cited in NQA-1.
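A minimal sketch of the V&V 20-style quantities referred to above, assuming the usual definitions of the comparison error and the validation standard uncertainty; all numerical values are illustrative placeholders:

```python
import math

def comparison_error(simulation, experiment):
    """Comparison error E = S - D between the simulated and measured validation variable."""
    return simulation - experiment

def validation_uncertainty(u_num, u_input, u_exp):
    """Validation standard uncertainty combining numerical, input-parameter and
    experimental standard uncertainties (assumed independent)."""
    return math.sqrt(u_num**2 + u_input**2 + u_exp**2)

# Illustrative values for a single validation point
S, D = 352.0, 347.0
u_num, u_input, u_exp = 2.0, 1.5, 3.0

E = comparison_error(S, D)
u_val = validation_uncertainty(u_num, u_input, u_exp)
print(f"E = {E:+.1f}, u_val = {u_val:.1f}  (model error estimated to lie within E +/- k*u_val)")
```

Whether the resulting interval is acceptable for a given application is exactly the kind of acceptance-criteria question that V&V 20 leaves open and the draft V&V 30 standard addresses.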
V & V in System Thermal-Hydraulics V&V2012-6048 Francesco D’Auria, University of Pisa, GRNSPG, Pisa, Italy, Dominique Bestion, CEA, Grenoble, France, Jae J. Jeong, Pusan National University, Busan, Korea (Republic), Manwoong Kim, IAEA, Vienna, Austria Verification and Validation (V & V) constitute an essential process for the application of any software within any technology. The present framework is the nuclear reactor thermal-hydraulics and the related simulation tools used for safety evaluation. Designing a nuclear power plant and evaluating its safety level have been tightly coupled since the beginning of the nuclear era for the construction and the operation of Nuclear Power Plants (NPP). This paper deals with the V & V, or the assessment, of system thermal-hydraulics (SYS TH) codes. The system thermal-hydraulic codes are the main numerical tools adopted for Deterministic Safety Analysis (DSA) of power reactors: namely, their use and application is essential for the evaluation of accident scenarios and for the licensing of any nuclear power plant.
Acceptance Criteria for the Validation of Nuclear Safety Computer Codes V&V2012-6045 Noreddine Mesmous, Haldun Tezel, Canadian Nuclear Safety Commission, Ottawa, ON, Canada Safety analysis is the one of main technical tools which deterministically determines nuclear reactor safety for all permissible operating states and postulated accidents. The analyses are generally based on computer models solving conservation of mass, transport of energy and momentum equations, various constitutive relationships and correlations. These tools have to be verified against the phenomena that they are modeled to represent, and validated against experimental data that is collected for the phenomena, and a code accuracy statement has to be prepared.
The reference framework is constituted by the broad series of reports on the DSA issued by IAEA. Related top hierarchy documents are the GSR (General Safety Requirement) - part 4 where the requirement 24 deals with V & V and the SSG-2 (Specific Safety Guide No 2) where the importance of qualification of results is stressed. At a lower hierarchical level two reports constitute the background: these are the SRS (Safety Report Series) No 23and the SRS No 52. The former puts the bases for the application of the codes within the best-estimate approach for accident analyses. The latter describes the existing uncertainty methods which are essential for pursuing the best-estimate approach.
Consistent with the Canadian Nuclear Safety Commission (CNSC) requirements (e.g., CSA N286.7-99, RD_310) and the International Atomic Energy Agency (IAEA) safety standards, a comprehensive set of acceptance criteria for the computer tool validation and the purpose and their technical basis for the following stated requirements will be presented.
SYS TH codes are computational tools having complexity which reflects the complexity of NPP and of related DBA scenarios. The level of complexity and sophistication of the SYS TH codes results from the targeted range of application.
Code validation should be performed using data obtained from systems that are geometrically representative of the NPP, and under conditions similar to postulated accidents.
The key outcome is to provide guidance for establishing an overall procedure for V & V of SYS-TH codes as well as the links with the nodalization issue and with the qualification of the code user. It is also shown why and how a suitable V & V shall be supplemented by sensitivity and uncertainty analysis also addressing the scaling issue, namely within the licensing framework.
Experimental data should cover (a) all permissible plant operating states, Initial and boundary conditions and (b) all phenomena that are expected under accident conditions, (c) Measurement uncertainty for the above should be documented and (d) if for a particular case this is not possible, a justification together with a safety impact assessment should be provided
VALIDATION METHODS 12-3 VALIDATION METHODS: PART 3 Wilshire A 4:00pm–6:00pm
Code/Model accuracy for the important parameters relevant to the intended application should be determined. Code accuracy (bias & variation of bias) - when unacceptably large - is indicative of a lack of proper understanding of the underlying governing phenomena and/or poor/inadequate modeling. Therefore, in order to improve code accuracy, additional experiments should be designed to improve understanding of the phenomena. It should be noted that residual prediction bias will always be present, the magnitude of which depends on the specific validation tools used and the specific modeling and measurement parameters employed. This value should be defined in advance, prior to commencement of validation.
Session Chair: David Hall, SURVICE Engineering Company, Ridgecrest, CA, United States Session Co-Chair: Koji Okamoto, The University of Tokyo, Tokyo, Japan Validation of Computer Simulations Used in Failure Analysis Investigations V&V2012-6216 Nicoli M. Ames, Robert A. Sire, Robert Caligiuri, Exponent, Inc., Menlo Park, CA, United States
Documentation is an important part of the V&V exercise. This documentation, among other things, should capture (a) all the assumptions that have been made in the V&V exercise, (b) their technical basis, (c) their impact on the validation results, and (d) the results of the code spatial and temporal convergence tests.
The past 15 years have seen a dramatic increase in the use of computer simulation, especially finite element analysis (FEA), in root-cause investigations of failures in engineering systems and structures. This has gone hand-in-hand with the increased computational power, sophistication, and ease of use of such tools. FEA, for example, is now commonly used to determine stress and strain at key locations in structures and systems with complex geometries that cannot be determined with closed-form
solutions, and it can provide information critical to assessing causation of failures. Unfortunately, as with any tool, improper use of FEA can mislead an investigation away from the real root cause. Choice of constitutive material models, linear vs. nonlinear analysis, element type, mesh size and refinement, and boundary conditions can greatly influence the relevance and the meaning of the results generated. Thus, it is important to verify input data and validate the appropriateness of the choices made. This paper presents several examples where the failure to adequately validate a FEA simulation led to misleading and/or incorrect assessment of the root cause of failure. This paper also offers some suggested guidelines on how to assess the validity of FEA simulations.
The Ingen system is script-driven using Python and provides a graphical viewer for visualizing and querying the geometry, mesh, and material data. Using Python as the input language for Ingen provides users with the power of a full-featured programming language, simple syntax, and a rich set of third-party libraries. Each module in the Ingen system exposes an application programming interface (API) to allow end users to write custom code that extends the functionality of the system to meet their individual needs. This presentation will provide an overview of the capabilities of the Ingen tool set and its use in the ASC V&V Program at LANL. Combined Real and Virtual Domain Product Validation Using Top-Down Strategies V&V2012-6239 Martin Geier, Steffen Jaeger, Christian Stier, Albert Albers, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
Multi-Physics Simulation Setup Tools at LANL (LA-UR 12-00437) V&V2012-6230 Brian Jean, Los Alamos National Laboratory, Los Alamos, NM, United States
Validation is a central activity within the development of complex products such as automobile drive systems. Starting with a discussion distinguishing the two terms verification and validation, this paper describes the challenges and requirements for validation methods using the example of comfort phenomena in automobile drive systems. In this context, the X-in-the-loop validation approach for automobile drive systems (XiL), which is an object of research at the IPEK - Institute of Product Engineering at KIT, is introduced as a generalization of HiL, SiL, and MiL tests.
The goal of the ASC Setup Tools and Technology Project is to develop software that enables analysts to prepare simulations accurately and as efficiently as possible. We are currently developing a software system that allows for the organization, abstraction, adjustment, combination, and output of all the data required to specify and execute a suite of multi-physics simulations. The software is a Python package called Ingen (for INput GENeration). Five modules (materials, gwiz, altair, csv, and suite) compose the Ingen system. The csv module acts as the central materials and physics database for the system. It enables an analyst or group of analysts to build data libraries of materials, physics models, physics code input syntax, and physics code execution parameters. Data from these libraries can be combined, modified, and extended to define parameters for individual simulations and/or entire suites of simulations. Additionally, the physics code input syntax is defined separately from the materials and physics data, enabling the same data to be used to generate input for multiple physics codes for a/b code comparisons and/or linked calculations.
A key aspect of the XiL approach is the consistent integration and combination of real- and virtual-domain methods, both real-time and offline, in order to gather product validation knowledge as early as possible in the development process. The analysis of part-system properties is thereby performed taking into account the interactions with the whole drive system. This paper introduces a top-down strategy in product validation similar to top-down methods in product design processes. This top-down strategy includes a purposeful separation of the whole drive system into different part systems. This separation involves the definition of interface properties between the different part systems. In addition, it is defined which validation activities require real tests and which activities are performed mainly in the virtual domain.
The materials module uses data from the csv database to create material data objects for each material in a given problem. These material data objects may be used for geometric corrections (e.g., part mass adjustments), to control mesh generation (e.g., mesh impedance matching at material boundaries), and/or for other calculations required as part of the input specification.
The top-down-driven XiL approach for drive system validation is illustrated using the example of powertrain vibrations. First, the separation of drive systems into several part systems such as the internal combustion engine, dual-mass flywheel, clutch system, and gearbox is discussed. The location and specification of the interfaces are also modeled. Based on these definitions, a partly real, partly virtual validation setup is shown, composed of simulation models for combustion engine simulation, gearboxes, etc., and highly dynamic test benches for drive system elements.
The gwiz (Geometry WIZard) module reads data from one or more libraries of contour data and provides functions for constructing and/or modifying geometry for the simulation(s). Geometry may take the form of contours or constructive solid geometry (CSG) models. Contour data may be used to construct CSG regions and CSG regions may be sliced to produce 2-D contours. Geometric entities can be mass-corrected based on data from the material data objects defined by the materials module.
As a novelty in powertrain validation, the unit under test is not necessarily the real component attached to the test bench but may be a virtual system model executed as a real-time simulation. As an example, a real clutch can be run with a virtual driveline in order to investigate judder sensitivity for variations of gearbox and driveline damping parameters without the need for highly sophisticated clutch friction modeling. In our outlook we also discuss the feasibility of virtual gear rattle prediction models combined with real dual-mass flywheels.
Mesh generation (if needed) is performed by the altair module. The mesh produced by altair is a dendritic, boundary-fitted, block-structured mesh. The mesh boundaries are defined by contour data from the gwiz module; mesh parameters (such as zone size, impedance matching at boundaries, etc.) may be defined based on properties of material data objects from the materials module and/or data from the csv module. In addition, CSG regions may be used to “paint” material properties onto the mesh, enabling complex geometry features to be included in the simulation without overly complicating the mesh topology. The suite module provides control functions for defining and executing families of simulations for suite baselines, parameter studies, and test suites.
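To make the script-driven workflow described in this abstract more concrete, the following Python sketch illustrates how a small library of material/physics data might be combined into an input suite for more than one physics code. The module roles follow the abstract, but every function and argument name here is hypothetical; this is not Ingen's actual API.

from itertools import product

# A small "materials/physics library" an analyst might build up (csv-like role).
library = {
    "steel":    {"rho": 7.85, "eos": "linear"},
    "aluminum": {"rho": 2.70, "eos": "linear"},
}

def make_input(material, zone_size, code="codeA"):
    """Combine library data with mesh parameters into a code-specific input record."""
    props = library[material]
    return {
        "code": code,
        "material": material,
        "density": props["rho"],
        "eos": props["eos"],
        "zone_size": zone_size,
    }

# Define an entire parameter-study suite: every material at every resolution,
# written once and reused to generate input for two hypothetical physics codes.
suite = [
    make_input(mat, dz, code)
    for mat, dz, code in product(library, [0.1, 0.05], ["codeA", "codeB"])
]

for run in suite:
    print(run)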
Spatial Weather Forecast Verification V&V2012-6001 Eric Gilleland, Barbara G. Brown, David Ahijevych, National Center for Atmospheric Research, Boulder, CO, United States, Elizabeth E. Ebert, The Centre for Australian Weather and Climate Research, Melbourne, Australia, Barbara Casati, Ouranos, Montreal, QC, Canada
objective was to identify and investigate the validation/verification approaches, and the challenges for the results, that need to be addressed when an advanced motion-system simulation tool is used in the design process of multi-body mechanisms; in particular, the approach to, and challenges of, validation and verification when integrating the simulation tool into a typical mechanical motion-system design course as part of an undergraduate mechanical engineering curriculum. In a typical four-year mechanical engineering program, senior students are expected to be capable of understanding and analyzing mechanical motion systems, provided that their curriculum includes courses covering fundamental subjects in areas such as dynamics, control systems, mechanisms, and vibration. In most educational institutions, a majority of the above subjects are covered in the second and third year of the program. Engineering educators use several approaches and methodologies to ensure that students are capable of applying the know-how gained in the program to solve real problems in the field through the Project-Based Learning (PBL) methodology. Our attempt was to implement a simulation-based product design methodology in the design of typical multi-degree-of-freedom motion systems through the PBL methodology. Students were expected to work on a term-long project. Deliverable materials of each project included 3D solid modeling, kinematic/dynamic analysis, trajectory planning, and manipulator control. SolidWorks and SolidWorks Motion software were selected for solid modeling and motion simulation, respectively. Students were expected to validate the simulation results. It was concluded that the main challenge was to incorporate validation techniques in the dynamic simulation of motion systems, in particular the validation/verification of kinetic simulation results such as reactions and required actuator torques. In the presentation, the challenges, techniques, lessons learned, and future work will be addressed and discussed.
Traditional weather forecast verification is concerned with evaluating forecast performance at a given location. Historically, gridded forecasts that could be compared with a gridded verification field were compared grid point by grid point. Often, higher-resolution forecasts that were found to be more useful to weather forecasters were scored as less skillful than their coarser (less useful) counterparts. Small-scale errors below the resolution of the coarser models are one reason for this discrepancy, and double penalties (penalizing both misses and false alarms for the same timing or spatial displacement error) are another often-cited reason. Further, the traditional methods do not yield diagnostic information about how the forecast performed. In response to these issues, numerous new methods have been proposed in the last couple of decades. In order to assist users in determining which methods are appropriate for their specific needs, an inter-comparison project (known as the ICP) was formed. The present talk will discuss results from the ICP, including an overview of most of the various proposed methods. 3D-PIV Measurements and Numerical Simulation of Flow Field in a Forebay of Pumping Station V&V2012-6029 Chao Liu, Yulun Zhang, Yangzhou University, Yangzhou, Jiangsu, China Using the CFD software FLUENT, the standard k-ε and RNG k-ε turbulence models were adopted for the numerical simulation of the forebay flow of a pumping station, and the calculated three-dimensional velocity fields of the forebay were obtained. Meanwhile, a model test was conducted to verify the numerical simulation results. With the technique of three-dimensional particle image velocimetry (3D-PIV), the three-dimensional flow fields in the region near the outlet of the forebay were measured under a large flow rate and under the design flow rate, respectively. Comparison between the calculated and measured results indicates that the RNG k-ε turbulence model is closer to the measurements under the large flow rate, while the standard k-ε model is closer under the design flow rate. The accuracy of the numerical simulation is therefore satisfactory with the two turbulence models, respectively.
Moving Object Detection on a Moving Platform Using Stereo Feature Points Prediction Method V&V2012-6091 Cheng-Che Chen, Hung-Yin Tsai, National Tsing Hua University, Hsinchu, Taiwan, Dein Shaw, PME/National Tsing Hua University, Hsin Chu, Taiwan In an image-assisted driving system, it is important to distinguish moving objects from the static background. For a static platform, it is easy to trace out the moving objects by applying image subtraction directly. However, detecting moving objects becomes a difficult task on a moving platform. Direct pixel subtraction cannot correctly isolate a moving object, because the background pixels move nonlinearly with their distance from the moving platform and their changes are hard to predict while the platform is moving.
Key words: forebay, pumping station, turbulence model, flow field, 3D-PIV
This study proposes a method for detecting moving objects in stereo images that are sequentially captured from a moving platform. We use a sparse feature point method to find moving objects in 3D coordinates by comparing the positions of predicted feature points with the real ones. The method consists of four main steps: feature point extraction, feature point matching, ego-motion estimation, and moving object extraction. At the feature point extraction step, the Scale-Invariant Feature Transform (SIFT) is utilized so that feature points can be matched invariantly when an object rotates or its scale changes. To improve accuracy and save matching time, non-distinctive feature points are excluded at the feature point extraction step. There are two parts to the matching step: firstly, the 3D coordinates are obtained by matching the feature points from the simultaneous images of the left and right cameras with an epipolar constraint and a low matching threshold; secondly, the matching between sequential same-side images with a higher threshold and
Motion Systems Design Through CAD/CAE Simulation: Verification/Validation Challenges and Lessons Learned V&V2012-6199 Cyrus Raoufi, British Columbia Institute of Technology (BCIT), Burnaby, BC, Canada A CAD/CAE simulation-based learning approach was integrated and implemented in an undergraduate Motion System Design and Integration course, a fifteen-week/four-hour module for fourth-year engineering students at the British Columbia Institute of Technology (BCIT). The first aim of this integration was to improve students' proficiency in using advanced CAD/CAE simulation tools to understand the advanced dynamics and control concepts required in the design and integration of mechanical motion systems, in particular multi-body mechanisms such as industrial robots. The second
without other geometric constraints is executed. At the next step, the homogeneous transformation matrix (HTM) between the sequential feature points is estimated from the matching result and the proposed elector-RANSAC method. Finally, the projected distance, defined as the distance between matched feature points in the real image and in the predicted image (which is calculated from the previous image using the HTM), is computed. The moving objects are found successfully by selecting the matched feature points whose projected distance lies above a self-defined threshold.
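As a rough illustration of the matching-and-prediction idea, the following Python/OpenCV sketch uses SIFT features, a Lowe ratio test, and a RANSAC-fitted global motion model to flag feature points with a large projected distance. This is a simplified monocular, 2-D stand-in: the paper itself works with stereo 3-D points, a 4x4 HTM, and the proposed elector-RANSAC, and the file names and thresholds below are assumptions.

import cv2
import numpy as np

# Hypothetical consecutive frames from the moving platform.
img1 = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Lowe ratio test to keep only distinctive matches.
matcher = cv2.BFMatcher(cv2.NORM_L2)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2) if m.distance < 0.7 * n.distance]

pts1 = np.float32([kp1[m.queryIdx].pt for m in good])
pts2 = np.float32([kp2[m.trainIdx].pt for m in good])

# Background (ego) motion modeled as a homography fitted robustly with RANSAC.
H, inliers = cv2.findHomography(pts1, pts2, cv2.RANSAC, 3.0)

# Projected distance: predicted position under the background motion vs. the
# observed position; large distances indicate candidate moving-object points.
pred = cv2.perspectiveTransform(pts1.reshape(-1, 1, 2), H).reshape(-1, 2)
proj_dist = np.linalg.norm(pred - pts2, axis=1)
moving = proj_dist > 5.0   # self-defined threshold, as in the abstract
print(f"{moving.sum()} candidate moving-object feature points")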
tube. The simulation mimics the movement of a piston by using as input a sinusoidal pressure function with given frequency and amplitude at the inlet of the tube. The parameters of interest are the length and diameter of the inertance tube, the volume of the reservoir, the average pressure, and the frequency of oscillation for these helium-working-fluid components. The results clearly show that the small numerical error is not responsible for the differences between the results of the CFD simulation and the experiment. Comparison of the CFD results for the two heat transfer boundary conditions shows that the isothermal boundary condition is adequate for simulation of oscillating flow in the inertance tube. The results of the turbulence models compare favorably for the cases studied. Future simulations are aimed at finding better correlation between experiment and simulation.
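A minimal sketch of the grid-convergence-index (GCI) evaluation referred to in this inertance-tube study, for a single quantity of interest on three systematically refined grids, is given below; the solution values and refinement ratio are placeholders, and a constant refinement ratio with a safety factor of 1.25 is assumed.

import numpy as np

# GCI for one quantity of interest (e.g., acoustic power) from three grids.
f1, f2, f3 = 1.052, 1.060, 1.093   # fine, medium, coarse solutions (assumed)
r = 2.0                            # constant grid refinement ratio (assumed)

p = np.log(abs((f3 - f2) / (f2 - f1))) / np.log(r)   # observed order of accuracy
e21 = abs((f2 - f1) / f1)                            # relative fine-medium change
gci_fine = 1.25 * e21 / (r**p - 1.0)                 # Fs = 1.25 for a three-grid study

print(f"observed order p = {p:.2f}")
print(f"GCI (fine grid)  = {100 * gci_fine:.2f}% of the fine-grid value")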
FRIDAY, MAY 4 CONTINENTAL BREAKFAST Celebrity Ballroom 2 7:00am–8:00am REGISTRATION Sunset 1 Foyer 7:00am–12:30pm
Methods for Verification of Computational Simulation of Rarefied and Micro-Scale Gas Flows V&V2012-6086 R.S. Myong, Gyeongsang National University, Jinju, Gyeongnam, Korea (Republic)
VALIDATION METHODS
The verification and validation of computational simulation results for rarefied hypersonic and micro-scale gas flows remains a daunting task due to the multi-scale nature of the problem and the lack of experimental data. In particular, flows involving the (kinetic) boundary layer near the solid surface are considered most challenging, because the complicated boundary conditions make the verification and validation study even harder. In many cases the DSMC code is used for computer simulation of rarefied and micro-scale gas flows, since it is believed to be more accurate in comparison with CFD codes based on the Navier-Stokes-Fourier equations.
VERIFICATION FOR FLUID DYNAMICS AND HEAT TRANSFER 7-2 VERIFICATION FOR FLUID DYNAMICS AND HEAT TRANSFER: PART 2 Sunset 5&6 8:00am–10:00am Session Chair: Urmila Ghia, University of Cincinnati, Cincinnati, OH, United States Session Co-Chair: Christine Scotti, W.L. Gore & Associates, Flagstaff, AZ, United States
However, it must be noted that the accuracy of the DSMC, in the case of flows involving the (kinetic) boundary layer, is highly dependent on the numerical boundary condition employed, and this boundary condition is far from perfect, or at least more akin to experiment than to rigorous theory. In addition, it is known that the exact values of slip velocity and temperature jump in the DSMC method are sensitive to how these properties are obtained from the simulation results: by direct microscopic sampling of the molecular properties of particles that strike the wall surface, or by the macroscopic approach that accounts for all molecules in the adjacent cell. Even worse, there is no consensus on what the proper master kinetic equations are for describing diatomic gases such as nitrogen in thermal non-equilibrium, all of which makes verification and validation of simulation results difficult.
Verification and Validation studies in Oscillating Flow in Inertance Tubes V&V2012-6079 Christopher Dodson, Air Force Research Lab, Kirtland AFB, NM, United States, Arsalan Razani, University of New Mexico, Albuquerque, NM, United States Inertance tubes are used in cryogenic pulse tube refrigerators (PTRs) to control the phase shift between the mass flow and pressure and to increase the performance of the refrigerator. Typically costly design iterations must be done to optimize the efficiency of a new PTR, so it is desired from the cryocooler community that computational fluid dynamics (CFD) simulations be capable of minimizing design iterations of the new PTR for the newly requested cooling temperature and load. Efficiency is a function of multidimensional oscillating flow fluid losses, so proper resolution of these losses requires techniques of verification and validation (V&V) to make quantitative statements of errors with respect to the observed phenomena. Using commercial software, a CFD simulation of oscillatory fluid flow in the inertance tube is presented as a benchmark for V&V along with the results of a parameterized reduced order model and compared to experimental results.
In this work the issue of verification and validation of computational simulations of rarefied and micro-scale gas flows is addressed. In particular, a recent method based on the fundamental laws of physics is considered in detail. The basic idea comes from the observation that the conservation laws must be satisfied irrespective of the computational model. Such a method can be easily applied to a pure one-dimensional problem. Here the method is applied to the DSMC solutions by checking the relative internal error of its solutions for a one-dimensional benchmark problem: the force-driven compressible Poiseuille gas flow. When the computational error of the DSMC is considered, it is observed that the relative error of the DSMC increases from the center toward the solid wall and reaches a non-negligible value near the wall. Finally, a discrepancy between macroscopic and microscopic approaches in the DSMC solutions of the two-dimensional lid-driven cavity gas flow is discussed.
The CFD software numerically solves the Navier-Stokes equations for compressible oscillatory flow to find important integral quantities of interest in the inertance tube for two customary turbulence models as well as for three thermal boundary conditions: isothermal, adiabatic, and mixed. Grid convergence index (GCI) techniques are used for V&V of the mass flow, pressure, and temperature, which are the basic quantities of interest, and of the integral quantities of interest, which are the acoustic power and the phase shift between the mass flow and pressure at the inlet of the
Verification of A CFD Benchmark Solution of Transient Low Mach Number Flows with Richardson Extrapolation Procedure V&V2012-6126 Sonia Benteboula, Stéphane Gounand, Alberto Beccantini, Etienne Studer, Commissariat à l’Energie Atomique, Gif-sur-Yvette, France
in 1994. Experimental data (Maumee Research & Engineering, 1994) from tests carried out at JACADS with 4.2 inch mortar using the agent simulant Dowanol DM is shown in Figure 1. The tray was fully loaded with 96 mortars. The simulant mass contained in each mortar was 0.67 kg. The total simulant mass on the tray was 64.32 kg. The predicted peak vaporization rate from the PVR model was 234 kg/hr and the total vaporization time was 42 minutes. The vaporization rates shown in Figure 1 are almost identical for the experimental data and the PVR model predictions.
This work deals with the reliability assessment of numerical benchmark solution of the unsteady Navier-Stokes equations written in the low Mach number approximation.
CR&E's CFD Model: The calibrated PVR model is suitable for munitions with varying quantities of liquid chemical agent. The PVR model cannot be used to predict the vaporization of a mix of solid and liquid chemical agent. Some of the munitions to be destroyed were found to contain a mix of both solid and liquid agent. CR&E professionals have developed a computational fluid dynamics (CFD) melting and vaporization model to simulate the heating, melting, and vaporization of mixed liquid and solid chemical agent.
In particular, the present study analyzes the flow evolution during the injection of buoyant fluid inside an axisymmetric cavity, which is representative of containment pressurization due to a loss-of-coolant accident in a nuclear reactor containment. The main goal is to obtain a reference numerical solution describing the flow features. To that end, the benchmark exercise is carried out using two different computational fluid dynamics (CFD) codes. The first code solves the discretized asymptotic model of the N-S equations written in conservative form with a fractional-step method. The space discretization is performed with a second-order finite-difference scheme on a staggered grid, and a second-order predictor-corrector scheme is applied for time integration.
CR&E's CFD model for liquid/solid vaporization uses two scalars: one to measure the solid mass fraction and one to measure the liquid mass fraction. When the temperature of a computational cell reaches the melting point of the solid component, the solid will begin to melt. The heat flux of the cell is calculated and the solid mass fraction is adjusted accordingly during the heating and melting process. The energy source term in the energy equation and the momentum source terms in the k-e equations are adjusted by the vaporization control code of the model. The temperature of the cell remains at the melting point of the solid until the solid mass fraction in that cell reaches zero. The liquid begins to vaporize when the cell temperature reaches the liquid boiling point. The cell temperature remains at the boiling point until the liquid mass fraction in that cell reaches zero.
In the second code, the governing equations written in non-conservative form are solved using a finite element method, and the time integration is performed with a backward finite-difference approach. The quality of each numerical solution is evaluated through the Richardson extrapolation procedure, which is advocated for the verification of numerical solutions. The procedure consists of computing solutions on three uniform meshes with different levels of refinement.
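A minimal Python sketch of this three-mesh Richardson extrapolation procedure is given below; the mesh sizes and solution values are placeholders (not the benchmark data), and a constant refinement ratio is assumed.

import numpy as np

# Richardson extrapolation from three uniformly refined meshes.
h1, h2, h3 = 0.01, 0.02, 0.04        # fine, medium, coarse mesh sizes (assumed)
f1, f2, f3 = 2.4631, 2.4712, 2.5015  # corresponding values of a flow quantity
r = h2 / h1                          # constant refinement ratio (= h3 / h2)

p = np.log(abs((f3 - f2) / (f2 - f1))) / np.log(r)   # estimated convergence order
f_exact = f1 + (f1 - f2) / (r**p - 1.0)              # Richardson-extrapolated estimate

for name, f in (("fine", f1), ("medium", f2), ("coarse", f3)):
    print(f"{name:6s}: estimated relative error = {abs(f - f_exact) / abs(f_exact):.2e}")
print(f"order p = {p:.2f}, extrapolated solution = {f_exact:.5f}")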
The CFD model for liquid/solids vaporization was calibrated to the PVR model. The predicted vaporization rate curves for 100% liquid HD from the CFD model and from the PVR model for 4.2 inch mortars are provided in Figure 2. The peak vaporization rate from 4.2 inch mortars with 100% liquid agent HD predicted by the PVR model is 496.7 kg/hr. The peak vaporization rate from 4.2 inch mortars with 100% liquid agent HD predicted by the CFD model is 496.2 kg/hr. The total vaporization time from the PVR model is 60.5 minutes. The total vaporization time from the CFD model is 63 minutes. Both the peak vaporization rates and total vaporization times compare well between the two models.
The numerical values of some relevant flow quantities are used to calculate an asymptotic solution which is an estimate of the exact one. Accordingly, the numerical errors and the method convergence order can also be estimated. However, in our test case the applied boundary conditions and the flow evolution present some singularities, thus making it difficult to apply the Richardson extrapolation on the whole space and time computational domain. This issue is analyzed and discussed. Numerical Investigation of Heating Process with Modeling Melting and Vaporization V&V2012-6152
Conclusion: CR&E's PVR model and the CFD model for liquid/solid vaporization both give confident predictions for the vaporization of liquid agent from munitions. The calibrated CFD model can be used to accurately simulate the necessary heat and mass transfer for the solidified agent found in the munitions. Furthermore, CR&E's CFD model can be used to simulate the heating process, including melting and vaporization, for any liquid-solid system.
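The plateau-at-melting/boiling logic described in this abstract can be illustrated with a lumped single-cell sketch; all property values below are placeholders for illustration and are not chemical-agent data.

# Single-cell sketch: hold temperature at the melting point until the solid
# fraction is exhausted, then at the boiling point until the liquid is gone.
q = 500.0                       # heat input to the cell, W (assumed)
dt = 1.0                        # time step, s
cp_s, cp_l = 1.4e3, 2.1e3       # solid / liquid specific heats, J/(kg K) (assumed)
L_melt, L_vap = 1.2e5, 4.0e5    # latent heats, J/kg (assumed)
T_melt, T_boil = 320.0, 490.0   # K (assumed)

m_s, m_l, T = 0.4, 0.3, 300.0   # solid mass, liquid mass (kg), temperature (K)

for step in range(10000):
    if m_s > 0.0 and T >= T_melt:
        dm = min(m_s, q * dt / L_melt)                 # melt solid at constant T
        m_s -= dm
        m_l += dm
    elif m_l > 0.0 and T >= T_boil:
        m_l -= min(m_l, q * dt / L_vap)                # vaporize liquid at constant T
    elif m_s + m_l > 0.0:
        T += q * dt / (m_s * cp_s + m_l * cp_l)        # sensible heating
        T = min(T, T_boil if m_s == 0.0 else T_melt)   # cap at the next plateau
    else:
        break                                          # everything has vaporized
print(f"finished after about {step * dt:.0f} s")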
Yunhan Zheng, Alfred Webster, Mike Vanoni, Continental Research and Engineering, Centennial, CO, United States CR&E's PVR Model: Continental Research and Engineering (CR&E) is the leading provider of engineering, research, and operational support for the demilitarization of chemical weapons, providing innovative and cost-effective methods for processing complex waste streams. CR&E professionals started developing numerical models in the late 1980s to predict the appropriate processing conditions for complex problems involving combustion reactions, turbulent flow, heat, and mass transfer. The Peak Vaporization Rate (PVR) model was the principal in-house code previously used to predict the vaporization of chemical agents from munitions as a function of the furnace operating temperature and munitions fill level.
Experiences Regarding the Verification of a Transport Code V&V2012-6205 Kaveh Zamani, Fabian A. Bombardelli, University of California at Davis, Davis, CA, United States Scientists and engineers rely on the assistance of numerical solvers for the advection-diffusion-reaction (ADR) equation. Verification of solvers is required, and only a complete test suite can uncover all potential limitations and bugs in a transport solver. This paper summarizes the experiences gained during a comprehensive verification process carried out for a transport solver. The solver was developed for sediment and contaminant transport in a tidal channel network. A test suite was
The PVR model was updated and calibrated to test data obtained from the Johnston Atoll Chemical Agent Disposal System (JACADS)
designed to accompany the transport solver, consisting of unit tests and algorithmic tests. Tests were layered in complexity in several dimensions. The test suite was designed in such a way that it probes all possible defects in the discretization of the transport equation, including nonlinearity in the flow field, nonlinearity in the dispersion coefficient, nonlinearity in the source term, spatial and temporal variation of the dispersion coefficient, and, finally, the soundness and efficiency of the time marching. Acceptance criteria were defined for the desirable capabilities of the transport code, such as order of accuracy, mass conservation, stiff source terms, spurious oscillation, and initial shape preservation. The steps followed included well-known mesh-convergence tests, but these tests did not uncover all bugs. Two bugs that were concealed during the mesh-convergence study were uncovered by removing the symmetry in one tidal test (where flow goes in one direction and then reverses) and by identifying the reduction in order of convergence due to a bug in the imposition of boundary conditions. Assisting subroutines were also designed to check and post-process conservation of mass and oscillatory behavior (wiggles). Finally, the capability of the solver was also checked for a stiff source term. Overall, the above test suite not only constitutes a successful error-detection tool but also provides a thorough assessment of ADR solvers, signaling their strengths and limitations. Such information is the crux of any rigorous numerical modeling of surface/subsurface pollution transport.
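Two of the assisting checks mentioned above, global mass conservation and detection of spurious oscillations (new extrema), might look like the following Python sketch for a 1-D scalar field; the field, fluxes, and update used here are synthetic stand-ins.

import numpy as np

def mass_error(c_new, c_old, dx, dt, flux_in, flux_out, source=0.0):
    """Relative global mass-balance error over one step of a 1-D transport solve."""
    mass_new, mass_old = c_new.sum() * dx, c_old.sum() * dx
    expected = mass_old + dt * (flux_in - flux_out + source)
    return abs(mass_new - expected) / max(abs(expected), 1e-30)

def has_wiggles(c_new, c_old, tol=1e-12):
    """Flag new extrema: the updated field exceeds the old field's bounds."""
    return (c_new.max() > c_old.max() + tol) or (c_new.min() < c_old.min() - tol)

# Toy usage: a pure-advection step should create no new extrema and conserve mass.
c_old = np.exp(-0.5 * ((np.linspace(0, 1, 101) - 0.3) / 0.05) ** 2)
c_new = np.roll(c_old, 2)          # stand-in for a periodic advection update
print(mass_error(c_new, c_old, dx=0.01, dt=0.1, flux_in=0.0, flux_out=0.0))
print(has_wiggles(c_new, c_old))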
solution, multi-objective optimization is defined in terms of Pareto optimality, and the goal is to find a set of Pareto-optimal solutions. The new Parallel MOPSO with Adaptive Search-space (P-MOPSO-AS) is developed to optimally solve the pump scheduling problem. Adjusting the boundaries of the search space plays a vital role in landing the particles on the Pareto-optimal front. In this technique, the size of the search space is dynamically adjusted to ensure that the particles progressively find the Pareto-optimal front whenever needed during the search process. The best solutions obtained by the optimization strategy showed that a saving of 7.5% of the total energy cost was achieved while satisfying both the total-number-of-pump-switches and the reservoir-level-variation criteria.
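Since the approach is built on Pareto optimality, a minimal Python sketch of a Pareto-dominance filter over the three objectives discussed here (energy cost, pump switches, reservoir-level deviation) is shown below; the schedule objective values are made up for illustration.

import numpy as np

def pareto_front(points):
    """Return the indices of non-dominated rows of `points` (all objectives minimized)."""
    keep = []
    for i, p in enumerate(points):
        dominated = any(
            np.all(q <= p) and np.any(q < p) for j, q in enumerate(points) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

schedules = np.array([
    # [energy cost ($), pump switches, reservoir-level deviation (m)]
    [1200.0, 14, 0.30],
    [1100.0, 20, 0.45],
    [1150.0, 12, 0.50],
    [1300.0, 25, 0.60],   # dominated by the first row
])
print("Pareto-optimal schedule indices:", pareto_front(schedules))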
VALIDATION METHODS FOR BIOENGINEERING 10-1 VALIDATION METHODS FOR BIOENGINEERING Sunset 3&4 8:00am–10:00am Session Chair: Marc Horner, ANSYS, Inc., Evanston, IL, United States Session Co-Chair: Jagadeep Thota, University of Nevada, Las Vegas, Las Vegas, NV, United States High Resolution Numerical Simulation and Experimental Studies of Left Ventricular Hemodynamics V&V2012-6083 Trung Le, Saint Anthony Falls Lab, Minneapolis, MN, United States, Brandon Chaffins, Lucia Mirabella, Arvind Santhanakrishnan, Ajit Yoganathan, Georgia Institute of Technology, Atlanta, GA, United States, Fotis Sotiropoulos, University of Minnesota, Minneapolis, MN, United States
Trade-off Between Energy Cost, Pump Maintenance, and Reliability for a Rural Water Distribution System V&V2012-6061 Dhafar Al-Ani, Saeid Habibi, McMaster University, Hamilton, ON, Canada The main aim of this paper is to identify the payoff characteristics of energy optimization strategies through the application of a new Parallel Multi-objective Particle Swarm Optimization algorithm with Adaptive Search-space (P-MOPSO-AS), which is able to trade off the energy cost, the maintenance cost, and the reliability of a water distribution system, using the Saskatoon West network (Saskatchewan, Canada) as a case study. A new problem formulation is considered in which the electrical energy cost, the maintenance cost, and the network reliability are used to mathematically express the multiple (i.e., competing) objectives of the cost function. In this formulation, the network reliability is defined in terms of the reservoir water-level variation during the optimization period (i.e., the duration between the beginning and end of the optimization process). Moreover, the total number of pump switches, a switch being a change of the state of a pump from OFF to ON, is used as a surrogate variable for describing the pump maintenance cost. A complete hydraulic model for the Saskatoon West Water Distribution System (SW-WDS) is built using two well-known hydraulic solvers, WaterCAD and EPANET. To ensure the flexibility of obtaining optimal sets of pump operations, the system is designed and operated to satisfy system constraints, loading conditions, and target hydraulic performance requirements. This requires consideration of an extended-period type of simulation. Therefore, the schedules of pump operations are obtained over a 168-hour period (i.e., one week). The cost of a solution is composed only of the energy consumption charge ($/kWh) during a specified period, since there is no meaningful formula that can quantify the cost of pump switches and the network reliability in dollar value. Optimization strategies for water distribution systems usually tend to reduce the total operational cost by minimizing the energy consumed by the pumps (i.e., minimizing the energy cost), the total number of pump switches (i.e., minimizing the maintenance cost), and the difference in reservoir water levels between the start and end of the optimization period (i.e., maximizing the network reliability). In contrast to single-objective optimization, where the goal is to find one optimal
Recent trends in bioengineering, also supported by the FDA (Stewart et al., ASAIO J, 2009), highlight the importance of experimental validation of numerical solvers used in medicine, in order to avoid potential patient harm due to inaccurate simulations. In this study, we aim to develop a computational tool to couple medical imaging to CFD through high-resolution experiments and computations performed on an idealized model of the left ventricle (LV). Comparisons between experimental measurements and numerical simulations are reported for the flow dynamics inside the LV chamber during diastole. The experimental apparatus consists of a truncated tetrahedron representing a simplified LV chamber with a single deformable surface made of silicone. The silicone surface simulates the lateral wall of the LV. The deformation of the LV is controlled by pressurizing the fluid surrounding the LV chamber via a Vivitro Superpump (Vivitro Systems; British Columbia). The experimental model simulates physiological flow rates and pressures that occur in the LV. The three-dimensional motion of the deformable surface is tracked using high-speed cameras and reconstructed using a direct linear transformation. A LaVision two-dimensional DPIV system (LaVision GmbH; Goettingen, Germany) is used to acquire time-resolved fluid velocity measurements of the intra-ventricular flow field during diastole. The deformable wall kinematics and the resulting fluid mechanics within the LV have been characterized for diastole. Using the same ventricle model geometry, the motion of the deformable wall, and the time-varying mitral inflow as inputs, a CFD simulation is carried out to compute the full three-dimensional flow fields inside the chamber. The CURVilinear Immersed Boundary (CURVIB) solver (Ge et al., J. Comp. Physics, 2007) is used to simulate the interaction between the moving membrane and the vortex formation at the mitral orifice. The computational domain is discretized with a structured mesh of 8 million grid points. Good agreement between the experimental data and the computation has been achieved. In particular, the formation of an asymmetric vortex ring at the mitral orifice during early
diastole is observed in both the simulation as well as the DPIV flow fields. The vortex ring propagates downward towards the apex and interacts with the membrane. The vortex ring impinges on the surface of the solid LV chamber and breaks into smaller structures. The experimental and computational tools demonstrate their capabilities to further investigate the left intra-ventricular flow in realistic LV anatomies.
Amira was used to convert these images into three-dimensional models. Geomagic Qualify was used to refine the surface geometry of the virtual models. These models were then exported into file formats suitable for 3D printing and for importing into Abaqus. Physical models of the bones were printed using a powder printer. The hardness of the resulting model was similar to native bone, while the density was 87% of that of native bone. The ligaments, joints, and membranes for the physical models were molded with RTV urethanes. Material testing was performed to determine the most suitable material for each application. Material properties for native ligaments and joints were applied to the FE models. For the bones, the density in the FE model was matched to that of the physical models.
A Case Study of Model Verification and Validation in Biomedical Engineering: siRNA Infusion to the Brain V&V2012-6104 Jeff Bodner, Medtronic Corporation, Minneapolis, MN, United States, David Stiles, Medtronic Neuromodulation, Minneapolis, MN, United States
Results: Functional testing of the 20:1 physical model showed that it conducted sound over a frequency bandwidth of 0–220 Hz. Preliminary, simplified mathematical models suggest this corresponds to approximately 3,300 Hz in a native-sized ear. Assessment of the 10:1 physical model, as well as of both the 10:1 and 20:1 FE models, is ongoing.
Medtronic is working with its partner Alnylam Pharmaceuticals, with the support of the Cure Huntington’s Disease Initiative, to develop a treatment for Huntington’s Disease. The treatment involves chronic infusion of a small interfering RNA molecule (siRNA) directly into the basal ganglia of the brain using an implanted catheter and infusion pump. A computational model of siRNA infusion has been developed to understand how the molecule will be distributed in the brain tissues. The model was designed to assist researchers in determining optimal placement of the infusion catheter as well as optimal infusion rate and concentration.
Conclusion: The method developed in this study has the potential to enhance medical science and clinical practice in middle ear reconstructive surgery. Data collection and processing for the physical models have not yet been completed for comparison with the computational model. Validated Finite Element Model of an Intramedullary Nail Targeting Guide V&V2012-6228 Danny Levine, Jeff Bischoff, Kaifeng Liu, Roger K Kenyon, Zimmer, Inc., Warsaw, IN, United States
This model will be presented as a case study of the challenges that are often encountered in the verification and validation of models for biomedical engineering applications. These challenges include: the quantification of uncertainties associated with geometry construction from MRI image segmentation, unquantified uncertainties in model inputs taken from the literature and difficulties with performing precise validation experiments. In addition, validation experiments are often performed in a species different from the one of interest, a problem that is unique to this field. These challenges will be discussed in the context of the existing ASME V&V20 framework. The authors will discuss the qualitative validation methodology that they utilized in lieu of the more standard quantitative approach currently in the standards.
Intramedullary (IM) nails are commonly used to support healing of bone fractures in femur, tibia and humerus. Locking nail designs have multiple holes at each end through which screws are passed, thus engaging both nail and bone to align the fragments and to stabilize fractures axially and in torsion. Accurately placing the screws such that they pass through bone and align with the nail holes is an important requirement for these treatments and can be facilitated by a targeting guide, which is attached to the IM nail, and serves as both an insertion device for the nail and an aiming device for screws. The guide assembly stiffness must be adequate to prevent the drill sleeve tip from being deflected from its proper position (skiving) due to oblique bone and soft tissue contact and handling forces.
Physical Validation of Scaled Up Finite Element Models of the Human Middle Ear V&V2012-6233 Alex Bell, Ravi Samy, Vasile Nistor, University of Cincinnati, Cincinnati, OH, United States
Finite element analysis was used to support the development of a femoral IM nail targeting guide assembly by examining drill sleeve tip deflection under applied loads parallel and perpendicular to the nail, using clinically available predicate targeting guides to define acceptable reference stiffness values. A corresponding experimental study demonstrated the ability of the finite element modeling technique employed to properly rank order stiffness values for the new design and the predicate designs, thus validating the FE work.
Background: The middle ear includes the tympanic membrane, malleus, incus, stapes, oval window, and associated suspensory ligaments and joints. Together, these structures are the body's mechanism for overcoming the impedance mismatch between air and cochlear perilymph, allowing transfer of airborne vibrations to the inner ear. Functional middle ear models are required at a variety of scales for different applications. For practicing surgical methods, the structures must be modeled at a 1:1 anatomical scale. For the testing of prostheses and a clearer visualization of the intricate physiological movements of the structures, scales of 10:1 or 20:1 are used. As the dimensions of the structures are scaled up, the natural frequency range of the middle ear system scales down accordingly.
Finite element models were analyzed for 5 designs using ANSYS Revision 12.1 (ANSYS, Inc., Canonsburg, PA). Load magnitudes applied in the finite element study were determined through prior laboratory testing designed to deflect a drill sleeve tip sufficiently to miss the target nail hole for an early design iteration of the new guide. Verification of model stiffness predictions was performed through mesh refinement using a displacement convergence criterion of 1%.
Objectives: The objective of the present study was to use anatomically accurate, functional physical models of a human middle ear at 10:1 and 20:1 scale to validate the functional frequency range for each of these predicted by finite element (FE) analysis conducted of virtual models at the same scales.
Laboratory test fixtures were designed to support the targeting guide assemblies and apply loading in a manner comparable to that performed in FE models. Device stiffness was defined by the experimental load/deflection curve slope.
Methods: MicroCT imaging of a cadaveric human temporal bone yielded images of middle ear structures with 53µm voxel resolution.
Model validation focused on the prediction of guide stiffness, which was deemed to be the meaningful and quantifiable metric of its intended function. The devices analyzed are of various sizes and shapes and comprise a range of different materials, but the modeling approach proved sufficient to capture the stiffness trends measured experimentally. Stiffness values predicted by FE are similar in magnitude to those determined experimentally, although the FE values are higher than the experimental values for some models.
12-4 VALIDATION METHODS: PART 4 Wilshire A 8:00am–10:00am Session Chair: Kevin Dowding, Sandia National Laboratories, Albuquerque, NM, United States Session Co-Chair: Pattabhi Sitaram, University of Mount Union, Alliance, OH, United States Validation of Finite Element Models of Transportation Packages for Nuclear and Radioactive Materials V&V2012-6187 Zenghu Han, Argonne National Lab, Argonne, IL, United States
Assessing Uncertainty Contributions in the Model Validation Process V&V2012-6141 David Riha, John McFarland, Todd Bredbenner, Southwest Research Institute, San Antonio, TX, United States, Stephen Harris, University of Texas Health Science Center at San Antonio, San Antonio, TX, United States
Transportation packages for nuclear and radioactive materials must be designed and certified to meet stringent technical and regulatory requirements to ensure safety, public health, and protection of the environment. Packaging design and certification is generally based on field testing and numerical simulations whereby a package is subjected to sequential tests of a 30-ft free drop, puncture, a 30-minute all-engulfing fire at 800 °C, and water immersion, as prescribed by the Title 10 Code of Federal Regulations Part 71.73 hypothetical accident conditions (HAC). A certified package should maintain structural integrity for containment of radioactivity, provide shielding for radiation protection, and control criticality for fissile contents following the hypothetical accident sequence of tests. Nonlinear dynamic finite element computer codes, such as ABAQUS and LS-DYNA, have been used for evaluating the structural performance of transportation packages for nuclear and radioactive materials. Validation of the finite-element model is essential to lend confidence to the numerical simulations. Instrumented field testing and properly documented test data on acceleration, strain, and extent of plastic deformation are required for model validation before the model is used for comprehensive evaluation of the package's performance. In this paper, we will discuss the validation of the ABAQUS finite-element models for three transportation packages for nuclear and radioactive materials: Models 9516, 9979, and ES3100. Each finite-element model was used to simulate the HAC field tests, and the simulation results were compared to the measured data to validate the model. The validated models were then used to investigate the impact behavior of the critical packaging components, such as the impact limiter and bolted closure, and to evaluate the structural performance of the packages in different drop orientations, including secondary impact or slapdown, to assess the most unfavorable drop orientation and the worst test sequence that would challenge the structural integrity of the package. The paper also presents justifications for making more transient measurements and using additional instrumentation for measuring acceleration, strain, and deformation of the package internals so that a more comprehensive model validation can be performed. Experience has shown that the accelerometer data are useful for determining the drop orientation that produces the highest g-load on package components, and the strain gauge data can be used to validate the failure criteria of packaging materials, while the deformation data for the package internals shed light on whether the criticality function is maintained. Several examples are extracted from the literature to illustrate that obtaining additional test data is feasible and enables validation of finite-element models, which plays a key role in the certification of the design of transportation packages for nuclear and radioactive materials.
Physics-based models are now routinely used to predict the behavior of complex structural and mechanical systems. These models may be highly complex, including all features, materials, and interfaces, or they may be simplified models using a subset of the actual system. These models require some level of validation when the predictions are used to make engineering decisions. Model validation is usually performed by comparing model predictions to experimental results using some quantitative measure. To accurately compare the model predictions to experimental results in the model validation process, the uncertainties in the experiment must be included in the model predictions. These uncertainties may include variations in material properties and geometry in the test article and uncertainties in the experiment, such as boundary conditions and load magnitudes. There are also other uncertainties in these physics-based models, such as material model form (e.g., linear or nonlinear), numerical algorithms (e.g., mesh discretization), and data uncertainty (e.g., limited experimental data to characterize the variations of a material parameter). Most predictive models consist of different components such as material models, boundary conditions, and loads. Some of these model components are well understood and others may contain significant uncertainty. The model validation effort is usually an evolutionary process requiring model improvements to meet accuracy and precision goals. The improvement of each model component requires a different level of effort and makes a different contribution to the overall model predictive uncertainty. The ability to quantify the contribution of the uncertainty of each model component to the model uncertainty provides information about the best allocation of resources to improve the precision and accuracy of the model. Variance-based global sensitivity analysis provides a powerful approach to understanding the importance of model input variables, or groups of variables, in driving model output variation. However, input variance is often attributable to both aleatory (inherent variation) and epistemic (lack of knowledge/data) uncertainties. Understanding the role of these different types of uncertainties can have important decision-making implications during the model validation process. A validated model is being developed for a mouse ulna to predict strains for an in vivo loading protocol. Uncertainties in boundary conditions, geometry, and material properties are included in the model. A variance-based global sensitivity analysis is used to guide decisions about additional experiments that can reduce the uncertainty in the model predictions. This paper will describe this model validation effort with a focus on the uncertainty quantification methods and results.
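A minimal sketch of the variance-based (Sobol) first-order sensitivity estimation referred to above, applied to a toy three-input model rather than the actual mouse-ulna model, is given below.

import numpy as np

# Pick-freeze (Saltelli-style) estimate of first-order Sobol indices for a toy model.
rng = np.random.default_rng(0)
d, n = 3, 20000

def model(x):
    # Toy response: x0 dominates, x1 matters less, x2 is nearly inert.
    return 4.0 * x[:, 0] + 1.0 * x[:, 1] ** 2 + 0.1 * np.sin(2 * np.pi * x[:, 2])

A = rng.uniform(size=(n, d))
B = rng.uniform(size=(n, d))
fA, fB = model(A), model(B)
var = np.var(np.concatenate([fA, fB]))

for i in range(d):
    ABi = A.copy()
    ABi[:, i] = B[:, i]                          # swap only input x_i
    S1 = np.mean(fB * (model(ABi) - fA)) / var   # Saltelli-type first-order estimator
    print(f"first-order index S{i} = {S1:.2f}")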
A Fractional-Step h-Adaptive Finite Element Method Validation for Incompressible Flow Regimes V&V2012-6018 David Carrington, Los Alamos National Laboratory, Los Alamos, NM, United States, Darrell W. Pepper, UNLV, Academy, CO, United States
conditions. This approach is particularly useful for simulations of homogeneous charge compression ignition (HCCI), a sparkless engine operating mode with potential for significant fuel economy improvement. To date, use of detailed gasoline mechanisms has had mixed success in predicting test results for modest boost pressures and low equivalence ratio operation of HCCI. Of particular interest is the low temperature combustion (LTC) occurring during the compression stroke in the neighborhood of 25 to 10 degrees before top-dead-center. Surrogate kinetic mechanisms have had a hard time matching both this low temperature heat release and the larger main heat release that occurs through top-dead-center.
The validation of a new Predictor-Corrector Split (PCS) projection method combining h-adaptive mesh refinement with a finite element method (FEM) for combustion modeling is presented in this paper. The PCS system advances the accuracy and range of applicability of the KIVA combustion model and software. In fact, the algorithm, combined with the current KIVA spray and chemistry models and a moving-parts algorithm now in development, will form the next generation of KIVA software from Los Alamos National Laboratory.
Surrogate kinetic mechanisms can take many forms. We use a surrogate that is sufficiently compact for use with multi-zone (typically from 4 to 40 zones) engine simulations that realistically compute cylinder compressive work, heat loss, combustion driven mixing and chemical heat release. Current highly detailed mechanisms have more than 1300 species and more than 6000 reaction steps. Such detailed mechanisms then have as many as 7000 different kinetic parameters that influence the simulation results. These parameters are estimated from a variety of laboratory scale tests, fundamental theory and other methods that are at times well matched to engine conditions and at other times poorly matched. Further, LTC occurs at significantly lower pressure and temperature than main combustion so that current mechanisms may be overly influenced by main combustion conditions.
This paper describes the PCS h-adaptive FEM model for turbulent reactive flow spanning all velocity regimes and fluids. The method is applicable to Newtonian and non-Newtonian fluids and also to incompressible solids and fluid-structure interaction problems. The method requires a minimal amount of computational effort compared to fully resolved grids at the same accuracy. The solver with h-adaption is validated here for incompressible benchmark problems in the subsonic flow regime as follows: 1) the 2-D backward-facing step, 2) the 2-D driven cavity, and 3) 2-D natural convection in a differentially heated cavity. The PCS formulation uses a Petrov-Galerkin (P-G) weighting for advection (similar to Streamline Upwind Petrov-Galerkin (SUPG)). The method is particularly well suited to changes in implicitness, from nearly implicit to fully explicit. The latter mode is easily applied to the newest computers and to parallel computing using one or a great many multi-core processors. In fact, the explicit mode is easily parallelized for multi-core processors and has been demonstrated to have super-linear scaling with CBS stabilization.
We have performed engine simulations using our base surrogate kinetic mechanism and compared the results to well-instrumented experiments performed on a research engine running in HCCI mode at a compression ratio of 16.7 and an engine speed of 1200 rpm. The intake pressure was swept from 60 to 190 kPa with the equivalence ratio fixed at 0.2, and the equivalence ratio was swept from 0.08 to 0.30 with the intake pressure fixed at 100 kPa. Our simulations did not adequately capture the character of the dependence of combustion timing on intake pressure. Improving our kinetic mechanism's ability to capture the LTC heat release while retaining fidelity to the main combustion heat release is required. We have used a variety of sensitivity analysis techniques to this end.
The discretization is a conservative system for the compressible and incompressible momentum transport equations along with other transport equations for reactive flow. Error measurement allows the grid to adjust, increasing the spatial accuracy and bringing the error under a specified amount. The conservative form also allows for the determination of the exact locations of shocks. The h-adaptive method, along with conservative P-G upwinding, provides good shock capturing. We also employ a gradient-method shock-capturing scheme for the supersonic/transonic flow regimes.
Since the base surrogate model contains so many parameters, we wish to identify a subset of these parameters which most strongly impact the output attributes of the engine simulation. This screening study was done in two phases. In phase one, we applied the Morris method to find the one hundred most important kinetic parameters. In phase two, additional effort was applied to reduce the number of parameters to about 20. With the results from phase two, we will vary the values of the top twenty kinetic parameters to calibrate the model to a subset of the test data. This presentation will focus on the results of the screening study. (LLNL-ABS-521838)
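A simplified radial one-at-a-time version of the Morris elementary-effects screening used in phase one, applied to a toy model rather than the actual surrogate kinetic mechanism, might look like the following sketch.

import numpy as np

rng = np.random.default_rng(1)
d, r, delta = 10, 50, 0.1          # parameters, repetitions, perturbation size

def model(x):
    # Toy "engine response": only a handful of the inputs really matter.
    return 5.0 * x[0] + 2.0 * x[1] * x[2] + 0.01 * x[3:].sum()

ee = np.zeros((r, d))
for k in range(r):
    base = rng.uniform(0.0, 1.0 - delta, size=d)
    f0 = model(base)
    for i in range(d):
        x = base.copy()
        x[i] += delta                       # perturb one parameter at a time
        ee[k, i] = (model(x) - f0) / delta  # elementary effect of parameter i

mu_star, sigma = np.abs(ee).mean(axis=0), ee.std(axis=0)   # importance / nonlinearity
order = np.argsort(mu_star)[::-1]
for i in order[:4]:
    print(f"parameter {i}: mu* = {mu_star[i]:.2f}, sigma = {sigma[i]:.2f}")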
The method described in this paper is generally semi-implicit but can also be run in explicit mode. In semi-implicit mode, the pressure treatment ranges from implicit to explicit. The algorithm uses equal-order approximation for the dependent variables, similar to much of our research in the field. The solution of the turbulent Navier-Stokes equations is similar to that of previous work, using the k-ω model. The system solves the turbulent Navier-Stokes equations in a multicomponent formulation as described by Carrington [1].
Liquid Column Breaking Experiments for Validating Two-Phase Flow Simulation V&V2012-6195 Tomoaki Kunugi, Keisuke Hara, Zensaku Kawara, Kyoto University, Kyoto, Japan, Taku Nagatake, Japan Atomic Energy Agency, Tokai-Mura, Ibaraki, Japan
[1] Carrington, D.B., (2011) A Fractional step hp-adaptive finite element method for turbulent reactive flow, Los Alamos National Laboratory Report, LA-UR-11-00466.
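The error-driven refinement loop outlined above can be illustrated with a deliberately simplified sketch: a 1-D mesh is repeatedly bisected wherever an interpolation-error indicator exceeds a tolerance. The model problem, the indicator, and names such as h_adapt and target_tol are illustrative assumptions, not the PCS solver itself.

```python
# Minimal sketch of an error-driven h-adaption loop, assuming a 1-D model
# problem where the "solution" is interpolation of a known function with a
# sharp layer standing in for a front or shock.
import numpy as np

def indicator(x, f):
    """Element-wise interpolation-error indicator: |f(mid) - linear interpolant at mid|."""
    mid = 0.5 * (x[:-1] + x[1:])
    lin = 0.5 * (f(x[:-1]) + f(x[1:]))
    return np.abs(f(mid) - lin)

def h_adapt(f, x, target_tol=1e-4, max_passes=20):
    for _ in range(max_passes):
        eta = indicator(x, f)
        if eta.max() < target_tol:               # global stopping criterion
            break
        bad = np.where(eta >= target_tol)[0]     # elements to split
        new_nodes = 0.5 * (x[bad] + x[bad + 1])  # bisect flagged elements
        x = np.sort(np.concatenate([x, new_nodes]))
    return x

f = lambda x: np.tanh(20.0 * (x - 0.5))          # sharp layer in [0, 1]
mesh = h_adapt(f, np.linspace(0.0, 1.0, 11))
print(len(mesh), "nodes; smallest element:", np.diff(mesh).min())
```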
Sensitivity Analysis of Large System of Chemical Kinetic Parameters for Engine Combustion Simulation V&V2012-6107 Shang-Rou Hsieh, Mark Havstad, Dan Flowers, Guillaume Petitpas, Lawrence Livermore National Laboratory, Livermore, CA, United States, Josep Sanz Argent, Universidad de Castilla-La Mancha, Ciudad Real, Ciudad Real, Spain

Surrogate gasoline combustion kinetic mechanisms are used in engine simulations to help understand performance changes with equivalence ratio, boost pressure, intake temperature, and other operating parameters.

We have performed engine simulations using our base surrogate kinetic mechanism and compared the results to well-instrumented experiments performed on a research engine running in HCCI mode at a compression ratio of 16.7 and an engine speed of 1200 rpm. Intake pressure was swept from 60 to 190 kPa with the equivalence ratio fixed at 0.2, and the equivalence ratio was swept from 0.08 to 0.30 with intake pressure fixed at 100 kPa. Our simulations did not adequately capture the character of the dependence of combustion timing on intake pressure. Improving our kinetic mechanism's ability to capture the LTC heat release while retaining fidelity to the main combustion heat release is required. We have used a variety of sensitivity analysis techniques to this end.

Since the base surrogate model contains so many parameters, we wish to identify a subset of the parameters that most strongly impact the output attributes of the engine simulation. This screening study was done in two phases. In phase one, we applied the Morris method to find the one hundred most important kinetic parameters. In phase two, additional effort was applied to reduce the number of parameters to about 20. With the results from phase two, we will vary the values of the top twenty kinetic parameters to calibrate the model to a subset of the test data. This presentation will focus on the results of the screening study. (LLNL-ABS-521838)
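As a rough illustration of the elementary-effects style of screening that the Morris method provides, the sketch below ranks the parameters of a stand-in scalar model by mean absolute elementary effect. The placeholder model combustion_timing, the parameter counts, and the simplified one-at-a-time design are assumptions for illustration, not the LLNL mechanism, engine model, or screening code.

```python
# Simplified sketch of elementary-effects (Morris-style) screening for ranking
# parameters by their influence on a scalar output such as combustion timing.
import numpy as np

rng = np.random.default_rng(0)
n_params, n_trajectories, delta = 50, 40, 0.1

def combustion_timing(theta):
    """Placeholder model: a weighted nonlinear response to normalized parameters."""
    w = np.linspace(1.0, 0.0, theta.size) ** 3
    return float(np.sum(w * theta) + 0.5 * theta[0] * theta[1])

ee = np.zeros((n_trajectories, n_params))
for t in range(n_trajectories):
    x = rng.uniform(0.0, 1.0 - delta, n_params)   # random base point in [0,1]^d
    f0 = combustion_timing(x)
    for i in rng.permutation(n_params):           # one-at-a-time perturbations
        x_pert = x.copy()
        x_pert[i] += delta
        ee[t, i] = (combustion_timing(x_pert) - f0) / delta

mu_star = np.abs(ee).mean(axis=0)                 # mean |elementary effect|
sigma = ee.std(axis=0)                            # proxy for interactions/nonlinearity
top = np.argsort(mu_star)[::-1][:20]              # candidate "top 20" parameters
print("most influential parameter indices:", top)
```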
Liquid Column Breaking Experiments for Validating Two-Phase Flow Simulation V&V2012-6195 Tomoaki Kunugi, Keisuke Hara, Zensaku Kawara, Kyoto University, Kyoto, Japan, Taku Nagatake, Japan Atomic Energy Agency, Tokai-Mura, Ibaraki, Japan

Numerical simulation has great potential for solving very complex problems that cannot be solved analytically. When a numerical simulation is carried out, verification and validation and uncertainty quantification are, in general, very important. For validation in particular, it is essential to compare the results obtained by the numerical simulation with experimental ones.

A liquid column breaking (so-called dam-break) problem is one of the well-known benchmark problems for validating various numerical methods for multiphase flows, following the first work done by Martin and Moyce in 1952. In their experiments, a liquid column was built in a container and the collapse of the liquid column was observed. The experimental results have been used to validate many numerical methods for a long time. However, it has been reported that there were some discrepancies between the numerical simulations and the experiment, because the partition paper or plate supporting the liquid column strongly affected the liquid flow behavior after the partition was removed, whereas the partition movement was not considered in the numerical simulations. These may be the main reasons for the discrepancy between them.

By using ultra-fast digital video cameras, very short time-frame and dense images can be obtained. These images could allow us to establish a very detailed database even for two-phase flows. Therefore, a revival of the experiments on this dam-break problem will be useful for validating two-phase flow numerical methods. In this study, experiments on the dam-break problem using an ultra-fast digital video camera have been conducted for two different-size test sections. The effects of the pulling-up speed of the partition plate and the liquid column size on the liquid front position are investigated. As a result of the experiments, the free-surface shape at the beginning of the partition plate pull-up is very complicated because of liquid leakage from underneath the partition plate. To avoid this problem, the free-surface shape at the beginning has been modified for tracking the water front. Another approach to validating the numerical simulation is to track the evolution of the liquid mass center rather than the water front. The 3-D numerical results are also compared with the experimental ones in this procedure. Finally, whether the water front and the liquid mass center are suitable parameters for validating numerical simulations of the dam-break problem is discussed.
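A minimal sketch of the two metrics compared above, computed from a synthetic liquid volume-fraction field on a uniform 2-D grid. The grid dimensions and the initial column below are invented for illustration and do not correspond to the test sections used in the experiments.

```python
# Front position and liquid mass center from a 2-D volume fraction field
# alpha(x, z) on a uniform grid (synthetic initial dam-break column).
import numpy as np

nx, nz, dx, dz = 200, 100, 0.005, 0.005           # 1.0 m x 0.5 m domain (assumed)
x = (np.arange(nx) + 0.5) * dx
z = (np.arange(nz) + 0.5) * dz
X, Z = np.meshgrid(x, z, indexing="ij")
alpha = ((X < 0.25) & (Z < 0.4)).astype(float)    # initial liquid column

def front_position(alpha, x, threshold=0.5):
    """x-location of the leading edge along the bottom row of cells."""
    wet = np.where(alpha[:, 0] >= threshold)[0]
    return x[wet].max() if wet.size else 0.0

def mass_center(alpha, X, Z):
    """Liquid mass center, assuming constant density so mass scales with volume fraction."""
    m = alpha.sum()
    return (alpha * X).sum() / m, (alpha * Z).sum() / m

print("front x =", front_position(alpha, x))
print("mass center (x, z) =", mass_center(alpha, X, Z))
```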
PIV Accuracy and Extending the Field of View V&V2012-6196 Steve Lomperski, Craig Gerardi, Dave Pointer, Argonne National Laboratory, Argonne, IL, United States

The versatility of particle image velocimetry (PIV) has fostered its spread into numerous fluid mechanics applications both large and small in scale. Our particular efforts involve benchmarking computational tools that model fluid flow in nuclear reactor power systems. We have special interest in data that capture a large range of spatial and temporal flow scales for validation of so-called multi-scale tools. As a result, a primary objective is to measure a flow field with the largest possible field of view (FOV). There is concern, however, that more than just spatial resolution is lost as the FOV is increased. In this paper we consider a common question of PIV users (how large a flow field can I measure?) by examining variations in PIV measurement accuracy with FOV.

There are many analytical studies that consider sources of PIV error. While useful as guidelines for optimizing measurement systems and analysis, they are difficult to apply in quantifying errors associated with a collection of actual PIV hardware. Unfortunately, studies detailing the accuracy limits of real systems for various hardware configurations and flow types are relatively uncommon. We present a collection of PIV measurements of an air flow field in the wake of a 10 mm tube used as a vortex generator. The main components of the PIV system are a Litron LDY303 Nd:YLF dual-cavity pulsed laser and an IDT Y7 camera with a 1920 × 1080 array of 7.24 × 7.24 μm pixels. Measurement accuracy is assessed by comparing results to benchmark data obtained with a laser Doppler velocimeter. This study provides a concrete example of some of the tradeoffs that can be involved in trying to provide analysts with data they find suitable for code validation.
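One simple facet of that tradeoff can be sketched with a back-of-envelope calculation: at fixed sensor resolution, the object-plane velocity uncertainty implied by a given sub-pixel correlation accuracy grows linearly with the FOV. The 0.1-pixel accuracy and the pulse separation below are generic assumed values, not results from the study.

```python
# Rough estimate of how field of view trades against velocity resolution for a
# camera with 1920 pixels across the image (as in the system described above).
n_px_across = 1920
subpixel_accuracy_px = 0.1        # typical cross-correlation accuracy (assumed)

def velocity_uncertainty(fov_width_m, dt_s):
    """Object-plane velocity uncertainty from sub-pixel displacement error."""
    metres_per_pixel = fov_width_m / n_px_across
    return subpixel_accuracy_px * metres_per_pixel / dt_s

for fov in (0.05, 0.10, 0.25, 0.50):              # FOV widths in metres
    du = velocity_uncertainty(fov, dt_s=1e-3)     # 1 ms pulse separation (assumed)
    print(f"FOV {fov:5.2f} m -> velocity uncertainty ~ {du*1000:6.2f} mm/s")
```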
Chemical Combustion Simulation and Proof by Calorimetric Tests V&V2012-6082 Radu D. Rugescu, Ciprian Dumitrache, University “Politehnica” of Bucharest, Bucharest, Bucharest, Romania

A series of authors have drawn attention to unexplained deviations in the chemical composition of combustion products from the expected chemical equilibrium at a range of temperatures. Recent reports in this area come from a team of researchers at San Diego University, California. Ordinarily, experimental errors alone are blamed for this misfit in the recorded data, with no known in-depth investigation of the chemistry of the combustion gases. None of the teams of researchers from the chemical kinetics group at Stanford University has yet investigated the chemistry of constant-volume combustion under excess-fuel conditions. Most of the experimental work is directed at constant-pressure, excess-air combustion of different fuels, where some deviations, at a much smaller scale, were also observed.

The observation that the chemical equilibrium between the combustion products of chemical propellant samples within static calorimeters unexpectedly freezes at high temperature is proved through a general numerical simulation of the constant-volume cooling with chemical reactions between the gaseous products. The focus is on the commonly used O-N-C-H elemental system, appropriate for the majority of fossil and chemical propellants in use. A proprietary, direct linearization method of thermochemical computation is developed that follows any chemical reaction in equilibrium with high convergence and accuracy. By comparing the factual data from the experiments with numerical simulations of the process, the observed chemical freezing within calorimeters is proved.

The process of combustion of propellant samples within a small constant-volume calorimeter is considered a typically slow evolution. The motion of the developed gas mixture is fairly limited to the closed vessel space, and no fast-moving gases are ordinarily considered to appear. After the inner volume is rapidly filled with the gas mixture at high temperature and pressure, a very slow cooling of the contents follows, almost down to the temperature of the surrounding water, which remains at room temperature. There is no apparent reason to consider that chemical equilibrium is not maintained during this process.

It was observed, however, over frequent calorimeter tests and chemical composition analyses, that the composition of the residual gas from the calorimeter does not follow chemical equilibrium down to the final room temperature during cooling, and instead preserves the composition of an unexpectedly high-temperature equilibrium. This striking observation led to intensive numerical simulations of the cooling process, showing that the composition of the residual gases in the vessel remains frozen at a temperature as high as 1650 K after hours of cooling to the final room temperature. Repeated tests with various propellant combinations, from methilol and gaseous oxygen to solid propellants and explosives, have shown identical behavior. A theory is proposed to reconcile the numerical computation with the physical facts.
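For readers who want to see what "following equilibrium during cooling" would predict for such a closed vessel, the sketch below uses Cantera with the GRI-3.0 mechanism as a generic O-N-C-H stand-in (the authors use their own proprietary direct-linearization code); the fuel-rich methane/oxygen charge is also an assumption. The mixture is burned adiabatically at constant volume and then re-equilibrated at successively lower temperatures at fixed density.

```python
# Equilibrium baseline for constant-volume cooling of combustion products,
# using Cantera/GRI-3.0 as a generic O-N-C-H surrogate (assumed mixture).
import cantera as ct

gas = ct.Solution("gri30.yaml")
gas.TPX = 300.0, ct.one_atm, "CH4:1.5, O2:2.0"   # fuel-rich charge (assumed)
gas.equilibrate("UV")                             # constant-volume, adiabatic burn
rho_burned = gas.density                          # closed vessel: density is fixed

print(f"{'T [K]':>8} {'X_CO':>10} {'X_H2':>10}")
for T in (gas.T, 2500.0, 2000.0, 1650.0, 1000.0, 400.0):
    gas.TD = T, rho_burned                        # cool at constant volume ...
    gas.equilibrate("TV")                         # ... while staying at equilibrium
    print(f"{T:8.0f} {gas['CO'].X[0]:10.3e} {gas['H2'].X[0]:10.3e}")
```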
Solar Draught Tower Simulation and Proof by Numerical-Analytical Methods V&V2012-6191 Radu D. Rugescu, Ciprian Dumitrache, University “Politehnica” of Bucharest, Bucharest, Bucharest, Romania

The acceleration of heated air in tall towers through gravity draught is a technology with a long history of application. More recently, air heating by solar energy has emerged, its most successful form being the mirror array for solar light concentration (mirror-gravity towers, MGT). In an MGT, a short zone of external heating at the tower base is followed by the adiabatic gravity draught within the tall tower. In order to efficiently simulate the MGT accelerator, these different conditions call for correspondingly different equations, with a steep jump from the non-isentropic heating chamber to the unsteady tower flow. A dedicated 1-D numerical code was developed that accounts for the essentially multidimensional character of the small-scale flow that develops in the solar gravity tower. It has been demonstrated to offer useful computational time savings and convenient numerical handling. The essentially 1-D character of the motion is physically motivated for all slender configurations of tall solar towers. When a turbine is inserted into the airflow to extract mechanical energy from the gravitationally boosted air for energy production, extra local discontinuities develop within the 1-D model. After the unsteady start of the flow, a steady-state regime establishes itself. A first proof of the numerical results is to compare the steady-state motion of the air inside the tower, given by the unsteady numerical code, with a steady-state integral model, and this is the scope of the present work.

The dynamic behavior begins when the top lid of the tower is removed and the buoyant force boosts the air upwards. Due to the local speed, the inner pressure at the upper air exit drops according to Bernoulli's law. A complex, at least two-dimensional, airflow grows at the upper exit, with an appreciable velocity field all around the duct edge, but the one-dimensional approximation forces us to consider a single inner pressure over the whole exit area. The main stack problem is to decide at which pressure level the exit settles, and several hypotheses are considered in that respect. When the turbine is introduced next to the solar receiver, heat from the flowing air is transformed into mechanical energy at the cost of supplementary air rarefaction and cooling in the turbine. The best energy extraction will take place when the air entirely recovers the ambient temperature it had before the solar heating, although this remains for the moment rather hypothetical. Results from the comparison of the two computational methods are presented in the conclusion of the paper.
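As a sanity check of the kind the steady-state integral model provides, the ideal draught velocity of a heated column can be estimated from the classical stack-effect relation v = sqrt(2 g H (T_hot/T_amb - 1)), neglecting friction and turbine extraction. The tower heights and temperature rise below are illustrative values, not the configuration studied in the paper.

```python
# Ideal (frictionless, no-turbine) draught velocity of a heated air column.
import math

def ideal_draught_velocity(height_m, t_hot_k, t_amb_k, g=9.81):
    """Ideal gas, equal-pressure columns; losses and turbine extraction neglected."""
    return math.sqrt(2.0 * g * height_m * (t_hot_k / t_amb_k - 1.0))

for H in (100.0, 500.0, 1000.0):
    v = ideal_draught_velocity(H, t_hot_k=320.0, t_amb_k=293.0)
    print(f"H = {H:6.0f} m  ->  v_ideal = {v:5.1f} m/s")
```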
VERIFICATION METHODS 13-1 VERIFICATION METHODS Wilshire B 8:00am–10:00am Session Chair: Christopher J. Roy, Virginia Tech, Blacksburg, VA, United States Session Co-Chair: David Moorcroft, Federal Aviation Administration, Oklahoma City, OK, United States

Quantification of Numerical Discretization Error under Severe Uncertainty V&V2012-6035 William Rider, James Kamm, Sandia National Laboratories, Albuquerque, NM, United States

Quantification of the uncertainty associated with the truncation error implicit in the numerical solution of discretized differential equations remains an outstanding and important problem in computational engineering and science. Such studies comprise one element of a complete evaluation of the development and use of simulation codes, entailing verification, validation, and uncertainty quantification, as described, e.g., by Oberkampf et al. [2] or Roy and Oberkampf [3]. Richardson extrapolation is a common approach to characterizing the uncertainty associated with discretization error effects for numerical simulations. Various approaches based on Richardson extrapolation have been broadly applied to cases where computed solutions on sufficiently many grids are available to completely evaluate an extrapolated solution. Implicit in such analyses is that there is numerical evidence to justify the assumption that the computed solutions are in the domain of asymptotic convergence, i.e., that the mesh discretization is sufficiently fine for the assumption of some power-law dependence of the discretization error as a function of the mesh length scale to be valid. We focus instead on the all-too-common, extremely uncertain case where only two calculations are available. Consequently, there is no direct evidence that these solutions are in the domain of asymptotic convergence, and the typical power-law dependence of the discretization error is mathematically under-determined. Although this situation is unsatisfying theoretically, it nonetheless corresponds to the practical reality commonly faced in many engineering-scale simulations of complex, multi-physics phenomena, for which high-consequence decisions will be made. Because of the limited amount of information available in this information-poor case, the usual Richardson extrapolation analysis cannot be rigorously justified, much less directly utilized.

Here, we investigate this problem using several novel approaches: (1) an optimization framework that renders the problem solvable, providing a family of solutions based on rationally chosen objective functions, and (2) the application of info-gap theory [1], which provides a method for supporting model-based decisions under severe uncertainty. We describe how these analyses work on a sequence of problems, beginning with idealized test cases and scaling up to an engineering-scale problem involving metrics derived from the output of a multi-dimensional, multi-physics simulation code.

References: [1] Y. Ben-Haim, Info-Gap Decision Theory: Decisions under Severe Uncertainty, Elsevier, Oxford, UK, 2006. [2] W. L. Oberkampf, T. G. Trucano, and C. Hirsch, Verification, validation, and predictive capability in computational engineering and physics, Applied Mechanics Reviews, 57:345-384, 2004. [3] C. J. Roy and W. L. Oberkampf, A comprehensive framework for verification, validation and uncertainty quantification in scientific computing, Computer Methods in Applied Mechanics and Engineering, 200:2131-2144, 2011.
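The two-grid predicament described above is easy to see in a few lines: the standard Richardson-type estimate needs an observed order that two solutions cannot determine, so sweeping the assumed order over a plausible range yields a family of error estimates rather than a single value. The data of interest and refinement ratio below are invented, and the sweep only illustrates the under-determination; it is not the optimization or info-gap analysis developed in the paper.

```python
# Two-grid Richardson-type error estimate as a function of the assumed order p:
# e_fine = f_fine - f_exact ~ (f_coarse - f_fine) / (r**p - 1).
f_coarse, f_fine, r = 1.085, 1.062, 2.0   # illustrative data of interest, ratio 2

def fine_grid_error(p):
    """Estimated discretization error in f_fine, assuming convergence order p."""
    return (f_coarse - f_fine) / (r**p - 1.0)

for p in (0.5, 1.0, 1.5, 2.0, 3.0):
    est = fine_grid_error(p)
    print(f"assumed order p = {p:3.1f} -> estimated error = {est:+.4f} "
          f"(extrapolated value ~ {f_fine - est:.4f})")
```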
Verification of Shell Elements by Eigenanalysis of Vibration Problems V&V2012-6046 Takahiro Yamada, Kazumi Matsui, Yokohama National University, Yokohama, Japan, Kenjiro Terada, Tohoku University, Sendai, Japan, Masami Sato, Takaya Kobayashi, Mechanical Design & Analysis Corporation, Chofu, Tokyo, Japan, Makoto Tsukino, Keizo Ishii, Quint Corporation, Fuchu, Japan, Sayaka Endoh, Yasuyoshi Umezu, Takahiko Miyachi, JSOL Corporation, Tokyo, Japan

Thin-walled structures are widely used in various engineering fields. In their stress analyses, the finite element method with shell elements is generally employed, and commercial finite element codes always provide shell elements. There are several approaches to deriving shell elements, and more than one type of shell element can be implemented in a single code. In most commercial codes, the degenerated shell element, which is derived from the three-dimensional continuum by shell-like kinematic assumptions and parameterization, is implemented. In some codes, the discrete Kirchhoff element, with the Kirchhoff assumption imposed in a discrete way, is also provided. These shell elements give reasonable results within the scope of their application, which is not clearly defined. For general users, it is difficult to choose a type of shell element and construct an appropriate computational model. Therefore, the numerical properties of the shell elements implemented in commercial codes need to be evaluated and disclosed. In this work, the numerical properties of typical shell elements in the commercial codes MSC Nastran, Abaqus, ADINA, ANSYS, and LS-DYNA are evaluated using representative problems.

In the literature on the development of shell elements, typical benchmark problems such as the Scordelis-Lo roof, the hemispherical pinched shell, and the pinched cylinder with diaphragm are used. Most of them are linear static problems designed to determine whether locking phenomena are avoided. In this work, eigenanalysis of vibration problems is employed as a series of benchmark problems to evaluate the performance of elements in practical problems. Specifically, free vibration problems of cylindrical and spherical shells that have analytical solutions are adopted, and numerical experiments are carried out with the commercial codes mentioned above. The obtained results characterize the shell elements and give an indication of the resolution associated with mesh size.

It is remarked that this research was carried out as one of the activities of the Japan Association for Nonlinear CAE (JANCAE), a non-profit organization established in 2001 to promote professional development in the practical use of nonlinear computational mechanics and CAE technologies. JANCAE is organized in collaboration with members from academia, software vendors, simulation consulting firms, and industry.
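The kind of closed-form-versus-FEM frequency comparison such a benchmark relies on can be sketched as follows. For simplicity, the analytic reference here is a simply supported thin rectangular plate rather than the cylindrical and spherical shells used in the study, and the "FEM" frequencies are placeholders standing in for code output.

```python
# Analytic reference frequencies for a simply supported thin plate (classical
# thin-plate theory) compared against placeholder FEM results.
import math

def plate_frequency_hz(m, n, a, b, h, E, rho, nu=0.3):
    """Natural frequency of mode (m, n) of a simply supported thin rectangular plate."""
    D = E * h**3 / (12.0 * (1.0 - nu**2))           # flexural rigidity
    omega = math.pi**2 * ((m / a)**2 + (n / b)**2) * math.sqrt(D / (rho * h))
    return omega / (2.0 * math.pi)

a, b, h = 1.0, 0.8, 0.005                            # m (illustrative geometry)
E, rho = 210e9, 7850.0                               # steel-like properties (assumed)
for (m, n), f_fem in [((1, 1), 30.9), ((2, 1), 66.5), ((1, 2), 86.2)]:
    f_ref = plate_frequency_hz(m, n, a, b, h, E, rho)
    print(f"mode ({m},{n}): analytic {f_ref:7.2f} Hz, FEM {f_fem:7.2f} Hz, "
          f"rel. error {abs(f_fem - f_ref) / f_ref:6.2%}")
```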
A Code Verification Exercise for a Boundary Element Method based on Unstructured Grids V&V2012-6139 Luís Eça, João B. Pedro, J.A.C. Falcão de Campos, Instituto Superior Técnico, TU Lisbon, Lisbon, Portugal, Martin Hoekstra, MARIN, Wageningen, Netherlands

This presentation describes a Code Verification exercise for a Boundary Element Method (BEM) developed for the calculation of the potential flow around marine propellers. The original method, based on the Morino formulation (dipole and source distributions), used structured grids, but recently the code was updated to allow the use of unstructured grids.

The flow around a thin ellipsoid with a planform aspect ratio of two was selected to assess the convergence properties of the method using grids of triangles. The grid refinement studies were performed for three different grid sets: one based on elliptical coordinates that guarantees geometrical similarity, and two using Delaunay triangulation techniques that do not respect a constant refinement ratio over the complete surface of the ellipsoid. One of the goals of the exercise is to discuss the estimation of discretization errors based on power series expansions for grid sets that are not geometrically similar. In particular, some alternatives for the definition of the typical cell size are tested.
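A minimal sketch of what such an error estimate can look like when the grids are not geometrically similar: a least-squares fit of f_i ≈ f0 + alpha * h_i^p over a range of assumed orders, with a "typical cell size" defined from the panel count. The surface area, panel counts, and data of interest below are invented, and this cell-size definition is only one possibility, not necessarily one of those tested in the exercise.

```python
# Least-squares fit of f ~ f0 + alpha * h**p for grids without a constant
# refinement ratio, using h = sqrt(S / N) as a typical cell size.
import numpy as np

S = 2.0                                   # wetted surface area (illustrative)
N = np.array([2000, 4500, 10000, 22000])  # number of panels per grid
f = np.array([0.1482, 0.1461, 0.1449, 0.1443])   # data of interest per grid
h = np.sqrt(S / N)                        # typical cell size

def fit_for_order(p):
    """Linear least squares for (f0, alpha) at fixed p; returns coefficients and residual."""
    A = np.column_stack([np.ones_like(h), h**p])
    coef, _, _, _ = np.linalg.lstsq(A, f, rcond=None)
    resid = np.linalg.norm(A @ coef - f)
    return coef, resid

orders = np.linspace(0.5, 3.0, 251)
best_p = min(orders, key=lambda p: fit_for_order(p)[1])
(f0, alpha), _ = fit_for_order(best_p)
print(f"observed order p ~ {best_p:.2f}, extrapolated value f0 ~ {f0:.5f}")
```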
Modal, Harmonic, and Random Vibration Analysis Techniques for Conducting and Verifying Finite Element Analysis V&V2012-6073 Anthony DiCarlo, Paul Normandy, MITRE Corp., Bedford, MA, United States

This paper provides analytical vibration techniques that can and should be utilized in conjunction with finite element analysis to investigate the vibrational behavior of structural components. The models presented are derived from a practical perspective to analyze active electronic components or chassis mounted to or in an airborne platform. These models have been generalized to provide a cohesive progression of the complexity of the theory and modeling techniques. An outline is presented that includes a complete work-through, starting with simple oscillators and ending with pseudo-continuous modeling, for simulating and verifying modal, harmonic, and random vibration models for fixed-base excitation as well as unconstrained structures, including the large mass method.
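The simple-oscillator starting point of such a progression can be sketched in a few lines: the displacement transmissibility of a single-degree-of-freedom system under harmonic base excitation. The natural frequency and damping ratio below are assumed, illustrative values, not parameters from the paper's models.

```python
# Displacement transmissibility |X/Y| of an SDOF oscillator under harmonic base excitation.
import math

def transmissibility(freq_hz, fn_hz, zeta):
    """|X/Y| for harmonic base excitation of a single-degree-of-freedom system."""
    r = freq_hz / fn_hz
    num = 1.0 + (2.0 * zeta * r) ** 2
    den = (1.0 - r**2) ** 2 + (2.0 * zeta * r) ** 2
    return math.sqrt(num / den)

fn, zeta = 80.0, 0.05          # natural frequency [Hz] and damping ratio (assumed)
for f in (20.0, 60.0, 80.0, 120.0, 400.0):
    print(f"{f:6.1f} Hz  T = {transmissibility(f, fn, zeta):7.3f}")
```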
Solution Verification in Computational Solid Mechanics V&V2012-6123 Barna Szabo, Ricardo L. Actis, Engineering Software Research and Development, Inc., St. Louis, MO, United States, William L. Oberkampf, Consultant, Georgetown, TX, United States

Solution verification is an essential part of simulation governance, a term that refers to the procedures established for the purposes of ensuring and enhancing the reliability of predictions based on numerical simulation. Therefore, software used in simulation governance must have technical capabilities that support solution verification, making it possible for analysts to quantify the accuracy of the data of interest computed from the numerical solution.

In mechanical design and certification it is necessary to apply design rules and evaluate designs with reference to the criteria established by the design rules. Application of design rules typically involves the solution of deterministic mathematical problems and the extraction of data specified by the design rules. Designers are obligated to verify that all applicable design criteria have been satisfied for the relevant design conditions. This implies that the errors in the numerical approximation of the data of interest must be verified to be within permissible bounds. A key technical requirement for achieving solution verification is that the definition of the mathematical problem must be treated separately from the numerical approximation of the mathematical problem. This is because the choice of the mathematical problem, and the data to be determined from its solution, are governed by consideration of the physical reality being modeled and are usually mandated by the design rules. Numerical approximation, on the other hand, is concerned with finding an approximate solution for the mathematical problem, extracting the data of interest from the approximate solution, and verifying that the numerical errors in the data of interest are within small tolerances.

Most software tools used in current engineering practice were not designed to support solution verification in practical applications. To illustrate this point, and to clarify some of the key technical requirements of simulation governance, a small set of challenge problems, representative of problems encountered in structural, mechanical, and aerospace engineering practice, will be presented. Readers are encouraged to solve one or more of these problems using software tools of their choice.
Hydrocode Verification for Thermoelastoviscoplastic Problems V&V2012-6219 Romesh Batra, Virginia Polytechnic Institute and State University, Blacksburg, VA, United States
Alireza Chadegani (2), Kaushik A. Iyer (1), Pazhayannur K. Swaminathan (1), Robert C. Brown (1), Douglas S. Mehoke (1), Romesh C. Batra (2); (1) JHU/APL, 11101 Johns Hopkins Rd., Laurel, MD 20723, USA; (2) Virginia Polytechnic Institute and State University, Blacksburg, VA 24061-0249

A hydrocode is usually developed to numerically solve equations describing the conservation of mass, linear momentum, moment of momentum, and internal energy, coupled with constitutive relations and initial and boundary conditions, for a body being deformed at a high strain rate. With increasing emphasis being placed on numerical experiments for designing critical components, it is imperative to ensure that the code correctly solves the conservation laws, constitutive relations, and initial and boundary conditions. For linear elastic problems, one can verify the code by comparing the computed solution with the analytical solution of the problem. However, for nonlinear transient problems, the task is more challenging. Here we follow the approach proposed by Batra and Liang [1] and verify the hydrocode VTDYNA, developed at Virginia Tech to analyze transient thermomechanical deformations of thermo-elasto-viscoplastic materials. The procedure has come to be known as the method of manufactured solutions. It is applicable to other constitutive relations provided that the source code is available. It entails assuming a closed-form expression for the basic variables, such as the displacement and temperature fields in a thermomechanical problem, and finding the body force and the source of internal energy required to satisfy the conservation laws and the initial and boundary conditions for the body in a reference configuration. The assumed solution generally requires that the body have non-zero initial stresses, plastic strains, damage, and temperature. Thus the problem formulation and the software development should incorporate these non-homogeneous initial conditions. With the initial and boundary conditions, the body force, and the source of internal energy determined from the assumed analytical solution of the transient problem, one solves it numerically with the hydrocode. The code is verified if the error norm between the computed and the assumed solution is negligible. In [1], Batra and Liang used this technique for a dynamic piezoelectric problem. Batra and Love [2] adopted this procedure to verify a hydrocode for analyzing two-dimensional thermo-elasto-viscoplastic problems; however, they did not document the example problems studied. Here we verify the code for three-dimensional transient problems. The key feature is to start by assuming closed-form expressions for the effective plastic strain and the temperature fields, and to find the other variables by satisfying the governing equations. In order to be able to solve the problem analytically, one verifies various aspects of the hydrocode rather than all coupled aspects at once. The presentation will include details of the problems used to verify the hydrocode VTDYNA.

References: [1] Batra, R.C. and Liang, X., Computational Mechanics, 20, 427-438, 1997. [2] Batra, R.C. and Love, B.M., J. Thermal Stresses, 27, 1101-1123, 2004.
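A minimal sketch of the manufactured-solutions workflow described above, applied to a much simpler problem (the 1-D heat equation) than the thermo-elasto-viscoplastic system verified in the paper: sympy derives the source term that makes an assumed closed-form field an exact solution, and a finite-difference solver is then checked against it. The field, solver, and parameters are illustrative assumptions.

```python
# Method of manufactured solutions on a 1-D heat equation u_t = k u_xx + s.
import numpy as np
import sympy as sp

x, t, k = sp.symbols("x t k")
u_mms = sp.sin(sp.pi * x) * sp.exp(-t)                    # assumed exact field
source = sp.diff(u_mms, t) - k * sp.diff(u_mms, x, 2)     # forcing that makes it exact
u_f = sp.lambdify((x, t), u_mms, "numpy")
s_f = sp.lambdify((x, t), source.subs(k, 0.1), "numpy")

def solve_fd(n, t_end=0.1, kappa=0.1):
    """Explicit finite-difference solve of the forced heat equation on n cells."""
    xs = np.linspace(0.0, 1.0, n + 1)
    dx = xs[1] - xs[0]
    dt = 0.25 * dx**2 / kappa                             # stable explicit step
    u = u_f(xs, 0.0).copy()                               # exact initial condition
    tt = 0.0
    while tt < t_end - 1e-12:
        dt_step = min(dt, t_end - tt)
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        u = u + dt_step * (kappa * lap + s_f(xs, tt))
        u[0] = u_f(0.0, tt + dt_step)                     # exact boundary values
        u[-1] = u_f(1.0, tt + dt_step)
        tt += dt_step
    return xs, u, tt

for n in (20, 40, 80):                                    # error should drop ~4x per halving
    xs, u, tf = solve_fd(n)
    err = np.max(np.abs(u - u_f(xs, tf)))
    print(f"n = {n:3d}  max error = {err:.3e}")
```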
COFFEE BREAK Celebrity Ballroom Foyer 10:00am–10:30am

PANEL SESSION 14-1 V&V FROM A GOVERNMENT AND REGULATORY PERSPECTIVE Celebrity Ballroom 1 10:30am–12:30pm Session Chair: Scott Doebling, Los Alamos National Laboratory, Los Alamos, NM, United States Session Co-Chairs: Christopher Freitas, Southwest Research Institute, San Antonio, TX, United States, Ryan Crane, ASME, New York, NY, United States
AUTHOR INDEX Author
Session Number
Abboud, Najib Actis, Ricardo L. Adams, Jonathan Ahijevych, David Akita, Takeshi Al-Ani, Dhafar Albers, Albert Alfano, David Allemang, Randall Ames, Nicoli M Anderl, Reiner Antiescu, Mihai Argent, Josep Sanz Arjunon, Shiva Aslam, Tariq Atamturktur, Sez Augenbroe, Godfried Baglietto, Emilio Baillargeon, Brian P. Baker, Duane Bardot, Dawn Barros Filho, José A. Batra, Romesh Beccantini, Alberto Beeson, Don Bell, Alex Bendeich, Philip J. Benteboula, Sonia Bernstein, Jr., Raymond F. Bestelmeyer, Anita Bestion, Dominique Bischoff, Jeff Bodner, Jeff Bombardelli, Fabian A. Boscary, Jean Brannon, Rebecca M Bredbenner, Todd Breitzman, Timothy D. Brewster, Robert Brown, Barbara G. Brown, Robert G. Bryan, William Burger, David Butkiewicz`, Mark Caligiuri, Robert Camberos, Jose Campanelli, Mark Carrington, David Casati, Barbara Castelo, Adalberto Catovic, Zlatko Celik, Ismail Chaffins, Brandon Chang, Ming-Jui Chauliac, Christian Chaurey, Sudhir Chen, Cheng-Che Chiesa, Giancarlo Cho, Heejin Choi, Dong-Hoon Choi, Han-Lim Choi, Woo-Seok Chou, Joy Choudhary, Ruchi
3-3 13-1 6-2 12-3 3-2 2-2 12-3 8-2 3-2 12-3 12-1 2-1 12-4 11-3 4-3 2-2 9-1, 9-2 6-2 11-3 4-3 11-3 4-2 13-1 7-2 4-1 10-1 8-1 7-2 12-2 11-3 11-2 10-1 10-1, 11-3 7-2 4-2 3-4 2-3 8-1 6-2 12-3 8-1 11-1 2-1 4-1 12-3 2-3 2-4 12-4 12-3 4-1 2-1 7-1 10-1 6-3 6-2 12-1 12-3 4-3 9-1, 9-2 3-1 2-4 5-1 6-3 9-2
Author
Session Number
Choules, Brian Chow, Ricky Clark, Alex Clark, Rodney Collins, R. Lewis Crane, Ryan Crepeau, Joseph E. Daddazio, Raymond Dalbey, Keith D’Auria, Francesco DeGiorgi, Virginia Derradji-Aouat, Ahmed Dewees, Dave Dewees, Dave DiCarlo, Anthony Dodson, Christopher Doty, John Drysdale, Andrew Dumitrache, Ciprian Eason, Thomas G. Ebert, Elizabeth E. Edwards, Lyndon Eggen, Michael Elele, James Endoh, Sayaka Enright, Michael Erdman, Arthur Eswaran, Senthil K. Eça, Luís Ézsöl, György Fagan, Robert Falcão de Campos, J.A.C. Ferng, Yuh-Ming Filippova, Olga Flowers, Dan Ford, Steven Foster, Tony Freitag, Steffen Freitas, Christopher Frias, Dan Fu, Yan Geier, Martin Gel, Aytekin Gerardi, Craig Gilleland, Eric Giniyatullin, Artur Gitterman, Yefim Glockner, Stéphane Gomez, Leo Gong, Xiao-yan S. Gorelik, Michael Gounand, Stéphane Grimm, Matthew V. Gu, Xin Guariento, Alessandro C. Guba, Attila Guenther, Chris Gupta, Atul Gutkin, Leonid Habibi, Saeid Haibing, Zhou Hajimirza, Shima Hall, David Hallee, Brian
11-3 11-3 3-1 3-1 4-2 11-2 5-1 3-3 12-2 11-2 4-3 11-1 11-1 8-1 13-1 7-2 2-3 3-3 12-4 3-2 12-3 8-1 3-1 12-2 13-1 9-1 1-1 11-3 7-1, 13-1 6-2 3-2 13-1 6-3 6-1 12-4 11-3 3-3 2-2 4-1 9-1 2-2, 2-3 12-3 2-3 12-4 12-3 4-1 5-1 7-1 3-3 11-3 9-1 7-1, 7-2 5-1 6-1 6-1 6-2 2-3 11-3 8-1 2-2 7-1 2-4 12-2 6-2
Author Hamelin, Cory Han, Aijie Han, Zenghu Hapij, Adam Happ, Henry J. Hara, Keisuke Hariharan, Prasanna Harikumar, Jayashree Harris, Jeffrey H. Harris, Stephen Harvego, Edwin Hassan, Yassin Hasselman, Timothy Havstad, Mark He, Shouling Helton, Jon Hemez, Francois Heo, Yeonsook Her, Ming Xi Hills, Richard Hinz, Brandon J. Hoekstra, Martin Horner, Marc Horton, Walter Hotta, Akitoshi Howell, John R. Hsieh, Huai-En Hsieh, Shang-Rou Hua, Li Huang, Jinhua Hulbert, Gregory Hung, Zhen-Yu Hunter, Tim Iarve, Endel V. Ido, Hiroto Irlinger, Franz Ishii, Keizo Ishimura, Kosei Iwasa, Takashi Jaeger, Steffen James, Randy Janik, Tadeusz Jatale, Anchal Jean, Brian Jeon, Je-Eon Jeong, Jae J. Jeremic, Boris Jew, Michael D. Jin, Hui Kaliatka, Algirdas Kamm, James Kamojjala, Krishna Kang, Daeil Kargar, Soudabeh Kasahara, Fumio Kawara, Zensaku Kenyon, Roger K Keyser, David Khazaii, Javad Kiltie, Ian Kim, Han-Gon Kim, Ki-Young Kim, Manwoong Knupp, Patrick
Session Number 8-1 8-1 12-4 3-3 5-1 12-4 11-3 12-2 4-2 2-3 6-3, 11-2 6-3 2-2, 2-3 12-4 3-3 2-2 2-1, 2-2 9-2 6-3 2-2, 11-2 5-1 7-1, 13-1 7-1, 11-3 11-2 11-2 2-4 6-3 12-4 2-1, 7-1, 12-1 11-3 12-1 6-3 3-1 8-1 11-1 12-2 13-1 3-2 3-2 12-3 3-1 9-2 4-2 12-3 5-1 11-2 3-4 3-2 3-1, 11-3 6-3 13-1 3-4 2-4 11-3 11-2 12-4 10-1 4-1 9-2 6-2 6-2 5-1 11-2 12-2
AUTHOR INDEX Author Kobayashi, Takaya Kogiso, Nozomu Kokkolaras, Michael Kothe, Douglas Kulkarni, Sanjeev Kunugi, Tomoaki Kwon, Young Joo Lacaze, Sylvain Le, Trung Lee, Sanghoon Levine, Danny Li, Tingwen Lim, Sang-Gyu Lin, Hao-Tzu Lin, Kuan Yuan Lin, Paul Littlefield, Andrew Liu, Chao Liu, Kaifeng Liu, Songyu Liu, Xiangyi (Cheryl) Lloyd, George Léth, Tim C. Loebl, Andrew Lomperski, Steve Luck, Rogelio Luitjens, Jeffrey Lumsdaine, Arnold Luo, Hu Luzin, Vladimir Macri, Michael Maistrello, Mario Mann III, J. Adin Marghitu, Dan Martin, Charles Martins, René V. Matsui, Kazumi Mattie, Patrick McFarland, John Medale, Marc Mendoza, Shair Mesmous, Noreddine Millage, Kyle K. Mirabella, Lucia Missoum, Samy Miyachi, Takahiko Miyazaki, Yasuyuki Mollenhauer, David H. Mollineaux, Mark G. Moorcroft, David Morrison, Tina M. Moser, Robert Moyer, E. Thomas Muci-Küchler, Karim H. Muhanna, Rafi L. Muránsky, Ondrej Myong, R.S. Nagatake, Taku Nair, Arun Nakajima, Norihiro Nanaware, Ganesh Nattermann, Roland Navarro, Moysés A. Needham, Charles E.
Session Number 11-1, 13-1 3-2 12-1 1-2 11-3 12-4 3-4 2-3 10-1 5-1 10-1 2-3 6-2 6-1 6-1 3-2 8-2 12-3 10-1 6-3 2-1, 11-3 2-2, 2-3 12-2 12-1 12-4 9-2 6-2 4-2 6-2 8-1 8-2 4-3 6-1 12-1 11-2 8-1 13-1 2-2 2-3, 9-1 7-1 3-2 11-2 5-1 10-1 2-3 13-1 3-2 8-1 2-2 11-1 11-3 2-1 3-3 5-1 2-2 8-1 7-2 12-4 11-3 3-4 3-3 12-1 4-2, 6-1 5-1
Author
Session Number
Nelson, Stacy Nguyen, Diem Nicolas, Xavier Nishida, Akemi Nishikawa, Naoki Nistor, Vasile Normandy, Paul Obayomi, Jacob Oberkampf, William L. Ogi, Yoshiro Ohki, Hiroshi Ohno, Shuji Ohshima, Hiroyuki Okamoto, Koji Oliver, Todd Olson, William A. O’Toole, Brendan Ottnad, Thomas Paez, Thomas Pan, Hao Parekh, Tanay Park, Chang-Hyun Park, Jin Seok Park, Joel Park, JongSeuk Parlatan, Yuksel Peacock, Alan Pedriani, Charles Pedro, João B. Pei, Bau-Shi Pellettiere, Joseph Pepper, Darrell W. Perry, Kenneth Petitpas, Guillaume Petruzzi, Alessandro Phillips, Tyrone Plikas, Tom Pointer, Dave Popelar, Carl Powers, Joseph Quinn, David Rais-Rohani, Masoud Raoufi, Cyrus Rau, Andrew Razani, Arsalan Rebelo, Nuno Rebelo, Nuno Ren, Weiju Rezende, Hugo C. Rider, William Riha, David Riha, David Rimkevicius, Sigitas Roache, Patrick Roderick, Oleg Rohatgi, Upendra S. Romero, Vicente Romick, Christopher Rossman, Timothy Rotta, Elisabetta Roy, Christopher J. Rudland, David Rugescu, Radu D. Ruggles, Arthur
5-1 4-2 7-1 3-4 12-2 10-1 13-1 9-1 1-1, 13-1 3-2 6-2 6-2 6-2 12-2 2-1 11-3 5-1 12-2 2-3 12-1 12-1 3-1 3-3 11-1 2-4 2-1 4-2 12-2 13-1 6-3 11-1 12-4 11-3 12-4 6-1 7-1 4-3 12-4 11-3 4-3 11-3 2-2 12-3 11-3 7-2 11-3 2-1 3-1 6-1 13-1 2-3 9-1 6-3 1-2, 4-1 2-1 2-1 2-4 4-3 11-3 4-3 2-4, 7-1 2-2 12-4 6-1
Author
Session Number
Ruili, Wang 2-1, 12-1, 7-1 Ryan, Emily 9-2 Saba, Brent 3-2 Saffari, Payman 11-3 Sakamoto, Hiraku 3-2 Salehghaffari, Shahab 2-2 Sallaberry, Cedric 2-2 Samy, Ravi 10-1 Santhanakrishnan, Arvind 10-1 Santos, Andre A.C. 4-2, 6-1 Saravanan, Vaira 4-1 Sato, Masami 13-1 Scarth, Doug 8-1 Schroeder, Ben 4-2 Schultz, Richard R. 6-3, 11-2 Scotti, Christine 11-3 Seiichi, Koshizuka 11-2 Seo, Ki-Seog 5-1 Shah, Umesh 4-3 Shahnam, Mehrdad 2-3 Shaw, Dein 3-1, 12-3 Shen, Hung-Yi 6-3 Shiari, Behrouz 8-2 Shields, Michael D. 3-3 Shifu, Xiao 3-2, 3-3 Shih, Chunkuan 6-1 Shock, Rick 6-1 Shudao, Zhang 2-1, 7-1, 12-1 Simmons, Chris 2-1 Sire, Robert A. 12-3 Sitaram, Pattabhi 3-4 Slovisky, Jack 9-1 Smith, Barton 11-1 Smith, Jeffrey 12-2 Smith, Jeremy 12-2 Smith, Mike C. 8-1 Smith, Philip 4-2 Smith, Sean 4-2 Someya, Satoshi 12-2 Song, Daehun 12-2 Sonnenberg, Garrett 11-1 Sotiropoulos, Fotis 10-1 Spencer, Nathan 3-2 Spottswood, Stephen Michael 3-2 Steele, W. Glenn 9-2 Stier, Christian 12-3 Stiles, David 10-1 Studer, Etienne 7-2 Stull, Christopher 2-1 Su, Wei-Lin 3-1 Subramaniyan, Arun K. 4-1 Sun, Xin 9-2 Sundermeyer, Jeff 2-1 Suzuki, Yoshio 3-4 Swan, Mathew Scot 3-4 Swift, Richard 11-3 Szabados, László 6-2 Szabo, Barna 13-1 Tagade, Piyush 2-4 Tajima, Yuji 6-2 Takizawa, Hideo 11-1 Tanaka, Hiroaki 3-2 Tarasevich, Stanislav 4-1 Teferra, Kirubel 3-3
AUTHOR INDEX Author Terada, Kenjiro Terejanu, Gabriel Tezel, Haldun Thornock, Jeremy Thota, Jagadeep Tipton, Jr., Joseph B. Trabia, Mohamed Tryon, Bob Tsai, Hung-Yin Tselepidakis, Dimitri Tsukino, Makoto Umezu, Yasuyoshi Uspuras, Eugenijus Uzawa, Ken Van Buren, Kendra Vanoni, Mike Vaz, Guilherme Vigil, Dena Waddington, Geoff Walker, Kelly Wan, Qiang Wang, Jong-Rong
Session Number 11-1, 13-1 2-1 11-2 4-2 5-1 4-2 5-1 11-3 12-3 4-2 13-1 13-1 6-3 3-4 2-2 7-2 7-1 12-2 6-3 12-1 3-2 6-1
Author
Session Number
Wang, Liping Weber, Paul Webster, Alfred Wessel, Rick Westwater, Gregory D. Wiggs, Gene Willer, Neal Williams, Jeff Wolff, Mitch Woods, Brian G. Wu, Qiao Wu, Thomas Xi, Zhimin Xie, Guozhen Xin’en, Liu Xu, Qiang Xueqian, Chen Yakovlev, Anatoly Yamada, Takahiro
4-1 5-1 7-2 9-2 6-1 4-1 6-1 3-4 2-3 6-3 6-2 2-3 2-3 4-3 3-3 12-2 3-3 4-1 13-1
Author Yang, Ren-Jye Yerkes, Kirk Yoganathan, Ajit Yoshida, Junji You, Sung-Chang Zamani, Kaveh Zhan, Zhenfei Zhang, Jianping Zhang, Lirong Zhang, Yulun Zhao, Tina Zhao, Yong Zheng, Yunhan Zhibo, Ma Zhou, Eric G. Zhou, Yong Zlobin, Andrey Zumberge, Jon
Session Number 2-2, 2-3 2-3 10-1 11-1 6-2 7-2 2-2 4-3 4-3 12-3 11-3 11-3 7-2 2-1, 7-1, 12-1 8-1 6-1 4-1 2-3
SESSION ORGANIZERS Verification and Validation for Fluid Dynamics and Heat Transfer 4-1 Verification and Validation for Fluid Dynamics and Heat Transfer: Part 1 Session Chair: W. Glenn Steele Session Co-Chair: Hugh Coleman
General 1-1 Plenary Session: Part 1 Session Chair: Christopher Freitas Session Co-Chairs: Scott Doebling, Ryan Crane 1-2 Plenary Session: Part 2 Session Chair: Scott Doebling Session Co-Chairs: Christopher Freitas, Ryan Crane
4-2 Verification and Validation for Fluid Dynamics and Heat Transfer: Part 2 Session Chair: Arthur Ruggles Session Co-Chair: Prasanna Hariharan
Uncertainty Quantification, Sensitivity Analysis, and Prediction 2-1 Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 1 Session Chair: Sanjeev Kulkarni Session Co-Chair: Ben Thacker
4-3 Verification and Validation for Fluid Dynamics and Heat Transfer: Part 3 Session Chair: Prasanna Hariharan Session Co-Chair: Joel Peltier
2-2 Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 2 Session Chair: James O’Daniel Session Co-Chair: Edwin Harvego
Validation Methods for Impact and Blast 5-1 Validation Methods for Impact and Blast Session Chair: Vicente Romero Session Co-Chair: Dawn Bardot
2-3 Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 3 Session Chair: William Bryan Session Co-Chair: Francesco D’Auria
Verification and Validation for Simulation of Nuclear Applications 6-1 Verification and Validation for Simulation of Nuclear Applications: Part 1 Session Chair: Richard R. Schultz Session Co-Chair: Yassin Hassan
2-4 Uncertainty Quantification, Sensitivity Analysis, and Prediction: Part 4 Session Chair: Hugh Coleman Session Co-Chair: Robert Ferencz
6-2 Verification and Validation for Simulation of Nuclear Applications: Part 2 Session Chair: Hyung Lee Session Co-Chair: Richard R. Schultz
Validation Methods for Solid Mechanics and Structures 3-1 Validation Methods for Solid Mechanics and Structures: Part 1 Session Chair: Don Simons Session Co-Chair: Ben Thacker
6-3 Verification and Validation for Simulation of Nuclear Applications: Part 3 Session Chair: Joel Peltier Session Co-Chair: Hyung Lee
3-2 Validation Methods for Solid Mechanics and Structures: Part 2 Session Chair: Atul Gupta Session Co-Chair: Boris Jeremic
Verification for Fluid Dynamics and Heat Transfer 7-1 Verification for Fluid Dynamics and Heat Transfer: Part 1 Session Chair: Yassin Hassan Session Co-Chair: Dimitri Tselepidakis
3-3 Validation Methods for Solid Mechanics and Structures: Part 3 Session Chair: Richard Swift Session Co-Chair: Krishna Kamojjala
7-2 Verification for Fluid Dynamics and Heat Transfer: Part 2 Session Chair: Urmila Ghia Session Co-Chair: Christine Scotti
3-4 Validation Methods for Solid Mechanics and Structures: Part 4 Session Chair: Robert Ferencz Session Co-Chair: Richard Swift
Validation Methods for Materials Engineering 8-1 Validation Methods for Materials Engineering: Part 1 Session Chair: Dawn Bardot Session Co-Chair: Nuno Rebelo 8-2 Validation Methods for Materials Engineering: Part 2 Session Chair: Boris Jeremic Session Co-Chair: Krishna Kamojjala
SESSION ORGANIZERS Validation Methods 12-1 Validation Methods: Part 1 Session Chair: Edwin Harvego Session Co-Chair: David Hall
Verification and Validation for Energy, Power, Building, and Environmental Systems 9-1 Verification and Validation for Energy, Power, Building, and Environmental Systems Session Chair: Godfried Augenbroe Session Co-Chair: Heejin Cho
12-2 Validation Methods: Part 2 Session Chair: Christine Scotti Session Co-Chair: Atul Gupta
9-2 Panel Session: Uncertainty Analysis of Building Performance Assessments Session Chair: Godfried Augenbroe Session Co-Chair: Heejin Cho
12-3 Validation Methods: Part 3 Session Chair: David Hall Session Co-Chair: Koji Okamoto
Validation Methods for Bioengineering 10-1 Validation Methods for Bioengineering Session Chair: Marc Horner Session Co-Chair: Jagadeep Thota
12-4 Validation Methods: Part 4 Session Chair: Kevin Dowding Session Co-Chair: Pattabhi Sitaram Verification Methods 13-1 Verification Methods Session Chair: Christopher J. Roy Session Co-Chair: David Moorcroft
Standards Development Activities for Verification and Validation 11-1 Standards Development Activities for Verification and Validation: Part 1 Session Chair: William L. Oberkampf Session Co-Chair: Francesco D’Auria
Panel Sessions 14-1 V&V from a Government and Regulatory Perspective Session Chair: Scott Doebling Session Co-Chairs: Christopher Freitas, Ryan Crane
11-2 Standards Development Activities for Verification and Validation: Part 2 Session Chair: David Moorcroft Session Co-Chair: Kevin Dowding 11-3 Panel Session: ASME Committee on Verification and Validation in Computational Modeling of Medical Devices Session Chair: Carl Popelar Session Co-Chairs: Andrew Rau, Ryan Crane
EXHIBITORS AND SPONSORS WE GRATEFULLY ACKNOWLEDGE THE FOLLOWING COMPANIES FOR THEIR SUPPORT:
Caterpillar, Inc. Track 2 Sponsor (Uncertainty Quantification, Sensitivity Analysis and Prediction)
Safe Technology & Wolf Star Technologies fe-safe/True-Load™ Exhibitor
For more than 85 years, Caterpillar Inc. has been making sustainable progress possible and driving positive change on every continent. With 2011 sales and revenues of $60.138 billion, Caterpillar is the world’s leading manufacturer of construction and mining equipment, diesel and natural gas engines, industrial gas turbines and diesel-electric locomotives. The company also is a leading services provider through Caterpillar Financial Services, Caterpillar Remanufacturing Services, Caterpillar Logistics Services and Progress Rail Services. For additional company information, go to our website: www.caterpillar.com
Safe Technology & Wolf Star Technologies is a software distributor of FEA fatigue software and strain-based load software. fe-safe/True-Load™ is a unique software solution for accurate in-situ load measurement. Good design for strength and durability requires accurate loads, and fe-safe/True-Load™ offers the solution:
• Turns complex components into multi-channel load cells
• Calculates loading histories from measured strain histories
• Optimises the locations of strain gauges
• Calculates loading histories with remarkable accuracy
• Load histories can be used for testing and for fatigue analysis from FEA
• Integrates with Abaqus CAE
• True-Load/Pre optimises strain gauge locations by working with the FEA model
• True-Load/Post calculates loading histories from measured strain histories
American Society of Engineering Education The ASEE SMART Program Team Exhibitor
For more information go to: www.wolfstartech.com and www.safetechnology.com
The Science, Mathematics And Research for Transformation (SMART) Scholarship for Service Program is an opportunity for students pursuing an undergraduate or graduate degree in Science, Technology, Engineering, and Mathematics (STEM) disciplines to receive a full scholarship and be employed upon degree completion at a DoD research facility. Scholarships awarded include a cash award of $25,000 to $41,000 a year, full tuition, and other benefits. Website: http://smart.asee.org
ANSYS, Inc. Bronze Sponsor and Exhibitor 275 Technology Dr Canonsburg, PA 15317-9565 www.ansys.com ANSYS provides the engineering and design process insight to help you be first to market with products that realize their promise and revolutionize your business. We develop, market and support engineering simulation software used to predict how products will behave and how manufacturing processes will operate in real-world environments. We offer the most comprehensive suite of simulation solvers in the world so that we can confidently predict your product’s success.
EXHIBITORS AND SPONSORS
NAFEMS Exhibitor
ASME Standards Technology, LLC Conference Bag Sponsor
Engineers rely on computer modeling and simulation methods and tools as vital components of the product development process. As these methods develop at an ever-increasing pace, the need for an independent, international authority on the use of this technology has never been more apparent. NAFEMS is the only worldwide independent association dedicated to this technology.
ASME Standards Technology, LLC (ASME ST-LLC) is a not-for-profit subsidiary company of ASME. ASME ST-LLC performs work related to newly commercialized technology by identifying and conducting R&D-related projects. ASME ST-LLC helps you bridge technology gaps by identifying and conducting your R&D projects. Our scientists and engineers possess the hands-on experience required to develop and perform the most challenging R&D projects.
NAFEMS is the one association dedicated to the engineering analysis community. In fact, NAFEMS is the only independent association dedicated to FEA and CFD worldwide. With over 1000 member companies ranging from major manufacturers across all industries, to software developers, consultancies and academic institutions, all of our members share a common interest in design and simulation.
You can be confident in the results; every ASME Standards Technology, LLC project goes through a rigorous qualification, validation, and peer review process. Our Successful Approach:
• Standards development supports new regulations
• Project work that focuses on anticipating standards needs and bridging gaps between technology development and standards development
• ASME Standards & Certification involvement in R&D projects helps ensure results will be relevant to standards committees
• Directed R&D focuses limited resources on priority areas
• Collaborative R&D projects minimize individual investment while maximizing benefits
• International partnerships between government, industry, and academia help build consensus leading to technically relevant standards
We are the only association which provides vital ‘best practice’ information specifically for those involved in FEA, CFD and CAE, ensuring safe and efficient analysis methods. By becoming a member of NAFEMS, you and your company can access our industry-recognised training programs, independent events and acclaimed publications - all of which are designed with the needs of the analysis community in mind. For more company information go to www.nafems.org.
Exclusive Peer Review Process Once the research is completed, ASME ST-LLC generates a draft report that is reviewed by a team of industry-specific experts who conduct a thorough evaluation of the results. Their recommendations are typically addressed either by additional research or by incorporation into the final publication. For more information, go to our website: http://stllc.asme.org
ABOUT ASME ASME
ASME V&V STANDARDS DEVELOPMENT COMMITTEES
ASME helps the global engineering community develop solutions to real world challenges facing all people and our planet. We actively enable inspired collaboration, knowledge sharing and skill development across all engineering disciplines, all around the world, while promoting the vital role of the engineer in society.
As part of this effort, the following ASME committees coordinate, promote, and foster the development of standards that provide procedures for assessing and quantifying the accuracy and credibility of computational models and simulations.
ASME products and services include our renowned codes and standards, certification and accreditation programs, professional publications, technical conferences, risk-management tools, government/regulatory advisory, continuing education and professional development programs. These efforts, guided by ASME leadership, and powered by our volunteer networks and staff, help make the world a safer and better place, today, and for future generations.
ASME V&V Standards Committee - Verification and Validation in Computational Modeling and Simulation
Subcommittees:
ASME V&V 10 - Verification and Validation in Computational Solid Mechanics
ASME V&V 20 - Verification and Validation in Computational Fluid Dynamics and Heat Transfer
ASME V&V 30 - Verification and Validation in Computational Simulation of Nuclear System Thermal Fluids Behavior
Visit www.asme.org.
ASME STANDARDS & CERTIFICATION ASME is the leading international developer of codes and standards associated with the art, science, and practice of mechanical engineering. Starting with the first issuance of its legendary Boiler & Pressure Vessel Code in 1914, ASME’s codes and standards have grown to nearly 600 offerings currently in print. These offerings cover a breadth of topics, including pressure technology, nuclear plants, elevators / escalators, construction, engineering design, standardization, performance testing, and computer simulation and modeling.
ASME V&V 40 - Verification and Validation in Computational Modeling of Medical Devices
2011-2012 ASME OFFICERS Victoria Rockwell, President Marc W. Goldsmith, President-Elect Robert T. Simmons, Past-President Thomas G. Loughlin, Executive Director
Developing and revising ASME codes and standards occurs year-round. More than 4,000 dedicated volunteers— engineers, scientists, government officials, and others, but not necessarily members of the Society—contribute their technical expertise to protect public safety, while reflecting best practices of industry. The results of their efforts are being used in over 100 nations; thus setting the standard for code-development worldwide.
ASME STAFF Ryan Crane, P.E., Project Engineering Manager, Standards & Certification Mary D. Jakubowski, CMP, Meetings Manager, Events Management Stacey Cooper, Coordinator, Web Tool/Electronic Proceedings, Publishing - Conference Pubs James Campbell, Marketing Manager, Marketing and Sales
Visit www.asme.org/kb/standards.
HOTEL FLOOR PLAN