Applying Automotive Robustness Validation to Reduce the Number of Unplanned Reliability Testing Cycles

Andre Kleyner, PhD, Delphi Electronics & Safety
Alexander Nebeling, Delphi Electronics & Safety

Key words: Robustness Validation, SAE-J1211, HALT, RFMPT, accelerated testing, intelligent testing

SUMMARY & CONCLUSIONS

This paper presents an approach to product development and reliability testing utilizing Robustness Validation (RV), an automotive electronics development recommended practice specified in SAE-J1211 (surface vehicles) and SAE-J1879 (semiconductor devices in automotive applications). Intelligent testing and Robustness Validation can become an integral part of the design for reliability (DfR) process and, if implemented correctly, can optimize the design process flow, provide significant cost savings, and reduce the number of design validation iterations and hence the overall number of required test cycles. Furthermore, intelligent and customized testing, including RV, can reduce test costs, contribute to error prevention, and demonstrate achieved reliability.

An appropriately applied RV methodology offers a sustainable competitive advantage; however, it requires a general change in the approach to product validation testing and a joint effort by the design and validation teams involved in product development. Robustness Validation adds common sense, efficiency, better test-to-field correlation and optimization to the process of automotive design and validation. A case study of an automotive power electronics unit is presented, showing how Robustness Validation can help minimize the validation cycle and reduce the waste of multiple validation iterations. The paper also discusses the challenges and pitfalls of applying this process and ways to address them. This practice can be applied in most industries, well beyond automotive engineering.

1 INTRODUCTION

The electronics industry in general, and the automotive sector in particular, makes continuous efforts to develop new products quickly and bring highly reliable products to market in ever-decreasing development times. However, there are two product development trends which continuously challenge these objectives. The first trend is the continuously increasing complexity of automotive electronics products, such as navigation, wireless networking, climate controls, safety systems, hybrid and electric vehicle technologies and others, which require longer times to develop and more time to test all the functionality and to assure that the product will perform without failure under various (sometimes extreme) environmental conditions.


Moreover, some electronic systems designed for hybrid and electric vehicles have longer operating hours (e.g. battery chargers), which in turn require substantially longer test times in order to replicate the effect of a product's mission life.

The second trend is that automotive OEMs increasingly rely on international test standards, such as IEC 60068-2 [1], ISO 16750 [2] and others, to define their product validation processes and practices. Although such standards present well-developed guidance based on established engineering practice, they also promote a 'cookbook' approach to product validation testing, often unintentionally taking the 'engineering' out of 'reliability engineering'. Consequently, some individual automotive OEM specifications are digressing from guidelines based on science, physics of failure and acceleration models toward compilations of tabulated specifications, inflexible and overly conservative environmental requirements, and testing edicts which are not always well correlated with the individual product field environments. Although this approach makes it easier for product engineers without reliability training and experience to put together a test plan, it removes flexibility and good engineering sense from the process and often generates unnecessary testing, waste, and inefficiency.

Furthermore, to make matters worse, the practice of job rotation is becoming very popular among automotive manufacturers, where an engineer stays on a job for a few years and then moves on to a different engineering job. Although this practice certainly has its benefits, the downside is that it does not help develop deep engineering skills. Subsequently, the lack of reliability expertise often encourages the path of least resistance - extensive use of engineering standards and other non-product-specific documents.

1.1 Acronyms and Abbreviations

AC - Alternating Current
AF - Acceleration Factor
DC - Direct Current
DUT - Device Under Test
DV - Design Validation
ECU - Electronic Control Unit
EEM - Electrical/Electronic Module
EMC - Electromagnetic Compatibility
EV - Electric Vehicle
FET - Field Effect Transistor
FMEA - Failure Mode and Effect Analysis
FRACAS - Failure Reporting and Corrective Action System
HALT - Highly Accelerated Life Test
HASS - Highly Accelerated Stress Screening
HEV - Hybrid Electric Vehicle
IC - Integrated Circuit
IGBT - Insulated-Gate Bipolar Transistor
OEM - Original Equipment Manufacturer (in this context, vehicle manufacturer)
PCB - Printed Circuit Board
PV - Product Validation
RFMPT™ - Rapid Failure Mode Precipitation Testing
RIF - Robustness Indication Figure
SAE - Society of Automotive Engineers

1.2 Robustness Validation and Design for Reliability

SAE-J1211, Surface Vehicle Recommended Practice [3], was written with the intent of offering a logical, common-sense approach to automotive testing, in a way which adds more intelligence to the product development, verification and validation processes. A similar document [4] was developed for applying RV principles to automotive semiconductor devices. RV can also be considered an important contributor to the Design for Reliability (DfR) process, which is increasingly becoming an accepted design practice in many industries (see for example [5] and [6]). Although RV has a stronger focus on efficient product testing, DfR holistically encompasses the whole design process. Both RV and DfR can be integrated into one process with the ultimate objective of achieving the highest reliability and making product development more efficient. Reliability cannot be achieved merely by extensive testing at the end of the product design cycle; it needs to be incorporated into the design process from the very beginning [7]. One of the RV and DfR targets is reducing the number of design-test iteration cycles, consequently saving overall development time and cost. For example, one round of full validation for an automotive electronics product can take between three and six months; needless to say, reducing the number of these validation cycles can be very beneficial.

1.3 Product Validation Phases

In the automotive and some other industries the product validation process is divided into Design Validation (DV) and Product (or Process) Validation (PV). DV is typically performed on fully functional prototype parts and PV is usually performed on production-intent parts. There is also an expectation of pre-DV testing, often referred to as evaluation or verification. These types of testing are usually less formally specified than DV or PV and are often conducted at the specific request of a design team.

A common product development business model usually incorporates an unspecified amount of pre-DV testing, but budgets for only one DV and one PV in the development cycle. However, this model rarely holds true due to multiple product changes, often caused by design modifications, requirements updates and corrective actions triggered by product test failures. Unplanned iterations of both DV and PV therefore present serious waste and a huge resource drain in terms of additional testing, equipment time, engineering resources, the cost of building additional prototypes, etc. Additional (i.e. unplanned) DV or PV runs may take almost the same amount of time (three to six months) as the first validation run. Robustness Validation presents an opportunity to optimize the product development process and reduce the number of these unplanned DVs and PVs.

2 BASIC INFORMATION ABOUT SAE-J1211

According to SAE-J1211, Robustness Validation is a process to demonstrate that a product performs its intended function(s) with sufficient robustness margin under a defined mission profile for its specified lifetime. It should be used to communicate, analyze, design, simulate, produce and test an EEM in such a manner that the influence of noise (or an unforeseeable event) on the EEM is minimized. The RV approach emphasizes knowledge-based engineering analysis and testing a product to failure, or to a predefined degradation level, without introducing invalid failure mechanisms.

SAE-J1211 introduces and defines a Knowledge Matrix as a repository for systematic failures, i.e., failures that are systemic or inherent in the product by design or technology. The Knowledge Matrix is a collection of lessons learned by the organization using the Robustness Validation process. It is structured to enable easy navigation of the possible failure modes and causes by taking a module, in combination with the intended customer use, and breaking it down to the components and technology used to assemble the module.

Furthermore, the document explains the concept of intelligent testing. As mentioned before, the aim of intelligent testing beyond basic validation is to identify the robustness margin early in the development phase. Intelligent testing focuses on the unknown, as opposed to just following general requirements based on standards. Armed with data from FMEA, simulation, HALT and verification testing, the intelligent testing approach creates a tailored validation plan focused on the product weaknesses, minimizing the amount of no-value-added testing. In summary, according to SAE-J1211, the implementation of state-of-the-art capability and durability testing, combined with failure mode and technology specific testing at the right time, is the key to intelligent testing in the RV process. Needless to say, this approach offers additional flexibility in developing appropriate validation test plans at any stage of product development.

The results of intelligent testing activities can be used to calculate the Robustness Indication Figure (RIF). The RIF is defined as the ratio between the product's estimated strength and the strength required by the product specification. A RIF can be calculated for every category and/or every (reliability) influence factor, such as vibration, thermal cycling, humidity, processes or intelligent testing. It is not useful to generate a RIF for "soft factors" like "communication".
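As a minimal sketch of this definition, the code below computes a RIF per influence factor as the ratio of demonstrated to required strength. The factor names and numbers are illustrative assumptions, not values from SAE-J1211.

```python
# Illustrative RIF calculation: RIF = demonstrated strength / required strength,
# computed per influence factor. All values below are hypothetical examples.

factors = {  # factor name: (demonstrated, required) -- assumed numbers
    "thermal_cycles": (780.0, 624.0),    # cycles survived vs. one mission life
    "vibration_grms": (2.4, 1.8),        # Grms level sustained vs. specified
    "humidity_hours": (1200.0, 1000.0),  # endurance hours vs. specified
}

rif = {name: demonstrated / required
       for name, (demonstrated, required) in factors.items()}

for name, value in rif.items():
    print(f"{name:15s} RIF = {value:.2f}")

# The smallest RIF flags the influence factor with the least robustness margin.
weakest = min(rif, key=rif.get)
print(f"weakest factor: {weakest} (RIF = {rif[weakest]:.2f})")
```

Tracking the minimum RIF across factors gives a simple single-number view of where the design margin is thinnest.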

It is important to note that SAE-J1211 is slowly gaining popularity around the globe, especially in the European Union. ZVEI, the German national association of electronics manufacturers, has fully accepted RV as one of its guiding product development principles (see [8] and [9]) and actively promotes it in its handbook for robustness validation [10].

3 ROBUSTNESS VALIDATION IN PRACTICE

There are different ways the RV testing and the overall SAE-J1211 methodology can be applied in practice. Figure 1 details a potential product test and development flow. The first steps require that all available information about the product, its predecessors, or other products with similar technology be utilized prior to testing. Examples include FMEA, FRACAS data with relevant field failure information, lessons learned, known manufacturing issues, etc.

Figure 1 - The intelligent testing and RV flow diagram

3.1 Knowledge Matrix

As mentioned before, the knowledge matrix contains information about failures that are systemic or inherent in the product by design or technology. It can also contain information ranging from the component level (e.g. resistor, capacitor, IC, etc.) to the vehicle level, which can be utilized to better understand how the unit will perform in a specific environment and mission profile. The objective is to include most, if not all, lessons learned in the knowledge matrix and eventually in the design and validation process. All that available information can be used for the DfR process, similar to the way it is done in [7], and also to plan the HALT and verification testing.

The process shown in Figure 1 emphasizes the importance of verification activities prior to DV and PV. HALT is an important part of RV testing, and a great deal of material has been written on HALT (see for example [11] and [12]); however, there is still much misunderstanding about the role of HALT in the design and development process. For example, it is not uncommon to see an automotive OEM requirement for HALT as part of a DV test flow. In those cases it is not clear what can be achieved, since it is typically too late to make any significant changes to the product at the DV stage.

In fact, including HALT in a DV test flow almost guarantees another round of validation, since the goal of HALT is to fail the product and consequently introduce design modifications. At this stage a test called RFMPT™ [13] can be considered as an alternative to HALT. RFMPT is a new and different type of testing, offering in some situations a reasonable compromise between HALT and field-environment-correlated durability testing. Although RFMPT might take longer than HALT, the information it produces typically correlates better with the field and hence may provide more useful information about the product's design margins.

In order to generate a comprehensive knowledge matrix, the operating limits, destruct limits and foolish-failure limits need to be identified and documented for every failure mode. Finding the failure trigger (either operational or environmental) is also critical. The test modes, equipment and methodology for monitoring and analysis need to be selected for each potential failure mode and recorded in the knowledge matrix.
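A lightweight way to capture these fields is a structured record per failure mode. The sketch below is a hypothetical schema with illustrative field names and values; SAE-J1211 does not prescribe any particular data format.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class KnowledgeMatrixEntry:
    """One knowledge matrix row: a failure mode with its limits and triggers.
    Field names and the example values are illustrative assumptions."""
    component: str                  # e.g. "Electrolytic capacitor"
    failure_mode: str               # e.g. "Parameter drift"
    failure_mechanism: str          # e.g. "Degradation of electrolyte"
    failure_cause: str              # e.g. "Extended exposure to high temperature"
    trigger: str                    # operational or environmental trigger
    operating_limit: Optional[float] = None  # e.g. max operating temperature, C
    destruct_limit: Optional[float] = None   # stress causing permanent damage
    foolish_failure_limit: Optional[float] = None  # beyond this, failures are not field-relevant
    test: str = ""                  # test selected to precipitate this mode
    monitoring: str = ""            # equipment/method for detection

entry = KnowledgeMatrixEntry(
    component="Electrolytic capacitor",
    failure_mode="Parameter drift",
    failure_mechanism="Degradation of electrolyte",
    failure_cause="Extended exposure to high temperature",
    trigger="environmental (temperature)",
    operating_limit=105.0, destruct_limit=150.0, foolish_failure_limit=175.0,
    test="High temperature endurance",
    monitoring="In-situ ESR and capacitance measurement",
)
print(entry.component, "->", entry.failure_mode, "| test:", entry.test)
```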

The results of this stage, along with other inputs such as a comprehensive mission profile, field data on similar products, physics of failure, modeling results, etc. (see Figure 1), form the basis for planning intelligent testing and robustness validation. As mentioned before, intelligent testing focuses on the unknown, which at this point should be defined from the results of the previous testing and the inputs shown on the left side of Figure 1.

3.2 Mission Profile

Back in the 1980s and 1990s most automotive OEMs collected large amounts of field data, including ambient temperatures, vibration profiles, vehicle operating modes, usage frequency, user severity and other appropriate vehicle/system parameters. However, collecting these kinds of data is an expensive proposition; as a result, many automotive OEMs stopped those activities and began requesting suppliers to do this work, or relying on industry standards and other engineering documents. Yet with a large number of new products and automotive technologies, the need for such data still exists. A comprehensive mission profile that can be utilized for intelligent testing often needs to include information beyond what is currently documented in internationally accepted specifications. For example, air flow, convection, heat radiation or any other information beyond the upper and lower temperature limits can be very helpful. If an electronic unit is liquid cooled, a diagram showing the unit's temperature as a function of coolant flow can be very useful. If no field data is available, a mission profile can be generated from computer simulation combined with generally available environmental data. For example, climate data for vehicles, similar to that presented in [14], can be combined with the internal heating of an electronic unit obtained from finite element analysis.

Mission profile data can be collected at the component (IC) level and later combined with system- and subsystem-level data (Figure 2). For example, temperature testing at the engine controller level would include analysis at the chip level and data collection by thermocouples on critical drivers, controllers or other self-heating components. Thermographic analysis of the unit's circuit board assembly is a common tool to identify critical areas and components. Often the information in a test specification is not sufficient. For example, a specification may call for a maximum temperature of 95°C for a passenger-compartment-mounted module; depending on the specific location of the unit, that temperature may actually be 85°C or 90°C, or even higher if the unit experiences a large amount of internal heating.
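As a rough sketch of that idea, the code below combines an assumed ambient temperature distribution (standing in for climate data such as [14]) with an assumed internal self-heating rise (standing in for FEA or thermocouple results) to produce an hours-at-temperature profile; all numbers are placeholders, not program data.

```python
import numpy as np

# Sketch: derive a component-level temperature mission profile by combining
# ambient (vehicle location) temperatures with internal self-heating.
rng = np.random.default_rng(0)
ambient_c = rng.normal(loc=30.0, scale=15.0, size=10_000)  # placeholder climate data
internal_rise_c = 12.0  # assumed self-heating rise at the critical component

component_c = ambient_c + internal_rise_c

# Hours-at-temperature histogram: a common input for thermal test tailoring
bins = [-40, 0, 25, 50, 75, 95, 125]
hours_per_sample = 8_000 / len(component_c)  # assume 8000 h total operating life
counts, _ = np.histogram(component_c, bins=bins)
for lo, hi, n in zip(bins[:-1], bins[1:], counts):
    print(f"{lo:>4} to {hi:>4} C : {n * hours_per_sample:7.0f} h")
```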

Figure 2 - Mission profile development

Another application of field data is determining user severity profiles. It is impossible to make a cost-effective design by counting on the absolute worst-case stress scenario, e.g. the lowest temperature on record or the worst vibration profile ever measured. Instead we need to look at percentile data (commonly the 95th or 99th percentile), and that is where field data can also be very helpful. Unfortunately, as mentioned before, OEMs often rely too heavily on existing standards and are reluctant to spend time and resources on collecting new field data. In those cases suppliers are forced either to collect their own data or to go along with the OEM's requirements and recommendations. Needless to say, most automotive suppliers have fewer resources than OEMs and are typically less involved in vehicle-level testing, which makes the use of industry standards a common practice.
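A minimal sketch of this percentile approach, using synthetic field data in place of real vehicle measurements:

```python
import numpy as np

# Pick a design/test stress from the user-severity distribution instead of the
# worst case on record. The "field data" here is synthetic for illustration.
rng = np.random.default_rng(1)
field_grms = rng.lognormal(mean=0.2, sigma=0.35, size=5_000)  # per-vehicle severity

worst_case = field_grms.max()
p95, p99 = np.percentile(field_grms, [95, 99])

print(f"worst case on record: {worst_case:.2f} Grms")
print(f"95th percentile user: {p95:.2f} Grms")
print(f"99th percentile user: {p99:.2f} Grms")
# Designing to the 99th percentile instead of the absolute maximum avoids
# penalizing the whole population for a single extreme outlier.
```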

3.3 Intelligent Testing

When the first hardware is produced and the knowledge matrix is compiled, early verification testing can begin, focusing on the specific concerns of the design team. These can include mechanical tests such as shock or vibration, thermal tests like temperature cycling or high-temperature endurance, humidity tests, or any combination thereof. Test results analysis and design improvements can be coordinated along the particular engineering disciplines: mechanical, electrical, functional, systems, etc. For example, mechanical engineers might be able to improve the airflow in the housing, electronic designers can add additional heat sinks, electrical engineers can work on component selection and performance, etc. The main objective is to address all (or nearly all) of the problems prior to DV. Problems found during the DV and especially the PV stages are very disruptive and costly to the entire development process; therefore the goal is to finalize the design before then. This is not always easy to do, because project management is often under pressure to start DV according to the initial schedule, and some remaining design issues make it into DV. Indisputably, this sets the product up for failure and almost guarantees another (unplanned) iteration of DV.

Whenever possible, intelligent testing should include test to failure to assess product robustness; however, this is not always practical, especially at the DV and PV stages. Success-based testing, where all the tested parts are expected to pass the test equivalent of one mission life, is a common practice in the automotive industry. It is typically a shorter test than a test to failure, but unfortunately it does not generate information about the product's design margins. For example, in order to precipitate a sufficient number of failures for statistical analysis, a temperature cycling or vibration test can take two or three times longer than a success-based test. Therefore, extending a success-based test in order to achieve a desired RIF can be considered a reasonable compromise.

Another example of an intelligent approach to testing is the derivation of a random vibration acceleration model. Most OEM specifications use the S-N curve to calculate the acceleration factor (see for example [7] or [15]), where the acceleration model follows the inverse power law:

AF = (G_RMS,test / G_RMS,field)^b

where G_RMS is the RMS level of the random vibration and b is the fatigue exponent. Earlier studies of high-cycle fatigue by D. Steinberg repeatedly showed that for electronic products b is in the range of 6.4-6.8 [15]. However, most OEM specifications have lowered this number to 5.0 or even 4.0, reducing the acceleration factor, artificially extending the test time and consequently forcing suppliers to overdesign their products. Multiple tests and experiments at Delphi showed that the fatigue exponents for various failure modes are still above 6.0, confirming that most OEMs force their suppliers to overstress their products during vibration testing. Applying the proper acceleration model helps optimize the testing, reduce test time and avoid 'foolish failures'.
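The sketch below evaluates this acceleration factor for a few values of the fatigue exponent b, showing how lowering b from Steinberg's range inflates the required test time; the Grms levels and field exposure are illustrative assumptions.

```python
# Random-vibration acceleration factor per the inverse power law
# AF = (Grms_test / Grms_field) ** b, and its effect on test duration.

def vibration_af(grms_test: float, grms_field: float, b: float) -> float:
    """Acceleration factor for random vibration (inverse power law)."""
    return (grms_test / grms_field) ** b

grms_field = 1.5      # assumed field vibration level, Grms
grms_test = 3.0       # assumed test level, Grms
field_life_h = 500.0  # assumed vibration exposure over one mission life

for b in (4.0, 5.0, 6.5):  # OEM-lowered exponents vs. Steinberg's ~6.4-6.8
    af = vibration_af(grms_test, grms_field, b)
    print(f"b = {b:3.1f}: AF = {af:6.1f}, test time = {field_life_h / af:6.1f} h")
```

With a 2:1 test-to-field Grms ratio, dropping b from 6.5 to 4.0 cuts the acceleration factor from roughly 90 to 16, nearly a sixfold increase in test time for the same demonstrated life.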

4 CASE STUDY

A case study of an automotive power electronics inverter for an HEV is presented. The main function of this product is to convert DC power into the multi-phase AC power required to drive the three-phase machines used in HEV propulsion. Cooperation between the customer and the supplier began during the quoting process and continued through the development phase. Due to the novelty of this product, extensive data collection by engineering experts on both sides was required in order to develop a comprehensive mission profile. Usable field data and lessons learned from potential building blocks were identified. Documents like the FMEA, verification plans and the validation plan were initiated and maintained as living documents. The knowledge matrix was developed for the subsystems, including:

- Housing
  - Fixtures, attachments, mounting (vibration: broken clips, broken holders, etc.)
  - Thermal (weak plastics at high temperature)
- Connectors
  - Connection AC (high current with high temperature, multiple connections: fretting corrosion, increased voltage drop)
  - Connection HV
- AC inputs and high-voltage outputs
- DC/DC stage
- Components (resistors, capacitors, semiconductors, transformers, etc.)

A detailed matrix was prepared defining potential failure modes, mechanisms and causes. A large Excel spreadsheet with hundreds of rows was compiled to collect all the information needed for test preparation. Additional information was extracted from the warranty database for similar products. A short excerpt from the knowledge matrix is shown in Table 1.

Table 1 - Knowledge matrix example

| Part | Failure Mode | Failure Mechanism | Failure Cause | Testing |
|---|---|---|---|---|
| Microprocessor | Loss of function | Degradation | Overheating, low coolant flow | High temperature endurance |
| PCB | Via cracking | Fatigue/overstress | Thermal expansion | Thermal cycling |
| Housing | Mounting problems | Mechanical tolerances out of spec | Thermal expansion and contraction | Thermal cycling |
| High power connector | Loss of connection | Detachment, fretting corrosion | Vibration | Random vibration |
| Film capacitor | Short or parameter drift | Corrosion | Excessive humidity + voltage | Steady state and cyclic humidity |
| Electrolytic capacitor | Capacitor open | Broken lead | Excessive stress or fatigue | Vibration |
| Electrolytic capacitor | Parameter drift | Degradation of electrolyte | Extended exposure to high temperature | High temperature endurance |

Additional information was received from the OEM about the operating profiles of the unit. That information included various driving profiles (e.g. city rush hour, rural aggressive, mountain road, highway, etc.), providing the current and voltage variations with time similar to those shown in Figure 3. HEV electronics is also subject to a large number of on/off power switching events, causing up and down cycles of the internal temperature (see the example in Figure 4). The magnitude and frequency of those temperature excursions provide important information for developing an appropriate thermal cycling test profile and duration.
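One common way to turn such on/off temperature data into a test duration is to bin the field excursions by their temperature swing and convert them, via a Coffin-Manson acceleration factor and Miner's rule, into an equivalent number of chamber cycles. The sketch below assumes a hypothetical field histogram and a typical solder-fatigue exponent; it illustrates the method, not the calculation used in this program.

```python
# Convert field temperature excursions (e.g. from on/off cycle data like
# Figure 4) into equivalent test chamber cycles using the Coffin-Manson
# relation AF = (dT_test / dT_field) ** c and Miner's linear damage rule.

dT_test = 165.0  # test cycle: -40 C to +125 C
c = 2.5          # assumed Coffin-Manson exponent for solder-joint fatigue

# (delta-T in C, number of such excursions over one mission life) -- assumed
field_cycles = [(20.0, 30_000), (40.0, 8_000), (60.0, 2_000), (80.0, 400)]

equivalent_test_cycles = sum(
    n / (dT_test / dT) ** c for dT, n in field_cycles
)
print(f"equivalent test cycles at dT = {dT_test:.0f} C: {equivalent_test_cycles:.0f}")
```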


Figure 3 - Unit voltage vs time (rural driving profile)

Figure 4 - Temperature changes during on/off cycles

This information, along with the FMEA and the Delphi design standards, was applied to the product design and development. After the hardware was built and the initial functional testing and verification were completed, the product was ready for the first phase of reliability testing: RFMPT. Several units were instrumented with thermocouples to study the unit's response to the ambient temperature and the effect of the thermal mass on the temperature lag time. The RFMPT started with a vibration run similar to that described in [13]. One of the differences between HALT and RFMPT is that the latter can provide a random vibration profile which replicates the random vibration seen in the field, whereas a HALT vibration profile has significant high-frequency content and very limited control options. Several issues were found during the vibration runs, including cracked transformers, broken electrolytic capacitor legs and a few connector issues. The second RFMPT run included combined vibration and temperature cycling. Several problems with FETs, microprocessor overheating, and mechanical tolerance drift were found. Special attention was paid to the quality of connections and the prevention of loss of power while operating under conditions of extreme temperature, vibration and voltage. Additionally, a few shorted resistors caused lost signals and electrical parameter drifts.

From the results of the RFMPT various product improvements were implemented, and the information learned helped to improve the DV test planning. The mission profile information obtained from the OEM, similar to that shown in Figure 3 and Figure 4, helped to calculate the test-to-field correlation and the acceleration models used for the test durations in DV. An additional benefit of running highly accelerated testing like HALT or RFMPT is that the upper and lower thermal, vibration and operating limits can be established, helping to better plan DV and PV by understanding how far the tests can be accelerated. Higher acceleration factors can shorten the tests, while understanding of the product limits can help to avoid foolish failures during the validation phases. Several failures found during RFMPT took additional time to fix, introducing a delay in the start of DV.

Despite the strong intent to start DV as originally planned, the design and validation teams took additional time to fix the problems, thus assuring that the product was indeed ready for DV. Adherence to the original validation schedule would have kept project management happy, but would almost certainly have resulted in product failures and consequently triggered another iteration of DV (DV2 and possibly DV3). This scenario would have disrupted the original product development schedule far more than the original delay in starting DV.

Following the guidelines of SAE-J1211, the temperature cycling test duration was calculated using the Coffin-Manson model (see for example [3] or [7]). One mission life required 624 cycles of [-40; +125]°C with 20-minute dwell times at each extreme. Due to the lack of time this test was run as success-based testing. The test was completed with 22 samples and no failures. Additional steps were taken to estimate the robustness of the design, and 60 extra cycles were run as part of the evaluation routine. Based on those extra cycles the design robustness was estimated at RIF = (624+60)/624 = 1.10. Based on the binomial distribution, 22 test samples with no failures demonstrated 97.0% reliability with 50.0% confidence [7]. However, since the stress conditions reflected the 99th percentile of user severity for field temperature cycling, this corresponded to 99.6% population reliability using the conversion method described in [16].
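For reference, the sketch below reproduces this arithmetic: the zero-failure (success-run) demonstration R = (1 - C)^(1/n) and the RIF from the extra cycles. The 99.6% field-reliability conversion of [16] is not reproduced here.

```python
# Success-run (zero-failure) reliability demonstration and RIF,
# using the numbers from the case study.

n_samples = 22
confidence = 0.50
reliability = (1.0 - confidence) ** (1.0 / n_samples)
print(f"demonstrated reliability: {reliability:.3f} at {confidence:.0%} confidence")
# -> about 0.969, i.e. the 97.0% quoted in the text

mission_cycles = 624
extra_cycles = 60
rif = (mission_cycles + extra_cycles) / mission_cycles
print(f"RIF = ({mission_cycles}+{extra_cycles})/{mission_cycles} = {rif:.2f}")
```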

This case has proven to be a good example of cooperation between suppliers and OEMs, early expert involvement, application of lessons learned, teamwork, understanding of new technology, and a solid engineering approach to product development. Overall, adherence to the key principles of SAE-J1211 Robustness Validation helped improve the product design and validation process and saved cost by optimizing the process and avoiding unplanned iterations of DV and PV.

ACKNOWLEDGEMENTS

The authors would like to thank Derek Braden, European Validation Manager, and Jason Shahan, Validation and Global Hardware Operations Manager (both Delphi), for reviewing the draft and making constructive suggestions.

REFERENCES

1. Environmental Testing, IEC 60068-2-1 through IEC 60068-2-82, International Electrotechnical Commission, https://webstore.iec.ch
2. ISO 16750, "Road vehicles - Environmental conditions and testing for electrical and electronic equipment," International Organization for Standardization, 2010, http://www.iso.org
3. SAE J1211, Handbook for Robustness Validation of Automotive Electrical/Electronic Modules, 2009/2012.
4. SAE J1879, Handbook for Robustness Validation of Semiconductor Devices in Automotive Applications, 2007/2014.
5. D. Raheja and L. Gullo, Design for Reliability, Wiley, Hoboken, NJ, 2012.
6. M. Silverman and A. Kleyner, "What is design for reliability and what is not?" Proceedings of the Reliability and Maintainability Symposium (RAMS), Reno, NV, IEEE, 2012.
7. P. O'Connor and A. Kleyner, Practical Reliability Engineering, 5th ed., Wiley, Chichester, 2012.
8. A. Nebeling, "Reduzierter Testaufwand," Funktionale Sicherheit, Elektronik Automotive, July 2015 (in German), www.electroniknet.de
9. Handbook for Robustness Validation of Automotive Electrical/Electronic Modules, ZVEI, 2008/2013, http://www.zvei.org/en/
10. Robustness Validation Manual - How to Use the Handbook in Product Engineering, ZVEI, 2009, http://www.zvei.org/en/
11. H. McLean, HALT, HASS, and HASA Explained, ASQ Quality Press, Milwaukee, 2009.
12. C. Peterson, "Hands-on HALT and HASS," TEST Engineering and Management, August/September 2007, pp. 10-13.
13. T. Achatz and A. Kleyner, "What is RFMPT™? A rapid and effective method to improve product designs early," TEST Engineering and Management, October/November 2012, pp. 2-5.
14. J. Hu and K. Salisbury Hu, "Temperature Spectrums of an Automotive Environment for Fatigue Reliability Analysis," Journal of the IES, Vol. 37, No. 6, November 1994, pp. 19-25.
15. D. Steinberg, Vibration Analysis for Electronic Equipment, 3rd ed., John Wiley & Sons, 2000.
16. A. Kleyner, "Effect of Field Stress Variance on Test to Field Correlation in Accelerated Reliability Demonstration Testing," Quality and Reliability Engineering International, 31:783-788, 2015.

BIOGRAPHIES

Andre Kleyner, PhD
2151 E. Lincoln Rd. CT4E
Kokomo, IN 46902, USA
e-mail: [email protected]

Andre Kleyner has 30 years of engineering, research, consulting, and managerial experience specializing in the reliability of electronic and mechanical systems designed to operate in severe environments. He received his doctorate in Mechanical Engineering from the University of Maryland and a Master of Business Administration from Ball State University. Dr. Kleyner is a Global Reliability Engineering Leader with Delphi Electronics & Safety and an adjunct professor at Purdue University. He is a Fellow of the American Society for Quality (ASQ), a Certified Reliability Engineer and a Certified Quality Engineer. He holds several US and foreign patents and has authored multiple professional publications, including three books on the topics of reliability, statistics, warranty management, and lifecycle cost analysis. Andre Kleyner is also the editor of the Wiley Series in Quality and Reliability Engineering (John Wiley & Sons).

Alexander Nebeling
Benzenbergstrasse 46
57482 Wenden, Germany
e-mail: [email protected]

Alexander Nebeling has 20 years of engineering, research, automation and management experience specializing in the validation and reliability of automotive electronic and mechanical systems. He received his degree in industrial electronics engineering from FH Cologne. As a Quality Manager (DGQ) he is experienced in process design, certification and accreditation. Alexander Nebeling is a Validation Engineering Manager with Delphi Deutschland GmbH, Electronics & Safety, at the Wiehl site, leading validation labs for EMC, product homologation and environmental testing. He is a member of the ZVEI Automotive Group, continuously developing Robustness Validation in cooperation with SAE.
