28/03/2017
Reactive-Replay approach for verification and validation of closed-loop control systems in early development

Johannes Bach1, Marc Holzäpfel2, Stefan Otten1 and Eric Sax1
1) FZI Forschungszentrum Informatik, 2) Porsche AG

Abstract

Enhanced technological capabilities render the application of various, increasingly complex, functional concepts for automated driving possible. In the process, the significance of automotive software for a satisfactory driving experience is growing. To benefit from these new opportunities, thorough assessment in early development stages is highly important. It enables manufacturers to focus resources on the most promising concepts. For early assessment, a common approach is to set up vehicles with additional prototyping hardware and perform real world testing. While this approach is essential to assess the look-and-feel of newly developed concepts, its drawbacks are reduced reproducibility and high expenses to achieve a sufficient and balanced sample. To overcome these drawbacks, new flexible, realistic and preferably automated virtual test methods to complement real world verification and validation are especially required during early development phases. In this contribution, we present a method for automated system assessments based on the reuse of recorded driving data in closed-loop simulation and its application in the early development of a predictive cruise control system. Firstly, we identify the requirements for early assessment of closed-loop system concepts, analyze the eligibility of established methods regarding the identified requirements and describe open challenges for the development of automotive software systems with focus on early development stages. Our previously introduced Reactive-Replay approach addresses these challenges by enabling reuse of recorded driving data in closed-loop simulation. We complement this approach by introducing automated assessments for the evaluation of software increments. By integrating periodic assessments into the development process, we achieve continuous tracking of software quality with very small effort. It is shown that the provision of a broad data pool for simulation based evaluation of new and refined concepts contributes to a substantial reduction of real world test mileage in early development stages.

1. Introduction

The transition from Advanced Driving Assistance Systems (ADAS) to automated driving is accompanied by an increasing number and variety of sensor and communication systems incorporated in road vehicles. These open diverse new possibilities for control system design, which render early validation of system concepts in preliminary development increasingly important. Therefore, consistent and appropriate system stimuli are required, which can pose a challenge when testing closed-loop control systems. A common approach for the assessment of new technologies and to perform a proof of principle is to set up Rapid Prototyping (RP) systems with high-performance hardware [1] for demonstration. This allows execution and validation of the developed software in the targeted environment in very early stages.

While real world validation is indispensable, it hampers reproducibility. The preliminary development phase is characterized by tight resources, frequent changes in design and an iterative modus operandi. Scant experience with the system properties leads to regular realignments and refactorings. To achieve satisfying results within an appropriate time span, further stimuli for iterative testing of control systems are required besides the RP approach. These range from test vectors modelling certain interesting waveforms and replay of recorded data to more sophisticated approaches such as Time Partition Testing (TPT) [2], which facilitates systematic testing of embedded systems with continuous behavior, or X-in-the-loop (XiL) simulation [3-5]. XiL represents a simulation based methodology for the test of embedded systems over all development stages of the automotive industry. The selection of the appropriate stimuli depends on the specific characteristics of the System under Test (SuT). As shown in Figure 1, closed-loop control systems require stimulation by methods such as XiL, RP or TPT, which offer a sufficient degree of reactiveness. Unfortunately, these elaborate methods require significant resources for deployment and maintenance.

Figure 1: Overview of established stimuli for verification and validation of automotive software (methods arranged by reactiveness versus simplicity and by their use for verification or validation; closed-loop: rapid prototyping, XiL simulation; open-loop: naturalistic data, replay of recorded data, time partition testing, test vectors, synthetic data)

A further crucial classification can be made based on the origin of the data. Verification tries to answer the question "Are we building the product right?" [6]. For this purpose, a synthetic stimulus fed in via a stub is an appropriate solution. Synthetic data usually is comprehensible, compact and easy to reconfigure. However, real world stimuli are better suited for validation, which strives to answer the question "Are we building the right product?" [6]. This question reflects underlying uncertainties and open questions concerning the application scenarios and surroundings of the SuT. Reuse of naturalistic driving data recorded with RP systems offers a substantial contribution to the preliminary development of open-loop systems because sparse additional resources are needed for the provision of realistic, repeatable and consistent stimuli. Closed-loop systems require the simulation of a plant model for feedback and time-consuming scenario definition, leading to limited variants and realism. To address these challenges, this work presents the application of a novel approach to support the preliminary development of a closed-loop control system, using the example of Predictive Cruise Control (PCC) [7]. The approach links replay of naturalistic driving data with established plant model based simulation, and we demonstrate its application for early assessment of ADAS and automated driving concepts and continuous regression testing during preliminary development. Usage of naturalistic driving data saves resource-intensive specification of synthetic scenarios, and semi-automated assessments of simulation results enable extensive test sets. In Section 2, we match requirements related to closed-loop system development with established methods and identify remaining challenges. Section 3 picks up our previously introduced Reactive-Replay approach [8] and outlines its benefits regarding the identified challenges. We present application examples of Reactive-Replay for regression testing and concept evaluation in Section 4. The contribution concludes in Section 5 with a short summary and an outlook on prospective enhancements.
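To illustrate the idea of feeding a synthetic stimulus into the SuT via a stub, the following minimal Python sketch can be considered. All names, the speed profile and the trivial stand-in SuT are invented for illustration and are not part of the tool chain described in this paper.

```python
import math

def synthetic_speed_profile(t):
    """Hypothetical synthetic stimulus: a speed ramp with a small
    sinusoidal disturbance, compact and easy to reconfigure."""
    return min(30.0, 0.5 * t) + 2.0 * math.sin(0.1 * t)

class SensorStub:
    """Stub replacing the real sensor interface; it feeds the synthetic
    signal into the system under test (SuT)."""
    def __init__(self, profile):
        self.profile = profile

    def read(self, t):
        return self.profile(t)

# Stand-in SuT for the sketch: a trivial speed limiter.
def sut_limit_speed(v, v_max=25.0):
    return min(v, v_max)

stub = SensorStub(synthetic_speed_profile)
outputs = [sut_limit_speed(stub.read(t)) for t in range(0, 100, 10)]
assert all(v <= 25.0 for v in outputs)  # verification property holds
```

Because the stimulus is fully synthetic, the expected property of the SuT output can be checked automatically, which is what makes this setup suitable for verification rather than validation.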

2. Requirements, Established Methods and Open Challenges for Verification and Validation of Automotive Control Systems

Recently developed automotive control systems for ADAS or automated driving process various system inputs by multi-layered complex algorithms in order to drive distributed actuators [9, 10]. Besides questions regarding the implementation of the desired functionality, research and development of automating functions poses new challenges concerning the utilized tools as well as the applied methods. These have to evolve with the algorithmic advancement to suit increasing requirements. The presented approach was designed in the context of the development of a PCC system, which performs optimal longitudinal control of a vehicle based on vehicle state, road topology, speed limits and surrounding traffic [11]. The system fuses its different inputs into a singular representation of the vehicle's state and its surroundings, and a Model Predictive Control approach performs optimal control of the vehicle's velocity. To achieve a sufficient control behavior with the applied multistage control system architecture, intensive tests and experiments are required.

The look-and-feel of an automotive software function accounts for a substantial part of the decision made about a project's continued existence. For this reason, a major requirement for the assessment of innovative functions affecting the behavior and dynamics of a vehicle is real world application. A common example is the use of a series car equipped with additional RP hardware.

For tracking of a project's progress and the degree of maturity of the implemented code, repeatable and consistent quality measures are essential. Usage of coding or modeling guidelines and application of static analysis tools offer a stable foundation. For an advanced project, the implementation of unit tests provides an adequate contribution to quality assurance. However, in very early stages, dispatching dedicated resources on quality measures could be wasteful. Whereas established series-development processes based on Automotive SPICE [12] distinguish between development and test engineers, this distinction is rather difficult during preliminary development due to very small project teams.

For fast development progress, direct feedback for developers is indispensable. This requires a possibility for developers to run tests and perform evaluations in their accustomed working environment. After executing tests, the assessment of results represents a further challenge. Especially extensive test cases with a vast amount of resulting data need automated assessments to save resources. In addition, the tools applied should be lean and provide ease of use to allow developers to focus on function enhancement. Fulfilling these requirements provides a suitable basis to achieve working, fault-free software increments and to satisfy the expectations of stakeholders.

Utilizing an RP vehicle allows testing of the integral system, but these tests are time consuming and expensive. Ensuring repeatable and consistent conditions in the real world is very difficult or even impossible. A very straightforward alternative is the replay of data recorded during test drives with the RP system. Replaying naturalistic test data allows regression-based testing in a straightforward way. A broad variety of experienced situations for replay contributes to preventing involuntary deterioration of implemented features during on-going development.

For example, evaluation and assessment of Computer Vision (CV) algorithms very commonly relies on recorded and labeled images with associated metadata [13]. The application of replay techniques performs best in the development of open-loop systems. The absence of a feedback loop that affects the system's inputs enables specification of the expected outcome of the defined test cases in advance. Figure 2 depicts an abstract open-loop test approach in reference to Lehmann [2]. Besides defining the desired stimulus for the SuT, the expected outcome is specified. This ground truth is used for automated assessment of the test result after execution. Instead of recorded and labeled data, synthetic data designed for a specific verification purpose could also be applied as stimulus. Both the definition of the expected outcome and the design of synthetic test data imply a certain initial effort, but provide a high level of automation and enable reiterations in the succeeding development stages.

Figure 2: Information flow of an open-loop test approach utilizing labeled data (test definition: stimuli, test case and expected outcome; test execution: SuT; test evaluation: result assessment against the expected outcome)
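The open-loop assessment against a labeled expected outcome can be sketched in a few lines of Python. The SuT, the stimuli and the tolerance below are invented stand-ins; the point is only the information flow: stimulus in, output compared against ground truth, deviations flagged for review.

```python
def assess_open_loop(stimuli, expected, sut, tolerance=0.5):
    """Run the SuT on recorded stimuli and compare each output
    against the labeled expected outcome (ground truth)."""
    results = []
    for x, gt in zip(stimuli, expected):
        y = sut(x)
        results.append({"input": x, "output": y,
                        "ok": abs(y - gt) <= tolerance})
    return results

# Stand-in SuT for the sketch: a trivial signal-processing stage.
sut = lambda x: 2.0 * x
stimuli = [1.0, 2.0, 3.0]
ground_truth = [2.0, 4.1, 7.0]  # third label deliberately off
report = assess_open_loop(stimuli, ground_truth, sut)
failed = [r for r in report if not r["ok"]]
assert len(failed) == 1  # only the deviating case is flagged
```

The tolerance models the fact that recorded labels and resimulated outputs rarely match exactly; only deviations beyond it require manual review.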

The described open-loop test process is insufficient for test and evaluation of the PCC system. The closed-loop control approach requires the test environment to adapt the input stimulus in feedback to the SuT's output. This is commonly achieved utilizing a plant model for simulation of the system's environment [14]. Figure 3 shows an abstract closed-loop test process. For development and verification of ADAS and automated driving, diverse vehicle and environment simulations providing full XiL functionality are available [3, 15]. These are especially useful in series development with clearly defined requirements and explicitly assigned resources for verification and validation purposes.
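A minimal sketch of such a closed-loop harness, with an invented P-controller standing in for the SuT and a first-order longitudinal plant model, illustrates how the plant feeds the controller output back into the controller input each step. Both models are hypothetical and chosen only for brevity.

```python
def simulate_closed_loop(controller, plant_step, v0, setpoint, steps, dt=0.1):
    """Minimal closed-loop test harness: each step, the plant model
    feeds the controller's output back into the SuT's input."""
    v = v0
    trace = [v]
    for _ in range(steps):
        a = controller(setpoint, v)   # SuT: commanded acceleration
        v = plant_step(v, a, dt)      # plant: longitudinal dynamics
        trace.append(v)
    return trace

# Hypothetical P-controller and first-order longitudinal plant.
controller = lambda sp, v: 0.5 * (sp - v)
plant_step = lambda v, a, dt: v + a * dt
trace = simulate_closed_loop(controller, plant_step,
                             v0=0.0, setpoint=20.0, steps=200)
assert abs(trace[-1] - 20.0) < 0.5  # converges toward the setpoint
```

Unlike the open-loop case, the stimulus here cannot be fixed in advance: the velocity trajectory depends on the SuT's own output, which is exactly why replay of recorded data alone is insufficient for closed-loop systems.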

Figure 3: Information flow of a closed-loop test approach utilizing a plant model and scenario configuration (test definition: test case, initial conditioning, parameters/scenario; test execution: SuT coupled with the plant model; test evaluation: result assessment against observable criteria)

The definition of test cases for XiL includes the initial conditioning of the SuT and the parametrization of the plant model. To parametrize vehicle and traffic simulations, definition of detailed scenarios is a common strategy [QUELLE]. Depending on the number of traffic participants and the duration of a scenario, the specification of a sufficient set of scenarios requires high effort. In addition, manual specification depends on personal experience and creativity; thereby, the process risks missing unusual but highly relevant scenarios. Abstraction of naturalistic data could support future verification and validation activities [16]. Utilizing naturalistic data during the preliminary development of a closed-loop system presents an opportunity for uncomplicated and resource-saving test case generation, provided that an approach to introduce reactiveness of the data toward the system's output is available.

Automated assessment of test results in a simulation based closed-loop test process requires the definition of observable criteria. In preliminary development, the definition of scenarios and criteria poses a challenge. Depending on the novelty of the utilized approach, a lack of experience hampers formal specification of the system's expected behavior. Finding an approach to reuse the subjective criteria applied to the system during real world validation with the RP system could facilitate the process and provide a subtle contribution to quality assurance during preliminary development.

To increase the efficiency of verification and validation, TPT represents an eligible addition to the described methods. Introduced by Lehmann [2], TPT provides a "systematic approach for test of the continuous behavior of embedded systems". It encompasses diverse modeling techniques and combines predefined signals with automata describing transitions in finite state machines. The presented distinction between test execution and assessment is beneficial. The current version of the dedicated commercial product also offers integration of measured data [17] for open-loop tests. While the idea of combining various approaches is promising, in the case of preliminary development, specification of synthetic test cases for closed-loop testing does not represent a suitable option.

In summary, it can be stated that scarce resources and the high agility of features in preliminary development represent the prime impediment hindering the application of sophisticated simulation and test methods. Likewise, the assessment of newly developed functional approaches requires a broad range of stimuli to achieve a significant rating. Repeatable test cases are vital for continuous tracking of the function's degree of maturity. Despite the availability of naturalistic driving data collected with functional prototypes during indispensable real world validation, we miss an approach to reuse the data for verification and validation of closed-loop systems.

3. Reactive-Replay Approach

To address the open challenge of data reuse in the context of closed-loop system development, we designed what we call Reactive-Replay. The approach combines a plant model of the vehicle dynamics with recorded test data [8], adding a sufficient level of reactiveness to the recorded data to simulate the closed-loop system's functionality. The core principle of the Reactive-Replay approach is the distinction between inputs with direct feedback to a respective system output and inputs that causally depend on the latter. In the case of PCC, the longitudinal dynamics of the vehicle are part of the basic feedback loop, whereas, for example, the upcoming road topology data of the predictive map input solely depends on the vehicle's position. Therefore, modeling the vehicle's longitudinal dynamics for simulation is a basic requirement, and the upcoming road topology may be fed in from a record if adapted to the simulated vehicle position. By mapping the time-based record onto the driven track, we render it possible to replay recorded data in concordance with the simulation of the vehicle's longitudinal dynamics. The approach is facilitated because the control algorithm only affects the vehicle's longitudinal position, thereby restricting the relevant dependencies to a single dimension.

Whereas spatially static information, such as the measured slope, can be treated analogously to time-based continuous signals, traffic objects performing an ego-motion need a more elaborate approach. Within the measured data, preceding traffic objects are represented by data objects with a relative distance to the ego-vehicle as well as their respective velocity and acceleration. The data stream is divided into sequences with traffic objects being present and sequences without preceding traffic. For each occurrence of a traffic object, an independent sequence is derived. The ego-vehicle's position at the first occurrence of the object marks the sequence's start condition. The relative position of each data frame of the sequence is mapped to an absolute track position using the respective ego-position at the time of the measurement. Additionally, the time difference between the start of the sequence and the particular measurement is appended to each frame. This ensures traffic object motion corresponding to the recorded test drive and independence from a differing ego-vehicle movement in the virtual environment. During execution of the simulation, a sequence is triggered when the ego-vehicle's position reaches or exceeds the respective track position. From this point on, the recorded frames are fed into the simulation based on their time stamp. The input signal is reconstructed by calculating the relative distance between the simulated position of the ego-vehicle and the absolute position of the particular traffic object frame. That way, the behavior of the traffic objects is preserved and the environment reacts to the PCC's output. Zofka et al. [18] present a similar concept utilizing a LIDAR sensor, focused on rendering critical traffic scenarios from recorded data.
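The described mapping of traffic-object sequences to absolute track positions, and their position-triggered replay during resimulation, can be sketched as follows. The data layout (tuples of time, relative distance and velocity, plus a time-to-position lookup) is invented for illustration and simplified to nearest-preceding-frame lookup without interpolation.

```python
import bisect

def build_sequence(frames, ego_positions):
    """Convert a recorded traffic-object sequence into track-absolute
    frames. Each recorded frame is (t, rel_dist, velocity);
    ego_positions maps recording time to the ego-vehicle's track
    position at that time."""
    t0 = frames[0][0]
    converted = []
    for t, rel_dist, vel in frames:
        converted.append({
            "dt": t - t0,                            # time since sequence start
            "abs_pos": ego_positions[t] + rel_dist,  # absolute track position
            "vel": vel,
        })
    return {"trigger_pos": ego_positions[t0], "frames": converted}

def replay_frame(seq, sim_ego_pos, sim_time_since_trigger):
    """During resimulation, reconstruct the relative distance from the
    simulated ego position once the trigger position is reached."""
    if sim_ego_pos < seq["trigger_pos"]:
        return None  # sequence not yet triggered
    dts = [f["dt"] for f in seq["frames"]]
    # nearest preceding recorded frame for the elapsed time
    i = max(0, bisect.bisect_right(dts, sim_time_since_trigger) - 1)
    return seq["frames"][i]["abs_pos"] - sim_ego_pos  # fed to the SuT

# Tiny invented record: ego at 100 m at t=0 s, 110 m at t=1 s.
ego_positions = {0.0: 100.0, 1.0: 110.0}
frames = [(0.0, 50.0, 20.0), (1.0, 48.0, 19.0)]
seq = build_sequence(frames, ego_positions)
```

With this scheme the traffic object keeps its recorded absolute motion, while the relative distance reported to the SuT reacts to the simulated ego position, which is the reactiveness the approach requires.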

The Reactive-Replay approach is integrated into a lean, proprietary software test bench running on standard office PCs. It utilizes a detailed model of the test vehicle's longitudinal dynamics, including its engine, gearbox and powertrain. For simulation of the road topology, predictive map data is applied. The procedure to prepare new data recorded with the RP vehicle for Reactive-Replay is fully automated. Special situations or scenarios are available for resimulation within one individual operation, thereby consuming almost no resources. This approach provides a very efficient way for debugging and verification of the implemented software as well as for initial validation of the concepts. The seamless and consistent procedure enables resimulation of any situation or scenario experienced during a test drive with the prototype vehicle.

While this constitutes a potent approach for development and test during preliminary development, it has clearly defined limits. Only the vehicle's longitudinal dynamics are simulated and possess full closed-loop properties. All other data is merely replayed, adapted to the vehicle's position. For example, this implies that traffic objects will not react to tailgating, and the steering angle, which is also replayed according to the vehicle's position, will not change in accordance with varying velocity, as no driver model is present. The most important deficiency arises if the recorded and the simulated velocity of the vehicle differ too much. This leads either to over- or undersampling of the recorded data and causes inconsistent and unrealistic input signals, as important information might be skipped and the dynamics of input signals may differ strongly from expected values.
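The over-/undersampling deficiency can be monitored with a simple plausibility check comparing recorded and simulated velocity along the track. The function below is a sketch under the assumption that both signals are sampled at the same track positions; the ratio threshold is an invented tuning parameter, not a value from the paper.

```python
def sampling_consistency(rec_vel, sim_vel, max_ratio=1.5):
    """Flag track sections where simulated and recorded velocity
    diverge so strongly that position-based replay would over- or
    undersample the record. Both lists are assumed to be sampled at
    the same track positions."""
    flags = []
    for vr, vs in zip(rec_vel, sim_vel):
        ratio = max(vr, vs) / max(min(vr, vs), 1e-6)  # guard div by zero
        flags.append(ratio > max_ratio)
    return flags

# Invented sample: record at a steady 20 m/s, simulation deviating.
flags = sampling_consistency([20.0, 20.0, 20.0], [21.0, 35.0, 10.0])
```

Sections flagged this way could be excluded from automated assessment, since their replayed input signals are no longer trustworthy.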

Despite these drawbacks, if applied within its conceptual limits, Reactive-Replay represents a powerful method for preliminary development as it supplies reproducible test cases from real world test drives without added costs.

4. Application of Reactive-Replay

In Section 2, we identified the need for automated assessment of test results and introduced the idea of reusing the subjective criteria applied during real world validation. Following the labeling approach used for CV evaluation, we present a process to derive the expected outcome, which we call ground truth, to assess the resimulation of a recorded test drive. This process enables efficient regression testing by introducing a substantial degree of automation. A further important use case of Reactive-Replay is the exploitation of naturalistic driving data for evaluation of closed-loop system concepts. To provide a broad data pool, consistent recording and storage of RP test drives is necessary. In doing so, developers can resort to an increasing number of naturalistic driving scenarios contained in the data pool. These naturalistic scenarios enable evaluation of novel concepts in a desktop environment before implementing an advanced release into the prototype car. This significantly reduces the necessary test kilometers with the RP vehicle.

4.1 Regression testing

Figure 4 depicts the application of Reactive-Replay for regression testing during preliminary development. In the first step, the developer selects a set of recorded test drives from the data pool. The selected set should provide a balanced mix of scenarios relevant for the examined functionality. This selection is so far solely based on experience, but for PCC it typically includes highway and rural road sections with and without traffic.

The approach postulates validity of the latest release of the software at the beginning of each development cycle. Based on this assumption, we use this latest, validated release to derive the ground truth of the test set by executing the Reactive-Replay simulation. To reduce

Figure 4: Application of Reactive-Replay for continuous regression testing during preliminary development (per development cycle: manual definition of evaluation criteria and selection of the test set; automated Reactive-Replay simulation of the latest release to derive the ground truth; automated Reactive-Replay simulation of the enhanced code, assessment of the simulation result and generation of an assessment report; validation in the RP vehicle closes the cycle)

the risk of unwanted side effects after a development step, for example fixing a bug or improving a feature, the enhanced code is executed in the simulation with the same test set. The result of this simulation is compared with the previously derived ground truth. The continuous result is filtered by a moving window and the comparison is applied to the respective excerpt. Therefore, comparison criteria suitable for the features under development need to be defined at the beginning of a development cycle. A criterion involves the signal to be compared, the size of the window used, the comparison operator and a threshold specifying the violation of the criterion. The implemented comparison operators are basic mathematical operations, such as difference and accumulation. In our case, typical signals include the vehicle's velocity, its acceleration or the gear requested by the PCC system.

Following this approach, we are able to automate a substantial amount of the assessment. After manual selection of the test set and definition of the criteria, the only further manual task is the review and rating of sections where at least one criterion was violated. This enables additional short iterations between validations in the RP vehicle. The fast assessment of minor changes leads to higher code quality and better system understanding within the development team.

Figure 5 shows an exemplary assessment report. For this assessment, two criteria were defined. The first criterion evaluates the velocity difference between ground truth and simulation result. It limits the velocity difference between the ground truth and the simulation result to 1 m/s. As can be seen in the upper graph, the criterion is violated twice. The second criterion evaluates the requested gear strategy. To identify deviations from the ground truth with a duration of more than 50 meters, a corresponding moving window is applied to integrate the distance with differing gear prompts.
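A moving-window criterion of the kind described (signal, window size, comparison operator, threshold) can be sketched as follows. The sampling grid, window sizes and test signals are invented; only the 1 m/s velocity threshold and the 50 m gear criterion follow the example in the text.

```python
def evaluate_criterion(gt, res, ds, window_m, threshold, op):
    """Moving-window comparison of a simulated signal against ground
    truth. gt/res are sampled every ds meters. op 'diff' checks the
    maximum absolute difference in the window; op 'accum' accumulates
    the distance with differing values (e.g. gear prompts)."""
    w = max(1, int(window_m / ds))
    hits = []
    for i in range(len(gt) - w + 1):
        if op == "diff":
            v = max(abs(a - b) for a, b in zip(gt[i:i + w], res[i:i + w]))
        else:  # 'accum'
            v = sum(ds for a, b in zip(gt[i:i + w], res[i:i + w]) if a != b)
        if v > threshold:
            hits.append(i * ds)  # start position of the violating window
    return hits

# Velocity criterion: deviation above 1 m/s (invented 10 m sampling).
hits_v = evaluate_criterion([20.0, 20.0, 20.0, 20.0],
                            [20.0, 21.5, 20.0, 20.0],
                            ds=10, window_m=10, threshold=1.0, op="diff")

# Gear criterion: differing gear prompts over more than 50 m.
gt_g = [5] * 12
res_g = [5, 5, 6, 6, 6, 6, 6, 6, 6, 5, 5, 5]
hits_g = evaluate_criterion(gt_g, res_g,
                            ds=10, window_m=60, threshold=50, op="accum")
```

Only the start positions of violating windows are reported, so the manual review can jump directly to the affected track sections.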
In the lower graph, four violations are highlighted.

In theory, instead of deriving the ground truth by simulating the latest release, one could use the recorded real world system output. This is not feasible due to two constraints. Firstly, the accuracy of the used plant model needs to be sufficient for direct comparison; in our case this does not apply to the used road model. Secondly, system behavior changes with ongoing development. Therefore, system output at the time of recording and system output at the time of resimulation will deliberately differ. Intended changes accumulate over time and would render the comparison approach useless if the result were not compared to the latest release. Deliberate changes constitute a general limitation for the application of this approach. Intended changes in system behavior will most certainly violate defined criteria. A possibility to mitigate this problem is the exclusion of sequences from automated comparison for which the necessary changes apply.

Integrated into the preliminary development process, the described method complements established verification and validation strategies. While static code analysis, reviews and unit tests enable constant verification, Reactive-Replay provides means for constant virtual validation. In conjunction with selected real world assessments, functional quality is upheld and, at the same time, failures in expensive real world tests are reduced. Repeatable test cases enable continuous tracking of the functions' degrees of maturity and prevent hidden deterioration of features.
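The regression cycle of this subsection, deriving the ground truth with the validated latest release and resimulating the enhanced code on the same test set, can be sketched as a small pipeline. The stand-in simulators and the comparison function below are invented; in the described setup both would be Reactive-Replay runs and the moving-window criteria.

```python
def regression_cycle(test_set, latest_release, enhanced_code, compare):
    """Per recording: simulate the validated latest release to derive
    the ground truth, simulate the enhanced code on the same record,
    and report the number of criterion violations."""
    report = {}
    for name, record in test_set.items():
        ground_truth = latest_release(record)  # Reactive-Replay run 1
        result = enhanced_code(record)         # Reactive-Replay run 2
        report[name] = compare(ground_truth, result)
    return report

# Invented stand-ins: simulators as record -> trace functions.
latest_release = lambda rec: [2.0 * x for x in rec]
enhanced_code = lambda rec: [2.0 * x + (0.2 if i == 1 else 0.0)
                             for i, x in enumerate(rec)]
compare = lambda gt, res: sum(1 for a, b in zip(gt, res)
                              if abs(a - b) > 0.1)

test_set = {"drive_a": [1.0, 2.0, 3.0], "drive_b": [4.0, 5.0]}
report = regression_cycle(test_set, latest_release, enhanced_code, compare)
```

The report then only needs manual review where the violation count is non-zero, which is what keeps the per-increment effort small.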

4.2 Early Concept Assessment

In preliminary development, decisions on shifting to a new system concept, changing a component or performing crucial refinements have to be made on a regular basis. Before finalizing the decision-making based on real world experience, Reactive-Replay supports a thorough assessment with naturalistic driving data in a closed-loop simulation environment. To improve and assess integrated features of the SuT, scenarios with a particular signal range or curve are of special interest. Providing advanced search capabilities to browse the available data pool for relevant scenarios and situations supports extraction of specific test sets to evaluate new features on an equal and realistic basis. Therefore, our implemented data management tool provides developers with the ability to integrate arbitrary filter functions and extract full records or fitting excerpts.
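Such an arbitrary filter function over the data pool can be sketched as follows. The record layout, field names and the on-ramp predicate are invented for illustration and do not reflect the actual data management tool.

```python
def extract_scenarios(data_pool, predicate, context_m=200.0):
    """Browse the data pool with an arbitrary filter function and
    extract fitting excerpts (with surrounding context) for
    resimulation. Simplified to one excerpt per record."""
    excerpts = []
    for record in data_pool:
        for frame in record["frames"]:
            if predicate(frame):
                start = max(0.0, frame["pos"] - context_m)
                excerpts.append({"record": record["name"],
                                 "span": (start, frame["pos"] + context_m)})
                break
    return excerpts

# Hypothetical filter: motorway on-ramp situations at moderate speed.
on_ramp = lambda f: f["road_type"] == "on_ramp" and f["vel"] > 15.0

pool = [
    {"name": "drive_a", "frames": [
        {"pos": 1000.0, "road_type": "motorway", "vel": 30.0},
        {"pos": 1500.0, "road_type": "on_ramp", "vel": 18.0}]},
    {"name": "drive_b", "frames": [
        {"pos": 200.0, "road_type": "rural", "vel": 20.0}]},
]
excerpts = extract_scenarios(pool, on_ramp)
```

Because the predicate is an ordinary function, developers can express arbitrarily specific scenario queries without changes to the extraction machinery.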

Figure 5: Automated comparison report of the velocity-based criterion (upper graph) and the gear-based criterion (lower graph) for manual evaluation (velocity in m/s and gearbox state plotted over distance, 0 to 2000 m)


Application of this method allows developers to improve newly developed features to the desired degree of maturity before implementing them in the RP vehicle. Additionally, the extracted test sets enable comparison of implemented alternatives on a wide variety of relevant and naturalistic scenarios. Figure 6 shows an excerpt of the data aggregated during PCC development, which is composed of a mix of motorway, rural road and inner city sections and is ready for resimulation. A selected track is highlighted in red. To facilitate manual selection or review of test sets, chosen signals can be displayed in a parallel view. Figure 7 visualizes the ego-vehicle's velocity (red) at the time of recording and the included traffic sequences (purple) of the selected track.

Figure 6: Track selected (red) from available tracks (blue) in the data pool based on a geographical representation.

For example, suppose the PCC system's behavior when entering a motorway shall be improved by implementing a new prediction feature. If approval of the new feature for further development requires demonstration of its capability on ten varying on-ramps, utilizing established methods would imply either modeling ten scenario variants, including appropriate traffic, or repeatedly driving roughly one hundred kilometers per development iteration (presuming a mean distance of about ten kilometers between exits and a consistent traffic situation). The presented approach enables extraction of these scenarios within minutes from the aggregated data pool. Based on this selection, a thorough evaluation and improvement process of the newly implemented on-ramp feature is made feasible. Combined with regression testing as in Section 4.1, involuntary changes of system properties regarding other features can be avoided by usage of the identified test set.

Figure 7: Selection from the data pool based on included signals: the velocity of the ego-vehicle (red) and of recorded traffic objects (purple), mapped onto the distance of the selected track (velocity in m/s over 0 to 27000 m).

5. Conclusion

In this contribution, existing methods for validation and verification of automotive software systems were evaluated for their applicability during early development phases of closed-loop control systems. We identified open challenges regarding the application of powerful and elaborate verification methods due to sparse resources in early stages. From this, the necessity for a method facilitating the transfer of recorded real world data to a simulation environment arose.

The Reactive-Replay approach allows the use of naturalistic driving data for development and testing of closed-loop control systems. The approach joins established replay and simulation methods and provides a cost-effective and easy-to-use test and assessment method for the development of closed-loop systems. It supports developers by enabling detection of system faults in a desktop environment. An essential added value is the high level of automation that can be applied, especially during early development phases. The aggregation of a rich data pool of simulation scenarios, as a side product of real world validation, represents an additional benefit. It allows a thorough evaluation of novel system concepts and features based on realistic and consistent stimuli. A systematic application of Reactive-Replay supports continuous tracking of software quality and its degree of maturity. The integration of simulation-based Reactive-Replay and real world based rapid prototyping in our development process led to a substantial reduction of real world test drives.

So far, the utilized assessment criteria are basic comparisons of single control outputs with the outputs of a previously validated software version. By combining signals, additional and more significant criteria are possible. Furthermore, advanced assessment and data mining techniques, such as pattern recognition and space transformations, should be considered. Modularization of existing simulation models enables straightforward integration of Reactive-Replay into established tools to support early concept validation based on naturalistic driving data. Application of the presented development approach to further systems is intended.

References

1. Kammel, S., Pitzer, B., Vacek, S., Schroeder, J. et al., "Darpa urban challenge, team annieway, technical system description,"

Defense Advanced Research Projects Agency (DARPA), 2007, http://archive.darpa.mil/grandchallenge/TechPapers/Team_Annieway.pdf
2. Lehmann, E., "Time Partition Testing - Systematischer Test des kontinuierlichen Verhaltens von eingebetteten Systemen," Ph.D. dissertation, Technische Universität Berlin, Fakultät IV - Elektrotechnik und Informatik, Berlin, 2004.
3. Schwab, S., Leichsenring, T., Zofka, M., and Bär, T., "Consistent test method for assistance systems," ATZ worldwide, vol. 116, no. 9, pp. 38-43, 2014, doi:10.1007/s38311-014-0216-x
4. Oral, H., "An Effective Modeling Architecture for MIL, HIL and VDIL Testing," SAE Int. J. Passeng. Cars - Electron. Electr. Syst. 6(1):34-45, 2013, doi:10.4271/2013-01-0154
5. Albers, A., Düser, T., Sander, O., Roth, C. et al., "X-in-the-Loop-Framework für Fahrzeuge, Steuergeräte und Kommunikationssysteme," ATZelektronik, vol. 5, no. 5, pp. 60-65, Oct 2010, doi:10.1007/BF03224034
6. Boehm, B., Software Risk Management, Los Alamitos, CA: IEEE Computer Society Press, 1989, ISBN:0-8186-8906-4
7. Markschläger, P., Wahl, H.-G., Weberbauer, F. and Lederer, M., "Assistance system for higher fuel efficiency," ATZ worldwide, vol. 114, no. 11, pp. 8-13, 2012, doi:10.1007/s38311-012-0241-6
8. Bach, J., Bauer, K.-L., Holzäpfel, M., Hillenbrand, M. et al., "Control based driving assistant functions test using recorded in field data," presented at 7. Tagung Fahrerassistenzsysteme, 2015, https://mediatum.ub.tum.de/node?id=1285215
9. Ardelt, M., Coester, C., and Kaempchen, N., "Highly automated driving on freeways in real traffic using a probabilistic framework," IEEE Transactions on Intelligent Transportation Systems, vol. 13, no. 4, pp. 1576-1585, 2012.
10. Nilsson, J., Brännström, M., Fredriksson, J. and Coelingh, E., "Longitudinal and lateral control for automated yielding maneuvers," IEEE Transactions on Intelligent Transportation Systems, vol. 17, no. 5, pp. 1404-1414, 2016.


11. Bauer, K.-L. and Gauterin, F., "A two-layer approach for predictive optimal cruise control," SAE Technical Paper 2016-01-0634, 2016, doi:10.4271/2016-01-0634
12. Automotive SPICE Process Assessment / Reference Model, 3rd ed., VDA QMC Working Group 13 and Automotive SIG, Berlin, Germany, July 2015. [Online]. Available: http://www.automotivespice.com/
13. Cordts, M., Omran, M., Ramos, S., Rehfeld, T. et al., "The cityscapes dataset for semantic urban scene understanding," presented at IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, https://www.cityscapes-dataset.com/
14. Lomonaco, J., Ganesan, S., and Cheok, K., "Model-Based Embedded Controls Test and Verification," SAE Technical Paper 2010-01-0487, 2010, doi:10.4271/2010-01-0487
15. Neumann-Cosel, K., "Virtual Test Drive - Simulation umfeldbasierter Fahrzeugfunktionen," Ph.D. dissertation, Technische Universität München, Fakultät für Informatik, 2013.
16. Bach, J., Otten, S., and Sax, E., "A model-based scenario specification method to support development and test of automated driving functions," presented at IEEE Intelligent Vehicles Symposium (IV), 2016.
17. Piketec GmbH, "TPT assessment manual - Version 8," Berlin, 2015.
18. Zofka, M., Kuhnt, F., Kohlhaas, R., Rist, C. et al., "Data-driven simulation and parametrization of traffic scenarios for the development of advanced driver assistance systems," presented at 18th International Conference on Information Fusion (FUSION), 2015.

Contact Information

Johannes Bach
[email protected]