Autonomous Robot Software Design Challenge

Saddek Bensalem†, Félix Ingrand∗, Joseph Sifakis†
∗ LAAS/CNRS, Université de Toulouse
† Verimag Laboratory, Université Grenoble I, CNRS
Abstract— We summarize some current trends in autonomous robot software systems design and point out some of their characteristics, such as the chasm between analytical and computational models, and the gap between safety-critical and best-effort engineering practices. We call for a coherent scientific foundation for autonomous robot software systems design, and we discuss a few key demands on such a foundation: the need for encompassing several manifestations of heterogeneity, and the need for constructivity in design. We believe that a long-term and continuous research effort is necessary to develop a framework for the rigorous construction of robust autonomous robot software systems.
I. MOTIVATION

Autonomous robot systems are designed to perform tasks independently, or with very limited external control. They are needed in situations where human control is either infeasible or not cost-effective. Such systems are particularly hard to design and validate because:
1) they operate in highly variable, uncertain, and time-changing environments;
2) they must meet real-time constraints to work properly; and
3) they are often interconnected with other agents, both humans and other machines.
For example, service home robots will need to contend with all the complexities of sensing, planning, and acting in real time in an uncertain, dynamic environment; to interact intelligently with humans and other robot systems; and to guarantee their own safety and that of the people they encounter. Some examples, such as tour robots or nurse robots, have demonstrated their reliability through extensive experimentation, but only in limited environments [1, 2]. We are far from providing the formal assurances of safety that would be needed before deploying such robots more widely. In such applications, the need for guarantees of safety, reliability, and overall system correctness is acute.
The degree of assurance we can provide today is based on extensive simulation and testing. The goal of simulation is to catch errors as early as possible in the design phase, to reduce the need for more costly testing of the implemented system. Both simulation and testing suffer from being incomplete: each simulation run and each test evaluates the system performance for a single set of operating conditions and input signals. For complex autonomous and embedded systems it is impossible to cover even a small fraction of the total operating space with simulations. Finally, testing is already too expensive; today, building a test harness to simulate a component's environment is often more expensive than building the component itself.
Formal verification is the process of determining whether a system satisfies a given property of interest.
Today the best known verification methods are model checking and theorem proving, both of which have sophisticated tool support and have been used in non-trivial case studies, including the design and debugging of microprocessors, cache coherence protocols, internetworking protocols, smartcards, and air traffic collision avoidance systems. Model checking in particular has enjoyed huge success in industry for verifying hardware designs. Formal verification can be used to provide guarantees of system correctness. It is an attractive alternative to traditional methods of testing and simulation, which, for autonomous and embedded systems, as argued above, tend to be expensive, time-consuming, and hopelessly inadequate. By formal verification we mean not just the traditional notion of program verification, where the correctness of code is in question. We more broadly mean design verification, where an abstract model of a system is checked for desired behavioral properties. Finding a bug in a design is more cost-effective than finding the manifestation of the design flaw in the code.
Unfortunately, after decades of research, formal verification has not become part of standard engineering practice. One reason is that these techniques do not scale: code size is too large for practical program verification; the underlying mathematical formalisms (i.e., logics) do not handle all features of the programming language or all behavioral aspects of the system; and proof methods lack compositionality. Another reason is that the tools do not scale: model checkers are limited by the size of the state spaces they can handle; theorem provers require too much human time and effort for too little perceived gain; and the tools are not integrated to work with the others already found in the engineer's workbench.
Software is an integral part of autonomous robot systems. The shortcomings of current design, validation, and maintenance processes make software, paradoxically, the most costly and least reliable part of the systems used in critical applications. In the following we lay out what we see as an Autonomous Robot Software Design Challenge. In our opinion, this challenge raises not only technology questions but, more importantly, requires the building of a foundation that systematically and even-handedly integrates, from the bottom up, computation and physicality [3].

II. CURRENT ENGINEERING PRACTICES FOR AUTONOMOUS ROBOT SOFTWARE DESIGN AND THEIR LIMITATIONS

Designing and developing software for an autonomous robot is quite a challenging and complex task. One has to take into account the following requirements:
• there is a wide range of software "types" to integrate (from low-level servo loops, to data processing, up to high-level automated action planning and plan execution);
• the temporal requirements of these software components vary a lot (from hard real-time, to polynomial, up to NP-complete decisional algorithms);
• the various software components are developed by different programmers, with different backgrounds, who in most cases know little about the other components involved.
Nevertheless, to successfully achieve the deployment of autonomous robots, the robotic community has relied on architectures and tools to enforce a number of good software engineering practices.
• The software components are organized in levels or layers. Most of the time, these layers correspond to different temporal requirements, or to different abstraction requirements.
• The architecture and tools provide some control flow mechanisms to support requests or commands with arguments passed from one component to another, as well as reports sent back to the requester upon completion.
• Similarly, some data flow mechanisms are provided to offer access to the data produced by one component to another component.
Some architectures go further and provide:
• Interoperability libraries to convert data from one framework to another (e.g. your low-level functional components may be written in C or C++, while your high-level planner or execution controller uses a symbolic representation);
• Software tools which encapsulate the components and provide a clear API of what each component provides as services, or as exported data structures (e.g. GenoM);
• Software development environments to map particular services onto threads, processes, and even CPUs;
• Seamless integration with higher-level tools for autonomy (action planner, plan execution control, FDIR, etc.).
This has resulted in a number of successful architectures and tools (LAAS [4], CLARAty [5], etc.), each of them with some pros and cons. Despite some efforts to compare them, nobody has really been able to do so seriously (apart from some measurements of communication performance or memory footprint [6]). In any case, as of today, these architectures and tools have achieved a lot, and they have allowed the deployment of numerous successful robotics experiments. However, none of them is able to unambiguously answer simple questions such as:
• Can you prove that your nursebot will not start full throttle while an elderly person is walking while leaning on it?
• Can you guarantee that the arm of your service robot is not going to open its gripper while holding a cup of coffee and drop it on the carpet?
• Can you prove that there is no deadlock in the initialization sequence of your robot?
• Can you prove that there is no race condition in a perception-action loop (a hypothetical example of such a race is sketched at the end of this section)?
• etc.
These are difficult questions, even for regular software, and a fortiori even more so for autonomous robot software. But one must admit that little has been done to seriously address them on a complete robotics system. Meanwhile, robots are becoming more and more pervasive, and the time will soon come when a certification body will require robot software developers to exhibit what is being done to address such serious security and dependability issues. It is not clear if just having good software engineering practices will be enough.
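To make the race-condition question concrete, here is a minimal, hypothetical C++ sketch (it is not taken from any of the architectures cited above): a perception thread periodically refreshes a shared obstacle estimate while an action loop consumes it. With the mutex removed, the action loop may read a half-updated (x, y) pair, which is precisely the kind of defect that extensive testing rarely exposes and that a verification-based approach would have to rule out.

```cpp
// Hypothetical sketch of a race condition in a perception-action loop.
// The component names and rates are invented for illustration only.
#include <atomic>
#include <chrono>
#include <mutex>
#include <thread>

struct Obstacle { double x = 0.0, y = 0.0; };

Obstacle g_obstacle;             // state shared by the two loops
std::mutex g_obstacle_mutex;     // remove it and the race appears
std::atomic<bool> g_running{true};

void perception_loop() {
  double t = 0.0;
  while (g_running) {
    Obstacle fresh{t, -t};       // stand-in for a real sensor-processing result
    {
      std::lock_guard<std::mutex> lock(g_obstacle_mutex);
      g_obstacle = fresh;        // readers now see a consistent (x, y) pair
    }
    t += 0.1;
    std::this_thread::sleep_for(std::chrono::milliseconds(20));
  }
}

void action_loop() {
  for (int i = 0; i < 100; ++i) {
    Obstacle snapshot;
    {
      std::lock_guard<std::mutex> lock(g_obstacle_mutex);
      snapshot = g_obstacle;     // consistent copy of the shared estimate
    }
    // ... compute and send a motion command based on `snapshot` ...
    std::this_thread::sleep_for(std::chrono::milliseconds(10));
  }
  g_running = false;
}

int main() {
  std::thread perception(perception_loop);
  std::thread action(action_loop);
  action.join();
  perception.join();
  return 0;
}
```

The point is not the fix itself, but that such properties concern the coordination between components, and therefore call for analysis at that level rather than ad-hoc inspection of each component.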
III. WHAT IS NEEDED?

At some abstract level, autonomous robot systems take sensor inputs and a goal (or set of goals) and decide to perform some actions that affect the environment (and/or the internal state of the robot). The primary problem in developing such systems is ensuring that they correctly respond, with the proper actions, in a timely fashion, to every possible set of sensor inputs, in order to achieve their goals. We can separate the overall problem of reacting to inputs into real-time and decisional problems. Real-time refers to the system responding in time. We further categorize systems according to their hard and soft real-time deadlines. For hard real-time, missing a deadline can have disastrous effects (e.g., failing to enter orbit around a planet). For soft real-time, missing a deadline just reduces overall utility (e.g., failing to transmit scientific data during a given communication window). Typically, hard real-time performance is harder to obtain than soft real-time performance. Decisional problems, on the other hand, refer to the system making the right decision for a given situation. For instance, an autonomous Mars rover must decide how to steer given perceived stereo input of the terrain. Typically, decisional problems are more pervasive, but somewhat easier to diagnose and fix, than real-time problems.
Autonomous robot system designers deal with a large variety of components, each having different characteristics, from many different viewpoints, each highlighting different dimensions of a system. Two central problems are: 1) meaningful composition of heterogeneous components; 2) constructivity to guarantee global system properties from the properties of its components. Heterogeneity is the property of systems to be built from components with different characteristics. Heterogeneity has several sources and manifestations, and the existing body of knowledge is largely fragmented into unrelated models and corresponding results. Constructivity is the possibility to build complex systems that meet given requirements from building blocks and glue components with known properties. Constructivity can be achieved by algorithms (compilation and synthesis), but also by architectures and design disciplines.
The two demands of heterogeneity and constructivity pull in different directions. Encompassing heterogeneity looks outward, towards the integration of theories to provide a unifying view for bridging the gaps between analytical and computational models, and between critical and best-effort techniques. Achieving constructivity looks inward, towards developing a tractable theory for system construction. Since constructivity is most easily achieved in restricted settings, a systems design framework must provide the means for intelligently balancing and trading off both ambitions.
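Referring back to the hard versus soft real-time distinction made earlier in this section, the following purely illustrative C++ sketch (the 100 Hz rate and the iteration count are assumptions, not figures from the text) runs a periodic control step and merely counts missed deadlines. This is acceptable for soft real-time, where a miss only reduces utility; a hard real-time design could not settle for counting and would have to guarantee, by construction and by schedulability analysis, that no overrun can ever occur.

```cpp
// Hypothetical sketch of a soft real-time periodic control loop.
#include <chrono>
#include <cstdio>
#include <thread>

using Clock = std::chrono::steady_clock;

void control_step() {
  // Stand-in for sensing plus control computation.
  std::this_thread::sleep_for(std::chrono::milliseconds(2));
}

int main() {
  const auto period = std::chrono::milliseconds(10);  // assumed 100 Hz loop
  int misses = 0;
  auto release = Clock::now();
  for (int i = 0; i < 100; ++i) {
    control_step();
    const auto deadline = release + period;
    if (Clock::now() > deadline) ++misses;            // soft: just record the miss
    release = deadline;                               // next release time
    std::this_thread::sleep_until(release);
  }
  std::printf("deadline misses: %d / 100\n", misses);
  return 0;
}
```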
Our goal is to provide a formal framework for component-based design of autonomous robot systems. This framework will:
1) enable formal integration of heterogeneous components, for instance components with different models of communication or execution;
2) provide complete encapsulation of both functional and extra-functional properties, and develop foundations and methods ensuring the composability of components;
3) enable prediction of emergent key system characteristics, such as performance and robustness (timing, safety), from such characterizations of its subcomponents;
4) provide certificates guaranteeing such key system characteristics when the system is deployed on distributed hardware architectures.
Going over the list above, item (1) will allow us to cover the complete model space of autonomous robot systems, both regarding the range of supported viewpoints (from performance to timeliness to safety views of the system) and the level of abstraction (from hardware-level to system-level models). Item (2) guarantees that assembling systems from components will not destroy component characteristics. Item (3) builds on compositional analysis, allowing us to derive guarantees about systems from the interface specifications of their constituents; such modular analysis methods are instrumental for addressing complex systems, which can easily involve gigabytes of embedded software. Item (4) allows bridging the gap between specification and implementation, by relating system-level extra-functional requirements to the extra-functional characteristics of deployment architectures.
To achieve these objectives, we need to develop a design theory for autonomous robot systems, fully covering heterogeneity, interface specifications, composability, compositionality, and refinements for functional and extra-functional properties.

IV. COMPONENT-BASED DESIGN

Component-based design is essential to any engineering discipline when complexity dictates methodologies that leverage reuse and correct-by-construction approaches. A central idea in systems engineering, including robot software engineering, is that complex systems are built by assembling components (building blocks) [7, 8]. This is essential for the development of large-scale, evolvable systems in a timely and affordable manner. Component-based design confers many advantages with respect to monolithic design, such as reuse of solutions, modular analysis and validation, reconfigurability and controllability. Components are systems characterized by an abstraction that is adequate for composition and re-use, provided via an interface.
An interface specifies how a component is viewed by its potential users. Composition and its properties are essential for mastering the component construction process. Component-based design relies on a separation between coordination and computation. Systems are built from units processing sequential code, insulated from concurrent execution issues. The isolation of coordination mechanisms allows their global treatment and analysis.
One of the main limitations of the current state of the art is the lack of unified frameworks for describing and analyzing the coordination between components. This is particularly true for robotic systems, where the coordination is usually enforced by a high-level model rather than built from a clean bottom-up approach. Such frameworks would allow system designers and implementers to formulate their solutions in terms of tangible, well-founded and organized concepts, instead of using dispersed low-level coordination mechanisms such as semaphores, monitors, message passing, remote calls, protocols, etc. Unified frameworks should allow a comparison and evaluation of otherwise unrelated architectural solutions, as well as the derivation of implementations in terms of specific coordination mechanisms.
The component-based design problem can be formulated as follows: "build a system meeting a given set of requirements from a given set of components that are known to satisfy another set of requirements." This is an essential problem in any engineering discipline. It lies at the basis of various system-design activities, including modeling, architecting, programming, synthesis, upgrading, and reuse.
Component-based design has been used in hardware. During the past decade, IT developers and end-users have benefited from the commoditization of commercial-off-the-shelf (COTS) hardware (such as CPUs and storage devices) and networking elements (such as IP routers). For VLSI circuit design, component-based design methodologies, supported by CAD tools, have been in use for System-on-Chip products, although much remains to be done to achieve the level of maturity needed to make this approach a standard in the industry. The recent maturation of programming languages (such as Java and C++), operating environments (such as POSIX and Java Virtual Machines), and middleware (such as CORBA, Java 2 Enterprise Edition, and SOAP/Web services) enables, albeit to a limited extent, component-based software development.
An important trend in modern systems engineering is model-based design, which relies on the use of explicit models to describe development activities and their products. It aims at bridging the gap between application software and its implementation by allowing predictability and guidance through the analysis of global models of the system under development. The first model-based approaches, such as those based on Ada, synchronous languages [9] and Matlab/Simulink, support very specific notions of components and composition. More recently, modeling languages such as UML [10] and AADL [11] attempt to be more generic. They support notions of components which are independent from a particular programming language, and put emphasis on system architecture as a means to organize computation, communication, and implementation constraints.
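To make the separation between coordination and computation concrete, here is a hypothetical C++ sketch, loosely inspired by, but not taken from, the GenoM-style encapsulation mentioned in Section II. The component exposes only an interface of services, requests, reports, and exported data; its sequential computation knows nothing about threads or about who invokes it.

```cpp
// Hypothetical component interface; names and services are invented for illustration.
#include <string>
#include <vector>

struct Report { std::string request; bool success; std::string detail; };

// The interface: everything a user of the component may rely on.
class Component {
 public:
  virtual ~Component() = default;
  virtual std::vector<std::string> services() const = 0;        // offered requests
  virtual Report request(const std::string& service,
                         const std::vector<double>& args) = 0;  // control flow
  virtual std::vector<double> exported_data() const = 0;        // data flow (poster-like)
};

// One concrete component: purely sequential code, no threads, no locks.
class GripperComponent : public Component {
 public:
  std::vector<std::string> services() const override { return {"Open", "Close"}; }
  Report request(const std::string& service, const std::vector<double>&) override {
    if (service == "Open")  { open_ = true;  return {service, true, "gripper opened"}; }
    if (service == "Close") { open_ = false; return {service, true, "gripper closed"}; }
    return {service, false, "unknown service"};
  }
  std::vector<double> exported_data() const override { return {open_ ? 1.0 : 0.0}; }
 private:
  bool open_ = true;
};

// Coordination stays outside the component: here a trivial sequencer,
// which could later be replaced by a formally analyzed coordination layer.
int main() {
  GripperComponent gripper;
  Report r = gripper.request("Close", {});
  (void)r;
  return 0;
}
```

Because the coordination layer sees only this interface, it can be replaced without touching the component's code, which is exactly the kind of bottom-up treatment of coordination argued for above.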
Software and system component-based techniques have not yet achieved a satisfactory level of maturity. Systems built by assembling independently developed and delivered components often exhibit pathological behavior. Part of the problem is that the developers of these systems do not have a precise way of expressing the behavior of components at their interfaces, where inconsistencies may occur. Components may be developed at different times and by different developers with, possibly, different uses in mind. Their different internal assumptions, further exposed by concurrent execution, can give rise to emergent behavior when these components are used in concert, e.g. race conditions and deadlocks. All these difficulties and weaknesses are amplified in embedded system design in general. They cannot be overcome unless we solve the hard fundamental problems raised by the definition of rigorous frameworks for component-based design.

V. ENCOMPASSING HETEROGENEITY

Superficial classifications may distinguish between hardware and software components, or between continuous-time (analog) and discrete-time (digital) components, but heterogeneity has two more fundamental sources: the composition of subsystems with different execution and interaction semantics, and the use of analytical and computational models.
Heterogeneous execution, interaction and abstraction. At one extreme of the semantic spectrum are fully synchronized components, which proceed in lock-step with a global clock and interact in atomic transactions. Such a tight coupling of components is the standard model for most synthesizable hardware and for hard real-time software. At the other extreme are completely asynchronous components, which proceed at independent speeds and interact non-atomically. Such a loose coupling of components is the standard model for most multi-threaded software. Between the two extremes, a variety of intermediate and hybrid models exist (e.g., globally-asynchronous locally-synchronous models). To better understand their commonalities and differences, it is useful to decouple execution from interaction semantics; the sketch at the end of this section contrasts the two extremes.
An additional source of heterogeneity is inherent to the multifaceted nature of autonomous robot systems. Autonomous robot systems must meet various types of properties, ranging from functional properties to extra-functional ones, e.g. related to resources such as time, power, and memory. Depending on the application area, emphasis is put on correctness criteria such as safety, security, availability, etc. Autonomous robot software systems designers need abstractions that represent a system at varying degrees of detail and for which different methods and tools are applicable. This is an additional source of heterogeneity that does not occur in pure software or hardware systems.
We need tractable theories encompassing heterogeneity, in particular to relate application software models to their implementations. Such theories must provide the means for preserving, in the implementation, all essential properties of the application software. They should also allow effective separation of orthogonal facets, and qualification of the trade-offs between interfering facets.
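The following hypothetical C++ sketch contrasts the two extremes of the semantic spectrum described above, using the same two toy components (a source and a filter): composed synchronously, both fire exactly once per global tick and interact atomically; composed asynchronously, each runs in its own thread at its own speed and they interact non-atomically through a shared queue. Neither variant is claimed to be the model of any particular framework.

```cpp
// Hypothetical sketch: synchronous (lock-step) versus asynchronous composition.
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

struct Source { int step() { return ++n; } int n = 0; };
struct Filter { int step(int x) { return 2 * x; } };

// (a) Synchronous composition: one global clock, atomic interaction per tick.
void run_synchronous() {
  Source src; Filter flt;
  for (int tick = 0; tick < 5; ++tick) {
    int y = flt.step(src.step());      // both components fire exactly once per tick
    std::printf("sync tick %d -> %d\n", tick, y);
  }
}

// (b) Asynchronous composition: independent speeds, non-atomic interaction via a queue.
void run_asynchronous() {
  std::queue<int> channel;
  std::mutex m;
  std::condition_variable cv;
  bool done = false;

  std::thread producer([&] {
    Source src;
    for (int i = 0; i < 5; ++i) {
      { std::lock_guard<std::mutex> lk(m); channel.push(src.step()); }
      cv.notify_one();
    }
    { std::lock_guard<std::mutex> lk(m); done = true; }
    cv.notify_one();
  });

  std::thread consumer([&] {
    Filter flt;
    std::unique_lock<std::mutex> lk(m);
    while (true) {
      cv.wait(lk, [&] { return !channel.empty() || done; });
      if (channel.empty()) break;      // producer finished and queue drained
      int x = channel.front(); channel.pop();
      std::printf("async -> %d\n", flt.step(x));
    }
  });

  producer.join();
  consumer.join();
}

int main() { run_synchronous(); run_asynchronous(); return 0; }
```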
VI. ACHIEVING CONSTRUCTIVITY

The system construction problem can be formulated as follows: "build a system meeting a given set of requirements from a given set of components." This is a key problem in any engineering discipline; it lies at the basis of various systems design activities, including modeling, architecting, programming, synthesis, upgrading, and reuse. The general problem is by its nature intractable. Given a formal framework for describing and composing components, the system to be constructed can be characterized as a fixpoint of a monotonic function, which is computable only when a reduction to finite-state models is possible. Even in this case, however, the complexity of the algorithms is prohibitive for real-world systems. What are the possible avenues for circumventing this obstacle? We discuss two approaches for circumventing the inherent complexity of system design: 1) constructivity for satisfying given properties; 2) compositionality and composability.

A. Constructivity for satisfying given properties

This approach includes methods for building systems satisfying given properties for particular types of components and architectures. Hardware synthesis techniques, software compilation techniques, algorithms (e.g. for scheduling, mutual exclusion, clock synchronization), architectures, as well as protocols contribute solutions for specific contexts. It is essential to extend the correct-by-construction paradigm by studying more generally the interplay between architecture and properties. We stress that many practical, interesting results require little computation and aim to guarantee correctness by construction.

B. Compositionality and Composability

This approach includes methods for the incremental construction of correct systems from correct components. They are particularly useful for the integration of heterogeneous models. Incremental system construction relies on two kinds of rules: 1) compositionality rules infer global system properties from local properties of the subsystems (e.g. inferring global deadlock-freedom from the deadlock-freedom of the individual components); 2) composability rules guarantee that, along the system construction process, all essential properties of the subsystems are preserved. Compositionality has been extensively studied for safety properties. The focus now shifts from compositionality results for functional properties to extra-functional properties such as performance and robustness.
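To see what such compositionality rules buy us, consider the brute-force alternative they aim to avoid. In the hypothetical C++ sketch below (the two components and their shared "req"/"ack" labels are invented for illustration), two finite-state components are composed by synchronizing on common action labels, and the reachable product is explored exhaustively to look for deadlocks. Each additional component multiplies the size of this state space, which is exactly the explosion that a rule such as (1) above tries to sidestep by reasoning on the components separately.

```cpp
// Hypothetical sketch: exhaustive deadlock check on the product of two
// finite-state components synchronizing on shared action labels.
#include <cstdio>
#include <map>
#include <set>
#include <string>
#include <utility>
#include <vector>

// A component: for each state, the set of (label, successor) transitions.
using Component = std::map<int, std::vector<std::pair<std::string, int>>>;

int main() {
  // Two tiny components sharing the labels "req" and "ack".
  Component controller = {{0, {{"req", 1}}}, {1, {{"ack", 0}}}};
  Component driver     = {{0, {{"req", 1}}}, {1, {{"ack", 0}}}};

  std::set<std::pair<int, int>> visited;
  std::vector<std::pair<int, int>> frontier = {{0, 0}};
  bool deadlock = false;

  while (!frontier.empty()) {
    auto s = frontier.back();
    frontier.pop_back();
    if (!visited.insert(s).second) continue;   // already explored

    // Joint successors: both components must offer the same label.
    std::vector<std::pair<int, int>> succs;
    for (const auto& [la, ta] : controller.at(s.first))
      for (const auto& [lb, tb] : driver.at(s.second))
        if (la == lb) succs.push_back({ta, tb});

    if (succs.empty()) {                       // reachable state with no transition
      std::printf("deadlock at (%d, %d)\n", s.first, s.second);
      deadlock = true;
    }
    for (const auto& nxt : succs) frontier.push_back(nxt);
  }
  if (!deadlock)
    std::printf("no reachable deadlock (%zu states explored)\n", visited.size());
  return 0;
}
```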
The key issue is the construction of components performing as desired under circumstances that deviate from the normal, expected operating environment. Such deviations may include extreme input values and platform failures. Accordingly, robustness requirements include a broad spectrum of properties, such as safety (resistance to failures) and availability (accessibility of resources). Robustness is a transversal issue in system construction, cutting across all design activities and influencing all design decisions. The current state of the art in building robust autonomous robot software systems is still embryonic. A long-term and continuous research effort is necessary to develop a framework for the rigorous construction of robust autonomous robot software systems.

VII. CONCLUSION

We argued that autonomous robot software design is still in an ad-hoc phase, constrained by limitations introduced by many manual steps, such as code optimization and system integration, which proceed mostly by "trial and error" (i.e. test and tweak). Current models are inadequate because they address only isolated aspects of autonomous robot systems, and their interactions are not well understood. For example, current code generation techniques, while producing functionally correct code, do not optimize resource consumption nor satisfy hard resource constraints. We need a mathematical basis for autonomous robot systems modeling and analysis which integrates both abstract-machine models and transfer-function models, in order to deal with computation and physical constraints in a consistent, operative manner. Based on such a theory, it should be possible to combine critical-systems engineering practices for autonomous robot software, which guarantee functional requirements, with best-effort systems engineering, which optimizes performance and robustness. The theory, the methodologies, and the tools need to encompass heterogeneous execution and interaction mechanisms for the components of a system, and they need to provide abstractions that isolate the subproblems in design that require human creativity from those that can be automated.

REFERENCES

[1] M. Montemerlo, J. Pineau, N. Roy, S. Thrun, and V. Verma, "Experiences with a mobile robotic guide for the elderly," in Proceedings of the AAAI National Conference on Artificial Intelligence, Edmonton, Canada, 2002.
[2] A. Clodic, S. Fleury, R. Alami, R. Chatila, G. Bailly, L. Brèthes, M. Cottret, P. Danès, X. Dollat, F. Elisei, I. Ferrané, M. Herrb, G. Infantes, C. Lemaire, P. Lerasle, J. Manhes, P. Marcoul, P. Menezes, and V. Montreuil, "Rackham: An interactive robot-guide," in IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), Hatfield, UK, 2006, pp. 502–509.
[3] T. A. Henzinger and J. Sifakis, "The discipline of embedded systems design," IEEE Computer, vol. 40, no. 10, pp. 36–44, 2007.
[4] R. Alami, R. Chatila, S. Fleury, M. Ghallab, and F. Ingrand, "An architecture for autonomy," International Journal of Robotics Research (IJRR), 1998.
[5] I. A. Nesnas, A. Wright, M. Bajracharya, R. Simmons, and T. Estlin, "CLARAty and challenges of developing interoperable robotic software," in International Conference on Intelligent Robots and Systems (IROS), Nevada, Oct. 2003, invited paper.
[6] A. Shakhimardanov and E. Prassler, "Comparative evaluation of robotic software integration systems: A case study," in International Conference on Intelligent Robots and Systems (IROS), 2007.
[7] J. Sifakis, "A framework for component-based construction (extended abstract)," in IEEE International Conference on Software Engineering and Formal Methods (SEFM), 2005, pp. 293–300.
[8] T. A. Henzinger and J. Sifakis, "The embedded systems design challenge," in FM 2006: Formal Methods, Lecture Notes in Computer Science 4085, Springer, 2006, pp. 1–15.
[9] N. Halbwachs, Synchronous Programming of Reactive Systems. Kluwer Academic Publishers, 1993.
[10] I. Jacobson, G. Booch, and J. Rumbaugh, The Unified Software Development Process. Addison Wesley Longman, 1998, ISBN 0-201-57169-2.
[11] P. H. Feiler, B. A. Lewis, and S. Vestal, "The SAE Architecture Analysis & Design Language (AADL): a standard for engineering performance critical systems," in IEEE International Symposium on Computer-Aided Control Systems Design, 2006, pp. 1206–1211.