CoTeSys — Cognition for Technical Systems

Martin Buss∗, Michael Beetz#, Dirk Wollherr∗

∗ Institute of Automatic Control Engineering (LSR), Faculty of Electrical Engineering and Information Technology
# Intelligent Autonomous Systems, Department of Informatics
Technische Universität München, D-80290 München, Germany
www.cotesys.org (E-mail: [email protected], [email protected], [email protected])

Abstract
The CoTeSys cluster of excellence¹ investigates cognition for technical systems such as vehicles, robots, and factories. Cognitive technical systems (CTS) are information processing systems equipped with artificial sensors and actuators, integrated and embedded into physical systems, and acting in a physical world. They differ from other technical systems in that they perform cognitive control and have cognitive capabilities. Cognitive control orchestrates reflexive and habitual behavior in accord with long-term intentions. Cognitive capabilities such as perception, reasoning, learning, and planning turn technical systems into systems that “know what they are doing”. These cognitive capabilities will result in systems of higher reliability, flexibility, adaptivity, and better performance, and they will be easier to interact and cooperate with.

1 Motivation and Basic Approach

People deal easily with everyday situations, uncertainties, and changes — abilities which technical systems currently lack. Unlike artificial systems, humans develop and learn how to extract and incorporate new information from the environment. Animals have survived in our complex world by developing brains and adequate information processing strategies. Brains cannot compete with computers on tasks requiring raw computational power [1]². However, they are extremely well suited to ill-structured problems that involve a high degree of unpredictability, uncertainty, and fuzziness. They can easily cope with an abundance of complex sensory stimuli that have to be transformed into appropriate sequences of motor actions [2, 3]³. Since the brains of humans and non-human primates have successfully developed superior information processing mechanisms, CoTeSys studies and analyzes cognition in (not necessarily human) natural systems and transfers the respective insights into the design and implementation of cognitive control systems for technical systems.

These cognitive abilities are essential skills for “reasonable” collaboration: the robot must be able to understand human actions and intentions and to quickly conceive a plan to support the human. Collaboration between human and robot is considered reasonable if a given task is accomplished faster or in a less strenuous way by cooperating rather than acting alone. One example where efficient collaboration becomes vital are welfare scenarios, in which robots are to take over tasks without further guidance in order to disburden service personnel and allow more human-human interaction.

¹ CoTeSys is funded by the German Research Council (DFG) as a research cluster of excellence within the “excellence initiative” from 2006-2011. CoTeSys partner institutions are: Technische Universität München (TUM), Ludwig-Maximilians-Universität (LMU), Universität der Bundeswehr (UBM), Deutsches Zentrum für Luft- und Raumfahrt (DLR), and the Max Planck Institute for Neurobiology (MPI), all in München.
² References on this subject are too numerous to cover the state of the art in this paper; the given references are only samples and make no claim to completeness.

To this end, cognitive scientists investigate the neurobiological and neurocognitive foundations of cognition in humans and animals and develop computational models of cognitive capabilities that explain their empirical findings. These computational models will then be studied by the CoTeSys engineers and computer scientists with respect to their applicability to artificial cognitive systems and empirically evaluated in the context of the CoTeSys demonstrators, including humanoid robots, autonomous vehicles, and cognitive factories. CoTeSys structures interdisciplinary research on cognition into three closely intertwined research threads, which perform fundamental research and empirically study and implement cognitive models in the context of the demonstration testbeds (see Figure 1):

³ The Cognitive Systems Project, www.foresight.gov.uk, aims to provide a vision for the future development of cognitive systems through an exploration of recent advances in neuroscience and computer science. The Society for Neuroscience’s home page, http://apu.sfn.org, contains much useful information on the latest developments in the subject. It hosts Brain Facts, an accessible primer on the brain and nervous system.

Figure 1: CoTeSys research strategy: three research disciplines (cognitive and life sciences, information processing and mathematical sciences, and engineering sciences) work together synergetically to explore cognition for technical systems. Research is structured into three groups of research areas: cognitive foundations, cognitive mechanisms, and demonstration scenarios. Cognitive mechanisms to be realized include perception, reasoning and learning, action selection and planning, and joint human/robot action.

Systemic Neuroscience, Cognitive Science, and Neurocognitive Psychology — develop computational models of cognitive control, perception, and motor action based on experimental studies at the behavioral and brain level.

Information processing technology — investigate and develop algorithms and software systems to realize cognitive capabilities. Particularly relevant are modern methods from control and information theory and from artificial intelligence, including learning, perception, and symbolic reasoning.

Engineering technologies — the areas of mechatronics, sensing technology, sensor fusion, smart sensor networks, control rules, controllability, stability, model/knowledge representation, and reasoning are important for implementing robust cognitive abilities in technical systems with guaranteed performance constraints.

In recent years, these disciplines studying cognitive systems have cross-fertilized each other in various ways.⁴ Researchers studying human sensorimotor control have found convincing empirical evidence for the use of Bayes estimation and cost-function-enabled control mechanisms in natural movement control [4]. Bayesian networks and the associated reasoning and learning mechanisms have inspired research in cognitive psychology, in particular on the formation of causal theories in young children. Functional MRI images of rat brains have shown neural activation patterns of place cells similar to the multimodal probability distributions arising in robot localization with Bayesian filters [5].

⁴ Indeed, Mitchell pointed out in a recent presidential address at the National Conference on Artificial Intelligence that the next revolution is expected to be caused by the synergetic cooperation of the computing and the cognitive sciences.
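The Bayes-filter mechanism referred to in the localization example above can be sketched in a few lines. The following discrete Bayes filter localizes a robot in a 1-D corridor of doors and walls; the corridor map and all sensor and motion noise parameters are invented for illustration and are not taken from [5]. After the first door measurement the belief is multimodal, with one peak per door, much like the place-cell activation patterns mentioned above.

```python
# Minimal discrete Bayes filter for 1-D corridor localization.
# The corridor map and noise probabilities below are illustrative assumptions.

corridor = [1, 0, 0, 1, 0, 0, 1, 0, 0, 0]  # 1 = door, 0 = wall
n = len(corridor)

def predict(belief, move=1, p_correct=0.8):
    """Motion update: shift the belief by `move` cells, with some slippage."""
    new = [0.0] * n
    for i, b in enumerate(belief):
        new[(i + move) % n] += p_correct * b  # intended motion
        new[i] += (1 - p_correct) * b         # robot failed to move
    return new

def update(belief, measurement, p_hit=0.9):
    """Sensor update: reweight cells that match the door/wall measurement."""
    new = [b * (p_hit if corridor[i] == measurement else 1 - p_hit)
           for i, b in enumerate(belief)]
    total = sum(new)
    return [b / total for b in new]

belief = [1.0 / n] * n        # uniform prior: position completely unknown
belief = update(belief, 1)    # robot sees a door: three peaks remain
belief = predict(belief)      # robot moves one cell forward
belief = update(belief, 0)    # robot sees a wall: belief sharpens further
print([round(b, 3) for b in belief])
```

Repeating the predict/update cycle concentrates probability mass on the true position; the same prediction-correction structure underlies Kalman and particle filters.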

The conclusions that CoTeSys draws from these examples are (1) that successful computational mechanisms in artificial cognitive systems tend to have counterparts with similar functionality in natural cognitive systems; and (2) that new, consolidated findings about the structure and functional organization of perception and motion control in natural cognitive systems show us much better ways of organizing and specifying computational tasks in artificial cognitive systems. However, cognition for technical systems is not the mere rational reconstruction of natural cognitive systems. Natural cognitive systems are impressively well adapted to the computational infrastructure and to the perception and action capabilities of the systems they control. Technical cognitive systems have computational means and perception and action capabilities with very different characteristics. Learning and motor control for reaching and grasping provide a good case in point [6]. While motor control in natural systems takes up to 100 ms to receive motion feedback, high-end industrial manipulators execute feedback loops at 1000 Hz with a delay of 0.5 ms. In contrast to robot arms, control signals for muscles are noisy, and muscles take substantial amounts of time to produce the required force; on the other hand, antagonistic muscle groups support the achievement of equilibrium states. Thus, where natural systems require predictive models of motion because of the large delay of feedback signals, robot arms can perform the same kinds of motions better by using fast feedback loops without resorting to prediction [7, 8]. Given these differences, we cannot expect that all information processing mechanisms optimized for the perceptual apparatus, the brain, and the limbs of humans or non-human primates will apply, without modification, to the control of CTSs.
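The consequence of the 100 ms versus 0.5 ms feedback delay can be made concrete with a small simulation. The sketch below (a deliberately simplified first-order plant with invented gain and step-size values, not a model from [6-8]) runs the same pure proportional controller once with a millisecond-scale feedback delay and once with a 100 ms delay: the first settles on the target, while the second oscillates with growing amplitude, which is why natural systems must fall back on predictive models.

```python
# Pure feedback control of the plant x' = u with delayed measurements.
# All numbers (gain, time step, delays) are invented for illustration.

def max_error(delay_steps, gain=20.0, dt=0.001, steps=2000):
    """Simulate u = gain * (target - delayed x); return the largest
    tracking error seen during the second half of the run."""
    x, target = 0.0, 1.0
    history = [x] * (delay_steps + 1)   # buffer of past measurements
    worst = 0.0
    for k in range(steps):
        measured = history[-(delay_steps + 1)]   # stale feedback signal
        x += gain * (target - measured) * dt
        history.append(x)
        if k >= steps // 2:
            worst = max(worst, abs(target - x))
    return worst

print(max_error(delay_steps=1))    # ~1 ms delay: tiny residual error
print(max_error(delay_steps=100))  # ~100 ms delay: growing oscillation
```

For this loop the classical stability bound is gain × delay < π/2; with the values above it holds for the 1 ms delay (0.02) and is violated for the 100 ms delay (2.0), so the identical feedback law is stable or unstable depending solely on the delay.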

2 The Cognitive Perception/Action Loop

CoTeSys investigates cognition for technical systems in terms of the cognition-based perception-action closed loop. Figure 2 depicts the system architecture of a cognitive system with multi-sensor perception of the environment, cognition (learning, knowledge, action planning), and action in the environment through actuators. All research within CoTeSys is dedicated to real-time performance of this control loop in the real world. On the higher cognitive level, the crucial components comprise environment models, learning, and knowledge management, all in real time and tightly connected to physical action. The mid- and long-term research goals of CoTeSys are to significantly increase the functional sophistication of the perception-action loop for robust and rich performance.

Figure 2: The cognitive system architecture: the perception-action closed loop.

The mapping of the technical system operation onto the perception-action cycle depicted in Figure 2 might suggest that we functionally decompose cognition into modules where one module performs motor action, another one reasoning, and so on. In order to achieve the needed synergies, however, the coupling of the different cognitive capabilities must be much more intense and interconnected, as depicted in Figure 3. For example, the system can learn to plan and plan to learn: it can learn to plan more reliably and efficiently, and it can also plan in order to acquire informative experiences to learn from. Or, perception is integrated into action to perform tasks that require hand-eye coordination. Further, perception often requires action to obtain information that cannot be gathered passively. This CoTeSys view on the tight coupling of the individual cognitive capabilities is important because it implies the requirement of close cooperation between CoTeSys's different research areas.

Figure 3: The cognitive system architecture: the interplay of the cognitive capabilities.

CoTeSys investigates the perception-action loop within a highly interdisciplinary research endeavor, starting with
discipline-specific views of the loop components, in order to obtain a common understanding of key concepts such as perception, (motor) action, knowledge and models, learning, reasoning, and planning.

Perception is the acquisition of information about the environment and the body of an actor. In cognitive science models, part of the information received by the receptors is processed at higher levels in order to produce task-relevant information. This is done by recognizing, classifying, and locating objects, observing relevant events, recognizing the essence of scenes and intentional activities, retrieving context information, and recognizing and assessing situations [9, 10]. In control theory, perception corresponds closely to the concept of observation — the identification of the system states that are needed to generate the right control signals. In artificial intelligence, a subfield of computer science primarily concerned with perception and action, perception is often framed as a probabilistic estimation problem, and the estimated states are often transformed into symbolic representations that enable the systems to communicate and reason about what they perceive [11].

(Motor) Action is the process of generating behavior to change the world and to achieve some objectives of the acting entity. To produce action, primate brains use a quasi-hierarchy ranging from elementary motor elements at lower cortical levels to complex “action” sequences and plans at higher levels [12, 13]. Natural cognitive systems use internal forward models to predict the consequences of motor signals, both to account for delays in the computation process and to filter out uninformative incoming sensory information [14]. This cognitive science view can be contrasted with control theory, where behavior is specified in terms of control rules. Control rules for feedback control are derived from accurate mathematical models of dynamical systems.
The design of control rules aims at control systems that are controllable, stable, and robust, and that can thereby provably satisfy given performance requirements. Action theories in artificial intelligence typically abstract from many dynamical aspects of actions and behavior in order to handle more complex tasks [15]. Powerful computational models have been developed to rationally select the best actions (based on decision-theoretic criteria), to learn skills and action selection strategies from experience, and to perform action-aware control [16, 17].

Knowledge (Models) in cognitive science is conceived to consist of both declarative and procedural knowledge [18, 19]. Declarative knowledge comprises factual information known about objects, ideas, and events in the environment, including the interrelationships between objects, events, and entities in the environment. Procedural knowledge is information about how to execute a sequence of operations. In cognitive science, various models have been proposed as parts of computational models of motor control and learning to explain the behavior of humans and primates in empirical studies. Most prominent are the forward and backward models of actions, used for predicting the actions' effects and sensory consequences and for optimizing skills [20–22]. Graphical models have been proposed to explain the acquisition of causal knowledge in young children [23]. In control systems, various mathematical models that capture the evolution of dynamical systems, such as differential equations or automata, are used. Research in artificial intelligence has produced powerful representations for joint probability distributions and symbolic knowledge representation mechanisms, and it has developed mechanisms to endow CTSs with encyclopedic and common-sense knowledge.

Learning is the process of acquiring or reorganizing information in a way that results in new knowledge [24]. In the cognitive science view, the learned knowledge can relate to skills, attitudes, and values and can be acquired through study, experience, or being taught. Learning causes a change of behavior that is persistent and measurable: it is a process that depends on experience and leads to long-term changes in behavior. In control theory, adaptive control investigates control algorithms in which one or more parameters vary in real time, allowing the controller to remain effective under varying process conditions. Another key learning mechanism is the identification of parameters in mathematical models. In artificial intelligence, a large variety of information processing methods for learning have been developed [25].
These mechanisms include classification learners, such as decision tree learners and support vector machines; function approximators, such as artificial neural networks; sequence learning algorithms; and reinforcement learners that determine optimal action selection strategies for uncertain situations. The learning algorithms are complemented by more general approaches such as data mining and integrated learning systems (see the research programmes of the DARPA IPTO office, http://www.darpa.mil/ipto/).

Reasoning is a cognitive process by which an individual or system infers a conclusion from an assortment of evidence or from statements of principles [26]. In the cognitive sciences, reasoning processes are typically studied in the context of complex problem-solving tasks, such as student problem solving, using protocol analysis (“think aloud”) methods [27]. In the engineering sciences, specific reasoning mechanisms for prediction tasks, such as Bayesian filtering, are employed and studied [28]. Other reasoning tasks are solved by the system engineers in the system design phase, where control rules are proven to be stable; the resulting systems have no need for reasoning at execution time because of their guaranteed behavior envelope. Artificial intelligence has developed a variety of reasoning mechanisms, including causal, temporal, spatial, and teleological reasoning, which enable CTSs to solve dynamically changing, interfering, and more complex tasks.

Planning is the process of generating (possibly partial) representations of future behavior, prior to the use of such plans, in order to constrain or control current behavior. It comprises reasoning about the future in order to generate, revise, or optimize the intended course of action. In the artificial intelligence view, plans are considered to be control programs that can be executed, reasoned about, and manipulated [29].
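The planning view above — plans as explicitly generated action sequences that can be reasoned about before execution — can be illustrated with a minimal state-space planner. The toy domain (a robot carrying a part between two machine stations) is invented for the example and is not taken from the paper:

```python
from collections import deque

# A minimal state-space planner: breadth-first search for a shortest
# action sequence. The domain below is an illustrative toy example.

# State: (robot_location, part_location); a part at "robot" is being carried.
ACTIONS = {
    "move-to-mill":  lambda s: ("mill", s[1]) if s[0] != "mill" else None,
    "move-to-lathe": lambda s: ("lathe", s[1]) if s[0] != "lathe" else None,
    "pick-part":     lambda s: (s[0], "robot") if s[1] == s[0] else None,
    "place-part":    lambda s: (s[0], s[0]) if s[1] == "robot" else None,
}

def plan(start, goal_test):
    """Return a shortest action sequence from `start` to a goal state."""
    frontier = deque([(start, [])])
    visited = {start}
    while frontier:
        state, actions = frontier.popleft()
        if goal_test(state):
            return actions
        for name, effect in ACTIONS.items():
            nxt = effect(state)           # None means the action is inapplicable
            if nxt is not None and nxt not in visited:
                visited.add(nxt)
                frontier.append((nxt, actions + [name]))
    return None

# Goal: the part, initially at the mill, should end up at the lathe.
steps = plan(("lathe", "mill"), lambda s: s[1] == "lathe")
print(steps)  # → ['move-to-mill', 'pick-part', 'move-to-lathe', 'place-part']
```

The returned plan is an ordinary data structure, so it can be inspected, revised, or optimized before a single actuator moves — the defining property of plan-based control.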

3 The Integrated System Approach to CTSs

The demonstrators are of key importance for the CoTeSys cluster. Demonstrators and demonstration scenarios are designed to challenge fundamental as well as applied research in the individual areas. They define the milestones for the integration of cognition into technical systems. The CoTeSys researchers integrate the developed computational mechanisms into complete control systems and embed them within the demonstrators. The research areas specify the kinds of experiments they intend to perform in the context of the demonstrators, as well as metrics to evaluate progress. Thus, the demonstrators become cross-area research drivers that compel researchers to collaborate and to produce software components that function successfully in integrated cognitive systems. The demonstrators also transfer basic research efforts into applied ones, thereby promoting cooperation with industry.

The focus on demonstrators and integrated system research is also important as a research paradigm [30, 31]. The cognitive capabilities of CTSs enable them to reason about the use of their information processing mechanisms: they can check results, debug them, and apply better-suited mechanisms if the default methods fail. Therefore, their information processing mechanisms do not need to be completely hard coded. They should still be correct and complete, but through dynamic adaptation rather than static coding. This is important because, in all but the simplest cases, completeness and correctness come at the cost of the problems becoming unsolvable, or computationally intractable at best. For example, computing a scene description from a given camera image is an ill-structured problem, checking the validity of statements in

a given logical theory is undecidable, and computing a plan for achieving a set of goals is intractable for all but the most trivial action representations [32].

We will explain the interaction between demonstrator research and the other research areas using the cognitive factory as an example. The same kinds of interactions between demonstration scenarios and the other research areas will be realized in the cognitive vehicle and cognitive humanoid robot demonstration scenarios.

The Cognitive Factory – as an Example of the Interaction between the Demonstrators and the Other Research Areas. The steadily increasing demand for mass customization, decreasing product life cycles, and global competition require production systems with an unprecedented level of flexibility, reliability, and efficiency. Equipping production systems with cognitive capabilities is essential to satisfy these requirements, which must be addressed to strengthen high-end production in developed economies.

Figure 4: The cognitive machine shop demonstration scenario.

CoTeSys will investigate a real-world production scenario as its primary demonstration target for cognitive technologies in factory automation. An example production chain includes an industrial robot, autonomously cooperating robots, fixtures, and conveyors to handle and process parts. In addition, it contains an assembly station where human workers and robotic manipulators jointly perform complex and dynamically changing tasks of assembling the parts.

The demonstrator challenges the cognitive capabilities of technical systems in important ways by posing two key research questions:

1. How far do performance, flexibility, reliability, and self-adaptability of flexible manufacturing systems improve when they are augmented with cognitive capabilities?

2. For which types of production techniques (mass production, rapid prototyping, individualized production) does cognitive control have the highest potential impact, and how can this impact be achieved?

Early experiments suggest that the most promising production technologies for the application of cognitive technologies are rapid prototyping and individualized production. In these production contexts, cognitive technologies allow for the automatic and flexible interleaving of multiple, heterogeneous production processes that can be performed simultaneously. State-of-the-art production plans are replaced by plans that are very similar to those controlling autonomous mobile robots [33, 34]: they specify percept-driven, concurrent, and context-specific behavior, including failure monitoring and, in particular, recovery, instead of the more constrained specification of production without such functions.

We also expect that cognitive technology will enable a new generation of machine shops consisting of very general machines that reconfigure themselves according to the needs of the production tasks. The range of reconfiguration mechanisms includes the autonomous reconfiguration of part feeders, the rearrangement of part feeders and local storage units, and the automatic use of different end effectors by robot arms. This reconfigurability will enable machine shops to be much more general and flexible. However, to achieve this flexibility and generality, the machines must control themselves using comprehensive perceptual feedback, they have to calibrate and teach themselves, and they have to reason about whether it is more appropriate to learn an efficient, tailored production routine or to use a more general but inefficient routine that does not require resources for the learning step. To support this reconfigurability, the machines learn capability models of themselves as well as configuration-specific performance models for the production steps.
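The difference between waiting for predefined triggering events and the percept-driven, failure-monitoring behavior described above can be sketched as follows (a hypothetical toy example, not the actual CoTeSys plan language):

```python
# Sketch of a percept-driven production step with failure monitoring and
# recovery: the step verifies its outcome through perception and retries
# after a recovery action instead of halting on the first failure.

def percept_driven_step(perceive, act, recover, expect, max_tries=3):
    """Run `act`, then verify the expected world state via perception;
    on mismatch, run a recovery action and retry."""
    for _ in range(max_tries):
        act()
        if expect(perceive()):          # verify the outcome, not just completion
            return True
        recover()                       # e.g. re-grasp or re-position a pallet
    return False

# Toy world: a clamping action that slips on the first attempt.
world = {"clamped": False, "attempts": 0}

def act():
    world["attempts"] += 1
    world["clamped"] = world["attempts"] > 1   # first try fails

ok = percept_driven_step(
    perceive=lambda: dict(world),
    act=act,
    recover=lambda: None,               # recovery is a no-op in this toy world
    expect=lambda p: p["clamped"],
)
print(ok, world["attempts"])  # → True 2
```

A trigger-based controller would have stopped after the first failed clamp; the percept-driven step detects the mismatch between expected and perceived state and recovers on its own.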
Another aspect in which cognitive technologies improve the performance of existing automation technology is robustness. Current production control systems make the assumptions that the evolution of the production process is caused only by the machines of the manufacturing system and that there are very limited ways in which a production process can fail. If we cannot make these assumptions, for example because human workers act simultaneously in a shared environment, then pallets can be removed from or added to the stock by people, the order of pallets on the conveyor belt can be changed, and work pieces can be modified by other agents. To still work reliably under such circumstances, the production system is required to estimate the complete state of the environment and the production processes instead of just recognizing specific

predefined triggering events, such as a pallet arriving at a particular manufacturing station. Another equally challenging consequence is that production plans have to be written such that they specify the production process for a large range of situations and such that the plans can be automatically revised in order to deal with previously unanticipated situations. Again this capability is realized by transferring successful ideas from the plan-based control of robotic agents into the domain of automatic factory control [35].

A third aspect in which our concept of the cognitive factory goes well beyond flexible and intelligent manufacturing is that the cognitive factory is also equipped with autonomous/cognitive (at a later stage mobile) robotic assistants (see Figure 5). This robot is equipped with two industrial-strength manipulators, color stereo vision, and laser range sensors that can be positioned with the robot's effectors. The tasks of this factory assistant include
• removing and rearranging pallets on the conveyor belt in order to resolve deadlocks between concurrent production processes,
• assisting the assembly robot with manipulation tasks that the assembly robot cannot perform by itself (such as turning a large work piece), and
• serving as a mobile sensing platform that can be used to support state estimation and as a mobile inspection device.
In the near future it is planned to deploy a larger number of mobile robots with cognitive functions in the factory, also able to cooperate closely with human workers.

Another cognitive aspect of this demonstrator is that it uses sensor networks in order to be aware of the operations in individual machines, robots, and transportation mechanisms. Using sophisticated data processing capabilities with integrated data mining and learning mechanisms, the machines learn to predict the quality of the outcome based on the properties of the work piece and their own parameterization. They form situation-specific action models and use them to optimize production chain processing.

Another station in the cognitive factory mounts parts into the car body. The weight of the parts and the complexity of the step require joint human-robot action. Heavy parts and tools will be handled by industrial robots, and mobile platforms will provide parts on the fly, so that human workers are relieved from repetitive and strenuous operations and can focus on tasks that require high-level reasoning. The robot learns informative predictive models of the workers' actions by observing them; these predictive models are then used for synchronizing the joint actions. To adapt to their co-workers, cognitive mechanisms will enable the machines to explain their behavior, for example why they have performed two production steps in a certain order. The machines are equipped with plan management mechanisms that allow them to automatically transform abstract advice into modifications of their own control programs.

Figure 5: B21 robot assistant in the simulated cognitive factory.

Unlike other projects engineering the factory of the future, such as “Intelligent Manufacturing Systems” (www.ims.org), where innovative strategies for improving the entire manufacturing process are investigated mostly on the basis of existing production technologies, CoTeSys focuses on the aspect of human-machine collaboration. It is believed that innovative joint manipulation, together with a deeper understanding of goals by the machine, will revolutionize the production process. Apart from making traditional assembly tasks more efficient and less monotonous, the production line gains a higher degree of flexibility, becoming capable of adapting to individual needs for a specific part. This affects the production range from customized mass products to highly efficient, factory-driven prototype production.

4 Research Areas

Research on neurobiological and neurocognitive foundations of cognition — Basic research investigates the neurobiological and neurocognitive foundations of cognition for technical systems by empirically studying the cognitive capabilities of humans and animals at the behavioral and brain level. Researchers will investigate, in human subjects, the cognitive control of multi-sensory perception-action couplings in dynamic, rapidly changing environments, following an integrative approach that combines behavioral and cognitive-neuroscience methodologies. The research task is to establish experimentally how these control functions are performed in the brain, in order to provide (1) neurocognitive “models” of how these functions may be implemented in technical systems and (2) guidelines for the effective design of man-machine interfaces that take human factors into account. One of the key results for the research areas studying cognitive mechanisms will be a comprehensive model of cognitive control combining mathematical and neural-network models with models of symbolic, production-system-type information processing. In contrast to existing models, which are limited to static, uni-modal (visual) environments and simple motor actions, the CoTeSys model will cover cognitive control in dynamic, rapidly changing environments with multi-modal event spaces.

Research on perceptual mechanisms designs, implements, and empirically analyzes perceptual mechanisms for cognition in technical systems. It integrates, embeds, and specializes these mechanisms for their application in the demonstration scenarios. The challenge for this area is to develop fast, robust, and versatile perception systems that allow the CoTeSys demonstrators to operate in unconstrained real-world environments, and to endow cognitive technical systems with perception systems that acquire, maintain, and deliver task-relevant information through multiple sensory modes rather than vast sensor data streams. Besides lower-level perceptual tasks, the CoTeSys perception modules will be capable of recognizing, classifying, and locating a large number of objects, of conceiving and assessing situations, contexts, and intentions, and of interpreting intentional activities based on perceptual information. Perceptual mechanisms at this performance level must themselves be cognitive: they have to filter out irrelevant data and focus attention based on an understanding of the context and the tasks they are to execute.
The perceptual capabilities investigated are not limited to the core perceptual capabilities. They also include post-processing reasoning, such as the acquisition of environment models and diagnostic reasoning mechanisms that enable CTSs to automatically adapt to new environments and to debug and repair themselves.

Research on Knowledge and Learning — The ultimate goal of the CoTeSys cluster is the realization of technical systems that know what they are doing, that can assess how well they are doing, and that improve themselves based on this knowledge. To this end, research on knowledge and learning will design and develop a computational model for knowledge processing and learning especially designed to be implemented on computing platforms that are embedded into sensor-equipped technical systems acting in physical environments. This model — implemented as a knowledge processing and learning infrastructure — will enable technical systems to learn new skills and activities from potentially very little experience, to optimize and adapt their operations, to explain their activities and accept advice in joint human-robot action, to learn meta-knowledge about their own capabilities and behavior, and to respond to new situations in a robust way.

The research topics that define the CoTeSys approach to knowledge and learning in CTSs include the following. Firstly, a probabilistic framework is developed as a means for combining first-order representations with probability; this framework provides a common foundation for integrating perception, learning, reasoning, and action while accommodating uncertainty. Secondly, a model of “action meta-knowledge” is developed, which considers actions as information processing units that automatically learn and maintain various models of themselves, along with the behavior they generate; these models are used for behavior tuning, skill learning, failure recovery, self-explanation, and diagnosis. Thirdly, a comprehensive repertoire of sequence learning methods is developed, partly based on theories of optimal learning algorithms. Finally, an embedded integrated learning architecture is developed that employs multiple and diverse learning mechanisms and is capable of generalizing from very little experience.

Research on action selection and planning addresses the action production aspects of cognition in technical systems. These aspects include the realization of motion and manipulation skills, the context-specific selection of appropriate actions, the commitment to courses of activity based on foresight, and specific action capabilities enabling competent joint human-robot action. To generate high-performance and safe action, planning and control for locomotion, manipulation, and full-body motion are integrated.
The planning and control system should be capable of working with minimal, nontechnical, and qualitative task descriptions. High performance and safe operation will enable close cooperation with humans. Another focus is to enable cognitive robots to accomplish complex tasks in changing and partly unknown environments: to manage several tasks simultaneously, to resolve conflicts between interfering tasks, and to act appropriately in unexpected and novel situations. The robots even have to reconsider their course of action in the light of new information. Hence, the long-term vision is to develop action control methods and a design methodology to be embedded into self-organizing cognitive architectures.

Figure 6: Demonstrator platforms used in the planned scenarios for cognitive humanoid robots. On the left are two humanoid robots (Johnnie and Lola) to be used for research on walking and full-body motion (photo: Prof. Ulbrich, TUM). Next to them, the upper body Justin is used to investigate highly dexterous manipulation capabilities; its hand serving a coffee set is shown next to the right (photos: Prof. Hirzinger, DLR). On the right is a mobile robot with industrial-strength arms that serves as the initial platform for the AssistiveKitchen scenario (photo: Prof. Beetz, TUM).

Research on human factors studies cognitive vehicles, robots, and factories from a human factors and cognitive psychological point of view. Particular emphasis is placed on the interpretation of the environment and on communication with humans, enabling human-machine collaboration in unstructured environments. The state of the art in all aspects of human-machine communication will be advanced in order to equip cognitive systems with highly sophisticated communication capabilities. To achieve these goals, neurobiology and technology are to inspire each other and thereby develop the following aspects of cognitive technical systems: advanced input/output technology, such as speech, gesture, motion, and gaze recognition, is created to construct intuitive user interfaces and dialogue techniques, as well as sophisticated methods to evaluate the multi-modal interaction of humans and systems. The highest and most complex level involves emotion, action, and intention recognition, with which cognitive systems become more human-like. To pursue these goals, novel computational user models of cognitive architectures and appropriate experimental evaluation methods are investigated.

Similar activities are pursued in a number of national and international projects, such as the SFB588 "Humanoid Robots" in Karlsruhe5, the EU projects "Cogniron"6 and "RobotCub"7, and others. In contrast to these initiatives, CoTeSys spans a vast variety of research disciplines, supplementing traditional disciplines such as electrical and mechanical engineering and computer science with partners from medicine, psychology, sports sciences, and others. This broad interdisciplinary spectrum fosters the exchange of ideas and concepts across traditional scientific borders. It offers a unique opportunity to sketch an overall picture of cognition in natural and technical systems and to extract the best findings of both, merging them into a more concise system.
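The intention-recognition level mentioned above is often approached with probabilistic sequence models. As a minimal sketch, the code below runs the standard forward pass of a hidden Markov model to infer which of two intentions best explains a sequence of observed gesture symbols; the states, observation alphabet, and all probabilities are toy values assumed for illustration, not parameters from the CoTeSys work.

```python
# Hypothetical sketch: infer a user's intention ("handover" vs. "putdown")
# from observed gesture symbols via an HMM forward pass.
states = ["handover", "putdown"]

# Assumed toy parameters; every row sums to 1.
start = {"handover": 0.5, "putdown": 0.5}
trans = {"handover": {"handover": 0.9, "putdown": 0.1},
         "putdown": {"handover": 0.1, "putdown": 0.9}}
emit = {"handover": {"reach": 0.3, "extend": 0.5, "release": 0.2},
        "putdown": {"reach": 0.5, "extend": 0.1, "release": 0.4}}

def forward(obs_seq):
    """Return the normalized posterior P(state | observations so far)."""
    belief = {s: start[s] * emit[s][obs_seq[0]] for s in states}
    for obs in obs_seq[1:]:
        belief = {s: emit[s][obs] * sum(belief[p] * trans[p][s] for p in states)
                  for s in states}
    total = sum(belief.values())
    return {s: v / total for s, v in belief.items()}

# Repeated "extend" gestures shift the belief toward a handover intention.
posterior = forward(["reach", "extend", "extend"])
print(max(posterior, key=posterior.get))
```

Normalizing at the end keeps the example short; a real implementation would normalize at every step to avoid numerical underflow on long sequences.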

5 www.sfb588.uni-karlsruhe.de
6 www.cogniron.org
7 www.RobotCub.org

5 Demonstrators and Scenarios

The CoTeSys demonstrators provide the other research areas with demonstration platforms and with challenges in the form of demonstration scenarios. The research results from the other research areas will be integrated, specialized, embodied, and validated in three scenarios:

1. Cognitive mobile vehicles: aerial vehicles for exploration and mapping, terrestrial offroad vehicles, and collaborative rescue missions for autonomous aerial-terrestrial vehicle teams.

2. Cognitive humanoid robots: the two-legged humanoid robots Johnnie and Lola are equipped with lightweight arms and multi-fingered hands from DLR. They constitute the main platforms, and their control systems are extended to perform full-body motion. The demonstration scenarios will feature complex everyday activity, complex full-body motion, and sophisticated manipulation of objects.

3. Cognitive factory: a production line for the individualized manufacturing of car bodies is considered. Cognitive aspects include skill acquisition, process planning, self-adaptation, and self-modelling. The production line includes autonomous mobile robots with manipulators to achieve the necessary flexibility of machine usage.
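The process-planning aspect of the cognitive factory can be sketched, at its simplest, as search over STRIPS-style operators: each operator has preconditions, an add list, and a delete list, and a plan is a shortest operator sequence reaching the goal. The three factory operators and all facts below are invented for illustration.

```python
from collections import deque

# Hypothetical factory operators: name -> (preconditions, add list, delete list),
# all expressed as sets of ground facts.
operators = {
    "fetch_part": ({"robot_free"}, {"holding_part"}, {"robot_free"}),
    "weld_part":  ({"holding_part", "station_free"}, {"part_welded"}, set()),
    "store_part": ({"part_welded", "holding_part"},
                   {"robot_free", "part_stored"}, {"holding_part"}),
}

def plan(initial, goal):
    """Breadth-first search over states; returns a shortest plan or None."""
    frontier = deque([(frozenset(initial), [])])
    visited = {frozenset(initial)}
    while frontier:
        state, steps = frontier.popleft()
        if goal <= state:          # all goal facts hold
            return steps
        for name, (pre, add, delete) in operators.items():
            if pre <= state:       # operator applicable
                nxt = frozenset((state - delete) | add)
                if nxt not in visited:
                    visited.add(nxt)
                    frontier.append((nxt, steps + [name]))
    return None

print(plan({"robot_free", "station_free"}, {"part_stored"}))
```

Breadth-first search guarantees a shortest plan but scales poorly; practical process planners would add heuristics, but the state/operator representation stays the same.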

Figure 7: Two autonomous vehicles serving as demonstrators in the CoTeSys cluster: MuCAR-3 (photo: Prof. Wünsche, UBM) and the DLR blimp (photo: Prof. Hirzinger, DLR).

The AssistiveKitchen with a cognitive robotic assistant. One of the demonstration scenarios for the humanoid robot demonstrators is the AssistiveKitchen [36] with a robotic assistant, where the sensor-equipped kitchen is to observe the actions of the people in the kitchen, to provide assistance for their activities, and to monitor the safety of the people. In addition, an autonomous mobile robot with two manipulators is to acquire skills in performing complex kitchen activities, such as setting the table and cleaning up, through a combination of imitation- and experience-based learning. The scenario is set up in a sensor-equipped laboratory, which is shown in Figure 8. The kitchen environment contains RFID tag readers placed in the cupboards for sensing the identities of the objects stored there. The cupboards also have contact sensors that sense whether the cupboard is open or closed. A variety of wireless sensor nodes equipped with accelerometers and/or ball motion sensors are placed on objects and other items in the environment. Several small, non-intrusive laser range sensors track the motions of the people acting there.
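Fusing such noisy readings into a state estimate is classically done with a Bayes filter. The sketch below maintains a belief that a cupboard is open, alternating a prediction step (the door may change state) with a measurement update from the contact sensor; the persistence and sensor-accuracy probabilities are assumed toy values, not measured parameters of the laboratory.

```python
# Hypothetical sketch: a discrete Bayes filter over a cupboard's open/closed
# state, driven by noisy contact-sensor readings.
P_STICK = 0.8    # assumed: probability the door state persists between steps
P_CORRECT = 0.9  # assumed: probability the contact sensor reads correctly

def bayes_update(belief_open, reading):
    """One predict/update cycle; `reading` is True if the sensor says 'open'."""
    # Predict: the door may have been opened or closed since the last step.
    predicted = P_STICK * belief_open + (1 - P_STICK) * (1 - belief_open)
    # Update: weight by the sensor model and renormalize.
    if reading:
        num = P_CORRECT * predicted
        den = num + (1 - P_CORRECT) * (1 - predicted)
    else:
        num = (1 - P_CORRECT) * predicted
        den = num + P_CORRECT * (1 - predicted)
    return num / den

belief = 0.5                     # start maximally uncertain
for r in [True, True, True]:     # three consecutive 'open' readings
    belief = bayes_update(belief, r)
print(belief)
```

After three consistent readings the belief is close to certainty, yet a single contradictory reading would pull it back down, which is exactly the robustness one wants against sensor glitches.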

Figure 8: Overview of the A SSISTIVE K ITCHEN.

The sensor network will be made cognitive by distributing cognitive mechanisms through the network, thereby obtaining devices that can estimate hidden states, recognize local activities, abstract them, learn models of them, and store abstract information and knowledge locally. Using these distributed recognition, learning, and knowledge processing capabilities, the environment can adapt itself locally and avoid flooding the whole system with irrelevant data.
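The local-abstraction idea can be sketched as a node that buffers raw readings and only ships compact activity labels upstream. The class, thresholds, and readings below are invented for illustration; a real node would run learned activity models rather than a fixed threshold.

```python
# Hypothetical sketch of local abstraction in a cognitive sensor node: raw
# accelerometer magnitudes are summarized locally, and only abstract
# activity reports cross the network.
class SensorNode:
    def __init__(self, name, threshold=3, window=5):
        self.name = name
        self.threshold = threshold  # assumed: active readings that mean "in use"
        self.window = window        # readings summarized into one report
        self.buffer = []
        self.reports = []           # abstract messages actually sent upstream

    def observe(self, magnitude):
        """Buffer one raw reading; emit one abstract report per full window."""
        self.buffer.append(magnitude)
        if len(self.buffer) == self.window:
            active = sum(1 for m in self.buffer if m > 1.0)
            label = "in_use" if active >= self.threshold else "idle"
            self.reports.append((self.name, label))
            self.buffer.clear()

node = SensorNode("cup_sensor")
for m in [0.1, 2.3, 2.1, 0.2, 2.5,   # window 1: three strong motions
          0.1, 0.0, 0.2, 0.1, 0.3]:  # window 2: quiet
    node.observe(m)
print(node.reports)
```

Ten raw readings collapse into two abstract reports, which is the data-reduction effect that keeps the network from being flooded with irrelevant detail.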

6 Conclusions

The excellence cluster CoTeSys unites a large number of researchers from a variety of disciplines in order to understand cognitive mechanisms in humans and animals and to transfer those findings to technical systems. This worldwide unique composition of the research consortium fosters intensive interdisciplinary exchange and transfers ideas and concepts between traditional research disciplines. The goal of CoTeSys is to build technical systems that perceive their environment, reflect upon it, and act accordingly. Such abilities are crucial for technical systems acting in human environments. For efficient

collaboration, systems must adapt to the human and react to his or her actions. Reacting to non-deterministic systems like human beings requires highly robust and flexible cognitive architectures. These abilities are considered the key prerequisite for employing robots in human environments and thus for welfare systems.

References

[1] H. Moravec, "When will computer hardware match the human brain?," Journal of Evolution and Technology, vol. 1, 1998.
[2] R. Sarpeshkar, "Brain power - borrowing from biology makes for low power computing," IEEE Spectrum, vol. 43, no. 5, pp. 24-29, 2006.
[3] N. Shadbolt, "Brain power," IEEE Intelligent Systems and Their Applications, vol. 18, no. 3, pp. 2-3, 2003.
[4] K. P. Körding and D. M. Wolpert, "Bayesian decision theory in sensorimotor control," Trends in Cognitive Sciences, vol. 10, no. 7, pp. 319-326, 2006.
[5] W. E. Skaggs, B. L. McNaughton, and K. M. Gothard, "An information-theoretic approach to deciphering the hippocampal code," in Advances in Neural Information Processing Systems 5 (NIPS), San Francisco, CA, USA, pp. 1030-1037, Morgan Kaufmann Publishers Inc., 1993.
[6] R. Shadmehr and S. P. Wise, The Computational Neurobiology of Reaching and Pointing: A Foundation for Motor Learning. Cambridge, Mass.: Bradford Book, 2005.
[7] W. Barfield, C. Hendrix, O. Bjorneseth, K. Kaczmarek, and W. Lotens, "Comparison of human sensory capabilities with technical specifications of virtual environment equipment," Presence, vol. 4, no. 4, pp. 329-356, 1995.
[8] F. K. B. Freyberger, M. Kuschel, R. L. Klatzky, B. Färber, and M. Buss, "Visual-haptic perception of compliance: Direct matching of visual and haptic information," in Proceedings of the IEEE International Workshop on Haptic Audio Visual Environments and their Applications (HAVE), Ottawa, Canada, 2007.
[9] R. Morris, L. Tarassenko, and M. Kenward, Cognitive Systems - Information Processing Meets Brain. San Diego, California: Elsevier Academic Press, 2005.
[10] P. Auer, A. Billard, H. Bischof, I. Bloch, P. Boettcher, H. Bülthoff, H. B. on, H. Christensen, T. Cohn, P. Courtney, A. Crookell, J. Crowley, S. Dickinson, C. E. t, and J.-O. Eklundh, "A research roadmap of cognitive vision," Tech. Rep. V5.0 23-8-05, The European Research Network for Cognitive Computer Vision Systems, 2005.
[11] S. Thrun, W. Burgard, and D. Fox, Probabilistic Robotics. Cambridge: MIT Press, 2005.
[12] M. I. Jordan and D. M. Wolpert, Computational Motor Control. Cambridge: MIT Press, 1999.
[13] M. A. Arbib, E. J. Conklin, and J. C. Hill, From Schema Theory to Language. Oxford University Press, 1987.
[14] D. M. Wolpert and M. Kawato, "Multiple paired forward and inverse models for motor control," Neural Networks, vol. 11, no. 7/8, pp. 1317-1329, 1998.
[15] T. Dean and M. Wellmann, Planning and Control. San Mateo, CA: Morgan Kaufmann Publishers, 1991.
[16] R. Sutton and A. Barto, Reinforcement Learning: An Introduction. MIT Press, 1998.
[17] S. Russell and E. Wefald, Do the Right Thing: Studies in Limited Rationality. Cambridge, MA: MIT Press, 1991.
[18] D. Willingham, M. Nissen, and P. Bullemer, "On the development of procedural knowledge," Journal of Experimental Psychology: Learning, Memory, and Cognition, vol. 15, no. 6, pp. 1047-1060, 1989.
[19] G. Dobbie and R. Topor, "On the declarative and procedural semantics of deductive object-oriented systems," Journal of Intelligent Information Systems, vol. 4, no. 2, pp. 193-219, 1995.
[20] M. Kawato, "Internal models for motor control and trajectory planning," Current Opinion in Neurobiology, vol. 9, no. 6, pp. 718-727, 1999.
[21] R. Miall and D. Wolpert, "Forward models for physiological motor control," Neural Networks, vol. 9, no. 8, pp. 1265-1279, 1996.

[22] D. M. Wolpert, K. Doya, and M. Kawato, "A unifying computational framework for motor control and social interaction," Philosophical Transactions of the Royal Society, vol. 358, no. 1431, pp. 593-602, 2003.
[23] D. M. Sobel, J. B. Tenenbaum, and A. Gopnik, "Children's causal inferences from indirect evidence: Backwards blocking and Bayesian reasoning in preschoolers," Cognitive Science: A Multidisciplinary Journal, vol. 28, no. 3, pp. 303-333, 2004.
[24] T. Mitchell, "The discipline of machine learning," Tech. Rep. CMU-ML-06-108, Carnegie Mellon University, 2006.
[25] M. Beetz, M. Buss, and D. Wollherr, "Cognitive technical systems - what is the role of artificial intelligence?," in Proceedings of the 30th German Conference on Artificial Intelligence (KI-2007), 2007.
[26] J. Pearl, Causality: Models, Reasoning, and Inference. Cambridge University Press, 2000.
[27] A. Newell and H. A. Simon, Human Problem Solving. Upper Saddle River, New Jersey: Prentice Hall, 1972.
[28] S. Thrun, D. Fox, and W. Burgard, "Probabilistic methods for state estimation in robotics," in Proceedings of the Workshop SOAVE'97, pp. 195-202, VDI-Verlag, 1997.
[29] M. Beetz, "A roadmap for research in robot planning," Tech. Rep., PLANET-II Technical Coordination Unit on Robot Planning, 2003.
[30] S. Thrun, M. Beetz, M. Bennewitz, A. Cremers, F. Dellaert, D. Fox, D. Hähnel, C. Rosenberg, N. Roy, J. Schulte, and D. Schulz, "Probabilistic algorithms and the interactive museum tour-guide robot Minerva," International Journal of Robotics Research, 2000.
[31] S. Thrun, M. Montemerlo, H. Dahlkamp, D. Stavens, A. Aron, J. Diebel, P. Fong, J. Gale, M. Halpenny, G. Hoffmann, K. Lau, C. Oakley, M. Palatucci, V. Pratt, P. Stang, S. Strohband, C. Dupont, L.-E. Jendrossek, C. Koelen, C. Markey, C. Rummel, J. van Niekerk, E. Jensen, P. Alessandrini, G. Bradski, B. Davies, S. Ettinger, A. Kaehler, A. Nefian, and P. Mahoney, "Stanley, the robot that won the DARPA Grand Challenge," Journal of Field Robotics, 2006.
[32] M. Bertero, T. Poggio, and V. Torre, "Ill-posed problems in early vision," Tech. Rep. AIM-924, Massachusetts Institute of Technology, 1987.
[33] M. Beetz, T. Arbuckle, M. Bennewitz, W. Burgard, A. Cremers, D. Fox, H. Grosskreutz, D. Hähnel, and D. Schulz, "Integrated plan-based control of autonomous service robots in human environments," IEEE Intelligent Systems, vol. 16, no. 5, pp. 56-65, 2001.
[34] M. Beetz, "Structured reactive controllers," Journal of Autonomous Agents and Multi-Agent Systems, Special Issue: Best Papers of the International Conference on Autonomous Agents '99, vol. 4, pp. 25-55, March/June 2001.
[35] M. Beetz, "Plan representation for robotic agents," in Proceedings of the Sixth International Conference on AI Planning and Scheduling, Menlo Park, CA, pp. 223-232, AAAI Press, 2002.
[36] M. Beetz, J. Bandouch, A. Kirsch, A. Maldonado, A. Müller, and R. B. Rusu, "The AssistiveKitchen - a demonstration scenario for cognitive technical systems," in Proceedings of the 4th COE Workshop on Human Adaptive Mechatronics (HAM), 2007.
