Modulating behaviors using allostatic control

Vasiliki Vouloutsi, Stéphane Lallée, and Paul FMJ Verschure

1 Universitat Pompeu Fabra (UPF), Synthetic, Perceptive, Emotive and Cognitive Systems group (SPECS), http://specs.upf.edu
2 Institució Catalana de Recerca i Estudis Avançats (ICREA), Passeig Lluís Companys 23, 08010 Barcelona, Spain, http://www.icrea.cat
Abstract. Robots will be part of our society in the future, so it is important that they are able to interact with humans in a natural way. This requires the ability to display social competence and behavior that promotes such interactions. Here we present the details of modeling the emergence of emotional states, adaptive internal needs and motivational drives. We explain how this model is enriched by a homeostatic and an allostatic control that regulate its behavior. We evaluate the model during a human-robot interaction and show that it is able to produce meaningful and complex behaviors.

Keywords: human-robot interaction, behavioral modulation, allostatic control
1 Introduction
As the introduction of robots into our society slowly comes closer to reality, their ability to interact with humans in a meaningful and intuitive way gains importance. Traditionally, the use of robots has been constrained to situations where little interaction with humans is required, such as environmental monitoring [1] or the detection of hazardous substances [2]. Recently we can observe a change of paradigm, as robots with a more social character gain ground. Such robots range from museum tour guides [3][4] and robotic nursemaids [5] to assistants for children with autism [6] and sociable partners [7]. It is therefore essential that we start developing robots that are not just tools for automated processes but rather social agents that are able to interact with humans. Humans have evolved to be experts in social interaction, attributing causality to entities provided that they obey specific regularities consistent with the necessary contingency and contiguity conditions of causality [8]. In fact, there is a large body of work that shows the propensity of humans to make social inferences and judgements even about shapes on a screen in order to explain their behavior [9]. Hence, if there is a social model that humans can attribute to the robot's behavior, the robot can be considered socially competent [10][11]. Such an operationalization of
social competence nonetheless seems to exclude both the mechanisms that underlie such competence and a broader range of non-human social behaviors. Our goal is to address this set of social skills, starting from the communicative and interactive non-anthropomorphic artifact Ada [12].
2 Making social robots
Studying social behavior cannot be uncoupled from the ability to perceive socially, which in turn requires a self. To support social perception, a number of issues need to be addressed, such as the analogy between self and other. We propose that the minimal requirements for a functional robot that can act as a social agent are: (i) intrinsic needs to socially engage; (ii) an action repertoire that supports communication and social interaction; (iii) the ability to distinguish between self and non-self by realizing a "Phenomenal Model of the Intentionality-Relation" (PMIR) [13]; (iv) the ability to evaluate how the self is situated in the world by assessing whether the self's needs and goals are satisfied; and finally (v) the ability to infer the mental states of other social agents and use this information to modify the self's behaviors and actions. Here we report our results on point (i) and set the framework for the further development of the remaining points. We propose an Experimental Functional Android Assistant (EFAA) that has the following requirements: (i) intrinsic needs to socially engage, as successful interaction requires an agent that is socially motivated; (ii) an action repertoire that supports communication and interaction, such that the agent is able to manipulate objects, produce linguistic responses, recognize and identify a social agent, and establish and maintain interaction; and finally (iii) the core ingredients of social competence: actions, goals and drives. We define drives as the intrinsic needs of the robot. Goals define the functional ontology of the robot and depend on the drives, whereas actions are generated to satisfy goals. A socially competent android requires a combination of drives and goals coupled with an emotional system. Drives and goals motivate the robot's behavior and evaluate action outcomes, while emotions appraise situations (epistemic emotions) and define communicative signals (utilitarian emotions). Although emotions remain a controversial subject and a general consensus on their definition is still needed [14], Ekman [15] has defined six basic emotions that are considered universal and can be found in most cultures: happiness, surprise, fear, anger, disgust and sadness. How and why emotions arise is still under discussion, with many different views. Recently, the mechanisms in the neural circuitry of emotion have gained increasing interest and attention [16][17]. An interesting work is that of [18], where the authors present the following basic emotions: SEEKING, FEAR, RAGE, LUST, CARE, PANIC and PLAY. They provide a detailed explanation of the neural mechanisms that serve these emotions, supporting the continuity of animal and human emotions, as similar neural mechanisms are found across mammalian species, shedding light on
behavioral and physiological expressions associated with these emotions. Thus, emotions exist both for assisting communication by expressing one's internal state (external/utilitarian) and for organizing behavior (internal/epistemic) [19][16]. An organism is also endowed with internal drives. For Hull [20], a drive triggers behavior, but it is also believed that drives maintain behaviors and direct them [21]. As with emotions, there are various opinions regarding the nature of a drive; however, it is generally accepted that an organism has multiple drives [22]. According to Cannon [23] and Seward [24], drives are part of a homeostatic mechanism that aims at keeping the organism's basic needs in steady states. Animals are able to perform real-world tasks by combining this homeostatic mechanism with an allostatic control, where stability is achieved through physiological or behavioral change. In this way animals are able to adjust their internal state and at the same time achieve stability in dynamic environments [25]. We propose an affective framework for a socially competent robot that uses an allostatic control model as a first level of motivational drive and behavior selection, combined with an emotion system. In the following sections we present the model in detail and show how complex behaviors emerge through human-robot interaction.
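To make the homeostatic part of this proposal concrete, the following minimal Python sketch shows how a single drive could be kept around a set point; the class, the parameter values and the update rule are illustrative assumptions on our side and not the implementation used in the system described below.

class Drive:
    """A single internal drive regulated around a homeostatic set point.

    Illustrative sketch only: names, values and update rule are assumptions,
    not the published EFAA implementation.
    """

    def __init__(self, name, set_point=0.5, decay=0.01):
        self.name = name
        self.set_point = set_point   # desired homeostatic level
        self.decay = decay           # spontaneous drift when unstimulated
        self.level = set_point

    def update(self, stimulus_present, gain=0.05):
        # The relevant stimulus pushes the drive up; its absence lets it decay.
        if stimulus_present:
            self.level = min(1.0, self.level + gain)
        else:
            self.level = max(0.0, self.level - self.decay)

    def error(self):
        # Signed deviation from homeostasis; its magnitude can be used by an
        # allostatic layer to decide which behavior to trigger.
        return self.level - self.set_point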
3 EFAA agent: a humanoid robot that promotes interaction
In our framework, the robot's behavior is guided by its internal drives and goals in order to satisfy its needs. Drives set the robot's goals and contribute to the process of action selection. The overall system is based on the Distributed Adaptive Control (DAC) architecture [26], which consists of four coupled layers: soma, reactive, adaptive and contextual. Each layer's organization is increasingly more complex, starting from the soma, which designates the body itself. The reactive layer is the first level of behavior control and is modeled in terms of an allostatic process; at this level, stimuli are hard-wired to specific actions. The adaptive layer is predicated on the reactive layer; here, adaptive mechanisms are deployed to deal with the unpredictability of the environment. Finally, the contextual layer develops the state space previously acquired by the adaptive layer to generate behavioral plans and policies. Our model mainly focuses on the reactive and adaptive layers of the DAC architecture, setting a framework for higher cognitive processes such as state-space learning. To validate our model we propose a setup that consists of a humanoid robot, namely the iCub [27], a human partner and the tabletop tangible interface Reactable [28]. The interaction defined by this setup involves a human communicating with the iCub. At the beginning of the scenario the robot is sleeping and the human has to wake it up in order to start the interaction. Once the robot is awake, it engages in different activities aimed at satisfying its needs, such as playing games with the human using objects placed on the Reactable. An example of the proposed setup is illustrated in Figure 1. We implemented the model of drives and emotions using IQR,
an open-source multilevel neuronal simulation environment [29] that is able to simulate biological nervous systems by using standard neural models.
Fig. 1. Example of the proposed scenario, where the humanoid robot iCub interacts with a human and uses the Reactable objects as a means of playing a game.
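As a rough illustration of the layering just described (restricted to the reactive and adaptive layers our model focuses on), the control could be organized as follows; the class and method names are placeholders and not part of the actual EFAA or IQR code.

class ReactiveLayer:
    """First level of control: hard-wired stimulus-to-action couplings."""

    REFLEXES = {"touched_while_asleep": "wake_up", "grabbed": "avoid"}

    def respond(self, stimulus):
        # Return the reflexive action for a stimulus, or None if there is none.
        return self.REFLEXES.get(stimulus)


class AdaptiveLayer:
    """Deals with the unpredictability of the environment: when no reflex
    applies and the preconditions of the current goal are missing, it picks
    an action that may create them (e.g. look around to find a human)."""

    def respond(self, reactive_action, goal_action, preconditions_met):
        if reactive_action is not None:
            return reactive_action
        return goal_action if preconditions_met else "look_around"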
3.1 Emotions and drives
The behavior of the robot is strongly affected by its internal drives. Inspired by the intelligent space Ada [12], an interactive entertainment space that promotes interactions with several people, the robot has the following goals that it aims to optimize. Be social: the robot's goal is to interact with people and regulate its behavior accordingly. Exploration: the need to be constantly stimulated. Survival: consists of two parts, physical and cognitive survival; by physical survival we mean the robot's need to occasionally rest, whereas cognitive survival is the need to reduce complexity so as not to get confused. Play: the robot's need to engage the human in different games in order to form a more pleasant and interesting interaction. Security: the need to protect itself and avoid unwanted stimuli or events. The goal of the EFAA agent is to socially engage with humans, and its drives and emotions are designed to propel such a social interaction. The main goal of the robot is to maximize its happiness by keeping its drives at a homeostatic level. A homeostatic control is applied to each drive, and on top of each subsystem we employ an allostatic control that aims at maintaining balance through behavioral change. The emotions that emerge through the agent's interaction with a human and the environment are the following: happiness, anger, sadness, fear, disgust and surprise. These emotions are compliant with Ekman's emotions [15], which are considered to be basic based on evolutionary, developmental and cross-cultural studies. The emotional system is responsible for exhibiting emotional responses that are consistent with the agent's internal state and are expressed through facial expressions. The emergence of emotions depends on two main factors: the satisfaction of the drives and external stimuli such as different tactile contacts
(poke, caress, grab), which affect anger, happiness and fear respectively. At the neuronal level, each emotion is expressed by a single neuron whose activity varies from 0 to 1. At the expression level, this number determines the intensity of the emotion and sets the facial expression of the robot. An example of two different intensities of the same emotion, namely happiness, is depicted in Figure 2.
Fig. 2. Example of the emotional expression of happiness. On the left, the intensity is set to 0.5 whereas on the right the intensity is set to 1.
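A minimal sketch of how such an emotion layer could be driven is given below; the update rule, the gain and decay values, and the function names are our own assumptions and not the IQR network that is actually used.

# Each emotion is a single unit with activity in [0, 1]; the most active one
# is shown on the face, with its activity as the intensity of the expression.
TACTILE_EFFECTS = {"caress": "happiness", "poke": "anger", "grab": "fear"}


def update_emotions(emotions, drive_errors, tactile_events, gain=0.3, decay=0.05):
    # Emotions decay towards zero unless they are driven.
    for name in emotions:
        emotions[name] = max(0.0, emotions[name] - decay)
    # Drive satisfaction raises happiness, dissatisfaction raises sadness
    # (assumes the dict contains at least these two emotions).
    mean_error = sum(abs(e) for e in drive_errors) / max(len(drive_errors), 1)
    emotions["happiness"] = min(1.0, emotions["happiness"] + gain * (1.0 - mean_error))
    emotions["sadness"] = min(1.0, emotions["sadness"] + gain * mean_error)
    # Tactile stimuli map directly onto specific emotions.
    for event in tactile_events:
        if event in TACTILE_EFFECTS:
            target = TACTILE_EFFECTS[event]
            emotions[target] = min(1.0, emotions[target] + gain)
    return emotions


def dominant_emotion(emotions):
    # The emotion with the highest activity is expressed; its value sets the
    # intensity of the facial expression.
    name = max(emotions, key=emotions.get)
    return name, emotions[name]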
3.2 Homeostatic and allostatic control
We propose three main states that define each drive: under homeostasis, homeostasis and over homeostasis. A homeostatic control is applied to each drive to classify it in these terms. A drive is in homeostasis when it encounters the appropriate stimulus to satisfy its needs. The absence of a stimulus leads the drive into under homeostasis, whereas the presence of an excessive stimulus leads the corresponding drive into an over homeostasis state. The allostatic control aims at achieving consistency and balance in the satisfaction of the drives through behavioral change. It is responsible for choosing which action to take and which behavior to trigger, and for avoiding conflicts, such as when two drives need to be satisfied at the same time and contradict each other (e.g. when energy and play both need to be satisfied), by setting priorities. The allostatic control constantly monitors the environment and the drives in parallel, assessing only the stimuli relevant to each drive: for example, the presence of a human for the social drive, the presence of objects on the table for the exploration drive, or the presence of both objects on the table and a human for the play drive. The implementation of a combined homeostatic and allostatic control that runs in parallel departs from the state-machine paradigm, as the proposed system allows the robot to display more complex behaviors. The dynamics of the model are depicted in Figure 3.
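Building on the drive sketch given earlier, the two control levels described above could be expressed as follows; the threshold, the priority table and the function names are illustrative assumptions rather than the actual implementation.

UNDER, HOMEO, OVER = "under_homeostasis", "homeostasis", "over_homeostasis"


def classify(drive, margin=0.1):
    # Homeostatic control: label a drive with one of the three states.
    if drive.level < drive.set_point - margin:
        return UNDER
    if drive.level > drive.set_point + margin:
        return OVER
    return HOMEO


def allostatic_select(drives, priorities, relevant_stimulus_present):
    """Allostatic control: monitor all drives in parallel, pick the
    highest-priority drive that is out of homeostasis, and return either a
    behavior mode that satisfies it directly (if its relevant stimuli are
    present) or one that creates the missing preconditions."""
    candidates = [d for d in drives if classify(d) != HOMEO]
    if not candidates:
        return None, None
    drive = max(candidates, key=lambda d: priorities[d.name])
    if relevant_stimulus_present(drive):
        return drive, "satisfy"
    return drive, "seek_preconditions"   # e.g. look around for a human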
3.3 Behavioral modulation
The EFAA agent has to perform different actions in order to satisfy its drives. Such behavior is considered adaptive, since it allows the system to achieve specific goals, such as the satisfaction of a particular drive in a dynamic environment.
Fig. 3. Overview of the components involved at the behavioral level. Inputs from the environment are fed into the drive control mechanism (a), where the homeostatic value of each drive is assessed and, on top, the allostatic control monitors the drives and the related stimuli. Depending on the value of each drive, an appropriate behavior is selected (b) and executed (c). At the same time, the level of satisfaction of each drive affects the emotions of the system (d) and, in combination with the assessment of certain stimuli (e), emotions emerge in the emotion system. The most dominant emotion (f) is expressed (g) through the facial expressions of the EFAA.
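For concreteness, a hypothetical step function that ties the sketches of the previous subsections together, following the (a)-(g) flow of Figure 3; all interfaces (the percepts dictionary, the behaviors table) are placeholders rather than the actual EFAA/IQR modules.

def behavioral_step(drives, priorities, behaviors, emotions, percepts):
    # (a) homeostatic assessment of each drive from its relevant stimuli
    for d in drives:
        d.update(stimulus_present=percepts.get(d.name, False))
    # (a, b) allostatic control picks the drive to serve and how to serve it
    drive, mode = allostatic_select(
        drives, priorities, lambda d: percepts.get(d.name, False))
    # (c) execute the corresponding behavior, if any
    if drive is not None:
        if mode == "satisfy":
            behaviors[drive.name]()          # e.g. play, explore, sleep
        else:
            behaviors["look_around"]()       # create the missing preconditions
    # (d, e) drive satisfaction and external stimuli shape the emotions
    update_emotions(emotions, [d.error() for d in drives],
                    percepts.get("tactile", []))
    # (f, g) the dominant emotion is expressed on the robot's face
    return dominant_emotion(emotions)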
We have employed the following behaviors. Wake up: the procedure by which the robot transitions from inactivity to being "awake" and ready to interact; waking up also initializes its drives and emotions. Explore: the robot interacts with objects on the table. Look around: the robot looks around in an explorative way in order to find relevant and salient stimuli. Track: once a salient stimulus is found, the robot shifts its attention to it. Play: the robot engages the human in an interactive game; the play behavior has two sub-scenarios, tossing a dice and playing a sequence game. Avoid: the robot informs the human that certain actions, objects or events are unwanted. Sleep: the robot's drives and emotions stop; the robot will not try to satisfy its drives nor express its emotional state, and during sleep its drives are reset. Currently, most of these behaviors are single-level, i.e. they do not contain a set of sub-behaviors to choose from, with the exception of the play behavior. However, this sets the ground for a more thorough implementation of behavior selection in which the EFAA agent can learn to pick the optimal behavior. Table 1 illustrates the interaction between drives, emotions, perceived stimuli and behavioral processes. Some of the suggested behaviors are considered reflexive, such as the robot waking up when it is touched while asleep. However, certain behaviors are employed not to satisfy a drive directly, but rather to create the appropriate conditions for the satisfaction of a drive. A typical example of the
adaptive control is the satisfaction of the social drive: it requires a human to interact with. In case a human is already present and tracked, the robot enters a social behavior (dialog, game). However, in case no human is present, the robot will seek one, either by looking around or by verbally expressing its need to have someone to play with. The look around behavior in this case is considered adaptive, as it does not aim at directly satisfying the social drive but rather at meeting the preconditions that will satisfy it.

Table 1. The perceived stimuli column refers to the presence or absence of certain stimuli that affect the drives and emotions system of the EFAA. The drive column refers to the drive that is affected by the inputs, the emotion column refers to the emotions that emerge from a given situation, and the behaviour column denotes the kind of behavior that is triggered.

Perceived stimuli                    | Drive                  | Emotion   | Behaviour
No human present                     | Social                 | Sadness   | Look around
Human present                        | Social                 | Happiness | Track
No objects present on table          | Exploration            | Sadness   | Look around
Objects on table                     | Exploration            | Happiness | Explore
Too many objects                     | Cognitive survival     | Disgust   | Avoid
Human caresses the iCub              | -                      | Happiness | -
Human pokes the iCub                 | -                      | Anger     | Avoid
Human grabs the iCub                 | Security               | Fear      | Avoid
Human leaves unexpectedly            | Social                 | Surprise  | Look around
Human touches the iCub when asleep   | (drive initialization) | -         | Wake up
Human present and objects on table   | Play                   | Happiness | Play
Robot interacts too long with human  | Physical survival      | -         | Sleep
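For illustration, the couplings of Table 1 can also be stored directly as data; the sketch below mirrors the table, while the key names and the structure itself are our own choice rather than the actual implementation.

# Table 1 expressed as data: perceived stimulus -> (drive, emotion, behavior).
# A None entry means that no drive, emotion or behavior is associated.
STIMULUS_MAP = {
    "no_human_present":          ("social",             "sadness",   "look_around"),
    "human_present":             ("social",             "happiness", "track"),
    "no_objects_on_table":       ("exploration",        "sadness",   "look_around"),
    "objects_on_table":          ("exploration",        "happiness", "explore"),
    "too_many_objects":          ("cognitive_survival", "disgust",   "avoid"),
    "human_caresses_icub":       (None,                 "happiness", None),
    "human_pokes_icub":          (None,                 "anger",     "avoid"),
    "human_grabs_icub":          ("security",           "fear",      "avoid"),
    "human_leaves_unexpectedly": ("social",             "surprise",  "look_around"),
    "touched_while_asleep":      (None,                 None,        "wake_up"),
    "human_and_objects_present": ("play",               "happiness", "play"),
    "interaction_too_long":      ("physical_survival",  None,        "sleep"),
}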
4 System assessment
During a human-robot interaction, the robot performs action selection and triggers behaviors that aim at satisfying its internal drives. In Figure 4 we present data obtained with the model described above during a real-time human-robot interaction. Our results show the interplay between drives, emotions, perceived stimuli and actions, and display some key features of the overall system. The emotions panel indicates the level of each emotion over time. The red line represents the overall happiness of the robot during the interaction. At approximately the 1000th cycle we observe the presence of many objects on the table (b), which in turn promotes the emotion of disgust. At the same time, too many objects on the table cause the cognitive survival drive to rise and trigger the avoid behavior. At approximately the 3200th cycle (c) the robot perceived that it was grabbed, which gave rise to fear; being grabbed also raises the security drive, which in turn triggers the avoid behavior. At certain moments in the simulation, more than
Fig. 4. Overview of the drives and emotions system over time. The upper panel shows the stimuli perceived from the environment (the number of people present, the number of objects on the table, and the input from the robot's skin: whether it has been caressed, poked or grabbed). The "emotions" panel illustrates the emergence of the different emotions (happiness, anger, surprise, sadness, disgust and fear). The next panel displays the drive values for survival (cognitive and physical), exploration, play, social and security, whereas the actions panel indicates the behaviors triggered in order to maintain the system in homeostasis.
a single emotion emerges; however, only one is dominant. This is the emotion with the highest value and is the one displayed through the facial expressions of the robot. Another stimulus that affects the drives of the robot is the presence of a human. Indeed, we see that until roughly the 600th cycle there is no human present. This causes the social drive to fall, and to rise once the human appears (a). This is a good example of how certain behaviors cannot be triggered unless certain conditions are met. For example, to initiate the play activity the robot needs a human to play with, and play is triggered in the "actions" panel once a human appears; a drive cannot be satisfied if the appropriate conditions are not met. An example where the human participant leaves the interaction scene
(a) is depicted in Figure 5. The play drive in the "drives" panel constantly decreases, as its conditions are not met (no human is present). Nonetheless, the robot proceeds with an explorative behavior (b) and satisfies its exploration drive, displaying a more adaptive behavior. Only once the human returns is the robot able to satisfy its play drive and trigger the appropriate behavior. Part of the role of the allostatic control is to make sure that certain actions do not collide. The robot can look at or track a stimulus while talking or playing a game; however, it cannot play a game while sleeping. In the presented interaction, the physical survival drive of the robot gets low and needs to be satisfied at approximately the 2500th cycle (see the "drives" panel in Figure 4); however, at that time the robot is already playing with the human and cannot go to sleep. Once the play action is finished, the robot is free to proceed to the sleep behavior.
Fig. 5. Example of the robot's behavior during an interaction where the human leaves the scene (a). In the absence of the human, the robot starts exploring (b) the objects on the table.
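The conflict handling mentioned above (e.g. the robot cannot go to sleep while it is still playing) can be sketched as a simple compatibility check; the compatibility sets below are assumptions chosen for illustration, not the actual rule set.

# Some behaviors can run in parallel (e.g. tracking while playing), others
# are mutually exclusive (e.g. sleeping while playing).
COMPATIBLE = {
    "play": {"track", "look_around"},
    "sleep": set(),                 # sleep tolerates no concurrent behavior
}


def can_start(new_behavior, active_behaviors):
    # A new behavior may start only if every active behavior tolerates it.
    return all(new_behavior in COMPATIBLE.get(active, set())
               for active in active_behaviors)

# e.g. can_start("sleep", {"play"}) -> False: the robot finishes playing first.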
Our results indicate how emotions and drives are affected by certain perceptions. Although behaviors are triggered with the aim of maintaining the drives in homeostasis, they remain bound to these perceptions. Nonetheless, we can observe the dynamics of the proposed system through the interplay of the external percepts, the robot's emotions and drives, and the emergence of certain behaviors in the attempt to keep the agent's internal state in balance.
5 Discussion
Nowadays there is an increased interest in developing social robots, that is, robots that are able to interact and communicate with humans in a natural and intuitive way. This raises an important question: what are the minimum prerequisites a robot should have in order to be considered a social agent? In this paper we argue that the minimal requirements for a functional robot that can act as a social agent are: (i) the need to be social, (ii) a repertoire of actions that supports communication and interaction, (iii) the ability to distinguish between self and others, (iv) the ability to infer the mental states of other social agents and (v) the ability to assess its needs and goals and therefore evaluate how the self is situated in the world. Here we propose a system that has the intrinsic need to socially engage and interact with humans and is equipped with an action repertoire that can support communication and interaction. This system includes drives that help satisfy the robot's intrinsic needs, emotions that help the robot express its internal state (utilitarian) and organize behaviors (epistemic), and a set of actions that aims at satisfying its needs. In the proposed model we have defined the following drives: sociality, exploration, survival, security and play. Each of these drives is monitored by a homeostatic control that classifies its level into one of the following states: under homeostasis, homeostasis and over homeostasis. On top of homeostasis we apply an allostatic control that is responsible for keeping the system in balance through behavior selection and priority assignment in order to satisfy its needs. The model's design is based on the reactive and adaptive layers of the Distributed Adaptive Control (DAC) architecture. The reactive layer is responsible for producing reflexive, almost hard-wired responses, while the adaptive layer deals with the unpredictability of the world. However, the satisfaction of the robot's needs highly depends on the environment and the current state of the world. As the allostatic control switches from a reactive to an adaptive level, it is no longer motivated by direct drive satisfaction but rather aims at matching the requirements so that an action leading to a given goal (that is, the final drive satisfaction) becomes available. The satisfaction level of each drive defines the emotional state of the robot, as do certain external stimuli such as the robot being caressed, poked or grabbed by the human. The robot is able to exhibit six emotions: happiness, anger, sadness, disgust, surprise and fear, emotions that are considered to be basic based on evolutionary and cross-cultural studies. The main goal of the robot is to maximize its happiness by keeping its drives in homeostasis. To do so, it is equipped with a set of different behaviors that it can trigger in order to satisfy its needs: wake up, explore, look around, track, play, avoid and sleep. Most of these behaviors are considered reflexive (such as wake up) and single-layered; however, there are also more complex behaviors such as play, which triggers two sub-scenarios: a dice game and a memory task game. The suggested scenario involves the interaction of a humanoid robot, the iCub, with a human, using the tangible interface Reactable as a means of playing games. The robot's actions are triggered based on the suggested model. The data
collected during a human-robot interaction suggest that there is a guided emergence of behavior based on the satisfaction level of each drive and the perception of the environment. By monitoring the drives in parallel (allostatic control) and trying to keep them in a homeostatic state (homeostatic control), we are able to produce different sets of behaviors. Although there is similar work using emotional and motivational models that apply the "homeostatic regulation rule" for action selection [30], our model of homeostatic and allostatic control can act as the first level of the motivational engine and regulate the robot's internal needs and drives via behavioral modulation, opening the way for more adaptive behavior. The allostatic control focuses on actions that could satisfy a drive but whose preconditions can be satisfied by the direct execution of another behavior. This leads to a better adaptation to and manipulation of the environment, while still only satisfying short-term goals. The long-term global satisfaction of drives, or satisfaction within contexts that require reasoning about past experience, is still to be investigated. Initial attempts to achieve such capabilities are tightly linked with the cognitive components responsible for the different memory types (episodic, autobiographical), whose implementation is described in [31].
6 Acknowledgments
This work is supported by the EU FP7 project EFAA (FP7-ICT-270490).
References

1. Trincavelli, M., Reggente, M., Coradeschi, S., Loutfi, A., Ishida, H., Lilienthal, A.J.: Towards environmental monitoring with mobile robots. In: Intelligent Robots and Systems, 2008. IROS 2008. IEEE/RSJ International Conference on, IEEE (2008) 2210–2215
2. Distante, C., Indiveri, G., Reina, G.: An application of mobile robotics for olfactory monitoring of hazardous industrial sites. Industrial Robot: An International Journal 36(1) (2009) 51–59
3. Thrun, S., Beetz, M., Bennewitz, M., Burgard, W., Cremers, A.B., Dellaert, F., Fox, D., Haehnel, D., Rosenberg, C., Roy, N., et al.: Probabilistic algorithms and the interactive museum tour-guide robot Minerva. The International Journal of Robotics Research 19(11) (2000) 972–999
4. Bennewitz, M., Faber, F., Joho, D., Schreiber, M., Behnke, S.: Towards a humanoid museum guide robot that interacts with multiple persons. In: Humanoid Robots, 2005 5th IEEE-RAS International Conference on, IEEE (2005) 418–423
5. Tapus, A., Țăpuș, C., Matarić, M.J.: User-robot personality matching and assistive robot behavior adaptation for post-stroke rehabilitation therapy. Intelligent Service Robotics 1(2) (2008) 169–183
6. Robins, B., Dautenhahn, K., Te Boekhorst, R., Billard, A.: Robotic assistants in therapy and education of children with autism: Can a small humanoid robot help encourage social interaction skills? Universal Access in the Information Society 4(2) (2005) 105–120
7. Breazeal, C.: Toward sociable robots. Robotics and Autonomous Systems 42(3) (2003) 167–175
8. Michotte, A.: The perception of causality. (1963)
9. Premack, D., Premack, A.J.: Origins of human social competence. (1995)
10. Breazeal, C.L.: Designing sociable robots. The MIT Press (2004)
11. Reeves, B.: The media equation: how people treat computers, television, and new media. (1997)
12. Eng, K., Douglas, R.J., Verschure, P.F.: An interactive space that learns to influence human behavior. Systems, Man and Cybernetics, Part A: Systems and Humans, IEEE Transactions on 35(1) (2005) 66–77
13. Gallese, V., Metzinger, T.: Motor ontology: the representational reality of goals, actions and selves. Philosophical Psychology 16(3) (2003) 365–388
14. Griffiths, P.E.: What emotions really are: The problem of psychological categories. University of Chicago Press (1997)
15. Ekman, P.: An argument for basic emotions. Cognition & Emotion 6(3-4) (1992) 169–200
16. Scherer, K.R.: Neuroscience projections to current debates in emotion psychology. Cognition & Emotion 7(1) (1993) 1–41
17. LeDoux, J.: Rethinking the emotional brain. Neuron 73(4) (2012) 653–676
18. Panksepp, J., Biven, L.: The archaeology of mind (2011)
19. Arbib, M.A., Fellous, J.M.: Emotions: from brain to robot. Trends in Cognitive Sciences 8(12) (2004) 554–561
20. Hull, C.: Principles of behavior. (1943)
21. Duffy, E.: The concept of energy mobilization. Psychological Review 58(1) (1951) 30
22. McFarland, D.: Experimental investigation of motivational state. Motivational Control Systems Analysis (1974) 251–282
23. Cannon, W.B.: The wisdom of the body. The American Journal of the Medical Sciences 184(6) (1932) 864
24. Seward, J.P.: Drive, incentive, and reinforcement. Psychological Review 63(3) (1956) 195
25. Sanchez-Fibla, M., Bernardet, U., Wasserman, E., Pelc, T., Mintz, M., Jackson, J.C., Lansink, C., Pennartz, C., Verschure, P.F.: Allostatic control for robot behavior regulation: a comparative rodent-robot study. Advances in Complex Systems 13(03) (2010) 377–403
26. Verschure, P.F.: Distributed adaptive control: A theory of the mind, brain, body nexus. Biologically Inspired Cognitive Architectures (2012)
27. Metta, G., Sandini, G., Vernon, D., Natale, L., Nori, F.: The iCub humanoid robot: an open platform for research in embodied cognition. In: Proceedings of the 8th Workshop on Performance Metrics for Intelligent Systems, ACM (2008) 50–56
28. Geiger, G., Alber, N., Jordà, S., Alonso, M.: The Reactable: A collaborative musical instrument for playing and understanding music. Her&Mus. Heritage & Museography (4) (2010) 36–43
29. Bernardet, U., Verschure, P.F.: iqr: A tool for the construction of multi-level simulations of brain and behaviour. Neuroinformatics 8(2) (2010) 113–134
30. Arkin, R.C., Fujita, M., Takagi, T., Hasegawa, R.: An ethological and emotional basis for human–robot interaction. Robotics and Autonomous Systems 42(3) (2003) 191–201
31. Pointeau, G., Petit, M., Dominey, P.: Successive developmental levels of autobiographical memory for learning through social interaction. Manuscript submitted for publication (2013)