When Emotion Does Not Mean Loss of Control

Ricardo Imbert and Angélica de Antonio

Facultad de Informática, Universidad Politécnica de Madrid,
Campus de Montegancedo, s/n, 28660 Boadilla del Monte, Madrid, Spain
{rimbert, angelica}@fi.upm.es

Abstract. The traditional assumption that intelligent behavior can only be produced by pure reasoning fails to explain most human behavior, in which the emotional component has a decisive weight. However, although many efforts have been made to incorporate emotions into the rational process, emotion is still perceived in many research areas as an undesirable quality for a computational system. This is not the case in the field of believable agents, where emotions are well respected, although they are sometimes associated with a certain loss of control. This paper presents the mechanisms proposed by cognitiva, a generic cognitive architecture for agents with emotionally influenced behaviors, to maintain control over behavior without giving up the richness provided by emotions. This architecture, together with a progressive specification process for its application, has been used successfully to model 3D intelligent virtual agents.

1 Reconsidering the Role of Emotions in the Rational Process

Intelligent behavior and decision making have traditionally been related to the complex modeling of pure rational processes. The consideration of emotion as an influential, sometimes decisive, factor has been systematically pushed aside, even nowadays, in many research areas. In fact, emotion has been regarded as something rather irrational that detracts from human rationality [1], something "non-scientific" [2]. At best, in some fields such as that of believable agents, emotion, mood, personality or attitudes are perceived as useful qualities, although they have sometimes also been considered synonyms of a certain loss of control and growth of entropy, emphasizing only their passional perspective. The consequence is that most of the behaviors produced following this classical conception are far from those observable, for instance, in human beings.

The influence of emotions should not be understood as a distorting and purely passional element. It should be incorporated on the basis of a precise and well-defined model. Reason and emotion are not antagonistic and irreconcilable concepts, but complementary strengths that act upon mental processes. Indeed, recent theories [3] [4] suggest that emotions are an essential part of human intelligence, and are of paramount importance in processes such as perception, learning, attention, memory, rational decision-making and other skills associated with intelligent behaviors. Moreover, it has been stated that an excessive deficit of emotion may be harmful to decision-making [5]. Emotion is essential to understand human cognition.

2 Related Work

The search for architectures combining rational and emotional behaviors has been a frequent challenge over the last two decades. Most of the solutions proposed hitherto follow one of two main emotional computational models: the appraisal model (cf. [6], [7], [8], [9], [10]) or the motivational model (cf. [11], [12], [13], [14]). However, although all these models present many interesting virtues, they also suffer from some well-known drawbacks.

Moreover, the structure underlying emotional architectures is actually very complex. Sometimes, emotional elements and mechanisms are interwoven with the restrictions and particularities of the application context and the problem faced, making them very hard to reuse in a different context (cf. [15], [16]). In other situations, emotional architectures are very generic, independent of any specific problem (cf. [13], [11]). The lack of adaptation to the particular necessities of the problem results in less efficient, computationally demanding mechanisms. Frequently, to produce feasible solutions, their structure must be reconsidered and some of their inherent characteristics simplified, losing some of their properties and demanding a lot of effort.

Current solutions are not as satisfactory as they should be because they: (1) do not detail mechanisms to control instinctive behaviors at will; or (2) do not detail the way in which perceptions must be considered to behave coherently with regard to past actions; or (3) fail, precisely, in the "attitude" with which they cope with complexity: they bet on specificity for better efficiency, or on generality for wider applicability.

3 A Cognitive Architecture to Manage Emotions

Our proposal is to model individuals using not a conventional architecture with an emotion-like add-on, but a truly emotionally oriented architecture. Emotions must not be understood only as one influence among others on the behavior of the individual. In our opinion, explainable and elaborate emotion-based behaviors can only emerge when the whole architecture has an emotional vocation.

The architecture that we propose is an agent-based one, called cognitiva. Agents are a common choice to model this kind of system, since their structure and operation suit its needs. Considering an agent as a continuous perception-cognition-action cycle, we have restricted the scope of our proposal to the "cognitive" activity, although no constraint is imposed on the other two modules (perceptual and actuation). This is the reason why this architecture will sometimes be referred to as "cognitive".

In cognitiva, emotions are not considered just as a component that provides the system with some "emotional" attributes; rather, all the components and processes of the architecture have been designed to deal naturally with emotions. cognitiva is a multilayered architecture: it offers three possible layers to the agent designer, each one corresponding to a different kind of behavior, viz. reactive, deliberative and social (see Fig. 1). These three layers interact with the other two modules of the agent, the sensors (perceptual module) and the effectors (actuation module), through two specific components, the interpreter and the scheduler, respectively.

Fig. 1. General structure of cognitiva
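As a rough structural rendering of this cycle, the following Python sketch shows how the three layers and the two interface components might be wired together. The class and method names (`process`, `propose`, `dispatch`) are our own illustrative assumptions, not an API prescribed by cognitiva.

```python
class CognitiveModule:
    """Sketch of the perception-cognition-action cycle of a cognitiva-style agent."""

    def __init__(self, layers, interpreter, scheduler):
        self.layers = layers            # [reactive, deliberative, social]
        self.interpreter = interpreter  # interface with the sensors
        self.scheduler = scheduler      # interface with the effectors

    def step(self, perceptions):
        # sensors -> interpreter: filter the raw input and translate it into percepts
        percepts = self.interpreter.process(perceptions)
        # each layer may propose actions (reactive, goal-guided, or social)
        proposals = [p for layer in self.layers for p in layer.propose(percepts)]
        # scheduler: sequence the proposals and send the chosen actions to the effectors
        return self.scheduler.dispatch(proposals)
```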

One of the main characteristics of cognitiva, and one of its strengths, is that, having been conceived as a generic, domain-independent architecture not restricted to any emotional theory or model, it is accompanied by a progressive specification process to be applied for the design and adaptation of the proposed abstract structures and functions to the particular needs of the application context.

Throughout this paper, we will analyze how cognitiva deals with emotions to provide both instinctive and conscious behaviors. Every component and function will be exemplified with the results obtained from an application of the architecture to the modeling of the characters in a particular domain (following the progressive specification process mentioned above), a virtual environment called the 3D Virtual Savannah. The 3D Virtual Savannah represents an African savannah inhabited by two kinds of individuals, zebras and lions, whose behavior is controlled by intelligent virtual agents (IVAs) derived from cognitiva. The ultimate aim of this environment is to simulate the behavior of these virtual animals, restricting their actuation to a reduced set of actions, related to their movement (walking, running, fleeing) or to some minimal necessities to be satisfied (drinking, eating, hunting). This simple scenario has been used as a test bed for cognitiva.

4 Representing the State of the Agent

4.1 Management of the Current State: Beliefs

Beliefs represent the information managed by the agent about the most probable state of the environment, considering all the places, objects and individuals in it. Emotional information about itself and about others is part of an IVA's beliefs.

The amount of the agent's beliefs and their accuracy are of paramount importance to perform appropriate decision-making processes. However, a good representation and structure for beliefs is also fundamental. cognitiva defines a taxonomy of beliefs, depending on their object and their nature.

On the one hand, a belief may refer to a place in the environment, to objects located in the environment, or to other individuals. Besides, the agent maintains beliefs concerning the current situation. For instance, a belief of a zebra about the current situation in the 3D Virtual Savannah scenario may be the fact that it is being pursued by a lion. That is not information about the lion, nor about the zebra, but about the situation that is taking place. On the other hand, beliefs about places, objects and individuals may describe:

– Defining characteristics (DCs), traits that mark out the fundamental features of places, objects or individuals. DCs will hardly change in time, and if they do, it will happen very slowly. For instance, the (x, y, z) dimensions of the savannah are DCs of this place; the position in the scenario of the lion's den is a DC of an object; a DC of a lion (individual) is its personality. Among all the DCs that an IVA can manage, most will be strictly related to the context of application, but cognitiva prescribes the existence of a set of personality traits (P) for individuals. Personality traits define the general lines of the IVA's behavior. For instance, zebras in the 3D Virtual Savannah have been provided with two personality traits, courage and endurance.

– Transitory states (TSs), characteristics whose values represent the current state of the environment's places, objects or individuals. Unlike the DCs, whose values are practically static in time, the values of TSs have a much more dynamic nature. Some examples of TSs could be the number of inhabitants (TS of a place), the flow of the river (TS of an object), or the position in the scenario of a lion (TS of an individual). cognitiva considers two kinds of TSs essential for individuals: their moods (M), which reflect the internal emotional state of the IVAs; and their physical states (F), which represent the external state of the IVAs (the state of their bodies or representations in the virtual environment). In the 3D Virtual Savannah, zebras have happiness, fear and surprise as moods, and thirst, hunger and fatigue as physical states.

– Attitudes (As), which determine the predisposition of the agent towards the environment's components (places, objects and individuals). Attitudes are less variable in time than TSs, but more than DCs. An example of an attitude selected for our scenario is the apprehension of an IVA towards another. Attitudes are important to help the IVA's decision making and action selection and, above all, to keep coherence and consistency in the agent's behavior.
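As an illustration of how this taxonomy might be realized for a zebra, the following minimal Python sketch models the belief elements described above. The class and attribute names, and the use of plain normalized values instead of the fuzzy representation introduced later, are our own assumptions for illustration, not structures prescribed by cognitiva.

```python
from dataclasses import dataclass, field

@dataclass
class PersonalityTraits:          # DCs (P): practically static in time
    courage: float = 0.5          # 0 = cowardly, 1 = brave
    endurance: float = 0.5

@dataclass
class Moods:                      # TSs (M): internal emotional state
    happiness: float = 0.5
    fear: float = 0.0
    surprise: float = 0.0

@dataclass
class PhysicalStates:             # TSs (F): external, bodily state
    thirst: float = 0.0
    hunger: float = 0.0
    fatigue: float = 0.0

@dataclass
class ZebraBeliefs:
    personality: PersonalityTraits = field(default_factory=PersonalityTraits)
    moods: Moods = field(default_factory=Moods)
    physical: PhysicalStates = field(default_factory=PhysicalStates)
    # Attitudes (As): predisposition towards other components of the environment,
    # e.g. {"lion": {"apprehension": 0.8}}
    attitudes: dict = field(default_factory=dict)
```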

4.2 Management of the Past State: History

Behaviors that do not take into account past events are especially disappointing to human observers. cognitiva considers two mechanisms to maintain the agent's past history information:

– Accumulative effect of the past: this is an implicit mechanism, related to the way in which beliefs are managed. External changes in the environment or internal modifications of the agent's internal state may produce an update of the agent's beliefs. However, in the case of transitory states, this update is performed as a variation, of higher or lower intensity, on the previous value of the belief, avoiding abrupt alterations in the individual's state.

– Explicit management of the past state: an accumulative effect of past events may not be enough to manage the past state efficiently, because it does not consider information related to the events themselves or to the temporal instant in which they took place. cognitiva maintains explicit propositions related to any event that was significant to the IVA. This past history allows the agent to reason about facts that occurred in past moments. For instance, an inverse-delta-based mechanism has been developed to manage past events for the zebras, such as saw another animal, drank, grazed . . .
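The accumulative effect on a transitory state can be sketched as follows. The paper does not fix an update formula, so the bounded smoothing rule and the `inertia` parameter below are our own assumptions; the point is only that the new value is a variation on the previous one rather than a replacement.

```python
def update_ts(previous: float, stimulus: float, inertia: float = 0.8) -> float:
    """Accumulative update of a transitory state (TS).

    The new value blends the previous value with the current stimulus,
    avoiding abrupt alterations in the agent's state. `inertia` in [0, 1]
    controls how strongly the past persists.
    """
    new_value = inertia * previous + (1.0 - inertia) * stimulus
    return max(0.0, min(1.0, new_value))   # keep the TS in its valid range

# Example: a zebra's fear after perceiving a lion (stimulus near 1.0)
fear = 0.2
fear = update_ts(fear, stimulus=1.0)       # rises to 0.36, not straight to 1.0
```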

4.3 Management of the Desirable State: Concerns

Beliefs represent information about what the IVA thinks is the most probable state of the environment, including itself and the rest of the IVAs. However, they do not give any significant clue about the IVA's preferences regarding those beliefs. For instance, a belief about a zebra's current fear does not tell whether that is an acceptable value for that zebra at that moment or not.

cognitiva proposes the use of concerns (C) as a tool to express, at any time, the desirable/acceptable values for the TSs of an IVA, in particular for its moods and physical states. Concerns restrict the range of values of the TSs of the agent, expressing the acceptable limits at a certain moment. With this aim, concerns provide two thresholds, lower and upper, for every TS. All the values between them will be considered as desired by the agent; the values outside this range will be unpleasant, and the IVA will be inclined to act to avoid them and bring them back to the desired range.
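A concern can thus be seen as a pair of thresholds over one TS. The following minimal Python rendering, including the names `Concern` and `is_acceptable`, is our own illustration of that idea.

```python
from dataclasses import dataclass

@dataclass
class Concern:
    """Acceptable range for one transitory state (e.g. the mood 'fear')."""
    lower: float
    upper: float

    def is_acceptable(self, value: float) -> bool:
        # Values between the thresholds are desired by the agent;
        # values outside this range are unpleasant and motivate action.
        return self.lower <= value <= self.upper

fear_concern = Concern(lower=0.0, upper=0.4)
fear_concern.is_acceptable(0.7)   # False: the zebra will act to reduce its fear
```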

4.4 The Personal Model. Relationships Among Beliefs

Among all the beliefs managed by the IVA, there is a small group especially related to emotional behavior. This set, called the agent's personal model, is composed of the beliefs that the agent has about itself that are related to emotions. More precisely, this personal model consists of personality traits, moods, physical states, attitudes and concerns.

The elements of the personal model in the context of the 3D Virtual Savannah have been modeled with a fuzzy logic representation. Fuzzy linguistic labels are closer to the way in which humans qualify this kind of concept (it is usual to hear "I am very happy" instead of "My happiness is 0.8"). Besides, fuzzy logic is a good approach to manage imprecision. Since the rest of the components of cognitiva deal with the elements of the personal model, the whole design and implementation generated follows a fuzzy logic approach, including the relationships among personal model elements.

Relationships among personal model elements are a key point in cognitiva. Many of these beliefs are conceptually closely related, and have a direct influence on each other, as shown in Fig. 2:

– Personality traits exert an important influence in determining emotions. For instance, in a similar situation, the value of the mood fear will be different for a courageous IVA than for a pusillanimous one.

– The set of attitudes of an IVA has some influence on the emotions that it experiences. For instance, the presence of a lion in the surroundings will produce an increment of a zebra's fear, because of its attitude of apprehension towards lions.

– Personality traits, in turn, have influence on attitudes. The value of a zebra's attitude of apprehension towards a lion is different depending on its value for the personality trait courage: a cowardly zebra will feel absolute rejection towards a lion, whereas a courageous one just will not like it.

– Physical states also have influence on emotions. For instance, when a zebra is very thirsty, its happiness will decrease.

– Finally, personality traits exert some influence on concerns. Thus, although an IVA may want to have, at a certain moment, a very high upper threshold for its mood fear (the IVA wants to risk more for a while and be more fear-resistant), in fact, and depending on its personality traits, that value will end up more or less high.

All these relationships have been designed and implemented for the 3D Virtual Savannah through special fuzzy rules and fuzzy operators. The result is a set of fuzzy relationships, including the following:

    courage DECREASES much fear
    courage DECREASES few apprehension
    apprehension DECREASES some happiness
    apprehension INCREASES much fear
    ...
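The sketch below shows one way such rules could be applied. The paper does not give the definitions of its fuzzy operators, so the rule encoding and the numeric weights assigned to the linguistic quantifiers (few/some/much) are purely illustrative assumptions, standing in for a proper fuzzy inference step.

```python
# Hypothetical weights for the linguistic quantifiers; the paper's actual
# fuzzy operators are not specified, so these values are illustrative only.
QUANTIFIER = {"few": 0.1, "some": 0.3, "much": 0.6}

# (source element, effect, quantifier, target element)
RULES = [
    ("courage",      "DECREASES", "much", "fear"),
    ("courage",      "DECREASES", "few",  "apprehension"),
    ("apprehension", "DECREASES", "some", "happiness"),
    ("apprehension", "INCREASES", "much", "fear"),
]

def apply_rules(state: dict) -> dict:
    """Propagate the influence of each source belief onto its target."""
    new_state = dict(state)
    for source, effect, quantifier, target in RULES:
        sign = -1.0 if effect == "DECREASES" else 1.0
        delta = sign * QUANTIFIER[quantifier] * state[source]
        new_state[target] = max(0.0, min(1.0, new_state[target] + delta))
    return new_state

state = {"courage": 0.8, "apprehension": 0.5, "fear": 0.6, "happiness": 0.5}
print(apply_rules(state))   # a brave zebra's fear drops; apprehension pushes it back up
```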

Fig. 2. Generic relationships among beliefs of the IVA personal model, together with the dependency functions determined in cognitiva. P = Personality traits; C = Concerns; M = Moods; A = Attitudes; F = Physical states.

5 Controlling the State of the Agent

Emotions, and in particular moods, may be a strong force driving the IVA's behavior, but this does not mean that the agent's behavior must necessarily be subjugated to their changing values. Emotions are part of the state of the agent. If their values are properly updated and their effects are justified, the outcomes of emotionally based behavior will not be unpredictable, but coherent responses.

cognitiva provides mechanisms to update and control the internal state of the agent (current, past and desirable) and, in particular, to control the values of the components of the personal model. The dynamics of the architecture follow a continuous cycle, represented in Fig. 3, that leaves no room for chaotic behaviors. The following sections describe how beliefs, past history and concerns are updated to maintain coherence, and how their values are handled to produce proper actions.

5.1 Updating the State of the Agent as a Consequence of Perceptions

The cognitive module described by cognitiva receives from the perceptual module (the agent's sensors) perceptions of the environment. This input may not be directly manipulable by most of the processes of the cognitive module and must be interpreted (for instance, sensors might provide measures of light wavelengths, while the cognitive module is only able to manage colors directly). In other situations, many inputs may be irrelevant to the IVA, and should be filtered (when a lion is chasing a zebra, the zebra does not mind anything but the lion and the escape route).

Fig. 3. Internal components and processes of cognitiva

cognitiva provides a component, called the interpreter, which acts as an interface between the sensors and the rest of the cognitive module, receiving the perceptions coming from the perceptual module, filtering and discarding those of no interest to the agent, and translating them into percepts (a name proposed by Peirce [17], in the context of visual perception, to designate the initial interpretative hypothesis of what is being perceived), intelligible by the rest of the components and processes of the cognitive module.

On the one hand, the interpreter directs the interpreted percepts to the convenient processes in every layer of the architecture, to be managed reactively, guided by goals, or guided by interactions with other agents. On the other hand, the interpreter also updates the value of past history events and beliefs.

Most of that updating may be more or less automatic, and needs no further processing. For instance, if a zebra perceives that the lion has moved, the interpreter will automatically update the new position of the lion. However, that is not the case for moods. And moods are the core of emotional behavior. The new value of moods depends on their old value and on the perceptions, but also on what was expected to happen and on the degree to which that occurrence was desired. Moods need a new factor to be conveniently generated.

With this aim, cognitiva includes the mechanism of expectations, inspired by the proposal of Seif El-Nasr [18], which was adapted, in turn, from the OCC Model [19]. Expectations capture the predisposition of the agent toward events, confirmed or potential. In cognitiva, expectations are valuated on:

– Their expectancy: expressing how probably the occurrence of the event is expected.

– Their desire: indicating the degree of desirability of the event.

Through expectations, the interpreter has enough information to update moods from perception:

– When the occurrence of the event has not yet been confirmed. Moods will be updated depending on the degrees of expectancy and desire for the event. For example, if the last time that zebra A saw another zebra B, the latter was being chased by a lion, zebra A may elaborate the expectation "zebra B has been hunted". That is an expected undesirable event whose occurrence has not been confirmed, which produces a sensation of distress, increasing the value of its fear and decreasing the value of its happiness.

– When the occurrence of the event has already been confirmed. Again, depending on the degrees of expectancy and desire for the event, moods will be updated. For instance, if zebra A finds the dead body of zebra B, its fears have been confirmed, and its distress transforms into sadness, decreasing considerably the value of its happiness.

– When the non-occurrence of the event has already been confirmed. The degrees of expectancy and desire of the event will determine the updating of moods. For instance, if zebra A sees zebra B alive again, the expectation about zebra B being hunted vanishes, and distress gives way to relief, increasing happiness and surprise.

This is how expectations are used to update moods. But what is the origin of those expectations? Each time some internal reasoning process, in any of the three layers of the architecture, proposes an action to be executed, it must be properly sequenced with other previous and simultaneous action proposals. This should not be a responsibility of the actuation module (effectors), since this module should not need any information about the origin and final goal of the actions it receives. cognitiva proposes a component, the scheduler, to act as an interface between the cognitive module and the effectors, managing an internal agenda in which action proposals are conveniently sequenced and ordered. Once it has decided which are the most adequate actions to be executed, it sends them to the effectors.

Every action will have a set of associated expectations. The scheduler takes them and informs the interpreter about what is expected to occur in the future, according to the action that is being executed. Expectations provide coherence to the IVA's behavior, since they close its internal reasoning cycle, maintaining a reference for future perceptions according to its current actuation.
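A sketch of how an expectation might drive mood updates in the three cases above follows. The class layout, the numeric scheme (expectancy in [0, 1], desire in [-1, 1]) and the update magnitudes are our own illustrative assumptions; cognitiva only fixes that expectancy and desire jointly determine the update.

```python
from dataclasses import dataclass

@dataclass
class Expectation:
    event: str
    expectancy: float   # how probably the event is expected, in [0, 1]
    desire: float       # desirability of the event, in [-1, 1]

def update_moods(moods: dict, exp: Expectation, outcome: str) -> None:
    """outcome: 'pending', 'confirmed' or 'disconfirmed'."""
    impact = exp.expectancy * abs(exp.desire)
    if outcome == "pending" and exp.desire < 0:
        # expected undesirable event, not yet confirmed: distress
        moods["fear"] += 0.5 * impact
        moods["happiness"] -= 0.5 * impact
    elif outcome == "confirmed" and exp.desire < 0:
        # fears confirmed: distress turns into sadness
        moods["happiness"] -= impact
    elif outcome == "disconfirmed" and exp.desire < 0:
        # the feared event did not happen: relief
        moods["happiness"] += impact
        moods["surprise"] += (1.0 - exp.expectancy)

# Zebra A expects the undesirable event "zebra B has been hunted"
moods = {"happiness": 0.6, "fear": 0.2, "surprise": 0.0}
hunted = Expectation("zebra B has been hunted", expectancy=0.7, desire=-0.9)
update_moods(moods, hunted, "pending")        # distress while unconfirmed
update_moods(moods, hunted, "disconfirmed")   # zebra B shows up alive: relief
```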

5.2 Updating the Desired State of the Agent to Control Actuation

The scheduler organizes the actions according to the only information it has about them: their priority, established by the reasoning process that generated them, and their concurrence restrictions, which allow the scheduler to send simultaneously compatible actions.

It is reasonable that actions proposed by the reactive layer have a higher priority in the queue of actions to be executed than those coming from the deliberative or social layers. Moreover, it makes sense that deliberative or social action executions are interrupted when a reactive action emerges. But does that not mean that reactive, instinctive, passional actions will always take control of the actuation, leaving out higher level behaviors? Is that not, in fact, losing control?

Titus Livy relates in his famous work Ab Urbe Condita the story of the ancient Roman hero Mucius Scaevola, who preferred to put his arm voluntarily into a sacrificial pyre rather than betray Rome. How is it possible to hold back the reaction of withdrawing the arm from the fire to avoid the pain (a reactive, higher priority action), and execute instead a goal-based behavior (to assure the survival of Rome)?

The mechanism proposed in cognitiva is the use of concerns to control the IVA's actuation. A reaction, besides some triggering conditions, the operator to be executed, the consequences of its execution, and some other parameters, such as its priority or its expiry time, has to be provided with justifiers, i.e., emotional restrictions that must be satisfied to execute the action. Justifiers are expressed in terms of restrictions on the values of the IVA's concerns, that is, restrictions on the desirable state of the agent. For instance, a justifier to trigger the reaction of withdrawing the arm because of the pain produced by the fire would be:

    pain > upper_threshold(concern(pain))                                  (1)

Whenever an agent wants to be able to stand a bit more pain, it must first raise the value of the upper threshold of this concern. If pain does not surpass the value of the upper threshold, the reaction will not be justified and will not be triggered. Depending on the personality traits of the individual, which have some influence on concerns (see Fig. 2), the actual new value for that upper threshold will be higher or lower. That is why a normal person will not resist for long with an arm in the fire, while a hero such as Mucius Scaevola was able to endure without yielding. Thus, the higher processes of the architecture (deliberative and social) can adjust the values of the thresholds of the agent's concerns to control instinctive actuation whenever it is not desirable.

Coming back to the scenario of the 3D Virtual Savannah, if two zebras, one courageous (in Fig. 4, the black striped one, the farthest from the user's point of view) and another easily frightened (in Fig. 4, the red striped one, the closest to the viewer), feel thirsty and are not near any source of water, their deliberative layer will produce a plan to reach the only river they know (Fig. 4 (1)). However, if, arriving near the river, they perceive a lion, their fear will rise and a reaction of escape will be triggered (Fig. 4 (2)).

Once they are far enough away and their fear has descended below the value of the upper threshold of its corresponding concern, they still need to quench their thirst. However, the new information included in their beliefs, the position of the lion, prevents them from generating a potentially successful plan unless they consider assuming some risk. Since their thirst has increased after the running, they decide to increase their fear tolerance (the upper threshold of their concern about fear), each one according to its possibilities (its personality traits).

They come back to the surroundings of the river, perceive the lion and, again, their fear rises. But this time the fear of the brave zebra (the black one) does not surpass its fear tolerance upper threshold, and it advances cautiously in the presence of the lion (Fig. 4 (3)) and gets to drink (Fig. 4 (4)). The other zebra, less courageous, cannot raise its fear tolerance upper threshold enough, and every time its fear surpasses the threshold, the red zebra escapes frightened, without attaining its goal.

Fig. 4. Snapshots of the use of the concerns to control reactions in the 3D Virtual Savannah scenario
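The justifier mechanism of Eq. (1) can be sketched as follows. The `Reaction` structure, the justifier signature and the courage-bounded threshold adjustment are illustrative assumptions of ours, reusing the `Concern` sketch of Sect. 4.3; cognitiva only fixes that justifiers are restrictions over concern thresholds and that personality traits bound how far those thresholds can be pushed.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Concern:          # as in the sketch of Sect. 4.3
    lower: float
    upper: float

@dataclass
class Reaction:
    name: str
    priority: int
    justifier: Callable[[dict, dict], bool]   # emotional restriction, cf. Eq. (1)

def escape_justifier(ts: dict, concerns: dict) -> bool:
    # Eq. (1): the reaction is justified only while the TS exceeds
    # the upper threshold of its corresponding concern.
    return ts["fear"] > concerns["fear"].upper

flee = Reaction(name="flee", priority=10, justifier=escape_justifier)

ts = {"fear": 0.7}
concerns = {"fear": Concern(lower=0.0, upper=0.4)}
assert flee.justifier(ts, concerns)       # fear 0.7 > 0.4: fleeing is justified

# The deliberative layer raises the fear tolerance, bounded by the zebra's
# courage (personality traits influence how far a concern can be pushed):
courage = 0.8
concerns["fear"].upper = min(0.9, concerns["fear"].upper + 0.5 * courage)
assert not flee.justifier(ts, concerns)   # fear 0.7 <= 0.8: reaction held back
```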

6 Adapting the Generic Architecture to a Specific Context

cognitiva has been conceived generically, without restrictions on its internal emotional and reasoning models, the format of its components, the specific design of its functionalities or the application context. However, the architecture is provided with a progressive specification process, which allows it to be applied to a variety of problems and contexts, reusing designs and implementations, without loss of efficiency.

As a general outline, this process consists of two phases of specification. The first one is a functional specification phase, in which the developer details the particular structure and format of every component of the architecture, also specifying the design of all the functions proposed by cognitiva. For instance, it was in this phase that we selected and designed a fuzzy logic representation for beliefs and their relationships. As a result of this functional specification phase, a fully operative, context-independent module of the abstract architecture is obtained.

The second phase of this process, the contextual specification phase, deals with the particularization of the concrete values needed for every architectural element, according to the application context. In this phase, for instance, we decided that we needed an agent of the kind "zebra", whose behavior would be influenced by two personality traits: courage and endurance.

The advantages of this systematic adaptation approach over traditional architectures (too specific or too general) are that:

1. The major effort is devoted to the development of the functional specification, which is context independent. A functional specification may be the basis for many different contextual specifications, whose development is far less expensive in terms of effort (in our case, the development of the contextual specification of the 3D Virtual Savannah required an effort seven times lower than that of the functional specification).

2. It differentiates the technical design from the domain design, allowing the two designer roles to be separated.

7 Conclusions

Human behaviors are rarely explainable exclusively through pure reasoning. There exist other, emotional factors that decisively influence them, and that must also be considered. However, the efforts made until today to build architectures including those emotional factors have not yet fully succeeded, and emotion is still frequently regarded as an undesirable quality for a computational system.

This paper presents some of the mechanisms proposed in cognitiva, an architecture with generic mechanisms and structures to build agents with emotionally influenced behaviors, to deal with the problem of the loss of control in the generation of emotionally based behavior.

In particular, cognitiva deals with the control of instinctive behaviors through the use of concerns. Concerns, not explicitly considered by appraisal models, are, in a certain way, conceptually present in most of the motivational models, under the name of "drives" [20] or, simply, "concerns" [21]. However, concerns have been extended in cognitiva to allow not only the control of the IVA's reactive behaviors, but also their justification and explanation.

Moreover, the feedback mechanism proposed by the motivational models to link the behaviors produced by the agent to those drives is intricate, and does not allow a clear, understandable management of the emotions produced by future events. The proposal of some appraisal models, such as the OCC model [19], is more intuitive and explainable, including expectations with this aim. cognitiva particularizes the OCC model's expectations, considering not only a degree of expectancy for perceived events, but also their desirability.

Finally, together with the architecture, a progressive process of specification is proposed, which allows particular application contexts to be faced without giving up any of the architecture's properties and at a reasonable cost. cognitiva relies on controlled adaptability to achieve the efficiency of specific architectures together with the applicability of general ones. This is a common development strategy in both software and knowledge engineering that, however, had not been adopted by any previous emotional architecture to fight against the complexity of this kind of system.

References

1. Darryl N. Davis and Suzanne J. Lewis. Computational models of emotion for autonomy and reasoning. Informatica (Special Edition on Perception and Emotion Based Reasoning), 27(2):159–165, 2003.
2. Rosalind W. Picard. Affective computing. Technical Report 321, MIT Media Laboratory, Perceptual Computing Section, November 1995.
3. Joseph LeDoux. The Emotional Brain. Simon and Schuster, New York, 1996.
4. R. Adolphs, D. Tranel, A. Bechara, H. Damasio, and Antonio R. Damasio. Neurobiology of Decision-Making, chapter Neuropsychological Approaches to Reasoning and Decision-Making, pages 157–179. Springer-Verlag, Berlin, Germany, 1996.
5. Antonio R. Damasio. Descartes' Error: Emotion, Reason, and the Human Brain. Gosset/Putnam Press, New York, 1994.
6. Clark Elliott. I picked up catapia and other stories: A multimodal approach to expressivity for "emotionally intelligent" agents. In W. Lewis Johnson and Barbara Hayes-Roth, editors, Proceedings of the First International Conference on Autonomous Agents (Agents'97), pages 451–457, New York, 1997. ACM Press.
7. Daniel Rousseau and Barbara Hayes-Roth. Improvisational synthetic actors with flexible personalities. Technical Report KSL 97-10, Knowledge Systems Laboratory, Computer Science Dept., Stanford University, Stanford, CA, 1997.
8. Alexander Staller and Paolo Petta. Introducing emotions in the computational study of norms. In Proceedings of the AISB'00 Symposium on Starting from Society: The Application of Social Analogies to Computational Systems, pages 101–112, Birmingham, UK, 2000.
9. Helmut Prendinger and Mitsuru Ishizuka. Designing and evaluating animated agents as social actors. IEICE Transactions on Information and Systems, E86-D(8):1378–1385, 2003.
10. Jonathan Gratch and Stacy Marsella. Evaluating the modeling and use of emotion in virtual humans. In Nicholas R. Jennings, Carles Sierra, Liz Sonenberg, and Milind Tambe, editors, Proceedings of the Third International Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS 2004), pages 320–327, New York, 2004. ACM Press.
11. Dolores Cañamero. Modeling motivations and emotions as a basis for intelligent behavior. In W. Lewis Johnson and Barbara Hayes-Roth, editors, Proceedings of the First International Symposium on Autonomous Agents (Agents'97), pages 148–155, New York, 1997. ACM Press.
12. Hirohide Ushida, Yuji Hirayama, and Hiroshi Nakajima. Emotion model for life-like agent and its evaluation. In Proceedings of the Fifteenth National Conference on Artificial Intelligence and Tenth Innovative Applications of Artificial Intelligence Conference (AAAI'98/IAAI'98), pages 8–37, Madison, Wisconsin, United States, 1998.
13. Steve Richard Allen. Concern Processing in Autonomous Agents. PhD thesis, School of Computer Science, Cognitive Science Research Centre, The University of Birmingham, UK, 2001.
14. Carlos Delgado-Mata and Ruth Aylett. Emotion and action selection: Regulating the collective behaviour of agents in virtual environments. In Nicholas R. Jennings, Carles Sierra, Liz Sonenberg, and Milind Tambe, editors, Proceedings of the Third International Joint Conference on Autonomous Agents and Multi Agent Systems (AAMAS 2004), pages 1302–1303, New York, 2004. ACM Press.
15. Sandra Clara Gadanho. Learning behavior-selection by emotions and cognition in a multi-goal robot task. Journal of Machine Learning Research, 4:385–412, 2003.
16. Etienne de Sevin and Daniel Thalmann. An affective model of action selection for virtual humans. In Proceedings of the Agents that Want and Like: Motivational and Emotional Roots of Cognition and Action Symposium at the Artificial Intelligence and Social Behaviors 2005 Convention (AISB'05), University of Hertfordshire, Hatfield, UK, 2005.
17. Charles Sanders Peirce. Collected Papers. The Belknap Press of Harvard University Press, Cambridge, 1965.
18. Magy Seif El-Nasr, John Yen, and Thomas R. Ioerger. FLAME: a fuzzy logic adaptive model of emotions. Autonomous Agents and Multi-Agent Systems, 3(3):219–257, 2000.
19. Andrew Ortony, Gerald Clore, and Allen Collins. The Cognitive Structure of Emotions. Cambridge University Press, Cambridge, UK, 1988.
20. Juan D. Velásquez. When robots weep: Emotional memories and decision-making. In Proceedings of the Fifteenth National Conference on Artificial Intelligence and Tenth Innovative Applications of Artificial Intelligence Conference (AAAI'98/IAAI'98), Madison, Wisconsin, United States, 1998. American Association for Artificial Intelligence.
21. Ian Paul Wright. Emotional Agents. PhD thesis, School of Computer Science, Cognitive Science Research Centre, The University of Birmingham, UK, 1997.