Cogn Comput (2009) 1:104–117 DOI 10.1007/s12559-009-9012-0

On the Role of Emotion in Embodied Cognitive Architectures: From Organisms to Robots

Tom Ziemke · Robert Lowe

Published online: 6 February 2009
© Springer Science+Business Media, LLC 2009

Abstract  The computational modeling of emotion has been an area of growing interest in cognitive robotics research in recent years, but also a source of contention regarding how to conceive of emotion and how to model it. In this paper, emotion is characterized as (a) closely connected to embodied cognition, (b) grounded in homeostatic bodily regulation, and (c) a powerful organizational principle—affective modulation of behavioral and cognitive mechanisms—that is 'useful' in both biological brains and robotic cognitive architectures. We elaborate how emotion theories and models centered on core neurological structures in the mammalian brain, and inspired by embodied, dynamical, and enactive approaches in cognitive science, may impact on computational and robotic modeling. In light of the theoretical discussion, work in progress on the development of an embodied cognitive-affective architecture for robots is presented, incorporating aspects of the theories discussed.

Keywords: Affect · Cognitive architectures · Cognitive robotics · Computational modeling · Embodied cognition · Emotion · Grounding · Homeostasis · Motivation · Organisms

T. Ziemke (corresponding author) · R. Lowe
Informatics Research Centre, School of Humanities & Informatics, University of Skövde, PO Box 408, 54128 Skövde, Sweden
e-mail: [email protected]
R. Lowe, e-mail: [email protected]


Introduction

The study of the relation between emotion and cognition has a long but mixed history in science and philosophy [59]. As Damasio [19] pointed out, while in the late nineteenth century emotion was considered to be of central importance to mind by influential thinkers such as Darwin, James and Freud, throughout most of the twentieth century it was commonly viewed as the very antithesis of reason, and therefore largely ignored in the sciences of the mind. In the last 10–20 years, however, there has been a steadily growing interest in emotion in the cognitive sciences, driven in particular by a wealth of neuroscientific insights into affective, motivational and emotional mechanisms and their role in cognition [18, 20, 21, 37, 54, 66, 67]. For example, in a recent review of the relations between emotion and cognition, with a focus on cognitive neuroscience studies of the human amygdala, Phelps [63] identifies five types of interaction that have been well studied and documented by now: (1) emotional learning, i.e., how stimuli acquire emotional properties; (2) emotion and memory, in particular how emotion influences the formation and recollection of episodic memory; (3) emotion's influence on attention and perception, facilitated by the amygdala's extensive connectivity with sensory processing regions; (4) emotion in processing social stimuli, e.g., the recognition of emotional facial expressions; and (5) changing emotional responses, in particular the influence of higher cognitive functions on emotional processing. Phelps' [63] overall conclusions are that in fact "mechanisms of emotion and cognition appear to be intertwined at all levels," and that indeed the scientific "understanding of human cognition requires the consideration of emotion."


Insights into the underpinnings of emotion, as well as its role in natural cognition in humans and other animals, have more recently also resulted in a growing body of work on computational models of emotion in artificial intelligence (AI), cognitive systems and robotics research [11, 28, 34, 74]. This paper aims to contribute to this work by addressing in further detail the role that emotional/affective mechanisms play in natural cognitive architectures, and might play in artificial ones, in cognitive robotics in particular. In a nutshell, the view put forward here is one of emotion as (a) closely connected to embodied cognition, (b) grounded in homeostatic bodily regulation, and (c) a powerful organizational principle—affective modulation of behavioral and cognitive mechanisms—that is ‘useful’ in both biological brains and robotic cognitive architectures. These three principles are not just intimately related but mutually reinforcing in the service of adaptive behavior in natural and artificial cognitive systems. The remainder of this paper is structured as follows: The section ‘‘Background: Organisms and Emotions in Cognitive Science’’ provides some historical and conceptual background that will be useful in motivating and framing the work discussed in this paper. The section ‘‘Who Needs Emotion, and What for?’’ then addresses in some more detail why and how emotion is relevant to natural cognitive systems, and why it should be relevant to artificial ones as well—in particular cognitive robots. The section ‘‘Emotion in Embodied Cognitive Architectures’’ focuses more specifically on different conceptions regarding the role of emotion in brain-based, embodied cognitive architectures—natural and artificial ones. The section ‘‘Toward an Embodied Cognitive-Affective Architecture’’ describes our own computational modeling work in progress toward an integrated embodied cognitive-affective architecture. 
The final section then summarizes and discusses the work presented here, and also briefly addresses some open research questions.

Background: Organisms and Emotions in Cognitive Science

From a historical perspective, the renewed and growing interest in emotion in the cognitive sciences is part of a larger shift from the so-called computer metaphor for mind and the view of cognition as mainly taking place 'in the head' toward theories and models of embodied cognition that emphasize the interaction of agents and their environments [12, 13, 31, 61, 62, 73, 76, 86]. However, much work on embodied cognition has been mainly focused on sensorimotor embodiment and the grounding of cognition in perception and action. This is particularly true for most research on autonomous agents and robots in embodied AI,


or ‘New AI,’ which replaced the computational functionalism of traditional AI with a robotic functionalism [32, 81], and thus has been predominantly focused on physical grounding and sensorimotor embodiment, i.e., the grounding of cognitive computation in robotic perception and action [10, 33, 61, 62, 71], without much regard for affective or organismic embodiment, i.e., the organismic roots of natural cognition [3, 4, 23, 78–80]. Damasio [19], on the other hand, identified what he called ‘‘the prevalent absence of a notion of organism in the sciences of mind and brain’’ as a problem, which he elaborated as follows: ‘‘It is not just that the mind remained linked to the brain in a rather equivocal relationship, but that the brain remained consistently separated from the body and thus not part of the deeply interwoven mesh of body and brain that defines a complex living organism.’’ [19, p. 84]. In a similar vein, Panksepp [56] argued: ‘‘As long as psychology and neuroscience remain more preoccupied with the human brain’s impressive cortico-cognitive systems than subcortical affective ones, our understanding of the sources of human consciousness [and cognition] will remain woefully incomplete’’ [56, p. 58]. Damasio further argued that nature has ‘‘built the apparatus of rationality not just on top of the apparatus of biological regulation, but also from it and with it’’ [18, p. 128]—a point we will get back to later in the discussion of the role of emotion in embodied cognitive architectures. In line with these arguments, which were originally not specifically directed at AI or cognitive robotics research, there is now also a growing interest in computational/ robotic models of the biological underpinnings of affect, motivation and emotion that goes beyond sensorimotor embodiment by also acknowledging the organismic, bioregulatory roots of embodied cognition [3, 4, 23, 84, 85]. 
Parisi [58], for example, recently argued for an "internal robotics," pointing out that:

"… behaviour is the result of the interactions of an organism's nervous system with both the external environment and the internal environment, i.e., with what lies within the organism's body. While robotics has concentrated so far on the first type of interactions (external robotics), to more adequately understand the behaviour of organisms we also need to reproduce in robots the inside of the body of organisms and to study the interactions of the robot's control system with what is inside the body (internal robotics)" [58].

As the reader might have noticed by now, the term 'emotion' has not yet been defined explicitly here, and to some degree it is used as overlapping or interchangeable with terms such as 'affect' and 'motivation'—both in this paper and in much of the literature. While the term 'affect'


is relatively easy to define, at least in general terms, as including drives, motivations, emotions, feelings, and moods, providing a more specific definition of 'emotion' is more difficult. On the one hand, 'emotion' is commonly used interchangeably with 'affect' in the above broad sense [59]. On the other hand, it is also commonly used in a narrower sense: according to Rolls' [67] succinct definition, for example, "emotions are states elicited by rewards and punishers." Others refer to sets of basic emotions, such as Ekman's [25] six basic emotions: anger, disgust, fear, happiness, sadness, surprise (for an extended list see [26]). Another interesting perspective is Panksepp's view of basic limbic emotional action systems shared by mammalian brains, including seeking, fear, rage, lust, care, panic (separation distress), and play [54, 57]. We will here follow Pessoa [59] in acknowledging that emotion (much like cognition, in fact) is difficult to define clearly, and that trying to provide yet another explicit definition might not be helpful. Hence, the focus here instead is on providing a selective review of theories and concepts related to affect and emotion, and their role in embodied cognitive architectures. As indicated above, the focus will be on what Panksepp [57] called "a multi-tiered affectively embodied view of mind" (referring to Damasio's work and his own), i.e., the embodiment of affect, emotion and cognition and their grounding in homeostatic biological regulation. Furthermore, the focus is here on what Arbib and Fellous [2] referred to as the 'internal' aspects of emotion, i.e., its role in the behavioral organization of individual agents (e.g., attention, learning, action selection, decision-making), rather than the 'external' aspects of emotion (expression and recognition) involved in social coordination and communication.

Who Needs Emotion, and What for?

Do robots need emotions? A pragmatic answer would be that robots, as currently conceived and constructed, simply do not have any needs (of their own) in the first place—and thus of course neither need emotions, nor energy, nor sensors, actuators, etc. A more relevant question then may be whether or not we, the human designers and users of robots, need or want robots to have or at least express emotions. From the scientific perspective of building computational and/or robotic models of emotion and cognition, the answer, again, is relatively simple: to the degree that robots constitute useful synthetic models of emotional mechanisms, or building blocks thereof, modelers will make use of such models in their work. From an engineering perspective the question rather is: Does building models of emotional/affective mechanisms into our robots make them more natural, more useful, or more efficient?


Again, the abovementioned distinction of Arbib and Fellous [2] between internal (individual) aspects of emotion and external (social) ones might be useful here. It seems that there are good arguments that in the latter case, in human–robot social interaction in particular, emotion expression does help to make human–robot interaction more natural [7, 8]—however, as mentioned above, the role of emotion in social interaction is not the focus of this paper, so we leave this topic aside. The internal aspects of emotion, on the other hand, i.e., its role(s) in the behavioral organization of the individual cognitive agent, are the focus of this paper. Some neuroscientific evidence regarding the central role of emotion in the organization of natural (human) cognition has already been mentioned in the introduction. A complementary answer to the question of who needs emotion, and what for, comes from Kelley [35]:

"Emotions are necessary for the survival of the individual and the species. Therefore, a simple answer to the title of this book ["Who needs emotions?" [28]] is that all organisms on earth need emotional systems, in their broadest biological definition. Emotional systems enable animals to more effectively explore and interact with their environment, eat, drink, mate, engage in self-protective and defensive behaviors, and communicate. Thus, a robot designed to survive in the world as successfully as its living counterparts undoubtedly would require an equivalent system, one that instills urgency to its actions and decisions—in short, one that motivates and directs" [35].

The evolutionary continuity implied by Kelley (as well as Panksepp and Damasio, cf. previous and next section) is also emphasized by Petta [60], who views emotion as "a flexible adaptation mechanism that has evolved from more rigid adaptational systems, such as reflexes and physiological drives." Petta further emphasizes in particular the (cognitive) role of emotional appraisal:

"The flexibility of emotion is obtained by decoupling the behavioral reaction from the stimulus event. The heart of the emotion process thus is not a reflexlike stimulus-response pattern, but rather the appraisal of an event with respect to its adaptational significance for the individual, followed by the generation of an action tendency aimed at changing the relationship between the individual and the environment" [60, p. 257].

As Prinz [65] discusses in more detail, the tradition of cognitive or appraisal theories of emotion is commonly presumed to be at odds with the tradition of perceptual or somatic theories of emotion that identifies emotions with physiological changes, or the brain's perception thereof. However, Prinz suggests that "this division is spurious.


Emotions are states that appraise by registering bodily changes"—a position he refers to as "embodied appraisal theory" [65, p. 78]. Similarly, Damasio's theory holds that emotions on the one hand fulfill a survival-related (bioregulatory, homeostatic, and adaptive) function and, on the other hand, constitute the basis of high-level cognition, self and consciousness [20–22]. More specifically, according to Damasio, emotions are "bioregulatory reactions that aim at promoting, directly or indirectly, the sort of physiological states that secure not just survival, but survival regulated into the range that we … identify with well-being" [22, p. 50]. Accordingly, emotional responses "alter the state of the internal milieu (using, for example, hormonal messages disseminated in the bloodstream); the state of the viscera; the state of the musculoskeletal system, and they lead a body now prepared by all these functional changes into varied actions or complex behaviours" [22, p. 51]. Prinz's and Damasio's theories are somewhat controversial in that, contrary to the typically anti-representational stance of much work on embodied cognition, such as enactive theories of cognition that also emphasize the biological grounding of cognition [30, 73, 76], they attribute emotions with a representational role in embodied cognition. Prinz [65] does so very explicitly:

"… emotions can represent core relational themes without explicitly describing them. Emotions track bodily states that reliably co-occur with important organism–environment relations, so emotions reliably co-occur with important organism–environment relations. Each emotion is both an internal body monitor and a detector of dangers, threats, losses, or other matters of concern. Emotions are gut reactions; they use our bodies to tell us how we are faring in the world" [65, p. 69].

Similarly, Damasio has argued that the essence of feelings of emotion lies in the mapping of bodily emotional states in the body-sensing regions of the brain, such as somatosensory cortex [20, 22]. The existence of such mental images of emotional bodily reactions is also crucial to Damasio's concept of the "as if body loop," a neural "internal simulation" (which uses the brain's body maps, but bypasses the actual body), whose cognitive function and adaptive value he elaborates as follows:

"Whereas emotions provide an immediate reaction to certain challenges and opportunities … [t]he adaptive value of feelings comes from amplifying the mental impact of a given situation and increasing the probabilities that comparable situations can be anticipated and planned for in the future so as to avert risks and take advantage of opportunities" [22, pp. 56–57].


Damasio’s view of feelings can be contrasted with Panksepp’s [57] position that ‘‘core emotional feelings … reflect activities of massive subcortical networks that establish rather global states within primitive body representations that exist below the neocortex’’ [57, p. 64]. Dynamical systems theories of emotions have tended to avoid the concept of ‘representation’. Freeman [29], for example, suggests that the view of the brain as a passive stimulus processor/representer that produces emotional responses consistent with a sequential sense-think-act classical cognitivist conception should rather be replaced by the pragmatist view of emotions as being generated from within. For Freeman ‘‘humans and other animals maintain a stance of attention and expectation’’ [29, p. 97] and emotions are initiated in intentional dynamics. The hub of the emotional activation system lies in the self-organizing dynamics of a limbic space-time loop rather than predominantly in areas of sensory cortex. In this view, there is ‘‘[no] representation like a map, a look-up table, or a fixed memory store’’ [29, p. 103]. In a similar vein, Lewis [39] has criticized appraisal theories of emotion with respect to their emphasis on a cognitive causal precedence, i.e., where emotions are considered secondary to appraised events that are somehow external to, and independent from, the perceiving organism. His dynamic systems approach instead views emotions and cognitive appraisals as inseparable phenomena engendered through the interactions of microconstituent processes of emotion and appraisal that give rise to stable macro-states from which emotional learning can occur. 
While generally supportive of the dynamic systems position that Lewis holds, Colombetti and Thompson [14] have criticized the view that there exist distinct micro-constituents of emotion and appraisal, on the grounds that "it may ultimately prove unproductive even to try to differentiate distinct 'appraisal constituents' and 'emotion constituents,' which then 'interact' in the formation of an emotional interpretation. Rather we suspect that there may be no appraisal constituent that is not also an emotion constituent, and vice versa" [14, p. 200]. As already alluded to above, eschewing representationalist language altogether is consistent with the general enactive view of emotions [14, 15, 50, 73, 75]. Our own position, in line with some of the above arguments, is that cognition and emotions are inseparable in that "emotions cannot be seen as mere 'coloration' of the cognitive agent, understood as a formal or un-affected self, but are immanent and inextricable from every mental act" [75, p. 61]. However, this is not to say that representationalist language necessarily needs to be discarded altogether. In Lowe et al. [40], for example, a point has been made about the complementarity of the embodied appraisal position of Prinz [65] and the dynamic systems


theory of Lewis [39]. In the latter, the emphasis is on finding a common language for emotions in the disciplines of psychology and neuroscience rather than on physical and somatic bodily effects (although these are acknowledged to be important—Lewis, personal communication). If stable emotion-appraisals realized through neural activity can be considered to permit learning, Prinz's embodied appraisals relating physiological states to the core relational themes of Lazarus [36] might be that which can be learned. It may also be the case that aspects of embodiment (sensorimotor and non-neural somatic states) might be part and parcel of the dynamic stabilizing process. The interrelationship between emotion and cognition will be addressed more thoroughly in the next section. Finally, it might be worth noting that, while from a philosophical perspective the use of terms like "mental images" or "representations" in theories of embodied cognition and emotion is controversial, from a scientific and/or engineering perspective the question of terminology is secondary. For example, whether or not humans and/or other animals make use of as-if body loops as mentioned earlier in this section, and whether or not cognitive robots could or should make use of such mechanisms, is more or less independent of whether or not such mechanisms should be considered "representations." There is, nevertheless, a difference between strongly representationalist approaches to cognitive-affective robotics, i.e., using boxes that stand in for ad hoc mechanisms and label states as 'emotions,' 'feelings,' etc., which amounts to a 'shallow' way of modeling affective states [69], and approaches that use mechanisms argued to be constitutive of representational and/or emotional phenomena. The latter approach offers greater scope for emergence and flexibility in robot behavioral performance, and is thus of interest also from an engineering perspective.
The next section will address in more detail what kind of computational architectures have been postulated as models of affective/emotional mechanisms and their role in embodied cognition.

Emotion in Embodied Cognitive Architectures

If we accept the premise that emotion/affect are important not just for biological cognitive systems, but at least potentially also for artificial ones, then the question arises as to how such mechanisms can be adequately modeled. More specifically, if emphasis is placed on the inseparability of cognitive and affective phenomena, how can such an inter-dependence be practically broken down, thereby providing scope for computational modeling? Traditional AI-style cognitive architectures tend to be realized in a multi-tiered format with three or four levels


and modules that are more or less functionally independent. The choice of particular tiers is then the subject of contention. Such a perspective lends itself to an engineering approach, given that layers can be neatly separated and new layers may be built on top of existing ones to extend the functional capabilities of the artificial cognitive system. Given the foundation of a static architecture, emotions and other affective processes can be conveniently expressed and located. As Arbib and Fellous [2] point out, the role of emotion can be situated and analyzed at different levels in such architectures. Ortony et al. [53], for example, analyze the interactions of affect (value), motivation (action tendencies), cognition and behavior at three levels of reactive, routine, and reflective information processing in a three-level architecture for unanticipated tasks in unpredictable environments. In a similar vein, Sloman [70] distinguishes between reactive, deliberative, and meta-management levels, and identifies emotion mainly with reactive alarm systems that are capable of (a) detecting situations that might require a global re-direction of resources, and (b) communicating such needs to other, including higher-level, systems. Arbib and Fellous [2] suggest a combination of these two three-level schemes into one with four levels: "reactive, routine, reflective–deliberative, and reflective–meta-management." The above examples are mainly based on integrating emotional appraisals into architectures with relatively independent, functionally encapsulated information processing levels. Translating more strongly biologically grounded conceptions of emotion and its role in embodied cognition, such as those of Damasio or Panksepp, into embodied cognitive architectures is more challenging due to the fact that there is no simple mapping between brain areas and their functionalities ([59]; cf. below).
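For readers more familiar with code than with box diagrams, the layered scheme just described can be caricatured in a few lines. The following is a minimal, hypothetical sketch of a Sloman-style three-level controller with a reactive alarm; all class names, thresholds, and actions are invented for illustration and do not come from the cited architectures.

```python
# Illustrative sketch (not Sloman's implementation): a three-level control
# loop in which a reactive "alarm" can interrupt deliberation and globally
# redirect resources, while a meta-management level observes both.

class ReactiveLayer:
    """Fast stimulus-response rules, including an alarm detector."""
    def step(self, percept):
        if percept.get("threat", 0.0) > 0.8:       # hypothetical alarm condition
            return {"action": "flee", "alarm": True}
        return {"action": None, "alarm": False}

class DeliberativeLayer:
    """Slower goal-directed processing; consulted only when no alarm fires."""
    def step(self, percept):
        return {"action": "pursue_goal"}

class MetaManagementLayer:
    """Monitors the other layers, here just counting alarm-driven interrupts."""
    def __init__(self):
        self.interrupts = 0
    def monitor(self, reactive_out):
        if reactive_out["alarm"]:
            self.interrupts += 1

class ThreeLevelAgent:
    def __init__(self):
        self.reactive = ReactiveLayer()
        self.deliberative = DeliberativeLayer()
        self.meta = MetaManagementLayer()
    def step(self, percept):
        r = self.reactive.step(percept)
        self.meta.monitor(r)
        if r["alarm"]:                  # global redirection of resources
            return r["action"]
        return self.deliberative.step(percept)["action"]

agent = ThreeLevelAgent()
print(agent.step({"threat": 0.9}))   # → flee (alarm overrides deliberation)
print(agent.step({"threat": 0.1}))   # → pursue_goal
print(agent.meta.interrupts)         # → 1
```

The point of the sketch is the control flow: an alarm detected at the reactive level redirects behavior before the deliberative level is even consulted, while the meta-management level merely observes.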
As mentioned in the previous section, according to Damasio, nature (evolution) has built the apparatus of cognition "not just on top of the apparatus of biological regulation, but also from it and with it." How this view might be translated into computational cognitive architectures for behavioral organization and integration in embodied cognitive systems is the focus of this section. Damasio [21] distinguishes between the use of the term emotions in a broad sense, which includes other affective mechanisms such as pleasure and pain behaviors, drives, and motivations—in line with Kelley's above "broadest biological definition"—and what he refers to as "emotions-proper." The relation between the different types of mechanisms—according to Damasio constituting different levels of automated homeostatic regulation that are organized in a tree-like hierarchy—is illustrated in Fig. 1. The relation between different levels of homeostatic regulation is, according to Damasio [21], characterized by what he calls the "nesting principle":


Fig. 1 Hierarchy of levels of automated homeostatic regulation, from metabolic regulation, basic reflexes, and immune responses, through pain and pleasure behaviors, up to drives and motivations. Adapted from Damasio [21, p. 32]

"Some of the machinery of the immune system and of metabolic regulation is incorporated in the machinery of pain and pleasure behaviours. Some of the latter is incorporated in the machinery of drives and motivations (most of which revolve around metabolic corrections and all of which involve pain or pleasure). Some of the machinery from all the prior levels—reflexes, immune responses, metabolic balancing, pain or pleasure behaviours, drives—is incorporated in the machinery of the emotions-proper" [21].

When it comes to how emotion might be integrated in layered control architectures for cognitive robots, one possible starting point is the work of Prescott et al. [64], who analyzed similarities between behavior-based robotic subsumption architectures [9, 10] and the layered organization of the mammalian brain. Figure 2 illustrates a view

Fig. 2 Behavioral organization of defensive behaviors as a subsumption architecture (as an example of the layered architecture of the mammalian brain). Adapted from Prescott et al. [64]


of the hierarchical organization of defensive behaviors in the rat in the form of a subsumption architecture (where higher levels can suppress and override lower ones). The levels span from low-level reflexive mechanisms, over midbrain-mediated mechanisms (such as flight-or-fight responses) and amygdala-mediated conditioned responses, to cortical cognitive mechanisms. In this scheme, emotion is mainly associated with the role of the amygdala. The amygdala is the brain structure most commonly considered to play a central role in affective/emotional mechanisms [37, 38, 59, 63], but it may also be viewed as part of an integrated system in which the interplay between the amygdala and areas of prefrontal cortex is of central functional importance. Arbib and Fellous [2], for example, elaborate:

"The amygdala can influence cortical areas via feedback from proprioceptive, visceral or hormonal signals, via projections to various 'arousal' networks, and through interaction with the medial prefrontal cortex … [as illustrated in Fig. 3]. The prefrontal cortex, in turn, sends distinct projections back to several regions of amygdala, allowing elaborate cognitive functions to regulate the amygdala's roles in emotion. … Because of the tight interactions between amygdala and prefrontal cortex, it is likely that our ability to generalize and abstract is directed by (and influences, in turn) some aspects of our emotional state. How this is done, and how robots could take advantage of it remains an open question" [2, p. 556].
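The subsumption-style layering described above can likewise be sketched in code: higher layers, when active, suppress the proposals of the layers beneath them. The layer names, trigger conditions, and responses below are simplified placeholders loosely echoing Fig. 2, not a model of Prescott et al.'s analysis.

```python
# Minimal subsumption-style sketch: each layer proposes a response when its
# trigger condition holds; scanning from the highest layer down means an
# active higher layer overrides (suppresses) everything below it.

class Layer:
    def __init__(self, name, trigger, response):
        self.name, self.trigger, self.response = name, trigger, response
    def propose(self, stimuli):
        return self.response if self.trigger(stimuli) else None

# Ordered lowest (index 0) to highest; later layers take precedence.
layers = [
    Layer("spinal cord", lambda s: "contact" in s, "reflexive withdrawal"),
    Layer("hindbrain", lambda s: "sudden" in s, "startle"),
    Layer("midbrain", lambda s: "threat" in s, "freeze/flight/fight"),
    Layer("amygdala", lambda s: "conditioned_cue" in s, "conditioned response"),
    Layer("cortex", lambda s: "ambiguous" in s, "cognitive appraisal"),
]

def select_action(stimuli):
    for layer in reversed(layers):         # highest layer first
        proposal = layer.propose(stimuli)
        if proposal is not None:
            return layer.name, proposal
    return None, "no action"

# A learned (conditioned) cue overrides the low-level contact reflex:
print(select_action({"contact", "conditioned_cue"}))
print(select_action({"contact"}))
```

With only a contact stimulus the spinal-cord reflex wins by default; adding a conditioned cue lets the amygdala layer override it, which is the suppression relationship the figure depicts.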



Fig. 3 Interactions between amygdala, cortical, and subcortical areas in the mammalian brain (mPFC/dlPFC: medial/dorsolateral prefrontal cortex). Adapted from Arbib and Fellous [2], based on LeDoux [38]

Fig. 4 Pessoa's conceptual proposal for the relation between brain areas (A1–A4), networks of areas, the (multiple) neural computations they contribute to (NC1–NC4), and the cognitive-affective behaviors that result from the interaction of those neural computations. Adapted from Pessoa [59, p. 154]


To further complicate matters for the computational modeling of emotion, emotional/affective states appear to be realized by neural 'systems,' i.e., networks of interacting brain areas whose integrated activity varies with context, rather than by immutable neural structures with an invariant association to specific emotional/affective states. Pessoa [59], for example, in a recent review paper on the emotion–cognition relationship, questions the prevalent view of brain organization "that there is a considerable degree of functional specialization and that many regions can be conceptualized as either 'affective' or 'cognitive,'" which he considers "problematic for a number of reasons." In contradistinction to this view, he argues that "complex cognitive-emotional behaviors have their basis in dynamic coalitions of networks of brain areas, none of which should be conceptualized as either 'affective' or 'cognitive.' Central to cognitive emotional interactions are brain areas with a high degree of connectivity, called hubs, which are critical for regulating the flow and integration between regions" [59, p. 148]. He further discusses the amygdala, which is involved in a number of 'affective' functions, in particular fear processing [37, 43], but also a number of 'cognitive' functions, including attention and associative learning, as an example of such a connector hub and thus a "strong candidate for integrating cognitive and emotional information" [59, p. 152]. His view of the relation between brain areas, neural computations, and behaviors is illustrated in Fig. 4.
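Pessoa's notion of a connector hub can be given a toy illustration: in a graph of brain areas, a hub is simply an area with unusually high connectivity. The adjacency list below is invented for illustration and makes no anatomical claims.

```python
# Toy illustration of a "connector hub": the area with the highest degree
# (number of connections) in a small, made-up graph of brain areas.

connections = {
    "amygdala": ["sensory_cortex", "prefrontal_cortex", "hippocampus",
                 "hypothalamus", "thalamus"],
    "sensory_cortex": ["amygdala", "thalamus"],
    "prefrontal_cortex": ["amygdala", "hippocampus"],
    "hippocampus": ["amygdala", "prefrontal_cortex"],
    "hypothalamus": ["amygdala"],
    "thalamus": ["amygdala", "sensory_cortex"],
}

degree = {area: len(neighbors) for area, neighbors in connections.items()}
hub = max(degree, key=degree.get)   # most densely connected area
print(hub, degree[hub])             # → amygdala 5
```

Real analyses use far richer connectivity measures than raw degree, but the sketch captures the structural idea: a hub is positioned to regulate the flow and integration of information between otherwise separate networks.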
This view is also largely compatible with Arbib and Fellous’ [2] proposal that emotion is closely connected to the operation of neuromodulators [27], i.e., ‘‘endogenous substances … released by a few specialized brain nuclei that have somewhat diffuse projections throughout the brain and receive inputs from brain areas that are involved at all levels of behavior from reflexes to cognition,’’ as


Fig. 5 Arbib and Fellous’ [2] view of behavioral organization with respect to potential for neuromodulation and action specificity— mapping brain and nervous system structures to (examples of) reflexes, drives, instincts and motivations, and cognitions (CPG: central pattern generators, PAG: periaqueductal gray, RF: reticular formation, NTS: nucleus of the solitary tract). The ellipses represent zones of neural recruitment during emotional expression and experience, whose neural substrate is argued to be intimately linked to that of neuromodulation. Adapted from Arbib and Fellous [2]—for details see also Fellous [27]

illustrated in Fig. 5. Based on Kelley [35], Arbib and Fellous [2] discuss three main neuromodulatory systems involved in emotion: (1) dopamine, which ‘‘plays essential roles all the way from ‘basic’ motivational systems to


working memory systems essential for linking emotion, cognition and consciousness,'' (2) serotonin, which has been implicated in, among other functions, behavioral state regulation and arousal, mood, motor pattern generation, learning and plasticity, and (3) opioids, which are ''found particularly within regions involved in emotional regulation, responses to pain and stress, endocrine regulation and food intake'' [2, p. 558]. In more philosophical/theoretical terms, the view of affect/emotion as playing a central role in behavioral organization in embodied cognizers is also closely related to Barandiaran and Moreno's [3] notion of emotional embodiment. They adopt Edelman's [24] distinction between the sensorimotor nervous system (SMNS) and the nervous system of the interior (INS), the latter including the autonomic nervous system, the neuroendocrine system, the limbic system, and related structures. The main functions of the INS include homeostatic regulation of internal organs, bodily readiness for action, and the production of value signals for the SMNS. Following Damasio [18, 20] and Lewis [39], Barandiaran and Moreno [3] suggest that it is the complex interplay between INS and SMNS that gives rise to emotional embodiment, ''an often neglected aspect of organismic embodiment.'' They further elaborate:

The interaction between the INS and the SMNS becomes … of fundamental importance for neural and behavioral organization to the extent that the adaptive regulatory capacity of the INS over the SMNS will be recruited by the latter to regulate its own autonomy.
… The adaptive web of dynamic dependencies that are created within and between the NS and its coupling with the metabolic body and with the environment is what we shall call cognitive organization … Our main hypothesis is, therefore, that the specificity of cognitive dynamics … is given by a particular kind of dynamic organization within the NS and between the NS and the internal and external environment, i.e., the adaptive preservation of a web of dynamic sensorimotor structures sustained by continuous interactions with the environment and the body (specially through the interaction between SMNS and INS) [3, pp. 180–181]. Moreno et al. [49] further emphasize that in complex organisms higher levels of cognitive autonomy are connected with hierarchical levels of control of metabolic organization through the nervous system. They point out that: … the increasing process of autonomization in the evolution of vertebrates goes together with the fact that their metabolic organization is fully and precisely controlled by their brain. Their characteristic


agency has been made possible by the development of the nervous system, which evolved as a powerful regulatory mechanism to control and integrate complex underlying processes. There is a strong association between the evolution of highly integrated and complex bodies and the evolution of cognitive autonomy… In other words, systems with higher degrees of autonomy show an increase in the capacity to create and/or take over complex and larger environmental interactions, because of a more intricate organization of their constitutive identity. Their autonomy is also based on a circular, recursive organization, but this also includes many hierarchical levels and many embedded regulatory controls [49]. The next section will describe in some more detail our own computational modeling work on developing an embodied cognitive-affective architecture for robots, which is motivated by some of the theories and models discussed above.
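The hierarchical regulatory picture can be caricatured in a few lines of code. The sketch below is our own hypothetical illustration (not a formal model from Moreno et al. [49]; all names and parameters are invented): a lower 'metabolic' level in which internal variables passively decay, and a higher level that selects behavior from the largest current deficit, i.e., a minimal instance of embedded regulatory control.

```python
# Toy two-level regulator (hypothetical illustration): the lower level is
# passive metabolic decay; the higher level picks the behavior addressing
# the largest deficit relative to a homeostatic setpoint.

SETPOINTS = {"energy": 1.0, "water": 1.0}

def control_cycle(state: dict, decay: float = 0.05, gain: float = 0.2) -> str:
    """One cycle: decay all internal variables, then act on the worst deficit."""
    for var in state:                                  # lower level: metabolism
        state[var] -= decay * state[var]
    deficits = {var: SETPOINTS[var] - state[var] for var in state}
    action = max(deficits, key=deficits.get)           # higher level: selection
    state[action] += gain                              # acting restores the variable
    return action

state = {"energy": 0.9, "water": 0.5}
actions = [control_cycle(state) for _ in range(20)]
print(actions[:5], state)
```

Even this caricature reproduces the basic phenomenon of opportunistic switching between 'drinking' and 'foraging' as deficits shift, which is what the two-resource problem discussed below probes in richer settings.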

Toward an Embodied Cognitive-Affective Architecture

This section discusses work in progress in our lab that is part of a larger European cognitive robotics project called ICEA—Integrating Cognition, Emotion and Autonomy (www.iceaproject.eu)—bringing together neurophysiologists, computational neuroscientists, cognitive modelers, roboticists, and control engineers. The project as a whole is too complex to describe in much detail here, but one of the primary aims is to develop a cognitive systems architecture integrating cognitive, emotional, and autonomic/homeostatic mechanisms, based on the architecture and physiology of the mammalian brain. The general approach taken in the project is to computationally model, at different levels of abstraction, different brain structures and their interaction, ranging from cortical areas through the amygdala to areas such as the hypothalamus and brainstem, which deal with 'low-level' mechanisms for drives, bioregulation, etc. Building on the previous Psikharpax project [48], among others, the rat is used as the starting point for developing different rat-inspired robotic and simulation platforms used to model a range of behavioral, emotional, and cognitive capacities, such as survival-related behaviors (e.g., foraging, energy management, and fear-related responses), spatial navigation, different types of learning, as well as emotional decision-making and planning (not necessarily limited to what is documented for the rat though). The twofold hypothesis behind the research in the ICEA project—much in line with the theoretical




[Fig. 6 graphic: a schematic relating Internal Organization (metabolism; drives, motivations, emotions, feelings) and Behavioral Organization (reflexes, approach-avoidance behavior, sequenced and multi-sequenced behaviors) to essential mechanisms (reactive sensorimotor activity and second-order feedback loops; value-based learning with basic working memory, associative learning, and reward/punishment prediction; extended working memory, interoception, and internal simulation of behavior, i.e., planning), relevant task domains (two-resource problem, go-no-go, Iowa gambling task), and robot capabilities in McFarland's (2008) sense (energy autonomy, motivational autonomy, mental autonomy)]

Fig. 6 Cognitive-affective architecture schematic involving different levels of homeostatic regulation and behavioral organization in robotic agents. The left-hand side relates essential mechanisms to organism-integrated organization with respect to adequate to superior performance on particular behavioral tasks. The right-hand side relates levels of robot autonomy potentially achievable through adherence to the schema
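For readers who prefer a compact summary, the schematic's mapping can also be written down as data. The grouping below is our reading of the Fig. 6 layout and should be checked against the figure itself; the level, mechanism, and task labels are taken from the figure, while the structure of the lookup is hypothetical:

```python
# Our reading of the Fig. 6 schematic as a lookup table (labels from the
# figure; the grouping of mechanisms under levels is our interpretation).
ARCHITECTURE_LEVELS = [
    {"autonomy": "energy autonomy",            # McFarland (2008) capability
     "mechanisms": ["reactive sensorimotor activity", "2nd-order feedback loop"],
     "task": "two-resource problem"},
    {"autonomy": "motivational autonomy",
     "mechanisms": ["value-based learning", "basic working memory",
                    "associative learning", "reward/punishment prediction"],
     "task": "go-no-go"},
    {"autonomy": "mental autonomy",
     "mechanisms": ["extended working memory", "interoception",
                    "internal simulation of behavior (planning)"],
     "task": "Iowa gambling task"},
]

def essential_mechanisms(task: str) -> list:
    """Look up the mechanisms hypothesized as essential for a given task."""
    for level in ARCHITECTURE_LEVELS:
        if level["task"] == task:
            return level["mechanisms"]
    raise KeyError(task)
```

As emphasized in the text, 'essential' here means hypothesized as indispensable, not sufficient, for adequate performance on the associated task.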

discussions above—is that (1) the emotional and bioregulatory mechanisms that come with the organismic embodiment of living cognitive systems also play a crucial role in the constitution of their high-level cognitive processes, and (2) models of these mechanisms can be usefully integrated in artificial cognitive systems architectures. This would constitute a significant step toward more autonomous robotic cognitive systems that reason and behave, externally and internally, in accordance with energy and other self-preservation requirements, and thus sustain themselves over extended periods of time (e.g., energetically autonomous robots that generate their own energy from biological material by means of microbial fuel cells, i.e., a simple form of robotic metabolism, cf. [47]). Our own lab’s current work-in-progress in ICEA is concerned with, among other things, the development of a minimal enactive/embodied cognitive-affective architecture for behavioral organization and integration in robotic

agents.¹ As illustrated schematically in Fig. 6, this is strongly inspired by Damasio's view of multiple levels of homeostatic regulation (cf. above) and other theories/concepts discussed in this paper. It also incorporates aspects of self-organized views on emotion–cognition and their emergence [29, 39, 55], but emphasis is placed on the relevance of organismic/affective embodiment—neural and non-neural bodily activity integrated with sensorimotor activity—to the self-organized process. The schematic follows a three-tiered approach involving the arguably artificial separation of constitutive organization, inspired by Damasio's nested tree of homeostatic processes (cf. Fig. 1), into internal and behavioral


¹ The term 'enactive' is here meant in the broad sense of viewing cognition as grounded in self-maintenance (cf. e.g., Vernon et al. [77]: ''The only condition that is required of an enactive system is effective action: that it permit the continued integrity of the system involved.''), not in the narrower sense involving a specific commitment to autopoietic organization [30].


organizational domains. These domains, in interaction with a given environment, fully constitute and simultaneously constrain the constitutive/homeostatic organization of organisms (see Froese and Ziemke [30], as well as Barandiaran and Moreno [4], for similar/alternative organismic 'partitioning'). Affective-cognitive states are emergent from, but also instigative of, the organism's interactions with its environment; there need be no causal precedence with respect to the generation of affective-cognitive activity and the perception of emotionally 'significant' external stimuli (similar to [29]). In this sense relatively atemporal homeostatic states, e.g., metabolism, regulation of blood glucose levels, constrain the types of affective-cognitive states permissible but are in turn constrained, or perhaps entrained, by more temporally extended states² entailing the integration of somatic and sensorimotor information over time—both in the extended present and in the longer-term future. The affective-cognitive state thus engendered can be considered a whole-organism response rather than confined to the internal (e.g., as located in cortico-limbic circuitry), the behavioral, or the social-constructive. Early work on this approach has been described in detail elsewhere [40–43, 51]. In the introduction section of this paper, we described our view of emotion as (a) closely connected to embodied cognition, (b) grounded in homeostatic bodily regulation, and (c) a powerful organizational principle. However, in order to elaborate further what the embodied cognitive, homeostatically regulated and organizational nature of the cognitive-affective architecture in Fig.
6 can mean to researchers interested in biological organisms on the one hand and roboticists interested in practical applications on the other, we are compelled to link levels of organizational sophistication inherent in the schematic to particular biologically inspired mechanisms and to task-related capabilities, respectively. The neurobiologically inspired computational models thus far developed include relatively abstract models of a robotic metabolism (cf. above), hypothalamic energy regulation (see [41] for preliminary details), dopaminergic modulation, the amygdala, a cortical hierarchy, and the interaction of these mechanisms. These are specifically tested on a number of behavioral decision-making tasks suitable for robotic agents, such as two-resource problems, go-no-go tasks, and a robotic version of the Iowa gambling task (commonly used in emotional decision-making experiments with human subjects, e.g., in Damasio's work; cf. [44]). On the left-hand side of Fig. 6, it is indicated which internal-behavioral organizational levels are of relevance to the aforementioned behavioral tasks. On the right-hand side, the schematic is directly linked to levels of autonomy achievable by robots as described by McFarland [45]. It should be noted that the essential mechanisms are not necessarily sufficient for even adequate performance on the task but are those that we hypothesize to be indispensable (hence 'essential'). The justification for choice of mechanisms and relevance of the particular tasks as tests of levels of robot autonomy and affective-cognitive competence is based on evidence obtained from the relevant literature and detailed more thoroughly in Lowe et al. [44]. The approach is rooted in biological inspiration—our model used on the two-resource problem, for example, is inspired by glucose regulation of hypothalamic activity [5] as opposed to the more ethology-inspired 'cue-deficit' model of McFarland and Spier [46]. In general, we may say that the particular tasks referred to in Fig. 6 entail continuity regarding behavioral requirements for adequate and superior performance, and overlap with respect to the hypothesized underlying cognitive mechanisms. Consistent with our emphasis on organismic/affective embodiment, homeostatic regulation and integrated organization, our approach is to abstractly model biological systems such that those internal (neural and non-neural) mechanisms identified as necessary to task performance are captured but that their activity is intimately tied to behavioral feedback and structural change (development/learning). An example of the particular approach we espouse can be found in the work carried out by Alexander and Sporns [1]. In their model, inspired by an identified 'reward prediction' circuit in the mammalian brain, internal and behavioral regulation are intimately related.

² Such dynamic states might be considered to have a longer temporal trajectory not easily captured by the narrow 'negative feedback' sense of homeostasis. The term 'allostasis' has been offered to describe more complex regulatory processes; for some advocates of the term allostasis constitutes a form of homeostasis, while for others it represents a different type of regulation [72]. Also see Lowe et al. [42] for a discussion.
A dopaminergic system is able to instigate behavioral responses based on the timing of the acquisition of a primary rewarding stimulus ('food' resource). Where the expected time of reward acquisition—as cued by the visual perception of the stimulus—is not met, a reward prediction error is registered and the dopaminergic system is engaged, invoking 'value-dependent' learning in prefrontal cortex and motor cortex analogue structures, thereby altering the behavioral response. After successfully modeling mammalian dopaminergic phasic responses for reward predictions in a completely disembodied computational model, Alexander and Sporns [1] used the model as a robot controller and discovered interesting patterns of foraging behavior. Defining what is cognitive and what is affective in such a robotic controller seems arbitrary in these circumstances, since all aspects of robot activity, from internal regulation through to behavior, are intimately temporally linked, and it is such fine-tuned integration that permits great flexibility and autonomy. Such a model might be viewed as



motivationally autonomous in McFarland's [45] terms, but the model is limited when viewed with respect to Damasio's nested hierarchy of homeostatic regulation, given its failure to capture the regulatory complexity of dopaminergic activity or of energy homeostasis (energy autonomy). Alexander and Sporns' model is also limited with respect to its ability to transfer the dopaminergic phasic response to stimuli predictive of reward acquisition—it is seemingly not capable of real classical conditioning. The integration of sub-nuclei of the amygdala (e.g., the basolateral nucleus) with the prefrontal cortex has already been described as being of great significance to emotional activity and behavior. This integration is described with respect to performance on the Iowa gambling task [6] and figures in hypotheses alternative to Bechara and Damasio's somatic marker hypothesis as to how human subjects carry out the task. Integrating mechanisms relevant to the prefrontal cortex (e.g., orbitofrontal cortex or ventromedial prefrontal cortex) and the amygdala thus appears to be key to the engendering of emotion–cognition-like processes. Fully functional integration of our computational models, permitting regulation across all three tiers of our cognitive-affective architecture, is required for emotion–cognition processes to be fully embodied, and this is the ultimate goal of our group's research effort. Central to this integration is identifying which cognitive-affective mechanisms and their means of interaction are necessary and sufficient for adequate to superior performance on the decision-making tasks identified, and what is thus the minimal architecture applicable (pitched at a high level of abstraction with respect to the underlying neurobiological details).
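The reward-prediction-error signal at the heart of such dopaminergic models can be caricatured by a standard tabular TD(0) update, in which the error term delta plays the role of the phasic dopamine response. This is a textbook construction offered purely for illustration, not Alexander and Sporns' actual network; the states and parameters are hypothetical:

```python
# Minimal TD(0) caricature of a reward prediction error (RPE): delta stands
# in for the phasic dopaminergic signal. Not Alexander and Sporns' model.

def td0_update(V, s, s_next, r, alpha=0.1, gamma=0.9):
    """Update the value estimate V[s]; return the prediction error delta."""
    delta = r + gamma * V[s_next] - V[s]
    V[s] += alpha * delta
    return delta

V = {"cue": 0.0, "reward_state": 0.0, "end": 0.0}
for _ in range(200):                      # repeated cue -> reward episodes
    d_cue = td0_update(V, "cue", "reward_state", r=0.0)
    d_rew = td0_update(V, "reward_state", "end", r=1.0)

# With learning, the error at reward delivery shrinks toward zero while the
# cue's value grows toward gamma * V['reward_state'].
print(round(V["cue"], 2), round(d_rew, 6))
```

The transfer of the 'burst' from reward delivery to the predictive cue is precisely the classical-conditioning-like capability whose absence in the model discussed above is noted in the text.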

Discussion and Conclusions

For a journal such as Cognitive Computation, which according to its own mission statement is devoted to ''biologically inspired computational accounts of all aspects of natural and artificial cognitive systems'' (cf. www.springer.com/12559), important questions include: which mechanisms need to be included in accounts of natural and artificial cognition, how much biological detail is required in scientific accounts of natural cognition, and how much biological inspiration is useful in the engineering of artificial cognitive systems? This paper has discussed arguments that, contrary to the traditional view in cognitive science, cognition and emotion are actually closely interrelated, or in fact inseparable. It has further been argued, in line with, for example, Damasio [19, 20], that the notion of organism should play a much stronger role in the sciences of mind and brain than it currently does, in particular in cognitive robotics and



AI [30]. Accordingly, the view of emotion that has been presented here is one of embodied cognition and emotion as grounded in multiple levels of affective and homeostatic bodily regulation, including motivations, drives, metabolic regulation, etc. While from the perspective of scientific modeling, a better understanding of these mechanisms, and their contributions to cognition, clearly is desirable, it remains to be seen to what degree such a multi-level view of homeostasis and emotion also can be meaningfully and usefully transferred to cognitive systems engineering and robotics. That robots and organisms could not necessarily be expected to have or require exactly the same levels of homeostatic regulation should be quite clear from the above discussions of theoretical accounts of affectively embodied cognition and the underlying biological mechanisms. For example, metabolic regulation in organisms and robotic energy management are obviously quite different although there might very well be useful common principles to be extracted from our understanding of natural cognitive systems and transferred to the engineering of artificial ones [47]. Similarly, it is quite clear that from an engineering perspective several of the so-called self-X properties of living systems (where X = maintenance, preservation, monitoring, repair, adaptation, etc.) would be highly desirable in autonomic systems technology, although we currently do not really know if they could be, or necessarily should be, reproduced to a high degree of biological detail. The discussion has here focused on biologically based, in particular brain-based, embodied cognitive architectures, and the role that emotional/affective mechanisms play in such systems. 
While research in affective and cognitive neuroscience in the last 10–20 years has provided many insights into the neural underpinnings of these mechanisms, e.g., the role of specific brain areas, such as the amygdala or orbitofrontal cortex, in emotional processes, there still is a limited systems-level understanding of the way different brain areas interact in producing the neural computations underlying cognitive-emotional behaviors [59]. The fact that computational modeling of the underlying mechanisms can be carried out at different levels of biological detail is definitely a strength in this context, given that it allows researchers to shift between different levels of abstraction in scientific explanation, which is important in interdisciplinary fields such as the cognitive and affective sciences—although admittedly from a cognitive systems engineering perspective it is not always clear which degree of biological inspiration or which level of biological detail is the most appropriate in, for example, robotic implementations. Hence, the required level of biological inspiration and detail in computational or robotic models of cognitive-emotional interactions, as well as their grounding in


affective embodiment and homeostatic mechanisms, definitely remains an open research question, the answer to which most probably will vary from case to case, depending on the specific scientific and engineering purposes that motivate the development of such models. Other open research issues include the following:

• The role of emotion (expression and recognition) in social interactions (cooperative or adversarial) has not been addressed here in any detail, due to the focus on behavioral organization in individual agents. It might be worth noting though that much work remains to be done in understanding the interplay of what Arbib and Fellous [2] referred to as the 'internal' (individual) and 'external' (social) aspects of emotion. This applies in particular to the interaction between humans and different types of interactive technology, e.g., human–robot interaction or the interaction with simulated/animated virtual characters, e.g., in computer games. In this type of research the expression and recognition of emotion are typically more or less completely separated from the 'having' of emotions, i.e., the role of emotion in the regulation of one's own behavior, which this paper has focused on. Open questions include, for example, the role of bodily differences for the human user's capacity to relate emotionally to, for example, robots or computer game characters [52], which could have completely different body plans.

• The brain's interoception of homeostatic bodily states has been hypothesized by both Damasio [21] and Craig [16, 17] to play a crucial role in emotional (self-)awareness. Craig [16] points out the compatibility between Damasio's view that ''self-awareness emerges from an image of the homeostatic state of the body'' and his own view of interoception as providing ''a distinct cortical image of homeostatic afferent activity that reflects all aspects of the physiological condition of all tissues of the body'' [16, p. 500]. According to Craig, ''primates and especially humans [have] a phylogenetically unique thalamo-cortical extension of these pathways to the insular cortex'' [17], whereas Panksepp is more skeptical about cortical involvement (cf. above), and empirical neurophysiological evidence is limited. Computational modeling of these pathways could possibly help to resolve the issue, but it is unclear at this point exactly what such models might look like.

• Closely related to the previous point, as Seth [68] discusses in more detail in his contribution to this inaugural issue, there are several other interesting connections between emotion and consciousness [82, 83]. This includes the actual conscious subjective experience (or 'qualia') of emotional feelings, of which there are only few, if any, convincing computational models at this point.

Acknowledgments This work has been supported by a European Commission grant to the FP6 project ''Integrating Cognition, Emotion and Autonomy'' (ICEA, FP6-IST-027819, www.ICEAproject.eu) as part of the European Cognitive Systems initiative. Much of this paper has resulted from discussions with other members of the project consortium. The authors would also like to thank the reviewers, Kevin Gurney, Amir Hussain, and India Morrison for useful comments on a draft version of this paper.

References

1. Alexander WH, Sporns O. An embodied model of learning, plasticity, and reward. Adapt Behav. 2002;10(3–4):143–59.
2. Arbib M, Fellous J-M. Emotions: From brain to robot. Trend Cognit Sci. 2004;8(12):554–61.
3. Barandiaran X, Moreno A. On what makes certain dynamical systems cognitive: A minimally cognitive organization program. Adapt Behav. 2006;14(2):171–85.
4. Barandiaran X, Moreno A. Adaptivity: From metabolism to behavior. Adapt Behav. 2008;16:325–44.
5. Barnes MB, Beverly JL. Nitric oxide's role in glucose homeostasis. Am J Physiol Regulat Integr Comp Physiol. 2007;293:R590–1.
6. Bechara A, Damasio AR. The somatic marker hypothesis: A neural theory of economic decision. Games Econ Behav. 2005;52:336–72.
7. Breazeal C. Designing sociable robots. Cambridge, MA: MIT Press; 2002.
8. Breazeal C. Emotion and sociable humanoid robots. Int J Human Comput Interact. 2003;59:119–55.
9. Brooks RA. Achieving artificial intelligence through building robots. Technical report memo 899, MIT AI Lab; 1986.
10. Brooks RA. Cambrian intelligence. Cambridge, MA: MIT Press; 1999.
11. Cañamero L, editor. Proceedings of the symposium on agents that want and like: Motivational and emotional roots of cognition and action. UK: AISB; 2005. ISBN: 1-902956-41-7.
12. Clark A. Being there. Cambridge, MA: MIT Press; 1997.
13. Clark A. An embodied cognitive science? Trend Cognit Sci. 1999;9:345–51.
14. Colombetti G, Thompson E. Enacting emotional interpretations with feelings. Behav Brain Sci. 2005;28:200–1.
15. Colombetti G, Thompson E. The feeling body: Towards an enactive approach to emotion. In: Overton WF, Müller U, Newman JL, editors. Developmental perspectives on embodiment and consciousness. New York: Lawrence Erlbaum Associates; 2008. p. 45–68.
16. Craig AD. Interoception: The sense of the physiological condition of the body. Curr Opin Neurobiol. 2003;13(4):500–5.
17. Craig AD. Human feelings: Why are some more aware than others? Trend Cognit Sci. 2004;8(6):239–41.
18. Damasio AR. Descartes' error: Emotion, reason, and the human brain. New York: GP Putnam's Sons; 1994.
19. Damasio AR. Emotion in the perspective of an integrated nervous system. Brain Res Rev. 1998;26:83–6.
20. Damasio AR. The feeling of what happens: Body, emotion and the making of consciousness. London: Vintage; 1999.
21. Damasio AR. Looking for Spinoza: Joy, sorrow and the feeling brain. Orlando, FL: Harcourt; 2003.


22. Damasio AR. Emotions and feelings: A neurobiological perspective. In: Manstead A, Frijda N, Fischer A, editors. Feelings and emotions—The Amsterdam symposium. UK: Cambridge University Press; 2004.
23. Di Paolo EA. Organismically-inspired robotics: Homeostatic adaptation and natural teleology beyond the closed sensorimotor loop. In: Murase K, Asakura T, editors. Dynamical systems approach to embodiment and sociality. Adelaide, Australia: Advanced Knowledge International; 2003. p. 19–42.
24. Edelman GM. The remembered present. New York: Basic Books; 1989.
25. Ekman P. Universals and cultural differences in facial expression of emotion. In: Cole J, editor. Nebraska symposium on motivation. Lincoln, Nebraska: University of Nebraska Press; 1972. p. 207–83.
26. Ekman P. Basic emotions. In: Dalgleish T, Power M, editors. Handbook of cognition and emotion. Sussex, UK: Wiley; 1999.
27. Fellous J-M. Neuromodulatory basis of emotion. The Neuroscientist. 1999;5:283–94.
28. Fellous J-M, Arbib M, editors. Who needs emotions? The brain meets the robot. New York: Oxford University Press; 2005.
29. Freeman W. How brains make up their minds. New York: Columbia University Press; 2000.
30. Froese T, Ziemke T. Enactive artificial intelligence. Artif Intel. 2009;173:466–500.
31. Gibbs R. Embodiment and cognitive science. New York: Cambridge University Press; 2006.
32. Harnad S. Minds, machines, and Searle. J Exp Theoret Artif Intel. 1989;1(1):5–25.
33. Harnad S. The symbol grounding problem. Physica D. 1990;42:335–46.
34. Hudlicka E, Cañamero L, editors. Architectures for modeling emotion: Cross-disciplinary foundations. Papers from the 2004 AAAI symposium. Menlo Park, CA: AAAI Press; 2004.
35. Kelley AE. Neurochemical networks encoding emotion and motivation: An evolutionary perspective. In: Fellous J-M, Arbib MA, editors. Who needs emotions? The brain meets the robot. New York: Oxford University Press; 2005.
36. Lazarus RS. Emotion and adaptation. New York: Oxford University Press; 1991.
37. LeDoux JE. The emotional brain. New York: Simon & Schuster; 1996.
38. LeDoux JE. Emotion circuits in the brain. Annu Rev Neurosci. 2000;23:155–84.
39. Lewis MD. Bridging emotion theory and neurobiology through dynamic systems modeling. Behav Brain Sci. 2005;28:169–245.
40. Lowe R, Herrera C, Morse T, Ziemke T. The embodied dynamics of emotion, appraisal and attention. In: Paletta L, Rome E, editors. Attention in cognitive systems. Theories and systems from an interdisciplinary viewpoint. Berlin: Springer; 2007.
41. Lowe R, Philippe P, Montebelli A, Morse A, Ziemke T. Affective modulation of embodied dynamics. In: The role of emotion in adaptive behaviour and cognitive robotics, Electronic proceedings of SAB workshop, Osaka, Japan; 2008. Available from: http://www.his.se/icea/emotion-workshop/.
42. Lowe R, Morse A, Ziemke T. An enactive approach for modeling cognition, emotion and autonomy: Predictive regulation at different levels of organizational complexity. Submitted for journal publication.
43. Lowe R, Humphries M, Ziemke T. The dual-route hypothesis: Evaluating a neurocomputational model of fear conditioning in rats. Connect Sci., accepted for publication (in press).
44. Lowe R, Morse A, Ziemke T. The Iowa gambling task: Key methodological issues for cognitive robotics to address. (forthcoming).
45. McFarland D. Guilty robots, happy dogs. New York: Oxford University Press; 2008.


46. McFarland D, Spier E. Basic cycles, utility and opportunism in self-sufficient robots. Robot Autonom Syst. 1997;20:179–90.
47. Melhuish C, Ieropoulos I, Greenman J, Horsfield I. Energetically autonomous robots: Food for thought. Autonom Robot. 2006;21:187–98.
48. Meyer J-A, Guillot A, Girard B, Khamassi M, Pirim P, Berthoz A. The Psikharpax project: Towards building an artificial rat. Robot Autonom Syst. 2005;50(4):211–23.
49. Moreno A, Etxeberria A, Umerez J. The autonomy of biological individuals and artificial models. BioSystems. 2008;91(2):309–19.
50. Morse A, Lowe R. Enacting emotions: Somato-sensorimotor knowledge. In: Perception, action and consciousness: Sensorimotor dynamics and dual vision, Bristol, UK; 2007. Available at: http://www.bris.ac.uk/philosophy/department/events/PAC_conference/index.html/Conference.htm/Poster_Announcement.html.
51. Morse A, Lowe R, Ziemke T. Towards an enactive cognitive architecture. In: Proceedings of the first international conference on cognitive systems, CogSys 2008, Karlsruhe, Germany; April 2008.
52. Morrison I, Ziemke T. Empathy with computer game characters: A cognitive neuroscience perspective. In: AISB'05: Proceedings of the joint symposium on virtual social agents. UK: AISB; 2005. p. 73–9.
53. Ortony A, Norman D, Revelle W. Affect and Proto-affect in effective functioning. In: Fellous J-M, Arbib MA, editors. Who needs emotions? New York: Oxford University Press; 2005.
54. Panksepp J. Affective neuroscience: The foundations of human and animal emotions. New York: Oxford University Press; 1998.
55. Panksepp J. The neurodynamics of emotions: An evolutionary-neurodevelopmental view. In: Lewis MD, Granic I, editors. Emotion, development, and self-organization: Dynamic systems approaches to emotional development. New York: Cambridge University Press; 2000.
56. Panksepp J. Affective consciousness and the origins of human mind: A critical role of brain research on animal emotions. Impuls. 2004;57:47–60.
57. Panksepp J. Affective consciousness: Core emotional feelings in animals and humans. Conscious Cogn. 2005;14:30–80.
58. Parisi D. Internal robotics. Connect Sci. 2004;16(4):325–38.
59. Pessoa L. On the relationship between emotion and cognition. Nat Rev Neurosci. 2008;9:148–58.
60. Petta P. The role of emotion in a tractable architecture for situated cognizers. In: Trappl R, Petta P, Payr S, editors. Emotions in humans and artifacts. Cambridge, MA: MIT Press; 2003.
61. Pfeifer R, Bongard J. How the body shapes the way we think: A new view of intelligence. Cambridge, MA: MIT Press; 2006.
62. Pfeifer R, Scheier C. Understanding intelligence. Cambridge, MA: MIT Press; 1999.
63. Phelps E. Emotion and cognition: Insights from studies of the human amygdala. Annu Rev Psychol. 2006;57:27–53.
64. Prescott TJ, Redgrave P, Gurney K. Layered control architectures in robots and vertebrates. Adapt Behav. 1999;7:99–127.
65. Prinz JJ. Gut reactions: A perceptual theory of emotion. Oxford: Oxford University Press; 2004.
66. Rolls E. The brain and emotion. Oxford: Oxford University Press; 1999.
67. Rolls E. Emotion explained. Oxford: Oxford University Press; 2005.
68. Seth A. Explanatory correlates of consciousness: Theoretical and computational challenges. Cognit Comput.; this volume. doi:10.1007/s12559-009-9007-x.
69. Sloman A. Beyond shallow models of emotion. Cognit Process. 2001;2(1):177–98.
70. Sloman A. How many separately evolved emotional beasties live within us? In: Trappl R, Petta P, Payr S, editors. Emotions in

Cogn Comput (2009) 1:104–117

71.

72.

73. 74. 75.

76.

77.

78.

humans and artifacts. Cambridge, MA: MIT Press; 2002. p. 35– 114. Steels L, Brooks RA, editors. The artificial life route to artificial intelligence. Building situated embodied agents. New Haven: Lawrence Erlbaum; 1995. Sterling P. Principles of allostasis: Optimal design, predictive regulation, pathophysiology and rational therapeutics. In: Schulkin J, editor. Allostasis, homeostasis and the costs of adaptation. Cambridge: Cambridge University Press; 2004. Thompson E. Mind in life. Cambridge, MA: Harvard University Press; 2007. Trappl R, Petta P, Payr S, editors. Emotions in humans and artifacts. Cambridge, MA: MIT Press; 2003. Varela FJ, Depraz N. At the source of time: Valence and the constitutional dynamics of affect. J. Conscious. Stud.. 2005;12(8– 10):61–81. Varela FJ, Thompson E, Rosch E. The embodied mind: Cognitive science and human experience. Cambridge, MA: MIT Press; 1991. Vernon D, Metta G, Sandini G. A survey of artificial cognitive systems: Implications for the autonomous development of mental capabilities in computational agents. IEEE Trans Evol Comput. 2007;11(2):151–80. Ziemke T. Rethinking grounding. In: Riegler A, Peschl M, von Stein A, editors. Understanding representation in the cognitive sciences. New York: Plenum Press; 1999. p. 177–90.

117 79. Ziemke T. Are robots embodied? In: Balkenius C, Zlatev J, Breazeal C, Dautenhahn K, Kozima H, editors. Proceedings of the first international workshop on epigenetic robotics: Modelling cognitive development in robotic system; Lund University cognitive studies, vol. 85, Lund, Sweden; 2001. p. 75–83. 80. Ziemke T. What’s that thing called embodiment? In: Alterman R, Kirsh D, editors. Proceedings of the 25th annual conference of the Cognitive Science Society. Mahwah, NJ: Lawrence Erlbaum; 2003. p. 1305–10. 81. Ziemke T. Embodied AI as science: Models of embodied cognition, embodied models of cognition, or both? In: Iida F, Pfeifer R, Steels L, Kuniyoshi Y, editors. Embodied artificial intelligence. Heidelberg: Springer; 2004. p. 27–36. 82. Ziemke T. What’s life got to do with it? In: Chella A, Manzotti R, editors. Artificial consciousness. Exeter: Imprint Academic; 2007. p. 48–66. 83. Ziemke T. The embodied self: Theories, hunches and robot models. J Conscious Stud. 2007;14(7):167–79. 84. Ziemke T. On the role of emotion in biological and robotic autonomy. BioSystems. 2008;91:401–8. 85. Ziemke T, Sharkey NE. A stroll through the worlds of robots and animals. Semiotica. 2001;134(1–4):701–46. 86. Ziemke T, Frank R, Zlatev J, editors. Body, language and mind. Volume 1: Embodiment. Berlin: Mouton de Gruyter; 2007.

123