to construct an integration framework. Possible important points include a hierarchical computational model, a multilayered architecture of cybintelligence, target-and-intent-driven computational methods, and task-oriented action integration approaches.

Intelligence Enhancement in Perception and Motor Functionality Reconstruction

Two potential applications of cybintelligence are intelligence enhancement in perception and reconstruction of motor functionality, both of which require further study of the brain mechanisms related to perception and function. The reconstruction technique of neural functions could offer an important clue. Research on cybintelligence and coadaptation could also help by providing a systematic method for enhancing perception and reconstructing motor function.
Cybintelligence offers many promising applications, such as assisting, augmenting, and repairing human cognitive or sensory-motor functions, rehabilitation, neuroprosthetics, and animal robots. Simply put, it could enable an increase in our own normal capabilities or help lessen physical or cognitive impairments and disabilities.
Acknowledgments
This work was supported by the National Basic Research Program of China (number 2013CB329504). We thank Shaowu Zhang and Aung Si for their valuable suggestions on the term cyborg intelligence (cybintelligence).
References

1. M. Lebedev and M. Nicolelis, "Brain-Machine Interfaces: Past, Present and Future," Trends in Neurosciences, vol. 29, no. 9, 2006, pp. 536–546.

September/October 2013
2. M.E. Clynes and N.S. Kline, "Cyborgs and Space," Astronautics, Sept. 1960, pp. 26–27, 74–76.

Zhaohui Wu is a professor in the Department of Computer Science at Zhejiang University, China. Contact him at [email protected].

Gang Pan is a professor in the Department of Computer Science at Zhejiang University, China, and the corresponding author. Contact him at [email protected].

Nenggan Zheng is an associate professor in the Qiushi Academy for Advanced Studies at Zhejiang University, China. Contact him at [email protected].
Formal Minds and Biological Brains II: From the Mirage of Intelligence to a Science and Engineering of Consciousness

Paul F.M.J. Verschure, Catalan Institute of Advanced Studies (ICREA) and Universitat Pompeu Fabra
In 1993, I published an analysis of the evolving trend in cognitive science and artificial intelligence to move away from the model of the disembodied rational mind to that of the embodied active brain.1 As operational definitions of these two views on the mind and brain, I focused on Allen Newell's SOAR2 and Gerald Edelman's neural Darwinism and the theory of neuronal group selection (TNGS).3 SOAR has been under development since the 1970s and aims to explain intelligence, human reasoning, and problem solving, whereas TNGS ultimately targets consciousness and focuses on its underlying neuronal processes with an emphasis on the mechanisms of variation and selection in the brain.

www.computer.org/intelligent
The emphasis on the rational mind by traditional AI almost by definition left out the body, while any approach emphasizing the embodied brain would have difficulties explaining the rational mind—we can call this the mind-brain dilemma. Since the fall of so-called Good Old-Fashioned AI, we've seen the rise and subsequent fall of connectionism, artificial life, and new AI—largely due to their critical reliance on a metaphorical understanding of the brain, life, and embodiment as opposed to a scientifically grounded one. This suggests that the current scientific study and engineering of mind, body, and brain is devoid of a clear theoretical or conceptual framework. However, trends can be distinguished in the current emphasis on hierarchical learning techniques in the description of information and feature extraction, together with inference-based approaches to problem solving. Both trends are complementary in the sense that the former allows the extraction of a state space from inputs derived from the world, whereas the latter addresses the challenge of how to generate action policies given such a state space. Unfortunately, these two areas have advanced largely disconnected from each other. In contrast, brains have evolved to address both of these problems simultaneously, so the question becomes how systems for perception, cognition, and action can be integrated—that is, the challenge is in the architecture, rather than in the single components.

In parallel, the study of the brain has moved in the direction of "big data" in the form of the reprise of the so-called Human Brain Project in Europe and the large-scale brain activity mapping initiative in the US, imaginatively called BRAIN. Interestingly, these two initiatives are committed to the industrial-scale pursuit of data without much theoretical guidance. Hence, both the
study of the biological brain and the engineering of artificial ones face a similar grand challenge: formulating and validating a system-level theory of the body, brain, and mind nexus.
Cognitive and Brain Architectures

The development of human-like cognitive architectures has been an important goal of AI and cognitive science since their inception.4 Classic approaches include the SOAR architecture2 mentioned earlier and Anderson's ACT-R.5 In the case of SOAR, procedural long-term knowledge is represented as symbolic production rules that associate operators to problem spaces, whereas ACT-R combines production rules with pattern associators to form a hybrid (symbolic/subsymbolic) architecture. Other approaches, such as Langley's Icarus,6 use notions of concepts and skills and the hierarchical relationship between objects to achieve problem-solving behavior.

Architectures such as SOAR, ACT-R, and Icarus were developed largely independently of advances in the brain sciences. However, neurobiology has given rise to important notions that include the idea of layered or hierarchical control, distributed representation, redundancy through parallelism, behavior-based decomposition, and the centrality of embodiment, adaptation, and learning.1 Most importantly, the notion of "intelligence" that these cognitive architectures aim to explain, and which is still at the heart of "machine intelligence," appears to dissolve at the level of its neuronal organization. Indeed, rather than resting on a single underlying factor, called G, intelligence appears to depend on a wide range of interacting neuronal processes.7 This raises the fundamental question of whether the target of the field designated by "artificial intelligence" should be rephrased to align it with the
natural processes underlying perception, cognition, emotion, and action in a more general sense.
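The production-rule formalism behind architectures such as SOAR and ACT-R can be made concrete with a toy sketch. The rules and working-memory items below are invented for illustration; real SOAR productions also handle variable binding, subgoaling, and chunking, which this fragment omits.

```python
# Minimal forward-chaining production system: each rule is a pair
# (condition, action) of sets of working-memory elements. A rule fires
# when its condition is satisfied and it would add something new.

def run_productions(rules, working_memory, max_cycles=10):
    """Fire rules against working memory until quiescence (no rule can fire)."""
    for _ in range(max_cycles):
        for condition, action in rules:
            if condition.issubset(working_memory) and not action.issubset(working_memory):
                working_memory |= action  # add the rule's conclusions
                break                     # crude conflict resolution: first match wins
        else:
            break                         # quiescence: no rule fired this cycle
    return working_memory

# Hypothetical rules: "IF the goal is to stack AND block A is clear,
# THEN pick up A", and "IF picking up A, THEN A is now held."
rules = [
    ({"goal:stack", "clear:A"}, {"do:pickup-A"}),
    ({"do:pickup-A"}, {"holding:A"}),
]
memory = run_productions(rules, {"goal:stack", "clear:A"})
```

Chaining two rules like this is the symbolic half of the story; ACT-R's hybrid character comes from attaching subsymbolic quantities (activations, utilities) to such rules to resolve conflicts, which the first-match rule above only caricatures.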
Distributed Adaptive Control

The most advanced brain-based cognitive architecture available today is the Distributed Adaptive Control (DAC) theory of mind and brain (see Figure 3), which has been successfully deployed in a large number of robot tasks, validated against a broad set of neuroscience and psychological data, and has given rise to a successful and novel approach to neurorehabilitation deployed in clinics today.8,9 DAC follows Claude Bernard and Ivan Pavlov in stating that brains evolved to act. But what does it take to act? DAC specifically states that the how of action is realized through five fundamental processes, called H5W for short:

• Why—the motivation for action in terms of needs, drives, and goals;
• What—the objects in the world that actions pertain to;
• Where—the location of objects in the world and the self;
• When—the timing of action relative to the dynamics of the world; and
• Who—the hidden states of other agents.

This H5W problem designates each W as a large set of subquestions of varying complexity; it's the problem set that dominates brain design. Whereas AI has pursued the elusive construct of "intelligence" going back to Galton and Binet, DAC proposes that the unifying phenomenon we should focus on—both to explain mind and brain and to construct artificial systems—is consciousness. This phenomenon has become part of the current neuroscience agenda due to the initial but separate efforts of Nobel laureates Francis Crick and Gerald Edelman.
Figure 3. The Distributed Adaptive Control (DAC) architecture for perception, cognition, and action. DAC proposes that the brain is based on four tightly coupled layers: soma, reactive, adaptive, and contextual. Across these layers, we can distinguish three functional columns of organization: exosensing (yellow; sensation and perception of the world), endosensing (green; detecting and signaling states derived from the physically instantiated self), and the interface between self and world through action (red). The arrows show the primary flow of information, mapping exo- and endosensing into action. Each higher layer introduces more advanced memory-dependent mappings from sensory states to actions, generated dependent on the agent's internal state. See text for further explanation.

The DAC theory proposes that consciousness is a key component of the solution to the H5W problem, especially "Who," and that it emerged during the Cambrian explosion more than 500 million years ago, when many animal species suddenly had to coexist. Essentially, the proposal is that interaction with the multi-agent and social real world requires fast real-time action that depends on parallel control loops. The conscious scene in turn allows the serialization of this real-time processing and the optimization of these parallel control loops. In this way, real-time control can proceed based on what's possible, expressed by probability distributions, while cohesion and optimality are achieved through the conscious scene, which defines what's definite and actual for the acting subject. For instance, the human cerebellum, a brain structure that operates fully outside of consciousness, comprises about 15 million parallel loops,10 each controlling the timing of specific event triggers. Such a massive level of parallelization leads to a new kind of credit-assignment problem in which potentially many thousands of policies must be optimized in parallel in the face of a dynamic and ambiguous world and the changing needs of the agent.

Regardless of the validity of this H5W hypothesis on consciousness, it's a concept that can drive a more integrated approach toward advanced machines. In particular, if we look at the state of the art of the study of consciousness, we can observe that it's organized around five complementary dimensions, the majority of which are at the center of the contemporary study of advanced machines. More specifically, we can say the contents of conscious states are

• grounded in the experiencing, physically instantiated self, co-defined in the sensorimotor coupling of the agent to the world;
• maintained in the coherence between the sensorimotor predictions of the agent and the dynamics of the interaction with the world;
• combined in high levels of differentiation (each conscious scene is unique) with high levels of integration; and
• realized through highly parallel, distributed implicit factors and metastable, continuous, and unified explicit factors.

I refer to these core principles as the Grounded Enactive Predictive Experience (GePe) model.11 GePe shows that consciousness effectively comprises the current study of advanced machines with respect to both state-space representation and the generation of action policies. In addition, it expands it toward new horizons, where more complex tasks will have to be mastered, such as social interaction through H5W. Hence, I propose that the engineering of advanced machines, especially those required to interact with humans, will depend on an integrated, biologically based approach that targets the fundamental phenomenon of consciousness in the context of a system-level view of architecture.

In changing the perspective from the divisive and ill-founded notion of intelligence to that of consciousness, a new synergy between AI and neuroscience can be advanced. Both AI and neuroscience are well positioned to establish fruitful collaborations in pursuing brain analysis and synthesis. Where the latter can provide candidate theories of fundamental biological design principles, the former can integrate these in artifacts to assess their validity at scales of complexity beyond direct empirical validation. Hence, the engineering effort isn't only giving rise to new technologies; more importantly, the synthesis of artificial brains will become part of the validation process of a scientific theory of body, brain, and mind, as the DAC theory exemplifies. We can call this emerging paradigm for the integration of science and engineering Vico's loop, after the 18th-century Neapolitan philosopher Giambattista Vico, who famously proposed that we can only understand that which we create, or, "Verum et factum reciprocantur seu convertuntur."
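The control pattern behind the serialization argument—many parallel loops proposing actions as graded possibilities, with one serial commitment per cycle—can be caricatured in a few lines. Everything here (the loop count, the confidence model) is invented for illustration; this is not DAC's implementation, only the pattern it describes.

```python
import random

# Many parallel control loops each propose an action with a confidence
# (what is *possible*, a graded quantity), and a single serial stage
# commits to exactly one action per cycle (what is *definite* for the
# acting subject). Toy sketch; all names and numbers are hypothetical.

random.seed(0)  # deterministic for the example

def parallel_proposals(n_loops, percept):
    """Each loop runs independently and returns an (action, confidence) pair."""
    return [(f"action-{i}", random.random() * percept) for i in range(n_loops)]

def serialize(proposals):
    """The serial stage commits to the single most confident proposal."""
    return max(proposals, key=lambda p: p[1])[0]

# One control cycle: a thousand loops in parallel, one committed action out.
chosen = serialize(parallel_proposals(n_loops=1000, percept=0.8))
```

The credit-assignment problem mentioned earlier then amounts to adjusting the confidence functions of all thousand loops from the outcome of the single serialized action per cycle.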
After chasing the mirage of intelligence for the past 60 years, AI researchers haven't made significant progress in a system-level understanding of mind, as evidenced by our current inability to engineer advanced human-compatible autonomous systems. Conversely, neuroscience is chasing the dream of big data and running the risk of losing sight of its goal to understand the brain by sacrificing hypotheses and theory. I propose that by moving from intelligence to consciousness, we can find a new and integrated science and engineering of body, brain, and mind that will not only allow us to realize advanced machines but also directly address the last great outstanding challenge faced by humanity: the nature of subjective experience.
Acknowledgments
I thank Christian Moscardi and Xerxes Arsiwala for their help in preparing this manuscript. The development of DAC is supported by the FP7 ICT projects CEEDS (258749) and EFAA (270490).
References

1. P.F. Verschure, "Formal Minds and Biological Brains: AI and Edelman's Extended Theory of Neuronal Group Selection," IEEE Expert, vol. 8, no. 5, 1993, pp. 66–75.
2. A. Newell, Unified Theories of Cognition, Harvard University Press, 1994.
3. G.M. Edelman, Neural Darwinism: The Theory of Neuronal Group Selection, Basic Books, 1987.
4. P. Langley, J.E. Laird, and S. Rogers, "Cognitive Architectures: Research Issues and Challenges," Cognitive Systems Research, vol. 10, no. 2, 2009, pp. 141–160.
5. J.R. Anderson, Rules of the Mind, Lawrence Erlbaum Associates, 1993.
6. P. Langley et al., "A Design for the ICARUS Architecture," SIGART Bulletin, vol. 2, no. 4, 1991, pp. 104–109.
7. A. Hampshire et al., "Fractionating Human Intelligence," Neuron, vol. 76, no. 6, 2012, pp. 1225–1237.
8. P.F. Verschure, "The Distributed Adaptive Control Architecture of the Mind, Brain, Body Nexus," Biologically Inspired Cognitive Architecture, vol. 1, no. 1, 2012, pp. 55–72.
9. P.F. Verschure, T. Voegtlin, and R.J. Douglas, "Environmentally Mediated Synergy between Perception and Behaviour in Mobile Robots," Nature, vol. 425, no. 6958, 2003, pp. 620–624.
10. S. Herculano-Houzel, "The Human Brain in Numbers: A Linearly Scaled-up Primate Brain," Frontiers in Human Neuroscience, vol. 3, no. 31, 2009; www.ncbi.nlm.nih.gov/pubmed/19915731.
11. P.F.M.J. Verschure, "The Complexity of Reality and Human Computer Confluence: Stemming the Data Deluge by Empowering Human Creativity," Proc. 9th ACM SIGCHI, ACM, 2011, pp. 3–6.

Paul F.M.J. Verschure is an ICREA research professor and directs the Center for Autonomous Systems and Neurorobotics, where he heads the laboratory of Synthetic Perceptive, Emotive, and Cognitive Systems at Universitat Pompeu Fabra, in Barcelona, Spain. Contact him at
[email protected].
The Challenges of Closed-Loop Invasive Brain-Machine Interfaces

Qiaosheng Zhang and Xiaoxiang Zheng, Zhejiang University
Brain-machine interfaces (BMIs) are emerging technologies that create novel communication channels between the biological brain and artificial devices.1–6 BMIs provide methods to restore motor behaviors by directly translating the brain's neural signals into machine control commands, bypassing the spinal cord and peripheral nervous system. Both invasive and noninvasive BMIs depend on signal recording, and much debate has focused on which option is best. We'll focus here on invasive BMIs, in which neural signals (spikes and local field potentials) are collected from microelectrodes implanted in the cortex or deep brain structures. This method has high spatial and temporal resolution, so invasive BMIs provide better performance when used to restore sophisticated movements or to control devices with many degrees of freedom.

We can divide invasive BMIs based on their development process: open-loop decoding and closed-loop control. In open-loop BMIs, neural signals are used to predict kinematic parameters: the subject doesn't know it can control an artificial device through its neural activity. Closed-loop BMIs add sensory feedback to the subject as well as neural decoding: the subject can control an artificial device more precisely with its own neural activity.
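At its core, open-loop decoding is a regression problem: map a window of binned spike counts to kinematic parameters. The sketch below fits a linear (ridge) decoder on synthetic data; the neuron count, firing rates, and regularization constant are invented for illustration, and it is not the method of any study cited here—practical decoders (population vectors, Kalman filters, recurrent networks) are more elaborate. NumPy is assumed.

```python
import numpy as np

# Synthetic open-loop decoding: binned spike counts -> 2D hand velocity.
rng = np.random.default_rng(0)
n_samples, n_neurons = 500, 32

# Ground-truth linear tuning plus noise stands in for recorded data.
true_weights = rng.normal(size=(n_neurons, 2))                      # neurons -> (vx, vy)
spikes = rng.poisson(lam=5.0, size=(n_samples, n_neurons)).astype(float)
velocity = spikes @ true_weights + rng.normal(scale=0.5, size=(n_samples, 2))

# Ridge regression in closed form: W = (X^T X + lambda I)^-1 X^T Y.
lam = 1.0
X, Y = spikes, velocity
W = np.linalg.solve(X.T @ X + lam * np.eye(n_neurons), X.T @ Y)

decoded = X @ W                                                     # predicted kinematics
r2 = 1 - ((Y - decoded) ** 2).sum() / ((Y - Y.mean(0)) ** 2).sum()  # goodness of fit
```

Closed-loop control wraps a loop around exactly this mapping: the decoded kinematics drive the device, the subject sees (or feels) the result, and both the brain and, in co-adaptive designs, the decoder weights adjust from the error.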
Making Strides

Invasive BMI researchers have made significant progress in the past two decades. John Chapin and his colleagues demonstrated real-time control of a robot arm via neural signals from the rat motor cortex in 1999.1 This demonstration set off a new wave of invasive BMI research, starting the following year, when Miguel Nicolelis and colleagues reported that they could decode and use neural signals, recorded by an intracortical microelectrode array in non-human primates, to control movements of a robotic arm in one dimension.2